
Does United Kingdom parliamentary attention follow social media posts?

Abstract

News and social media play an important role in public political discourse. It is not clear what quantifiable relationships public discussions of politics have with official discourse within legislative bodies. In this study we present an analysis of how language used by Members of Parliament (MPs) in the United Kingdom (UK) changes after social media posts and online reactions to those posts. We consider three domains: news articles posted on Facebook in the UK, speeches in the questions-debates in the UK House of Commons, and Tweets by UK MPs. Our method works by quantifying how the words used in one domain become more common in another domain after an event such as a social media post. Our results show that words used in one domain later appear more commonly in other domains. For instance, after each article on Facebook, we estimate that on average 4 in 100,000 words in Commons speeches had changed, becoming more similar to the language in the article. We also find that the extent of this language change positively correlates with the number of comments and emotional interactions on Facebook. The observed language change differs between political parties; in particular, changes in word use by Labour MPs are more strongly related to social media content than those of Conservative MPs. We argue that the magnitude of this word flow is quite substantial given the large volume of news articles shared on Facebook. Our method and results quantify how parliamentary attention follows public interest as expressed on Facebook and also indicate how this effect may be stronger for posts which evoke reactions on Facebook associated with laughter or anger.

1 Introduction

The relationship between the public, the news media, and legislative discourse has been transformed by widespread adoption of social media in the last 15 years. Many people now consume news articles on social media provided by algorithms that choose which articles to display partly based on their popularity. Furthermore, many politicians use social media themselves to get their news and to interact with the public. However, it is unclear to what extent public reactions to news articles guide or predict the attention of legislatures to the content of those articles.

In this paper, we aim to measure how changes in UK Parliament discourse may be related to public social media activity. In particular, we measure how statements by MPs change after the sharing of a news article on Facebook. To do this, we focus on analyzing the transcripts of special sessions of the UK Parliament in which politicians ask questions of the government to hold the government to account on policy issues. The transcripts of these question sessions offer us a way to track the attention of Parliament as a whole, and allow us to examine the relationship between parliamentary attention on the one hand, and social media posts and news articles on the other.

Understanding how parliament and social media discourse interact may be considered in the context of a broader literature on agenda setting. Most prior works on social media and agenda setting look primarily at intermedia agenda setting, i.e., the extent to which different kinds of media (e.g., Twitter, print news media, TV news) influence one another [1, 2]. There is evidence that politicians’ use of Twitter can influence news media [3] and other online Twitter users [4]. Conversely, the impact that the public may have on politicians via social media is still a relatively under-studied area [5]. Work has found petitions shared online may influence parliament [6] and the public may respond to politicians’ online Twitter behavior [4]. This paper aims to contribute to this under-studied area by assessing the degree to which public online social platforms may act as agenda-setters for official legislative discourse.

Another large class of related literature studies how and why politicians use social media platforms like Twitter. In general these studies find a pattern of most politicians using Twitter as a means to reach journalists first and foremost, and the public only secondarily [7, 8]. Many politicians also use social media to publicly signal loyalty to the most visible party standard-bearers who align with their policy agendas [9], as this is an effective shorthand for communicating their political stance.

A prior work particularly relevant to this study is Barberá et al. [4]. This work looked at agenda-setting between the public and members of the US Congress in their interactions on Twitter, and like this work it investigates the direction and magnitude of information flow between these two groups. However, several studies have found that politicians tend to use Twitter to speak to other politicians and political journalists, rather than interacting with the general public [7, 8]. While the prior work did find that the public may lead politicians’ Twitter statements in some cases, it is important to extend this work to look beyond Twitter. In particular, by looking at the official legislative speech of politicians, we may get a better understanding of how the public impacts legislative behavior. Moreover, tracking public attention to content via Facebook data may be more representative than using Twitter data, because Facebook has a larger and less biased user base.

Given this background, we identified four Research Questions (RQs) to study in more detail:

RQ1:

Does discourse in the UK parliament question sessions follow or precede content posted publicly on Facebook?

RQ2:

Do metrics of public engagement with articles on Facebook predict parliamentary mentions of the content of those articles?

RQ3:

Does the content of MP Twitter accounts follow or precede content publicly posted on Facebook? In this respect, do their Tweets differ from their Commons speeches?

RQ4:

Regarding the previous three research questions, are there any differences between Labour and Conservative MPs?

To study these research questions we looked for methods that investigate social information transfer between different domains (e.g., between Facebook and Parliament) or groups of people (e.g., news media writers and Members of Parliament). The prior literature contains three broad classes of methods. The first looks at directional interaction events (e.g., Retweets) in an online social network and uses these as a proxy for information flow between users and groups [10]. The second class of methods tracks the occurrences of defined topics over time, which are treated as time series and analysed using Granger Causality [4, 11] or Transfer Entropy [12, 13] to infer directional transfer between the domains. A third approach [14], which we dub “Content Flow Analysis”, is more general and does not use topics. It simply looks to see how words are transferred between domains over time.

In this study, we are unable to observe direct responses to content from one domain to another (e.g., an MP reading a Facebook post) which we could use to track flow of information. Moreover, the use of discrete topics was too narrow for our more general research question. We therefore chose to use the content flow analysis method, which works around these limitations. In particular, we examine how words used by UK MPs become more or less similar to the content of news articles after the articles are shared on Facebook. We interpret this change in similarity to articles posted online as a change in attention toward (or away from) the content discussed in the articles. As well as tracking content flow between Facebook and parliament, we are also able to track content flow between MPs’ Twitter posts and both Facebook and parliament, in either direction. We are able to measure public interactions with the Facebook and Twitter posts. Interactions are actions someone has performed on Facebook or Twitter, including Retweets, Likes, Clicks, Views, and Comments, among others. We are then able to examine the extent to which changes in word use are related to public interactions with articles.

2 Methods

2.1 Datasets

We use three main datasets in this work: 1. records of statements in the UK House of Commons, 2. tweets of UK Members of Parliament, and 3. a dataset about news links shared on Facebook and the interactions with those links (e.g., number of shares, likes, etc).

2.1.1 Hansard data

Hansard is the official report of all parliamentary debates. Since 1909, speeches in the UK parliament have been transcribed and archived. We downloaded the online archive for 2016–2020 from hansard.parliament.uk during January 2021. The archive comprised HTML files, which were parsed to extract speeches. We show a summary of the number of speeches from each party in our dataset in Fig. 1. In this work, we focus on Question Sessions in which MPs ask the government questions about current policy issues.

Figure 1
figure 1

Top 7 parties by number of Commons Speeches in our dataset. Abbreviations denote: Con = Conservative, Lab = Labour, SNP = Scottish National Party, LD = Liberal Democrats, DUP = Democratic Unionist Party, Ind = Independent, PC = Plaid Cymru

2.1.2 Tweets from parliamentarians

We gratefully received a dataset of Tweet IDs sent by UK members of parliament from the Twitter parliamentarian database [15]. The IDs cover Tweets from May 21, 2017 to December 24, 2020. The Tweets were rehydrated using the twarc2 library during July 2021. We show a summary of the number of tweets from each party in our dataset in Fig. 2.

Figure 2
figure 2

Top 7 parties by number of Tweets in our dataset. Note that Labour is overrepresented in online discourse, and Conservatives are underrepresented. Abbreviations denote: Con = Conservative, Lab = Labour, SNP = Scottish National Party, LD = Liberal Democrats, DUP = Democratic Unionist Party, PC = Plaid Cymru

2.1.3 URL shares on Facebook

Access to the Condor dataset [16] was coordinated by the Social Science One organization [17] and provided by Facebook. The data we used from the Condor dataset consists of 284,861 URLs that were publicly shared by at least 100 users on the Facebook platform between 1 January, 2017 and 1 August, 2019, and were tagged by Facebook as having been most-shared in the UK. The numbers of shares, clicks, likes and other interactions are aggregated for each URL on a monthly basis, with Gaussian noise added to protect privacy.

To characterize the content of this dataset, we look at which website domains were commonly shared on Facebook during our period of interest. The top 10 are plotted in Fig. 3. We find that these sources account for the vast majority of URLs shared in our data. These sources are all mainstream UK-based news media. The quantity of other websites, including those that are known to share misinformation, was negligible in our data set.

Figure 3
figure 3

Top 10 domains shared on Facebook in our dataset. Note that these domains are overwhelmingly news media from mainstream UK media sources

Finally, a key feature to note about this dataset is that large amounts of Gaussian noise have been added to the reaction counts for the purposes of differential privacy [16]. This means that the results in our paper that stem from the reaction counts are necessarily less precise, and the log-transformed reaction counts display heteroscedasticity. This reduces the statistical power of our regression results, and we believe that many otherwise-significant results are rendered insignificant because of this loss of statistical power.

2.2 Measuring changes in text content

The main goal of our analysis is to quantify the extent to which discourse in one domain (e.g., commons speeches) changes after a new statement (the reference text) is made in a different domain (e.g., articles on Facebook). To do this, we take the approach of measuring changes in word usage frequency, excluding stop words and Twitter handles. For instance, if the reference text is a news article shared on social media, our method will indicate the extent to which words used by the article are more or less likely to be used in Commons after the news article is published.

The method is illustrated in Fig. 4. We focus on reference texts that are news articles published on Facebook, tweets from MPs, or statements made by MPs in Commons. For each reference text in a stimulus domain, we measure changes in other texts in responding domains. To do this, we select texts in the responding domain to create a before corpus and an after corpus, split around the publication time of the reference text. In simple terms, our method seeks to quantify the extent to which texts in the after corpus are more similar to the reference texts, compared to texts in the before corpus.

Figure 4
figure 4

Diagram showing our method with a specific example (panel A) of how an Article shared on Facebook might influence Commons Speeches. Note that the news article contains the word Wombles, which then appears in the Commons speeches after the article is shared on Facebook. Panel B shows a more general method which is used to measure content flow between domains

For every text in the responding domain, we compute a similarity measure to the reference text based on the number of words they have in common. The main measure of similarity between texts that we use is the Bray–Curtis similarity [18]. We also applied cosine similarity and found the results are consistent with the Bray–Curtis analysis. We ultimately chose Bray–Curtis because it yields more easily interpretable magnitudes. The Bray–Curtis similarity is defined as

$$\begin{aligned} S_{ij} = \frac{2C_{ij}}{n_{i} + n_{j}}, \end{aligned}$$

where \(n_{i}\) is the number of words in text i; \(w_{k,i}\) is the number of times word k occurs in text i; and \(C_{ij}\) is the number of words shared in both texts, i.e., \(C_{ij} = \sum_{k} {\min (w_{k,i}, w_{k,j})}\).
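This definition can be sketched directly in Python using word counts. The sketch below is illustrative (the function name is ours, not from the authors' repository) and assumes texts have already been tokenized into word lists with stop words removed:

```python
from collections import Counter

def bray_curtis_similarity(text_i, text_j):
    """Bray-Curtis similarity S_ij = 2*C_ij / (n_i + n_j),
    where C_ij = sum over words k of min(w_{k,i}, w_{k,j}).

    text_i, text_j: tokenized texts (lists of words)."""
    counts_i, counts_j = Counter(text_i), Counter(text_j)
    # Counter returns 0 for missing keys, so iterating over one
    # text's vocabulary suffices to compute the shared-word count.
    shared = sum(min(counts_i[w], counts_j[w]) for w in counts_i)
    return 2 * shared / (len(text_i) + len(text_j))
```

Identical texts score 1, texts with no common words score 0, and partial overlap falls in between.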

From these similarity measures on each responding domain text, we then compute a mean before similarity from the before corpus and a mean after similarity from the after corpus. We call the difference between the before and after similarity values the content flow with respect to the reference text. For reference text i and sets of texts A and B, where B is the before corpus and A is the after corpus (see Fig. 4), we define the content flow from the reference text i as the difference in mean before similarity and after similarity:

$$\begin{aligned} R(i) = \biggl(\frac{1}{ \vert A \vert }\sum_{j \in A} S_{ij} \biggr) - \biggl( \frac{1}{ \vert B \vert }\sum _{j \in B} S_{ij} \biggr), \end{aligned}$$

where \(|A|\) denotes the number of texts in corpus A.

Finally, we compute the mean content flow over the set of all possible reference texts in the stimulus domain. We define the overall content flow of texts in domain \(D_{1}\) with respect to reference texts in domain \(D_{2}\) as the mean of content flows for each text:

$$\begin{aligned} R(D_{1},D_{2}) = \frac{1}{ \vert D_{1} \vert }\sum _{i \in D_{1}} R(i). \end{aligned}$$
(1)

Here both \(A\subset D_{2}\) and \(B\subset D_{2}\). This measures the content flow from the stimulus domain to the response domain.
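The two quantities above, \(R(i)\) and \(R(D_{1},D_{2})\), reduce to simple averages over pairwise similarities. A minimal sketch, with illustrative function names and a compact inline similarity function:

```python
from collections import Counter

def bray_curtis(a, b):
    """Bray-Curtis similarity between two tokenized texts."""
    ca, cb = Counter(a), Counter(b)
    shared = sum(min(ca[w], cb[w]) for w in ca)
    return 2 * shared / (len(a) + len(b))

def content_flow(reference, before_corpus, after_corpus):
    """R(i): mean similarity of the reference text to the after corpus
    minus its mean similarity to the before corpus."""
    after = sum(bray_curtis(reference, t) for t in after_corpus) / len(after_corpus)
    before = sum(bray_curtis(reference, t) for t in before_corpus) / len(before_corpus)
    return after - before

def mean_content_flow(flows):
    """R(D1, D2), Eq. (1): average of per-reference content flows."""
    return sum(flows) / len(flows)
```

A positive value means the responding domain drifted toward the reference text's vocabulary after publication; a negative value means it drifted away.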

Cool-off Period When creating the before corpus and after corpus, we only include texts from the two weeks before and two weeks after the split (i.e., the time of the reference text’s publication). We also exclude texts from the first 24 hours after the split. We introduce this “cool-off period” so that the social media interactions (e.g., likes, shares) with the reference text will already have accumulated before we measure their impact on subsequent discourse. For Facebook, it is well known in industry that the vast majority of interactions take place within the first several hours after posting (variously quoted as “75% within 3hrs”, “90% within 12 hrs”, “a lifespan of 6hrs”, and “80% of total engagement within 24hrs” on industry blogs [19–22]). For Twitter the timescale is known to be even shorter. We also compute our results using 0-hour and 48-hour cool-off periods and find the results are substantially similar, suggesting that our results are robust to the choice of cool-off period.
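The windowing described above amounts to partitioning timestamped texts around the reference text's publication time. A minimal sketch, under the assumption that each responding-domain text carries a timestamp (function name and data layout are illustrative):

```python
from datetime import datetime, timedelta

def split_corpora(texts, split_time,
                  window=timedelta(days=14), cooloff=timedelta(hours=24)):
    """Partition (timestamp, text) pairs into before/after corpora.

    before: texts from the two weeks preceding the split.
    after:  texts from the two weeks following the split, excluding
            the first 24 hours so interactions can accumulate before
            content flow is measured."""
    before = [t for ts, t in texts
              if split_time - window <= ts < split_time]
    after = [t for ts, t in texts
             if split_time + cooloff <= ts < split_time + window]
    return before, after
```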

Resampling Texts Due to differences in lengths of texts, the content flows from reference texts to corpora, \(R(i)\), may not be comparable to one another and are difficult to interpret. For more easily interpretable Bray–Curtis similarity measurements, and to correct for differences in text lengths between domains, we resampled each text to a standard length of 1000 words. That is, we resampled 1000 words from each text with replacement, and used these resampled lists of words to compute the Bray–Curtis similarities. A sensitivity analysis of the number of words found similar results with any sufficiently large value (e.g., greater than 1000). This resampling method allows us to interpret a Bray–Curtis similarity of \(10^{-4}\) as “the texts share 1 word in 10,000,” and therefore we can interpret a content flow of \(10^{-4}\) as “1 in 10,000 words in the responding domain were adopted from the reference text”.
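Resampling to a fixed length is a one-liner with sampling with replacement. A minimal sketch (the function name is ours; the fixed seed is only for reproducibility in illustration):

```python
import random

def resample_text(words, length=1000, seed=None):
    """Resample a tokenized text to a fixed length, with replacement,
    so Bray-Curtis similarities are comparable across text lengths.
    Word frequencies are preserved in expectation."""
    rng = random.Random(seed)
    return [rng.choice(words) for _ in range(length)]
```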

2.3 Statistical analyses

2.3.1 Overall content flow between domains

To estimate the overall flow of words between domains, we compute the mean content flow over the set of texts for each category of reference text and response text (see Eq. (1)). We use the bootstrap method to generate error bars and corresponding p-values [23] for these overall mean-content flows.
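A percentile bootstrap for the error bars can be sketched as follows. This is our illustrative reading of the procedure (the paper reports error bars and p-values; we show only a confidence interval here, and the function name is hypothetical):

```python
import random

def bootstrap_mean_ci(values, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean content
    flow: resample the per-reference flows with replacement, take the
    mean of each resample, and read off the (alpha/2, 1 - alpha/2)
    quantiles of the resulting distribution of means."""
    rng = random.Random(seed)
    n = len(values)
    means = sorted(
        sum(rng.choice(values) for _ in range(n)) / n
        for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

A two-sided bootstrap p-value for the null of zero mean flow can be obtained analogously, from the fraction of resampled means on the far side of zero.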

2.3.2 Marginal change with interactions

To measure the marginal change of content flow associated with social media interaction metrics, we perform a linear regression from log-transformed reaction counts to the content flows we measured. The reaction counts must be log-transformed before performing the linear regression because their distributions are heavy-tailed (e.g., power-law distributions and log-normals), and the log-transformed distributions are appropriate for standard linear regression. The linear regression is defined in the standard way:

$$\begin{aligned} \vec{R} = X\vec{\beta} + \vec{\epsilon}, \end{aligned}$$
(2)

where R⃗ is a vector containing the measured content flows for each text, X is the data matrix of the log-transformed reaction counts associated with each text (and a constant column), β⃗ are the regression coefficients associated with each reaction (which we call the marginal change associated with the reaction), and ϵ⃗ is Gaussian noise associated with each text. We show an illustrative example of these regressions in Fig. 5.
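For a single reaction type, the regression coefficient has the familiar closed form. The sketch below is a simplified single-predictor version for illustration (the paper's analysis is multivariate over all reaction types simultaneously; the function name is ours):

```python
import math

def marginal_change(reaction_counts, flows):
    """Slope of content flow regressed on log10 reaction counts:
    the marginal change in word flow per 10-fold increase in the
    reaction count. Ordinary least squares, one predictor."""
    x = [math.log10(c) for c in reaction_counts]
    mx = sum(x) / len(x)
    my = sum(flows) / len(flows)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, flows))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var
```

Because the predictor is log-transformed, a coefficient of, say, \(1.4 \times 10^{-5}\) reads as "1.4 extra words per 100,000 adopted for each 10-fold increase in reactions", matching the interpretation used in the Results.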

Figure 5
figure 5

Word flow from Facebook to Commons speeches significantly increases with levels of Hahas. We show regression analyses of the marginal changes in content flow of Commons Speeches, associated with Facebook interactions. y-axis is content flow and x-axis is reaction count on a log scale. See Table 2 for p-values and slopes of significant effects

For the statistically-significant interactions, we present regression coefficients, 95% confidence intervals, and p-values. We set a conservative Bonferroni-corrected p-value threshold at \(0.05/n\) where n is the number of distinct regressions we performed on the data. In this case \(n=12\) and the threshold is \(4.2\text{e-}3\). There are 12 separate regressions because we did one for each combination of the two stimulus domains (Articles on Facebook, MP Tweets), response domains (MP Tweets, Commons Speeches), and parties (Labour, Conservative, All Parties). These stimulus domains are chosen because they are the only domains with social media interactions against which to measure the marginal change, and these response domains were chosen because they are the domains in which only MPs communicate.

3 Results

3.1 Overall content flow between domains

We use content flow (Eq. (1)) to measure word flow between domains. A positive measurement indicates that words from the reference texts are promoted, becoming more common in the other domain. A negative measurement indicates that words from the reference text are suppressed, becoming less common in the other domain. We find evidence of both promotion and suppression of words between the three domains (see Fig. 6 and Table 1).

Figure 6
figure 6

Overall word transfer between domains, by party. Lines indicate statistically significant measurements. Colors indicate the party of the MPs for whom we’re measuring a response. Note that lines from Commons to Articles and lines from Tweets to Articles are omitted because Articles do not have clear party affiliations, and so these values are undefined. Line thickness is proportional to the log of the measured effect. See Table 1 for precise magnitudes, p-values, and 95% confidence intervals

Table 1 Overall mean content flows between domains. Error bars and p-values were obtained by bootstrapping with \(10^{4}\) resamples. FB stands for Facebook

We find evidence for word flow between domains. Specifically, we find Commons Speeches tend to use more words that previously appeared in Articles on Facebook (see RQ1). However, there are significant differences by Party (RQ4). In general Labour MPs use many more words from Articles on Facebook, while Conservative MPs avoid words used previously in Articles on Facebook. We also find that Commons Speeches use words previously used in MP Tweets, and this phenomenon is stronger for Labour. Finally, we find that MP Tweets tend to avoid language previously used in the Commons and Articles domains (RQ3); however this effect is not observed for Conservative MPs’ tweets (RQ4).

We also look at which specific words flow from Facebook articles to Commons speeches. To do this, we rank articles by their content flow to Commons speeches, \(R(i)\). Selecting the top 10% of these articles, we look at the words in those articles that were used more often relative to other articles. We found these words were generally about the important topics of the period studied. They included Brexit, the European Union, Theresa May, and customs union.

3.2 Marginal change with social media interactions

To study how public engagement with social media posts may relate to parliamentary attention (see RQ2), we looked at the marginal effects of social media interactions on the word flow from a stimulus domain to a responding domain. Specifically, we performed multiple linear regression from the log reaction counts in a stimulus domain to the content flows in the responding domain (see Eq. (2)). We present the full set of regression coefficients, which measure the marginal change associated with interactions, in Additional file 1. In general, we find associations between levels of social media interactions and the sizes of content flows (see Fig. 7 for significant changes).

Figure 7
figure 7

Marginal change in word transfer to Commons Speeches associated with different kinds of interactions. Lines indicate statistically significant measurements. Colors indicate the party of the MPs for whom we’re measuring a response. Line thickness is proportional to the log of the measured effect. See Table 2 for precise magnitudes, p-values, and 95% confidence intervals

Considering public engagement on Facebook, we find increased Facebook reaction counts are generally associated with increased word flow (RQ2). A multi-variate regression including all interactions finds that only the laugh reaction (the “Haha” reaction) on Facebook remains significant after accounting for the other interactions (see Table 2), after applying a conservative Bonferroni correction to the p-value threshold. We estimate that an additional 1.4 words in 100,000 are adopted for a 10-fold increase in Hahas.

Table 2 Marginal change in word transfer between domains associated with logged counts of different kinds of interactions. Linear regression coefficients are the magnitude of the marginal change coefficient. Only statistically-significant interactions are included here; see Supporting Information for a table including all interactions

Considering public engagement with MP Tweets, we looked at content flow from Twitter to Commons Speeches (see Fig. 7). We find in general that increased word flow is associated with the counts of replies to MP tweets. We also found that the numbers of retweets, replies, or quotes a tweet received reduced word flow from those tweets to Conservative MPs’ Commons Speeches (RQ4).

We also consider MPs’ Tweets, and investigate how public engagement metrics of Facebook or Twitter posts relate to content flow into MPs’ subsequent Tweets (see Fig. 8, and RQ2). Using a multi-variate regression to correct for all types of reaction, we find that increased counts of comments on Facebook are associated with increased word transfer, and that increased counts of retweets on Twitter are associated with decreased word transfer to future MP Tweets. We find that, in general, MPs tend to Tweet using words that occurred previously in posts that received many comments on Facebook, and avoid words from Tweets that had already received many retweets. We also find that Like, Click, and Love interactions are associated with decreased word flow to Twitter, regardless of whether or not we correct for the effects of other kinds of interactions.

Figure 8
figure 8

Marginal change in word transfer to MP Tweets associated with different kinds of interactions. Lines indicate statistically significant measurements. Colors indicate the party of the MPs for whom we’re measuring a response. Solid lines indicate a positive magnitude; dotted lines indicate a negative magnitude. Line thickness is proportional to the log of the measured effect. See Table 2 for precise magnitudes, p-values, and 95% confidence intervals

4 Discussion & conclusions

Through applying our content flow analysis method, we have found significant relationships between parliamentary discourse, news media content, and social media engagement metrics. Specifically we find that about 4 words in 100,000 (per news article) transfer from Facebook articles to Commons Speeches. We also find that 10-fold increases in Facebook reaction counts (Likes, Hahas, etc) can increase this word transfer by up to about 35% (or 1.4 words in 100,000), depending on the reaction type. Although 4 words in 100,000 per shared news article may seem small, we believe this is actually a rather large transfer rate because there are approximately 8000 UK news articles shared to Facebook every two weeks (the length of the observation window used in these measurements).

We also see clear party differences in overall content flow and to a lesser extent in marginal effects from public engagement with social media posts (Figs. 6, 7, 8). In general, statements by Labour MPs use more words from news media content and have larger marginal changes in word use associated with additional social media interactions. We note that Labour MPs and their constituents are more active on social media (see Fig. 2), and that there is a slight bias toward left-leaning articles (shown in Fig. 3). We hypothesize that this greater activity may explain the party differences we measure.

One might expect parliament to continue discussing content from one day in subsequent days. This was not detected by our measurements of Commons (stimulus) to Commons (response) content flow in Table 1. We do however find that individual parties (e.g., Conservative and Labour) maintain specific content over time more consistently. These results seem counterintuitive. We speculate that they may be explained by parties talking past each other and not taking up each others’ topics, e.g., by diversion, distraction, or changing the terms of discussion.

We also find that the statements by parliamentarians on Twitter do not have the same relationships to social media as their statements in the official proceedings. This finding contextualizes prior work that examined the statements of politicians on Twitter [4]. In particular, it indicates that studies of the dynamics of legislators’ Tweets may not have direct implications for their official legislative behavior. This distinction between the two political arenas, parliamentary question sessions, and Twitter, warrants further study.

We found that MPs’ statements on Twitter tend to avoid words previously used in the other domains (see Fig. 6). We hypothesize that this is due to Twitter’s faster response rate. Twitter discussions take place over minutes and hours, while discourse in Commons and news articles takes place over days or weeks. Because of this, Twitter discussions may lead the other two domains in content, and avoid content that has already been discussed because interest has already waned.

While we have found clear evidence of directional word flows between domains, it is unclear from our study if this relationship is causal. It is possible that social media engagement metrics are acting as a measurement device that gauges public interest and controversy, and it is the underlying public interest and controversy which entices MPs to pay attention to that content. We are unable to measure the underlying public interest and controversy independently of social media metrics, so we cannot control for these potential confounding variables and establish a clear causal link between social media and parliamentary discourse. Future studies may try to control for other confounding factors, such as other media sources or social processes.

The second major limitation of this work concerns the Facebook dataset. A large amount of Gaussian noise has been added to the reaction counts (e.g., number of likes, shares, etc.) by Facebook for the purpose of differential privacy. We believe this increases the p-values we calculated that are associated with our analysis of marginal changes with interactions, and therefore eliminates many otherwise-significant results. This effect is especially visible in Fig. 7, where measurements for all MPs are found to be significant (due to larger data volumes) but measures of individual parties are not found to be significant in some cases. In general, we suspect that the networks in Figs. 7 and 8 are significantly sparser than would be the case without the added noise. Another factor is the very conservative Bonferroni correction that we use, which also contributes to this sparsity of significant results.

Our results are part of a growing body of evidence that indicates that official legislative discourse in parliament follows public discussion of news on social media, and may be vulnerable to manipulations of social media. We have seen in recent years a number of high profile cases in which social media is believed to have played a large role in major popular political upheavals, such as the 2016 U.S. Presidential election [24, 25] and the 2016 UK Brexit referendum [26], and there is some evidence that deliberate information campaigns using bots and other inauthentic activity contributed significantly to the social media activity related to these political events [27]. While it is clear that public discourse playing a role in legislation has democratic value, vulnerabilities of social media are a concern. This highlights an ongoing need to protect legislative processes from social media manipulation.

Availability of data and materials

Code for data analysis is available at https://github.com/Jbollenbacher/parliament_and_facebook. Hansard parliament data are available from hansard.parliament.uk. MP Tweets data are available through the Twitter parliamentarian database [15]. The Condor Facebook dataset is available via the Facebook Online Research Tool platform; code for generating the Facebook URL reaction data is available in the GitHub repository.

References

  1. Harder RA, Sevenans J, Van Aelst P (2017) Intermedia agenda setting in the social media age: how traditional players dominate the news agenda in election times. Int J Press/Polit 22(3):275–293. https://doi.org/10.1177/1940161217704969.

  2. Su Y, Borah P (2019) Who is the agenda setter? Examining the intermedia agenda-setting effect between Twitter and newspapers. J Inf Technol Polit 16(3):236–249. https://doi.org/10.1080/19331681.2019.1641451

  3. Shapiro MA, Hemphill L (2017) Politicians and the policy agenda: does use of Twitter by the U.S. Congress direct New York Times content? Policy Internet 9(1):109–132. https://doi.org/10.1002/poi3.120

  4. Barberá P, Casas A, Nagler J, Egan PJ, Bonneau R, Jost JT, Tucker JA (2019) Who leads? Who follows? Measuring issue attention and agenda setting by legislators and the mass public using social media data. Am Polit Sci Rev 113(4):883–901

  5. Jungherr A, Rivero G, Gayo-Avello D (2020) Retooling politics: how digital media are shaping democracy, 1st edn. Cambridge University Press, Cambridge. https://doi.org/10.1017/9781108297820

  6. Leston-Bandeira C (2019) Parliamentary petitions and public engagement: an empirical analysis of the role of e-petitions. Policy Polit 47(3):415–436. https://doi.org/10.1332/030557319X15579230420117

  7. Bernhard U, Dohle M (2015) Local politics online: the influence of presumed influence on local politicians’ online communication activities in Germany. Local Gov Stud 41(5):755–773. https://doi.org/10.1080/03003930.2015.1028624

  8. Kreiss D (2016) Seizing the moment: the presidential campaigns’ use of Twitter during the 2012 electoral cycle. New Media Soc 18(8):1473–1490. https://doi.org/10.1177/1461444814562445

  9. Daniel WT, Obholzer L (2020) Reaching out to the voter? Campaigning on Twitter during the 2019 European elections. Res Polit 7(2):205316802091725. https://doi.org/10.1177/2053168020917256

  10. Bovet A, Makse HA (2019) Influence of fake news in Twitter during the 2016 US presidential election. Nat Commun 10(1):7. https://doi.org/10.1038/s41467-018-07761-2

  11. Edwards GC, Wood BD (1999) Who influences whom? the president, congress, and the media. Am Polit Sci Rev 93(2):327–344. https://doi.org/10.2307/2585399

  12. Borge-Holthoefer J, Perra N, Gonçalves B, González-Bailón S, Arenas A, Moreno Y, Vespignani A (2016) The dynamics of information-driven coordination phenomena: A transfer entropy analysis. Sci Adv 2(4):1501158. https://doi.org/10.1126/sciadv.1501158

  13. Ver Steeg G, Galstyan A (2012) Information transfer in social media. In: Proceedings of the 21st international conference on world wide web, vol WWW’12. ACM, New York, pp 509–518. https://doi.org/10.1145/2187836.2187906

  14. Bryden J, Wright SP, Jansen VAA (2018) How humans transmit language: horizontal transmission matches word frequencies among peers on Twitter. J R Soc Interface 15(139). https://doi.org/10.1098/rsif.2017.0738

  15. van Vliet L, Törnberg P, Uitermark J (2020) The Twitter parliamentarian database: analyzing Twitter politics across 26 countries. PLoS ONE 15(9):0237073. https://doi.org/10.1371/journal.pone.0237073

  16. Messing S, DeGregorio C, Hillenbrand B, King G, Mahanti S, Mukerjee Z, Nayak C, Persily N, State B, Wilkins A (2021) Facebook Privacy-Protected Full URLs Data Set. Harvard Dataverse. https://doi.org/10.7910/DVN/TDOAPG

  17. King G, Persily N (2020) A new model for industry–academic partnerships. PS Polit Sci Polit 53(4):703–709. https://doi.org/10.1017/S1049096519001021

  18. Bray JR, Curtis JT (1957) An ordination of the upland forest communities of southern Wisconsin. Ecol Monogr 27(4):325–349. https://doi.org/10.2307/1942268

  19. OptimalSocial (2013) 75% of Facebook engagement is in the first 180 minutes, says Facebook competition. https://venturebeat.com/2013/03/28/75-of-facebook-engagement-is-in-the-first-180-minutes-says-facebook-competition-winning-tool

  20. Ayres S (2013) Shocking New Data about the Lifespan of Your Facebook Posts. https://www.postplanner.com/lifespan-of-facebook-posts/

  21. Symonds J (2021) Lifespan of Social Media Posts in 2021: How Long Do They Last? https://the-refinery.io/blog/how-long-does-a-social-media-post-last

  22. Corless J (2020) Measuring Your Social Media Engagement. https://universitymarketing.osu.edu/blog/measuring-your-social-engagement.html

  23. Efron B, Tibshirani RJ (1993) An introduction to the bootstrap. Springer, Boston. https://doi.org/10.1007/978-1-4899-4541-9

  24. Allcott H, Gentzkow M (2017) Social media and fake news in the 2016 election. J Econ Perspect 31(2):211–236. https://doi.org/10.1257/jep.31.2.211

  25. Howard PN, Woolley S, Calo R (2018) Algorithms, bots, and political communication in the US 2016 election: the challenge of automated political communication for election law and administration. J Inf Technol Polit 15(2):81–93. https://doi.org/10.1080/19331681.2018.1448735

  26. Hänska-Ahy M, Bauchowitz S (2017) Tweeting for Brexit: how social media influenced the referendum. In: Brexit, Trump and the media. Abramis Academic Publishing, Bury St Edmunds

  27. Gorodnichenko Y, Pham T, Talavera O (2021) Social media, sentiment and public opinions: evidence from #Brexit and #USElection. Eur Econ Rev 136:103772. https://doi.org/10.1016/j.euroecorev.2021.103772

Acknowledgements

Thank you to Lewis Westbury, Tam Borine, Edward Saperia, Michael Smethurst, Anya Somerville, and Vincent Jansen for helpful comments and assistance. The authors would also like to thank Livia van Vliet for sharing data from the Twitter Parliamentarian Database.

Funding

This work is supported in part by the Social Media and Democracy Research Grants which are administered by the Social Science Research Council, the Knight Foundation, and Craig Newmark Philanthropies.

Author information

Contributions

JBo contributed to data analysis, experiment design, and writing. JBr contributed to project conception and direction, data acquisition, and writing. NL contributed to data acquisition, data cleaning, and related literature review. All authors read and approved the final manuscript.

Corresponding author

Correspondence to John Bryden.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary information (PDF 53 kB)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Bollenbacher, J., Loynes, N. & Bryden, J. Does United Kingdom parliamentary attention follow social media posts?. EPJ Data Sci. 11, 51 (2022). https://doi.org/10.1140/epjds/s13688-022-00364-4
