
Table 6 Tweet-level classification performance on the politicians’ withheld tweets in our testing set when we assign polarity scores to all tokens versus only to named entities and hashtags. Each entry shows the accuracy followed by the \(F_{1}\) score. We show the best results for each column in bold

From: Detecting political biases of named entities and hashtags on Twitter

| Results (accuracy; \(F_{1}\)) | Polarized PEM | Complete PEM |
|---|---|---|
| Using \(\mathbf{z}^{(p)}\) of All Tokens | **0.8369**; **0.8366** | 0.8337; **0.8334** |
| Using \(\mathbf{z}^{(p)}\) of Only Entities and Hashtags | 0.8339; 0.8337 | **0.8338**; 0.8330 |