
Table 2 Classification performance on the withheld tweets of politicians and the Twitter accounts of politicians. The subscript “\(\mathrm{no\ attn}\)” signifies that we use the mean value of \(\{\mathbf{z}^{(p)}\}\) directly (i.e., without applying an attention mechanism). Skip-Gram (i.e., the Baseline PEM model) and GloVe use pretrained embeddings with the same MLP binary classifier as in our discriminator. (To train this classifier, we use a training set that includes 80% of the politicians’ tweets.) In each entry, we show the accuracy followed by the \(F_{1}\) score. We show the best results for each column in bold. The names of our models are also in bold.

From: Detecting political biases of named entities and hashtags on Twitter

| Model | Tweet-Level Results (accuracy; \(F_{1}\)) | Account-Level Results (accuracy; \(F_{1}\)) |
| --- | --- | --- |
| Skip-Gram | 0.7705; 0.7736 | 0.8769; 0.8797 |
| GloVe | 0.7438; 0.7453 | 0.8578; 0.8620 |
| BERT\(_{\mathrm{base}}\) | **0.8595; 0.8603** | **0.9965; 0.9968** |
| BERTweet | 0.8399; 0.8435 | 0.9844; 0.9853 |
| **Polarized PEM\(_{\mathrm{no\ attn}}\)** | 0.7681; 0.7682 | 0.9757; 0.9758 |
| **Complete PEM\(_{\mathrm{no\ attn}}\)** | 0.7991; 0.7994 | 0.9827; 0.9827 |
| **Polarized PEM** | 0.8339; 0.8337 | 0.9861; 0.9872 |
| **Complete PEM** | 0.8338; 0.8330 | 0.9931; 0.9936 |
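
To make the evaluation protocol in the caption concrete, the following is a minimal sketch (not the authors' implementation): an MLP binary classifier is trained on tweet embeddings using an 80% split of the politicians' tweets, and accuracy and \(F_{1}\) are reported on the withheld 20%. The array names, embedding dimension, and MLP size are illustrative assumptions.

```python
# Sketch of the tweet-level evaluation: MLP binary classifier on tweet
# embeddings, 80/20 train/test split, report accuracy and F1.
# `tweet_embeddings` and `labels` are placeholders, not the paper's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
tweet_embeddings = rng.normal(size=(1000, 200))   # placeholder mean tweet embeddings
labels = rng.integers(0, 2, size=1000)            # placeholder binary party labels

X_train, X_test, y_train, y_test = train_test_split(
    tweet_embeddings, labels, train_size=0.8, random_state=0
)

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, pred):.4f}; F1: {f1_score(y_test, pred):.4f}")
```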