Table 3 Classification performance on the unobserved accounts. We never include tweets from these accounts in any training data set. In each entry, we show the accuracy followed by the \(F_{1}\) score. We show the best results for each column in bold. The names of our models are also in bold

From: Detecting political biases of named entities and hashtags on Twitter

| Model | Tweet-Level Results (accuracy; \(F_{1}\)) | Account-Level Results (accuracy; \(F_{1}\)) |
| --- | --- | --- |
| Skip-Gram | 0.5822; 0.5636 | 0.6660; 0.6604 |
| GloVe | 0.5680; 0.5491 | 0.6486; 0.6372 |
| BERT\(_{\mathrm{base}}\) | **0.6541**; 0.6280 | 0.7234; 0.7218 |
| BERTweet | 0.6284; 0.6486 | 0.7836; 0.7778 |
| **Polarized PEM\(_{\mathrm{no\ attn}}\)** | 0.6066; 0.6244 | 0.8157; 0.8196 |
| **Complete PEM\(_{\mathrm{no\ attn}}\)** | 0.6061; 0.6258 | 0.8494; 0.8475 |
| **Polarized PEM** | 0.6308; 0.6965 | 0.8493; 0.8758 |
| **Complete PEM** | 0.6479; **0.6987** | **0.8612**; **0.8870** |