From: Detecting political biases of named entities and hashtags on Twitter
Model | Tweet-Level Results (accuracy; \(F_{1}\)) | Account-Level Results (accuracy; \(F_{1}\)) |
---|---|---
Skip-Gram | 0.5822; 0.5636 | 0.6660; 0.6604 |
GloVe | 0.5680; 0.5491 | 0.6486; 0.6372 |
BERT\(_{ \mathrm{base} }\) | 0.6541; 0.6280 | 0.7234; 0.7218 |
BERTweet | 0.6284; 0.6486 | 0.7836; 0.7778 |
Polarized PEM\(_{ \mathrm{no\ attn} }\) | 0.6066; 0.6244 | 0.8157; 0.8196 |
Complete PEM\(_{ \mathrm{no\ attn} }\) | 0.6061; 0.6258 | 0.8494; 0.8475 |
Polarized PEM | 0.6308; 0.6965 | 0.8493; 0.8758 |
Complete PEM | 0.6479; 0.6987 | 0.8612; 0.8870 |