From: Detecting political biases of named entities and hashtags on Twitter
Model | Tweet-level accuracy | Tweet-level \(F_{1}\) | Account-level accuracy | Account-level \(F_{1}\)
---|---|---|---|---
Skip-Gram | 0.7705 | 0.7736 | 0.8769 | 0.8797
GloVe | 0.7438 | 0.7453 | 0.8578 | 0.8620
BERT\(_{\mathrm{base}}\) | 0.8595 | 0.8603 | 0.9965 | 0.9968
BERTweet | 0.8399 | 0.8435 | 0.9844 | 0.9853
Polarized PEM\(_{\mathrm{no\ attn}}\) | 0.7681 | 0.7682 | 0.9757 | 0.9758
Complete PEM\(_{\mathrm{no\ attn}}\) | 0.7991 | 0.7994 | 0.9827 | 0.9827
Polarized PEM | 0.8339 | 0.8337 | 0.9861 | 0.9872
Complete PEM | 0.8338 | 0.8330 | 0.9931 | 0.9936
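For reference, the two metrics in the table can be sketched as follows. This is a minimal illustration assuming the standard binary-classification definitions of accuracy and \(F_{1}\); the paper's exact averaging scheme (e.g. macro vs. micro \(F_{1}\)) is not specified here, so treat this as an assumption.

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the gold labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    # Harmonic mean of precision and recall for the positive class
    # (assumed binary setting; not necessarily the paper's averaging).
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

On a toy example with gold labels `[1, 0, 1, 1]` and predictions `[1, 0, 0, 1]`, this gives an accuracy of 0.75 and an \(F_{1}\) of 0.8, matching the "accuracy; \(F_{1}\)" pairs reported per model above.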