Table 4 Account-level classification performance on the politicians' withheld tweets in our testing set. We never include these tweets in any training set, but our training set does include other tweets from the accounts that posted them. In each entry, we show the accuracy followed by the \(F_{1}\) score. We show the best results for each column in bold. The names of our models are also in bold. The Skip-Gram row indicates our Baseline PEM results.

From: Detecting political biases of named entities and hashtags on Twitter

| Model | Results Based on \(\mathbf{z}^{(s)}\) (accuracy; \(F_{1}\)) | Results Based on \(\mathbf{z}^{(p)}\) (accuracy; \(F_{1}\)) |
|---|---|---|
| Skip-Gram | 0.8394; 0.8451 | 0.8457; 0.8503 |
| **Polarized PEM** | **0.8994; 0.9008** | 0.9861; 0.9872 |
| **Complete PEM** | 0.8111; 0.8204 | **0.9931; 0.9936** |
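
The accuracy and \(F_{1}\) entries above can be reproduced from account-level predictions with standard metric functions. Below is a minimal sketch using scikit-learn; the labels and predictions are hypothetical placeholders, and the paper's actual aggregation of tweet embeddings into account-level predictions is not shown in this excerpt.

```python
# A minimal sketch, assuming binary account-level labels (e.g., party
# affiliation encoded as 0/1) and hard predictions from a classifier.
# The actual labels and prediction pipeline are assumptions, not the
# paper's data.
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical ground-truth labels and model predictions for 8 accounts.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]

# Accuracy: fraction of accounts classified correctly.
print(f"accuracy: {accuracy_score(y_true, y_pred):.4f}")

# F1: harmonic mean of precision and recall for the positive class.
print(f"F1: {f1_score(y_true, y_pred):.4f}")
```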