Table 6 Different evaluation metrics for the LR classifier evaluated on different size classes of the US dataset, trained separately on the features of each layer. Best scores in each row are shown in bold

From: A multi-layer approach to disinformation detection in US and Italian news spreading on Twitter

| Size class | Metric | Quotes | Retweets | Mentions | Replies |
|---|---|---|---|---|---|
| [0,100) | AUROC | **0.75 ± 0.02** | 0.63 ± 0.02 | **0.75 ± 0.02** | 0.61 ± 0.02 |
| | Precision | **0.71 ± 0.02** | 0.59 ± 0.02 | 0.70 ± 0.02 | 0.60 ± 0.04 |
| | Recall | 0.66 ± 0.01 | 0.55 ± 0.01 | **0.67 ± 0.01** | 0.54 ± 0.02 |
| | F1-score | 0.66 ± 0.02 | 0.53 ± 0.02 | **0.68 ± 0.02** | 0.50 ± 0.06 |
| [100,1000) | AUROC | **0.81 ± 0.02** | 0.63 ± 0.02 | **0.81 ± 0.02** | 0.65 ± 0.03 |
| | Precision | 0.73 ± 0.02 | 0.61 ± 0.02 | **0.75 ± 0.02** | 0.65 ± 0.02 |
| | Recall | 0.73 ± 0.02 | 0.60 ± 0.02 | **0.75 ± 0.02** | 0.62 ± 0.02 |
| | F1-score | 0.73 ± 0.02 | 0.60 ± 0.02 | **0.75 ± 0.02** | 0.60 ± 0.02 |
| [1000,+∞) | AUROC | **0.85 ± 0.08** | 0.62 ± 0.08 | 0.84 ± 0.04 | 0.66 ± 0.06 |
| | Precision | **0.80 ± 0.08** | 0.61 ± 0.08 | 0.75 ± 0.06 | 0.61 ± 0.10 |
| | Recall | **0.80 ± 0.08** | 0.60 ± 0.07 | 0.75 ± 0.06 | 0.59 ± 0.07 |
| | F1-score | **0.79 ± 0.08** | 0.59 ± 0.08 | 0.75 ± 0.06 | 0.58 ± 0.09 |
| [0,+∞) | AUROC | 0.76 ± 0.01 | 0.62 ± 0.01 | **0.77 ± 0.01** | 0.59 ± 0.04 |
| | Precision | 0.70 ± 0.01 | 0.58 ± 0.01 | **0.73 ± 0.01** | 0.59 ± 0.05 |
| | Recall | 0.69 ± 0.01 | 0.56 ± 0.01 | **0.71 ± 0.01** | 0.55 ± 0.03 |
| | F1-score | 0.69 ± 0.01 | 0.53 ± 0.01 | **0.71 ± 0.01** | 0.52 ± 0.05 |
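For readers who want to reproduce numbers of this form, the sketch below shows one plausible way to obtain per-layer scores like those in Table 6: a scikit-learn logistic regression evaluated with stratified k-fold cross-validation, reporting mean ± standard deviation for AUROC, precision, recall, and F1. This is not the authors' code; the 5-fold setup, the interpretation of "±" as the standard deviation across folds, and the feature matrix `X_quotes` are assumptions introduced here for illustration.

```python
# Minimal sketch (not the paper's implementation) of a per-layer LR evaluation.
# Assumptions: binary labels (1 = disinformation, 0 = mainstream), one feature
# matrix per network layer, and ± reported as std across CV folds.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_validate

def evaluate_layer(X, y, n_splits=5, seed=0):
    """Return {metric: (mean, std)} over k stratified CV folds for one layer.

    X: (n_samples, n_features) features extracted from a single layer
       (e.g. the quotes, retweets, mentions, or replies network).
    y: (n_samples,) binary class labels.
    """
    clf = LogisticRegression(max_iter=1000)
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = cross_validate(
        clf, X, y, cv=cv,
        scoring=["roc_auc", "precision", "recall", "f1"],
    )
    return {
        m: (scores[f"test_{m}"].mean(), scores[f"test_{m}"].std())
        for m in ["roc_auc", "precision", "recall", "f1"]
    }

# Hypothetical usage, with X_quotes holding quotes-layer features:
# for metric, (mean, std) in evaluate_layer(X_quotes, y).items():
#     print(f"{metric}: {mean:.2f} ± {std:.2f}")
```

Restricting `X` and `y` to cascades in a given size class before calling `evaluate_layer` would yield one row group of the table; running it on each layer's features in turn yields the four columns.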