Table 4 Influence of the number of fact-checking articles on UTDRM. The highest scores for each dataset and metric are in bold

From: UTDRM: unsupervised method for training debunked-narrative retrieval models

| Datasets | Metrics | 1K articles | 5K articles | 10K articles | All articles |
|---|---|---|---|---|---|
| Snopes | MAP@1 | 0.750 | 0.794 | 0.803 | **0.831** |
| | MRR | 0.810 | 0.842 | 0.865 | **0.890** |
| CLEF 22 2A | MAP@1 | 0.871 | 0.914 | 0.919 | **0.933** |
| | MRR | 0.904 | 0.932 | 0.936 | **0.948** |
| CLEF 21 2A | MAP@1 | 0.851 | 0.891 | **0.906** | **0.906** |
| | MRR | 0.895 | 0.925 | **0.937** | 0.936 |
| CLEF 20 2A | MAP@1 | 0.894 | 0.940 | 0.935 | **0.945** |
| | MRR | 0.931 | 0.957 | 0.957 | **0.961** |
| Average (Twitter-based) | MAP@1 | 0.842 | 0.885 | 0.891 | **0.904** |
| | MRR | 0.885 | 0.914 | 0.924 | **0.934** |
| Politifact | MAP@1 | 0.428 | 0.484 | 0.508 | **0.516** |
| | MRR | 0.541 | 0.602 | 0.618 | **0.627** |
| CLEF 22 2B | MAP@1 | 0.285 | 0.362 | 0.377 | **0.392** |
| | MRR | 0.381 | 0.436 | 0.450 | **0.467** |
| CLEF 21 2B | MAP@1 | 0.259 | 0.323 | 0.335 | **0.348** |
| | MRR | 0.346 | 0.390 | 0.402 | **0.422** |
| Average (Political-based) | MAP@1 | 0.324 | 0.390 | 0.407 | **0.419** |
| | MRR | 0.423 | 0.476 | 0.490 | **0.505** |
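
For reference, the two metrics reported above can be computed as in the minimal sketch below. This is not the paper's evaluation code; it assumes each query (e.g., a tweet) carries the system's ranked list of fact-check IDs and a gold set of relevant fact-checks. Since debunked-narrative retrieval benchmarks typically have one relevant fact-check per query, AP@1 reduces to a binary hit at rank 1, so MAP@1 is the fraction of queries whose top-ranked fact-check is correct.

```python
def map_at_1(queries):
    """MAP@1: with a single relevant fact-check per query, AP@1 is 1
    if the top-ranked document is relevant and 0 otherwise."""
    hits = [1.0 if q["ranked"][0] in q["relevant"] else 0.0 for q in queries]
    return sum(hits) / len(hits)

def mrr(queries):
    """MRR: mean of 1/rank of the first relevant fact-check retrieved
    (contributes 0 if no relevant document appears in the ranking)."""
    total = 0.0
    for q in queries:
        for rank, doc_id in enumerate(q["ranked"], start=1):
            if doc_id in q["relevant"]:
                total += 1.0 / rank
                break
    return total / len(queries)

# Hypothetical toy data: two queries with ranked fact-check IDs.
queries = [
    {"ranked": ["fc3", "fc1", "fc7"], "relevant": {"fc3"}},  # hit at rank 1
    {"ranked": ["fc2", "fc5", "fc9"], "relevant": {"fc5"}},  # hit at rank 2
]
print(map_at_1(queries))  # 0.5
print(mrr(queries))       # (1.0 + 0.5) / 2 = 0.75
```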