Table 5 Influence of the generated claims on UTDRM. The highest scores for each dataset and metric are in bold

From: UTDRM: unsupervised method for training debunked-narrative retrieval models

N denotes the number of generated claims. The highest score in each row is in bold.

| Datasets | Metrics | N = 2 | N = 6 | N = 10 | N = 20 |
| --- | --- | --- | --- | --- | --- |
| Snopes | MAP@1 | 0.821 | **0.831** | 0.830 | 0.829 |
| | MRR | 0.881 | **0.890** | **0.890** | 0.889 |
| CLEF 22 2A | MAP@1 | 0.914 | **0.933** | **0.933** | **0.933** |
| | MRR | 0.934 | 0.948 | 0.948 | **0.949** |
| CLEF 21 2A | MAP@1 | **0.906** | **0.906** | 0.901 | 0.896 |
| | MRR | **0.936** | **0.936** | 0.932 | 0.931 |
| CLEF 20 2A | MAP@1 | 0.935 | **0.945** | **0.945** | **0.945** |
| | MRR | 0.957 | 0.961 | 0.963 | **0.964** |
| Average (Twitter-based) | MAP@1 | 0.894 | **0.904** | 0.902 | 0.901 |
| | MRR | 0.927 | **0.934** | 0.933 | 0.933 |
| Politifact | MAP@1 | 0.508 | **0.516** | 0.500 | 0.496 |
| | MRR | 0.616 | **0.627** | 0.619 | 0.615 |
| CLEF 22 2B | MAP@1 | 0.362 | 0.392 | **0.400** | 0.392 |
| | MRR | 0.441 | 0.467 | **0.473** | 0.468 |
| CLEF 21 2B | MAP@1 | 0.323 | 0.348 | **0.354** | 0.335 |
| | MRR | 0.394 | 0.422 | **0.424** | 0.416 |
| Average (Political-based) | MAP@1 | 0.397 | **0.419** | 0.418 | 0.408 |
| | MRR | 0.484 | **0.505** | **0.505** | 0.499 |
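For readers unfamiliar with the two reported metrics, the sketch below shows how MAP@1 and MRR are conventionally computed for this kind of retrieval task, where each query (an input claim) has a set of relevant fact-checks. The function and variable names (`map_at_1`, `mrr`, `rankings`, `relevants`) are illustrative, not taken from the UTDRM implementation.

```python
# Minimal sketch of the two reported metrics. For each query, a "ranking"
# is the list of retrieved fact-check IDs (best first) and "relevant"
# is the set of IDs of the correct debunks. Names are illustrative only.

def map_at_1(rankings, relevants):
    """MAP@1: with a cutoff of 1, average precision reduces to whether the
    top-ranked result is relevant, so this is the fraction of queries whose
    first result is a correct debunk."""
    hits = sum(1 for ranking, relevant in zip(rankings, relevants)
               if ranking and ranking[0] in relevant)
    return hits / len(rankings)

def mrr(rankings, relevants):
    """Mean Reciprocal Rank: average over queries of 1/rank of the first
    relevant result (contributing 0 if no relevant result is retrieved)."""
    total = 0.0
    for ranking, relevant in zip(rankings, relevants):
        for rank, doc_id in enumerate(ranking, start=1):
            if doc_id in relevant:
                total += 1.0 / rank
                break
    return total / len(rankings)

# Toy example: two queries over a small candidate pool.
rankings = [["fc3", "fc1"], ["fc2", "fc7", "fc5"]]
relevants = [{"fc3"}, {"fc5"}]
print(map_at_1(rankings, relevants))  # 0.5 (only the first query hits at rank 1)
print(mrr(rankings, relevants))       # (1/1 + 1/3) / 2 ≈ 0.667
```

Read this way, a Snopes MAP@1 of 0.831 means that for roughly 83% of queries the top-ranked fact-check is a correct debunk.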