From: UTDRM: unsupervised method for training debunked-narrative retrieval models
| Datasets | Metrics | Entity Inoculation (GPE) | Entity Inoculation (PERSON) | Entity Inoculation (ORG) | Entity Inoculation (Combine) | UTDRM (Default) |
|---|---|---|---|---|---|---|
| Snopes | MAP@1 | 0.831 | 0.831 | 0.841 | 0.821 | 0.831 |
| | MRR | 0.889 | 0.891 | 0.893 | 0.881 | 0.890 |
| CLEF 22 2A | MAP@1 | 0.928 | 0.919 | 0.923 | 0.919 | 0.933 |
| | MRR | 0.942 | 0.936 | 0.942 | 0.935 | 0.948 |
| CLEF 21 2A | MAP@1 | 0.916 | 0.901 | 0.901 | 0.906 | 0.906 |
| | MRR | 0.940 | 0.929 | 0.932 | 0.932 | 0.936 |
| CLEF 20 2A | MAP@1 | 0.940 | 0.930 | 0.940 | 0.935 | 0.945 |
| | MRR | 0.957 | 0.955 | 0.958 | 0.955 | 0.961 |
| Average (Twitter-based) | MAP@1 | 0.904 | 0.895 | 0.901 | 0.895 | 0.904 |
| | MRR | 0.932 | 0.928 | 0.931 | 0.926 | 0.934 |
| Politifact | MAP@1 | 0.492 | 0.527 | 0.512 | 0.512 | 0.516 |
| | MRR | 0.613 | 0.637 | 0.631 | 0.633 | 0.627 |
| CLEF 22 2B | MAP@1 | 0.415 | 0.400 | 0.415 | 0.423 | 0.392 |
| | MRR | 0.482 | 0.471 | 0.482 | 0.495 | 0.467 |
| CLEF 21 2B | MAP@1 | 0.367 | 0.354 | 0.367 | 0.373 | 0.348 |
| | MRR | 0.433 | 0.423 | 0.433 | 0.442 | 0.422 |
| Average (Political-based) | MAP@1 | 0.425 | 0.427 | 0.431 | 0.436 | 0.419 |
| | MRR | 0.509 | 0.510 | 0.515 | 0.524 | 0.505 |
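The table reports MAP@1 and MRR for each retrieval model. As a minimal sketch of how these two metrics are conventionally computed (the paper's own evaluation code is not shown here; `ranked_relevance` is a hypothetical input giving, per query, relevance flags for the ranked candidates):

```python
def map_at_1(ranked_relevance):
    # With a rank cutoff of 1, average precision reduces to whether the
    # top-ranked candidate is relevant, averaged over all queries.
    return sum(1.0 if ranks and ranks[0] else 0.0
               for ranks in ranked_relevance) / len(ranked_relevance)

def mrr(ranked_relevance):
    # Mean reciprocal rank: 1 / (rank of first relevant candidate),
    # contributing 0 when no relevant candidate is retrieved.
    total = 0.0
    for ranks in ranked_relevance:
        for rank, relevant in enumerate(ranks, start=1):
            if relevant:
                total += 1.0 / rank
                break
    return total / len(ranked_relevance)

# Toy run: three queries, relevance flags in ranked order.
runs = [[True, False], [False, True], [False, False]]
print(round(map_at_1(runs), 3))  # 0.333 (only query 1 has a relevant top hit)
print(round(mrr(runs), 3))       # 0.5   ((1 + 1/2 + 0) / 3)
```

MRR is always at least as large as MAP@1 on the same run, which matches the table, where every MRR cell exceeds its MAP@1 counterpart.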