Improving Large-Scale Fact-Checking using Decomposable Attention Models and Lexical Tagging

Published in The Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018

[PDF] [Poster]


@InProceedings{D18-1143,
  author    = "Lee, Nayeon and Wu, Chien-Sheng and Fung, Pascale",
  title     = "Improving Large-Scale Fact-Checking using Decomposable Attention Models and Lexical Tagging",
  booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
  year      = "2018",
  publisher = "Association for Computational Linguistics",
  address   = "Brussels, Belgium",
  pages     = "1133--1138",
}

Abstract

Fact-checking of textual sources needs to effectively extract relevant information from large knowledge bases. In this paper, we extend an existing pipeline approach to better tackle this problem. We propose a neural ranker using a decomposable attention model that dynamically selects sentences to achieve promising improvement in evidence retrieval F1 by 38.80%, with ($\times$65) speedup compared to a TF-IDF method. Moreover, we incorporate lexical tagging methods into our pipeline framework to simplify the tasks and render the model more generalizable. As a result, our framework achieves promising performance on a large-scale fact extraction and verification dataset with speedup.