The Hitchhiker's Guide to Testing Statistical Significance in Natural Language Processing
Published in Association for Computational Linguistics (ACL 2018), 2018
Recommended citation: "The Hitchhiker's Guide to Testing Statistical Significance in Natural Language Processing." Rotem Dror, Gili Baumer, Segev Shlomov and Roi Reichart. Association for Computational Linguistics (ACL 2018). https://www.aclweb.org/anthology/papers/P/P18/P18-1128/
Abstract Statistical significance testing is a standard statistical tool designed to ensure that experimental results are not coincidental. In this opinion/theoretical paper we discuss the role of statistical significance testing in Natural Language Processing (NLP) research. We establish the fundamental concepts of significance testing and discuss the specific aspects of NLP tasks, experimental setups and evaluation measures that affect the choice of significance tests in NLP research. Based on this discussion we propose a simple practical protocol for statistical significance test selection in NLP setups and accompany this protocol with a brief survey of the most relevant tests. We then survey recent empirical papers published in ACL and TACL during 2017 and show that while our community assigns great value to experimental results, statistical significance testing is often ignored or misused. We conclude with a brief discussion of open issues that should be properly addressed so that this important tool can be applied in NLP research in a statistically sound manner.
GitHub https://github.com/rtmdrr/testSignificanceNLP
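To give a flavor of the kind of test the paper surveys, below is a minimal sketch of a paired bootstrap significance test over per-instance scores of two systems evaluated on the same test set. The function name and the toy data are illustrative only and are not taken from the testSignificanceNLP repository, which provides its own implementations of this and other tests.

```python
import numpy as np


def paired_bootstrap_test(scores_a, scores_b, n_samples=10_000, seed=0):
    """Paired bootstrap test for the difference in mean score between
    two systems scored on the same test instances.

    scores_a, scores_b: per-instance scores (e.g., 0/1 correctness)
    of systems A and B. Returns the fraction of bootstrap resamples
    in which A's advantage over B disappears (a p-value estimate).
    """
    rng = np.random.default_rng(seed)
    scores_a = np.asarray(scores_a, dtype=float)
    scores_b = np.asarray(scores_b, dtype=float)
    assert scores_a.shape == scores_b.shape
    n = len(scores_a)

    count = 0
    for _ in range(n_samples):
        # Resample test instances with replacement (paired for both systems).
        idx = rng.integers(0, n, size=n)
        delta = scores_a[idx].mean() - scores_b[idx].mean()
        if delta <= 0:  # A's observed advantage vanished in this resample.
            count += 1
    return count / n_samples


if __name__ == "__main__":
    # Toy per-sentence accuracies for two hypothetical systems.
    sys_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
    sys_b = [1, 0, 0, 1, 0, 0, 1, 1, 0, 0]
    print("estimated p-value:", paired_bootstrap_test(sys_a, sys_b))
```

In practice the choice between such sampling-based tests and parametric alternatives depends on the evaluation measure and the assumptions one can make about its distribution, which is exactly the selection question the paper's protocol addresses.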