Coding coherence relations: Reliability and validity

W.P.M.S. Spooren, L. Degand

    Research output: Contribution to Journal › Article › Academic › peer-review


    This paper tackles the issue of the validity and reliability of coding discourse phenomena in corpus-based analyses. On the basis of a sample analysis of coherence relation annotation that resulted in a poor kappa score, we describe the problem and put it into the context of recent literature from the field of computational linguistics on required intercoder agreement. We describe our view on the consequences of the current state of the art and suggest three routes to follow in the coding of coherence relations: double coding (including discussion of disagreements and explicitation of the coding decisions), single coding (including the risk of coder bias and a lack of generalizability), and enriched kappa statistics (including observed and specific agreement, and a discussion of the (possible reasons for) disagreement). We end with a plea for complementary techniques for testing the robustness of our data with the help of automatic (text mining) techniques. © 2010 Walter de Gruyter GmbH & Co. KG.
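    The "enriched kappa statistics" route the abstract mentions combines Cohen's kappa with observed agreement and per-category specific agreement. A minimal illustrative sketch of these three measures for two coders' parallel annotations (the labels and data below are invented for demonstration, not the authors' coding scheme):

    ```python
    from collections import Counter

    def agreement_stats(coder_a, coder_b):
        """Observed agreement, Cohen's kappa, and per-category specific
        agreement for two coders' labels over the same items."""
        assert len(coder_a) == len(coder_b)
        n = len(coder_a)
        # Observed agreement: proportion of items labelled identically.
        p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        # Chance agreement from each coder's marginal label distribution.
        freq_a, freq_b = Counter(coder_a), Counter(coder_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        # Cohen's kappa: chance-corrected agreement.
        kappa = (p_o - p_e) / (1 - p_e)
        # Specific agreement per category: 2 * joint uses / total uses,
        # showing which relation types drive the (dis)agreement.
        pairs = Counter(zip(coder_a, coder_b))
        specific = {
            c: 2 * pairs[(c, c)] / (freq_a[c] + freq_b[c])
            for c in set(coder_a) | set(coder_b)
        }
        return p_o, kappa, specific

    # Toy example: 10 discourse segments, hypothetical relation labels.
    a = ["cause", "cause", "contrast", "list", "cause",
         "list", "contrast", "cause", "list", "cause"]
    b = ["cause", "contrast", "contrast", "list", "cause",
         "list", "cause", "cause", "list", "list"]
    p_o, kappa, specific = agreement_stats(a, b)
    # p_o = 0.7, kappa ≈ 0.53; specific agreement reveals that
    # "contrast" (0.5) is coded far less consistently than "list" (≈0.86).
    ```

    Reporting the specific-agreement breakdown alongside kappa is what makes the statistic "enriched": a single kappa value hides which coherence relations the coders actually disagree on.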
    Original language: English
    Pages (from-to): 241-266
    Number of pages: 26
    Journal: Corpus Linguistics and Linguistic Theory
    Issue number: 2
    Publication status: Published - 2010
