
Emotion annotations in COCA


Experimental Data


Enrica Troiano, Sebastian Padó and Roman Klinger


We make available the annotations of the 700 sentences used in Troiano et al. (2021). The sentences are extracted from the Corpus of Contemporary American English (Davies, 2015).

When humans judge the affective content of texts, they also implicitly assess the correctness of that judgment, that is, their confidence. We hypothesize that people's (in)confidence that they performed well in an annotation task leads to (dis)agreements with each other. If this is true, confidence may serve as a diagnostic tool for systematic differences in annotations. To probe our assumption, we conduct a study on a subset of the Corpus of Contemporary American English in which we ask raters to distinguish neutral sentences from emotion-bearing ones, while scoring the confidence of their answers. Confidence turns out to approximate inter-annotator disagreement. Further, we find that confidence is correlated with emotion intensity: perceiving stronger affect in a text prompts annotators to classify it with more certainty. This insight is relevant for modelling studies of intensity, as it opens the question whether automatic regressors or classifiers actually predict intensity, or rather humans' self-perceived confidence.
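To illustrate the kind of analysis this resource supports, the following minimal sketch (with toy data and hypothetical field names, not the actual file format of the release) computes per-sentence annotator agreement and mean confidence, the two quantities whose relation the paper examines:

```python
# Minimal sketch with invented toy data; the actual annotation files of the
# release may use different formats and field names.
from statistics import mean

def agreement(labels):
    # Fraction of annotators who voted for the majority label.
    return max(labels.count(l) for l in set(labels)) / len(labels)

# Toy annotations: per sentence, labels by five raters and the
# confidence score each rater gave (e.g. on a 1-5 scale).
annotations = {
    "s1": (["emotion"] * 5, [5, 5, 4, 5, 5]),                # unanimous, confident
    "s2": (["emotion", "neutral", "emotion", "neutral", "neutral"],
           [2, 3, 2, 3, 2]),                                 # split, unsure
}

for sid, (labels, confidences) in annotations.items():
    print(sid, round(agreement(labels), 2), round(mean(confidences), 2))
```

In this toy example the unanimous sentence also carries the higher mean confidence, which is the pattern the study reports at corpus scale.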


Enrica Troiano, Sebastian Padó, and Roman Klinger. Emotion ratings: How intensity, annotation confidence and agreements are entangled. In Proceedings of the 11th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, 2021.

PD Dr.

Roman Klinger

Senior Lecturer (Akademischer Oberrat)

Prof. Dr.

Sebastian Padó

Chair of Theoretical Computational Linguistics, Managing Director of the IMS
