Appraisal-based Emotion Analysis

Corpora and Models

Type: Corpus

Author: Jan Hofmann, Enrica Troiano, Kai Sassenberg, Laura Oberländer, Maximilian Wegge, Roman Klinger

Description: Corpora and Models for Appraisal Classification for Emotion Analysis

Reference

Jan Hofmann, Enrica Troiano, Kai Sassenberg, and Roman Klinger. Appraisal Theories for Emotion Classification in Text. In Proceedings of the 28th International Conference on Computational Linguistics (COLING), 2020. https://aclanthology.org/2020.coling-main.11/

Jan Hofmann, Enrica Troiano, and Roman Klinger. Emotion-aware, emotion-agnostic, or automatic: A study of manual annotation strategies for cognitive event appraisal and the influence on model performance. In Proceedings of the 11th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, 2021. https://aclanthology.org/2021.wassa-1.17.pdf

Enrica Troiano, Laura Oberländer, Maximilian Wegge, and Roman Klinger. A Corpus of Event Descriptions with Experiencer-specific Emotion and Appraisal Annotations. Accepted at LREC 2022. https://arxiv.org/abs/2203.10909

Enrica Troiano, Laura Oberländer, and Roman Klinger. Appraisal Theories for Dimensional Modelling of Emotions in Text. Under review for the Computational Linguistics journal, 2022.

Download
  • The original data published in the COLING paper is available at https://www.romanklinger.de/data-sets/appraisalEnISEAR.zip. Code is available at https://github.com/bluzukk/appraisal-emotion-classification/. For this paper, we reannotated the enISEAR corpus with seven appraisal dimensions from the perspective of the original author of each event description.
  • The data including the results of the annotation experiments from the WASSA paper can be found at https://www.romanklinger.de/data-sets/HofmannTroianoKlinger-Appraisal-Data.zip. The goal of this work was to evaluate whether appraisals can also be assigned automatically based on the existing emotion categories.
  • The data from the LREC paper (accepted) is available at https://www.romanklinger.de/data-sets/x-enVENT.zip. This data is also a reannotation of the enISEAR data (plus a small number of instances from other resources). The difference to the COLING paper is two-fold: we have ~20 appraisal dimensions, and we annotated experiencer-specifically, i.e., each person who participates in an event receives their own annotation.
  • The corpus crowd-enVENT (described in a paper submitted to the CL Journal for review) consists of event reports in which the authors describe events that caused particular emotions. Each author of an event description also reports their own appraisal according to 21 variables. A preprint of the paper will be linked here soon; a link to the code repository will be made available in July. You can find the data, including a more detailed description, at https://www.romanklinger.de/data-sets/crowd-enVent2022.zip
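
For a first look at any of these archives, the following is a minimal Python sketch that downloads one of the zip files from the Download list and prints the files it contains. Only the URL is taken from this page; the internal layout of the archive (file names and formats) is not specified here and should be checked against the actual release.

    # Minimal sketch (not part of the official release): download one of the
    # corpus archives listed above and list its contents. The file names
    # inside the zip are unknown until inspected.
    import io
    import urllib.request
    import zipfile

    CORPUS_URL = "https://www.romanklinger.de/data-sets/crowd-enVent2022.zip"

    def fetch_corpus(url: str = CORPUS_URL) -> zipfile.ZipFile:
        """Download the zip archive into memory and return it as a ZipFile object."""
        with urllib.request.urlopen(url) as response:
            data = response.read()
        return zipfile.ZipFile(io.BytesIO(data))

    if __name__ == "__main__":
        archive = fetch_corpus()
        for name in archive.namelist():
            print(name)  # discover the actual corpus layout before loading it

The same function applies to the other archives above (appraisalEnISEAR.zip, HofmannTroianoKlinger-Appraisal-Data.zip, x-enVENT.zip) by passing the respective URL.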
Contact: PD Dr. Roman Klinger, Senior Lecturer (Akademischer Oberrat)
