Abstracts


Resolving Abstract Anaphora Signalled by Anaphoric Shell Nouns

Varada Kolhatkar and Graeme Hirst (Toronto)

Noun phrases such as this issue, this fact, and this reason are anaphoric (or cataphoric), as they need the help of the preceding (or sometimes following) text for correct interpretation. Such noun phrases occur frequently in written texts and generally have antecedents that are second-order or third-order entities. We call abstract nouns such as fact, reason, and issue anaphoric shell nouns. An example is given below, where the anaphor this issue has a non-nominal abstract antecedent marked in bold.

(1) There is a controversial debate whether back school program might improve quality of life in back pain patients. This study aimed to address this issue.

Anaphoric shell nouns play an important role in discourse understanding, such as linking and topic changing (Schmid 2000). But despite their importance, current research on the resolution of such anaphors is rather limited. To get a good grasp of this problem, we study the focused problem of this-issue anaphora resolution in the medical domain. We propose a candidate ranking model for this-issue anaphora resolution that explores different issue-specific and general abstract-anaphora resolution features. The model is not restricted to nominal or verbal antecedents; rather, it is able to identify antecedents that are arbitrary spans of text. Our results show that (a) the model outperforms the strong adjacent-sentence baseline; (b) general abstract-anaphora features, as distinguished from issue-specific features, play a crucial role in this-issue anaphora resolution, suggesting that our approach can be generalized to other NPs such as this problem and this fact; and (c) it is possible to reduce the search space in order to improve performance.

Active Learning for Coreference Resolution

Florian Laws (Stuttgart)

We present an active learning method for coreference resolution that is novel in three respects. (i) It uses bootstrapped neighborhood pooling, which ensures a class-balanced pool even though gold labels are not available. (ii) It employs neighborhood selection, a selection strategy that ensures coverage of both positive and negative links for selected markables. (iii) It is based on a query-by-committee selection strategy in contrast to earlier uncertainty sampling work. Experiments show that this new method outperforms random sampling in terms of both annotation effort and peak performance.

Machine Learning for Coreference Resolution: A Decade of Exploration

Vincent Ng (Dallas)

Noun phrase coreference resolution is the task of determining the noun phrases that refer to the same real-world entity in a text or dialogue. It is generally considered one of the most challenging tasks in natural language processing, owing in part to its reliance on sophisticated knowledge sources and inference mechanisms. With the advent of the statistical NLP era, researchers have begun investigating machine learning approaches to coreference resolution. One of the earliest and most well known learning-based coreference resolvers, Soon et al.'s (2001) system, employs the so-called mention-pair model, which has several major weaknesses given its overly simplistic modeling assumption. Worse still, by adopting a knowledge-lean approach, their system is unable to handle anaphora whose resolution requires sophisticated world knowledge. We show how our work over the past decade has enabled us to (1) design a learning-based coreference model that overcomes the major weaknesses of the mention-pair model; and (2) acquire and exploit sophisticated knowledge for resolving complex cases of anaphora. We will conclude with a discussion of some promising research directions in this area.

This talk is part of the guest lecture series of SFB 732.

Annotation of Anaphoric Shell Nouns Using Crowdsourcing

Varada Kolhatkar (Toronto) and Heike Zinsmeister (Stuttgart)

"Shell nouns" such as this fact often refer to non-nominal antecedents in the text (Schmid 2000). This fact makes their resolution very challenging.

Kolhatkar and Hirst (2012) demonstrated the possibility of resolving such anaphoric shell nouns (see also Thursday's talk by Varada Kolhatkar). At present, the major obstacle is that there is very little annotated data available that could be used to train an abstract anaphora resolution system.

In this talk, we will focus on creating a large-scale corpus for abstract anaphora using crowdsourcing. The results of our preliminary experiments were encouraging, with reasonable inter-annotator agreement.

Multigraph Models for Coreference Resolution

Sebastian Martschat (Heidelberg)

We present a supervised multigraph model for coreference resolution and perform an error analysis of our model's output on the data of the CoNLL-2012 shared task. In the second part of the talk we present two variants of our model where we gradually move to an unsupervised setting.

The IMS CoNLL 2012 Contribution and Initial Experiments on English Domain Adaptation

Anders Björkelund (Stuttgart)

I will describe the system with which the IMS participated in this year's CoNLL shared task, "Modeling Multilingual Unrestricted Coreference in OntoNotes". The system obtained the second-best overall rank in the shared task. I will then present some initial experiments on domain adaptation on the English data from the shared task using our system.

Multilingual Coreference Resolution

Desislava Zhechova (Indiana) and Sandra Kübler (Indiana/Tübingen)

Our talk will discuss the ways in which coreference resolution (CR) can be transformed into a multilingual task. We look more deeply into the problems that occur within a machine-learning-based CR system when more than one language is targeted. In particular, we will concentrate on one aspect of the multilingual pipeline: mention detection (MD). We will present a range of methods for MD that can be employed within a multilingual CR system, covering rule-based, machine-learning, and hybrid approaches. We will present an extensive evaluation on two distinct datasets across 8 different languages.

Collective Classification for Fine-grained Information Status

Yufang Hou (Heidelberg)

Previous work on classifying information status is restricted to coarse-grained (three-way) classification and focuses on conversational dialogue. We here introduce the task of classifying fine-grained information status and work on written text. To this end, we add a fine-grained information status layer to the Wall Street Journal portion of the OntoNotes corpus. We claim that the information status of a mention depends not only on the mention itself but also on other mentions in the vicinity, and we therefore solve the task by collectively classifying the information status of all mentions.

Automatically Acquiring Fine-grained Information Status Distinctions in German

Arndt Riester (Stuttgart)

(Joint work with Aoife Cahill, ETS Princeton) We present a model for automatically predicting information status labels for German referring expressions. We train a CRF on manually annotated phrases and predict a fine-grained set of labels. We achieve an accuracy score of 69.56% on our most detailed label set, and 76.62% when gold-standard coreference is available.
