Colloquium for Computational Linguistics and Linguistics in Stuttgart

The colloquium hosts talks by external guest speakers and by visitors to the (computational) linguistics departments in Stuttgart.

Schedule for Winter Term 2018/19

Date | Time | Speaker & Title | Room | Host(s)
13.09.2018 | 11.30 | Peter Turney (Independent Researcher): Natural Selection of Words: Finding the Features of Fitness | FZI, V5.01 | Dominik Schlechtweg, Sabine Schulte im Walde
07.11.2018 | 11.30 | Dan Jurafsky (Stanford University): Computational Extraction of Social Meaning from Language | CS, V38.04 (ground floor) | Gabriella Lapesa
09.11.2018 | 14.00 | Yadollah Yaghoobzadeh (Microsoft Research): Distributed Representations for Fine-Grained Entity Typing | FZI, 02.026 | Thang Vu
12.11.2018 | 14.00 | Diana McCarthy (University of Cambridge): Word Sense Models: From static and discrete to dynamic and continuous | FZI, V5.02 | Dominik Schlechtweg
13.11.2018 | 17.30 | Markus Steinbach (Universität Göttingen): Iconicity in narration. The linguistic meaning of gestures. | K2, 17.24 | Daniel Hole
26.11.2018 | 14.00 | Sven Büchel (Universität Jena): From Sentiment to Emotion: Challenges of a More Fine-Grained Analysis of Affective Language | FZI, V5.02 | Roman Klinger
29.11.2018 | 15.45 | Anders Søgaard (University of Copenhagen): Hegel's Holiday: an Argument for a Less Empiricist NLP? | FZI, V5.01 | Jonas Kuhn
18.12.2018 | 17.30 | Judith Degen (Stanford University): On the natural distribution of "some" and "or": consequences for theories of scalar implicature | K2, 17.24 | Judith Tonhauser
28.01.2019 | 14.00 | Anna Hätty (Universität Stuttgart/BOSCH) | FZI, V5.01 | Sabine Schulte im Walde


Abstracts

Peter Turney (joint work with Saif M. Mohammad):
Natural Selection of Words: Finding the Features of Fitness
(Thu, Sep 13, 2018)

According to WordNet, clarity, clearness, limpidity, lucidity, lucidness, and pellucidity are synonymous; all of them mean free from obscurity and easy to understand. Google Books Ngram Viewer shows that clearness was, by far, the most popular member of this synset (synonym set) from 1800 to 1900 AD. After 1900, the popularity of clarity rose, surpassing clearness in 1934. By 1980, clarity was, by far, the most popular member of the synset and clearness had dropped down to the low level of lucidity. We view this competition among words as analogous to biological evolution by natural selection. The leading word in a synset is like the leading species in a genus. The number of tokens of a word in a corpus corresponds to the number of individuals of a species in an environment. In both cases, natural selection determines which word or species will dominate a synset or genus. Species in a genus compete for resources in similar environments, just as words in a synset compete to represent similar meanings. We present an algorithm that is able to predict when the leading member of a synset will change, using features based on a word’s length, its characters, and its corpus statistics. The algorithm also gives some insight into what causes a synset’s leader to change. We evaluate the algorithm with 9,000 synsets, containing 22,000 words. In a 50 year period, about 12 to 14 percent of the synsets experience a change in leadership. We can predict changes 50 years ahead with an F-score of 46 percent, whereas random guessing yields 14 to 19 percent. This line of research contributes to the sciences of evolutionary theory and computational linguistics, but it may also lead to practical applications in natural language generation and understanding. Evolutionary trends in language are the result of many individuals, making many decisions about which word to use to express a given idea in a given situation. A model of the natural selection of words can help us to understand how such decisions are made, which will enable computers to make better decisions about language use. Modeling trends in words will also be useful in advertising and in analysis of social networks.
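The abstract does not spell out the algorithm, so the sketch below is only a rough, hypothetical framing of the prediction task; the feature set, the toy counts, and the off-the-shelf classifier are assumptions for illustration, not Turney and Mohammad's implementation. The idea is to featurize a synset from its members' surface forms and corpus statistics and train a binary classifier that predicts whether the current leader will be overtaken within the next 50 years.

```python
# Hypothetical sketch: predict whether a synset's leading word will change.
# Features and classifier are illustrative assumptions, not the talk's method.
from sklearn.ensemble import RandomForestClassifier

def word_features(word, freq_now, freq_past):
    """Surface-form and corpus-statistics features for one synset member."""
    return [
        len(word),                                   # word length
        sum(ch in "aeiou" for ch in word),           # crude character feature
        freq_now,                                    # current corpus frequency
        (freq_now - freq_past) / max(freq_past, 1),  # relative growth
    ]

def synset_features(members):
    """members: list of (word, freq_now, freq_past), current leader first."""
    leader = word_features(*members[0])
    runner_up = word_features(*members[1])
    # Contrast the leader with its strongest competitor.
    return [l - r for l, r in zip(leader, runner_up)]

# Toy training data: y = 1 if the synset's leadership changed 50 years later.
X = [
    synset_features([("clearness", 120, 300), ("clarity", 180, 40)]),
    synset_features([("dog", 5000, 4500), ("hound", 200, 400)]),
]
y = [1, 0]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([synset_features([("lucidity", 30, 60), ("lucidness", 5, 8)])]))
```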

Bio: Dr. Peter Turney is an independent researcher and writer in Gatineau, Quebec. He was a Principal Research Officer at the National Research Council of Canada (NRC), where he worked from 1989 to 2014. He was then a Senior Research Scientist at the Allen Institute for Artificial Intelligence (AI2), where he worked from 2015 to 2017. He has conducted research in AI for over 27 years and has more than 100 publications with more than 18,000 citations. He received a Ph.D. in philosophy from the University of Toronto in 1988, specializing in philosophy of science. He has been an Editor of Canadian Artificial Intelligence magazine, an Editorial Board Member, Associate Editor, and Advisory Board Member of the Journal of Artificial Intelligence Research, and an Editorial Board Member of the journal Computational Linguistics. He was the Editor of the ACL Wiki from 2006, when it began, up to 2017. He was an Adjunct Professor at the University of Ottawa, School of Electrical Engineering and Computer Science, from 2004 to 2015.

Dan Jurafsky:
Computational Extraction of Social Meaning from Language
(Wed, Nov 7, 2018)

I give an overview of research from our lab on computationally extracting social meaning from language, meaning that takes into account social relationships between people. I'll describe our study of interactions between police and community members in traffic stops recorded in body-worn camera footage, using language to measure interaction quality, study the role of race, and draw suggestions for going forward in this fraught area. I'll describe computational methods for studying how meaning changes over time and new work on using these models to study historical societal biases and cultural preconceptions. And I'll discuss our work on framing, including agenda-setting in government-controlled media and framing of gender on social media. Together, these studies highlight how computational methods can help us interpret some of the latent social content behind the words we use.

Yadollah Yaghoobzadeh:
Distributed Representations for Fine-Grained Entity Typing
(Fri, Nov 9, 2018)

Extracting information about entities remains an important research area. In this talk, I address the problem of fine-grained entity typing, i.e., inferring from a large text corpus that an entity is a member of a class, such as "food" or "artist". The application we are interested in is knowledge base completion, specifically, to learn which classes an entity is a member of. Neural networks (NNs) have shown promising results in different machine learning problems. Distributed representation (embedding) is an effective way of representing data for NNs. In this work, we introduce two models for fine-grained entity typing using NNs with distributed representations of language units: (i) A global model that predicts types of an entity based on its global representation learned from the entity’s name and contexts. (ii) A context model that predicts types of an entity based on its context-level predictions. Each of the two proposed models has specific properties. For the global model, learning high-quality entity representations is crucial. Therefore, we introduce representations on the three levels of entity, word, and character. We show that each level provides complementary information and a multi-level representation performs best. For the context model, we need to use distant supervision since there are no context-level labels available for entities. Distantly supervised labels are noisy and this harms the performance of models. Therefore, we introduce new algorithms for noise mitigation using multi-instance learning. I will cover the experimental results of these models on a dataset made from Freebase.
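As a rough illustration of the global model's multi-level idea, the sketch below combines entity-, word-, and character-level embeddings of an entity and its name into a single representation and scores types in a multi-label fashion. The dimensions, layer choices, and pooling are assumptions made for illustration, not the models presented in the talk.

```python
# Minimal sketch of a multi-level entity representation for fine-grained typing.
# Sizes and architecture are illustrative assumptions.
import torch
import torch.nn as nn

class GlobalEntityTyper(nn.Module):
    def __init__(self, n_entities, n_words, n_chars, n_types, dim=50):
        super().__init__()
        self.entity_emb = nn.Embedding(n_entities, dim)  # entity level
        self.word_emb = nn.Embedding(n_words, dim)       # word level (name tokens)
        self.char_emb = nn.Embedding(n_chars, dim)       # character level
        self.classifier = nn.Linear(3 * dim, n_types)    # multi-label type scores

    def forward(self, entity_id, name_word_ids, name_char_ids):
        e = self.entity_emb(entity_id)                   # (batch, dim)
        w = self.word_emb(name_word_ids).mean(dim=1)     # average over name tokens
        c = self.char_emb(name_char_ids).mean(dim=1)     # average over characters
        return self.classifier(torch.cat([e, w, c], dim=-1))

model = GlobalEntityTyper(n_entities=1000, n_words=5000, n_chars=100, n_types=50)
logits = model(torch.tensor([3]),              # one entity id
               torch.tensor([[10, 42]]),       # its name as word ids
               torch.tensor([[5, 7, 7, 2]]))   # its name as character ids
probs = torch.sigmoid(logits)                  # independent probability per type
```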

Diana McCarthy:
Word Sense Models: From static and discrete to dynamic and continuous
(Mon, Nov 12, 2018)

Traditionally, word sense disambiguation models assumed a fixed list of word senses to select from when assigning sense tags to token occurrences in text. This was despite the overwhelming evidence that the meanings of a word depend on the broader contexts (such as time and domain) in which they are spoken or written, and that the boundaries between different meanings are often not clear-cut. In this talk I will give an overview of my work, with various collaborators, attempting to address these issues. I will first discuss work to estimate the frequency distributions of word senses from different textual sources, and then work to detect changes across diachronic corpora. In some of this work we detect such changes with respect to pre-determined sense inventories, while in other work we automatically induce the word senses. One major issue with either approach is that the meanings of a word are often highly related and some words are particularly hard to partition into discrete meanings. I will end the talk with a summary of our work to detect how readily a word can be split into senses and discuss how this might help in producing more realistic models of lexical ambiguity.
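The abstract does not describe how the sense frequency distributions are estimated; purely as an illustration of the task (not the method from the talk), the toy sketch below assigns each occurrence of a word to the sense whose gloss overlaps most with the occurrence's context and normalizes the resulting counts.

```python
# Toy illustration: estimate a word's sense distribution from corpus contexts
# by matching each context to the most similar sense gloss. The similarity
# measure and example data are assumptions, not the approach from the talk.
from collections import Counter

def overlap(context_words, gloss_words):
    """Crude similarity: number of shared (lowercased) word types."""
    return len(set(w.lower() for w in context_words) &
               set(w.lower() for w in gloss_words))

def sense_distribution(contexts, sense_glosses):
    """contexts: list of token lists; sense_glosses: sense -> gloss token list."""
    counts = Counter()
    for ctx in contexts:
        best = max(sense_glosses, key=lambda s: overlap(ctx, sense_glosses[s]))
        counts[best] += 1
    total = sum(counts.values())
    return {sense: counts[sense] / total for sense in sense_glosses}

glosses = {
    "bank.financial": "an institution that accepts deposits and makes loans".split(),
    "bank.river": "sloping land beside a river or body of water".split(),
}
contexts = [
    "she kept her savings at the bank and took out loans".split(),
    "the boat drifted toward the river bank".split(),
    "the bank approved the loans".split(),
]
print(sense_distribution(contexts, glosses))  # roughly 2/3 financial, 1/3 river
```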

Markus Steinbach:
Iconicity in narration. The linguistic meaning of gestures.
(Tue, Nov 13, 2018)

In this talk, I will investigate how sign languages interact with gestures in narration and how iconic gestural aspects of meaning are integrated into the discourse semantic representation of spoken and signed narratives. The analysis will be based on corpus data. In order to account for the complex interaction of gestural and linguistic elements in narration, a modified version of Meir et al.’s (2007) analysis of body as subject and Davidson’s (2015) analysis of role shift in terms of (iconic) demonstration will be developed. One focus will be on quantitative and qualitative differences between sign and spoken languages.

Sven Büchel:
From Sentiment to Emotion: Challenges of a More Fine-Grained Analysis of Affective Language.
(Mon, Nov 26, 2018)

Early work in sentiment analysis focused almost exclusively on the distinction between positive and negative emotion. However, in recent years, a trend towards more sophisticated representations of human affect, often rooted in psychological theory, has emerged. Complex annotation formats, e.g., inspired by the notion of "basic emotions" or "valence and arousal", allow for increased expressiveness. Yet, they also come with higher annotation costs and lower agreement. Even worse, in the absence of a community-wide consensus, the field currently suffers from a proliferation of competing annotation formats resulting in a shortage of training data for each individual format. In this talk, I will discuss the general trend towards more complex representations of emotion in NLP before reporting on our own work. In particular, we introduced a method to convert between popular annotation formats, thus making incompatible datasets compatible again. Moreover, we achieved close-to-human performance for both sentence- and word-level emotion prediction despite heavy data limitations. I will conclude with two application studies from computational social science and the digital humanities, highlighting the merits of emotion over bi-polar sentiment.
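The conversion method itself is not detailed in the abstract; as a rough, hypothetical illustration of mapping between annotation formats, the sketch below fits a linear regression from valence/arousal scores to basic-emotion intensities using a handful of words annotated in both formats. The toy values and the choice of a linear model are assumptions, not the authors' method or data.

```python
# Hypothetical sketch: learn a mapping from a valence/arousal format to
# basic-emotion intensities from a small doubly annotated seed lexicon.
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy seed lexicon: rows correspond to words annotated in both formats.
va = np.array([[0.9, 0.6],    # e.g. "delight": valence, arousal
               [0.1, 0.8],    # e.g. "rage"
               [0.2, 0.3],    # e.g. "grief"
               [0.8, 0.2]])   # e.g. "calm"
basic = np.array([[0.9, 0.0, 0.0],   # joy, anger, sadness
                  [0.0, 0.9, 0.1],
                  [0.1, 0.1, 0.9],
                  [0.5, 0.0, 0.1]])

converter = LinearRegression().fit(va, basic)  # one regression per target emotion
new_word_va = np.array([[0.15, 0.75]])         # word annotated only with valence/arousal
print(converter.predict(new_word_va))          # predicted joy/anger/sadness scores
```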

Anders Søgaard:
Hegel's Holiday: an Argument for a Less Empiricist NLP?
(Thu, Nov 29, 2018)

The "empiricist revolution” in NLP began in the early 1990s and effectively weeded out alternatives from mainstream NLP by the early 2000s. These days experiments with synthetic data, formal lanugages, rule-based models, and evaluation on hand-curated benchmarks are generally discouraged, and experiments are based on inducing from and evaluating on finite random samples, rather than in more controlled set-ups. This anti-thesis to early-days NLP has led to impressive achievements such as Google Translate and Siri, but I will argue that there is - not a road block, but - a bottle neck, ahead, a time of diminishing returns. Hegel, however, seems to be on holiday.

Judith Degen:
On the natural distribution of "some" and "or": consequences for theories of scalar implicature
(Tue, Dec 18, 2018)

Theories of scalar implicature have come a long way by building on introspective judgments and, more recently, judgment and processing data from naive participants in controlled experiments, as primary sources of data. Based on such data, common lore has it that scalar implicatures are Generalized Conversational Implicatures (GCI). Increasingly common lore also has it that scalar implicatures incur a processing cost. In this talk I will argue against both of these generalizations. I will do so by taking into account a source of data that has received remarkably little attention: the natural distribution of scalar items. In particular, I will present two large-scale corpus investigations of the occurrence and interpretation of "some" and "or" in corpora of naturally occurring speech. I will show for both "some" and "or" that their associated scalar inferences are much less likely to occur than commonly assumed and that their probability of occurrence is systematically modulated by syntactic, semantic, and pragmatic features of the context in which they occur. For "or" I will further provide evidence from unsupervised clustering techniques that of the many discourse functions "or" can assume, the one that can give rise to scalar inferences is exceedingly rare. I argue that this work calls into question the status of scalar implicature as GCI and provides evidence for constraint-based accounts of pragmatic inference under which listeners combine multiple probabilistic cues to speaker meaning.