29th European Summer School in Logic, Language, and Information
University of Toulouse (France), 17-28 July, 2017

Generative Lexicon Theory: Integrating Theoretical and Empirical Methods

James Pustejovsky, Elisabetta Jezek

Language and Computation (Introductory)

First week, from 11:00 to 12:30


In this tutorial we present an introduction to the current model of Generative Lexicon Theory (GL). The overall aim is to acquaint the student with the basic assumptions and components of the theory and to motivate theoretical decisions through evidence-based analysis over large linguistic datasets. We show how the theory models the interaction between lexical information and other components of grammar, in particular how it handles various problems in the mapping from lexical semantic representations to syntactic forms and, to a lesser extent, to pragmatic interpretation. From a computational perspective, we highlight the applicability of GL to natural language processing tasks such as word sense disambiguation, event participant identification, compound interpretation, and metonymy resolution.

GL theory was conceived from the outset as an infrastructure for a lexically-based semantic theory of language, founded on a rich compositional procedure integrating mechanisms for modulation of word meaning in context. It has won widespread acceptance among linguists and computer scientists of different theoretical backgrounds, and established itself as a productive and typologically adequate paradigm for linguistic research in a wide number of areas, such as event semantics, theory of argument structure, lexical and computational semantics, and Natural Language Processing.

The original full statement of the theory was presented in Pustejovsky (1995), but there have been significant developments since then, including the elaboration of a general theory of semantic selection and semantic typing (Asher and Pustejovsky 2006; Pustejovsky 2011), which have enhanced the explanatory power of the theory and extended its coverage of linguistic phenomena. Moreover, since 2000 the theory has drawn increasingly on the findings of corpus linguistics and distributional semantic analysis and procedures (Pustejovsky and Jezek 2008; Pustejovsky and Rumshisky 2008; Jezek and Quochi 2010; Jezek and Vieu 2014). This has created a new dimension of evidence-based analysis and interpretation, giving rise to an integration of empirical analysis and theoretical modeling. For all these reasons, an introductory course, illustrating how GL principles can be put into practice in linguistic analysis, will benefit students and researchers interested in both theoretical linguistics and computational semantics.

The plan of the course is as follows.

Lecture 1: We review the motivations behind GL and the notion of a distributed compositional model of language meaning, sketch the basic assumptions underlying GL theory, and justify these assumptions in general terms.

Lecture 2: We examine the notion of qualia structure and its role both in differentiating the semantic micro-structure of word meaning and in providing additional strategies for semantic selection in composition.

Lecture 3: We first examine recent work on event structure in GL that models the dynamics of change, and then analyze the contexts associated with event type shifting as attested in corpus data.

Lecture 4: We focus on argument distinctions and argument typing, examine default realization strategies for the different types of arguments, and introduce the notion of dynamic argument structure.

Lecture 5: We look in detail at GL's compositional mechanisms of selection, coercion, and co-composition, situating the discussion in the context of data from large linguistic corpora and investigating the computational consequences of the GL architecture for modeling compositionality and determining meaning in larger contexts.
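To give a rough, informal flavor of the kind of representation discussed above, a GL-style lexical entry with qualia roles and a toy coercion step can be sketched as follows. This is an illustrative assumption on our part, not part of the course materials or an official GL implementation; all names and values are made up for the example.

```python
from dataclasses import dataclass, field

# A toy GL-style lexical entry: the four qualia roles
# (formal, constitutive, telic, agentive) stored as plain strings.
@dataclass
class LexicalEntry:
    lemma: str
    sem_type: str                      # e.g. a dot type for "book"
    qualia: dict = field(default_factory=dict)

book = LexicalEntry(
    lemma="book",
    sem_type="phys_obj . info",        # book as both object and information
    qualia={
        "formal": "bound_volume",
        "constitutive": "pages, text",
        "telic": "read",               # what a book is for
        "agentive": "write",           # how a book comes into being
    },
)

def coerce(verb_selects: str, noun: LexicalEntry) -> str:
    """Toy type coercion: if a verb selects an event but its argument
    denotes an object, recover an event from the telic quale
    (e.g. 'begin the book' is read as 'begin reading the book')."""
    if verb_selects == "event" and "telic" in noun.qualia:
        return f"{noun.qualia['telic']}({noun.lemma})"
    return noun.lemma

print(coerce("event", book))  # -> read(book)
```

The point of the sketch is only that coercion consults lexically stored qualia information rather than global world knowledge; real GL representations are typed feature structures far richer than this.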

There will be labs associated with the lectures, covering corpus evidence and analytics for qualia relation extraction, compound interpretation, coercion, and event typing.