7 research outputs found

    Decompositional Semantics for Events, Participants, and Scripts in Text

    This thesis presents a sequence of practical and conceptual developments in decompositional meaning representations for events, participants, and scripts in text under the framework of Universal Decompositional Semantics (UDS) (White et al., 2016a).

    Part I of the thesis focuses on the semantic representation of individual events and their participants. Chapter 3 examines the feasibility of deriving semantic representations of events from dependency syntax; we demonstrate that predicate-argument structure may be extracted from syntax, but other desirable semantic attributes are not directly discernible. Accordingly, we present in Chapters 4 and 5 state-of-the-art models for predicting these semantic attributes from text. Chapter 4 presents a model for predicting semantic proto-role labels (SPRL), attributes of participants in events based on Dowty’s seminal theory of thematic proto-roles (Dowty, 1991). In Chapter 5 we present a model of event factuality prediction (EFP), the task of determining whether an event mentioned in text happened (according to the meaning of the text). Both chapters include extensive experiments on multi-task learning for improving performance on each semantic prediction task. Taken together, Chapters 3, 4, and 5 represent the development of individual components of a UDS parsing pipeline.

    In Part II of the thesis, we shift to modeling sequences of events, or scripts (Schank and Abelson, 1977). Chapter 7 presents a case study in script induction, using a collection of restaurant narratives from an online blog to learn the canonical “Restaurant Script.” In Chapter 8, we introduce a simple discriminative neural model for script induction based on narrative chains (Chambers and Jurafsky, 2008) that outperforms prior methods. Because much existing work on narrative chains employs semantically impoverished representations of events, Chapter 9 draws on the contributions of Part I to learn narrative chains with semantically rich, decompositional event representations. Finally, in Chapter 10, we observe that corpus-based approaches to script induction resemble the task of language modeling. We explore the broader question of the relationship between language modeling and the acquisition of common-sense knowledge, and introduce an approach that combines language modeling and light human supervision to construct datasets for common-sense inference.
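    For context on the prior method that the Chapter 8 model is compared against: Chambers and Jurafsky (2008) induce narrative chains by counting (verb, dependency) events that share a protagonist and scoring candidate next events by pointwise mutual information. Below is a minimal, self-contained sketch of that counting scheme; the toy chains and function names are hypothetical illustrations, not code or data from the thesis.

        from collections import Counter
        from itertools import combinations
        from math import log

        # Toy "chains": each is the sequence of (verb, dependency) events
        # that a single protagonist participates in, in the style of
        # Chambers and Jurafsky (2008). Hypothetical data for illustration.
        chains = [
            [("enter", "subj"), ("order", "subj"), ("eat", "subj"), ("pay", "subj")],
            [("enter", "subj"), ("order", "subj"), ("pay", "subj"), ("leave", "subj")],
            [("arrest", "obj"), ("charge", "obj"), ("convict", "obj")],
        ]

        event_counts = Counter()
        pair_counts = Counter()
        for chain in chains:
            event_counts.update(chain)
            # Count unordered co-occurrences of events within one chain.
            for e1, e2 in combinations(chain, 2):
                pair_counts[frozenset((e1, e2))] += 1

        n_events = sum(event_counts.values())
        n_pairs = sum(pair_counts.values())

        def pmi(e1, e2):
            """PMI between two events that share a protagonist."""
            joint = pair_counts[frozenset((e1, e2))] / n_pairs
            if joint == 0:
                return float("-inf")
            p1 = event_counts[e1] / n_events
            p2 = event_counts[e2] / n_events
            return log(joint / (p1 * p2))

        def predict_next(context):
            """Narrative cloze: rank candidates by total PMI with the context."""
            candidates = set(event_counts) - set(context)
            return max(candidates, key=lambda c: sum(pmi(e, c) for e in context))

        # With the toy chains above, this prints ("pay", "subj").
        print(predict_next([("enter", "subj"), ("order", "subj"), ("eat", "subj")]))

    The discriminative neural model of Chapter 8 replaces these sparse co-occurrence counts with learned scoring, which is what allows it to outperform the counting baseline.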

    Modeling Meaning for Description and Interaction

    Language is a powerful tool for communication and coordination, allowing us to share thoughts, ideas, and instructions with others. Accordingly, enabling people to communicate linguistically with digital agents has been among the longest-standing goals in artificial intelligence (AI). However, unlike humans, machines do not naturally acquire the ability to extract meaning from language. One natural solution to this problem is to represent meaning in a structured format and then develop models for processing language into such structures. Unlike natural language, these structured representations can be directly processed and interpreted by existing algorithms. Indeed, much of the digital infrastructure we have built is mediated by structured representations (e.g., programs and APIs). Furthermore, unlike the internal representations of current neural models, structured representations are built to be used and interpreted by people.

    I focus on methods for parsing language into these dually interpretable representations of meaning. I introduce models that learn to predict structure from language and apply them to a variety of tasks, ranging from linguistic description to interaction with robots and digital assistants. I address three thematic challenges in modeling meaning: abstraction, sensitivity, and ambiguity. In order to be useful, meaning representations must abstract away from the linguistic input. Abstractions differ for each representation used and must be learned by the model. The process of abstraction entails a kind of invariance: different linguistic inputs mapping to the same meaning. At the same time, meaning is sensitive to slight changes in the linguistic input; here, similar inputs might map to very different meanings. Finally, language is often ambiguous, and many utterances have multiple meanings. In cases of ambiguity, models of meaning must learn that the same input can map to different meanings.
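    The three challenges can be made concrete with a small sketch. This is my own illustration under assumed toy data, not the thesis's models: a trivial lookup "parser" mapping utterances to structured meanings, where the entries show abstraction (several surface forms, one meaning), sensitivity (a one-word change flips the meaning), and ambiguity (one utterance, several readings). All utterances and meaning structures here are hypothetical.

        # Toy illustration of parsing language into structured meanings.
        # The lexicon, utterances, and meaning structures are hypothetical.
        LEXICON = {
            # Abstraction: different surface forms map to the same meaning.
            "turn on the lights": [{"action": "set_lights", "state": "on"}],
            "lights on, please": [{"action": "set_lights", "state": "on"}],
            # Sensitivity: a one-word change yields a very different meaning.
            "turn off the lights": [{"action": "set_lights", "state": "off"}],
            # Ambiguity: one utterance admits multiple structured readings.
            "call me a cab": [
                {"action": "book_ride", "vehicle": "cab"},
                {"action": "rename_user", "name": "a cab"},
            ],
        }

        def parse(utterance):
            """Return every structured meaning licensed for the utterance."""
            return LEXICON.get(utterance.lower().strip(), [])

        for u in ["Turn on the lights", "Lights on, please", "Call me a cab"]:
            print(u, "->", parse(u))

    A real parser must of course generalize beyond a fixed lexicon; the point of the sketch is only that its output type is a set of candidate structures, so all three phenomena are visible in the input-to-output mapping itself.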

    The University of Iowa General Catalog 2011-12

    The University of Iowa General Catalog 2010-11

    The University of Iowa General Catalog 2009-10

    Proceedings of the 26th International Symposium on Theoretical Aspects of Computer Science (STACS'09)

    The Symposium on Theoretical Aspects of Computer Science (STACS) is held alternately in France and in Germany. The conference of February 26-28, 2009, held in Freiburg, is the 26th in this series. Previous meetings took place in Paris (1984), Saarbrücken (1985), Orsay (1986), Passau (1987), Bordeaux (1988), Paderborn (1989), Rouen (1990), Hamburg (1991), Cachan (1992), Würzburg (1993), Caen (1994), München (1995), Grenoble (1996), Lübeck (1997), Paris (1998), Trier (1999), Lille (2000), Dresden (2001), Antibes (2002), Berlin (2003), Montpellier (2004), Stuttgart (2005), Marseille (2006), Aachen (2007), and Bordeaux (2008). …