Conceptual Primitive Decomposition for Knowledge Sharing via Natural Language
Natural language is an ideal mode of interaction and knowledge sharing between intelligent computer systems and their human users. But a major problem that natural language interaction poses is linguistic variation, or the paraphrase problem: there are many ways of referring to the same idea. This is a special problem for intelligent systems in domains such as information retrieval, where a query presented in natural language is matched against an ontology or knowledge base, particularly when that representation uses a vocabulary based on natural language. This paper proposes a solution to this problem: primitive decomposition methods that represent concepts in terms of structures reflecting low-level, embodied human cognition. We argue that this type of representation system engenders richer relations between natural language expressions and knowledge structures, enabling more effective interactive knowledge sharing.
Exploring Connections Between Primitive Decomposition of Natural Language and Hierarchical Planning
While recent research has shown that “classical” automated planning systems are effective tools for story generation, the success of automated story understanding systems may require integration between commonsense reasoning and more sophisticated forms of planning to make inferences and deductions about the plans and goals of story actors. Methods that decompose abstractions (i.e., tasks or language expressions) into primitives have played an important role for both automated planning systems and automated story understanding systems, but the two areas have remained largely isolated from each other, with few overlaps. We argue that this little-explored connection can benefit both areas of research, and this position paper explores the connections between these systems through the common use of primitive decomposition and its variants. Specifically, we present a prototype of a Hierarchical Task Network planner that decomposes natural language input into primitive structures of Conceptual Dependency, a meaning representation designed for in-depth story understanding. We discuss the important challenges, implications, and applications enabled by the establishment of this unique, direct link between planning and story understanding systems.
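The core mechanism this abstract describes can be sketched in miniature: a Hierarchical Task Network (HTN) planner recursively expands compound tasks via decomposition methods until only primitive actions remain; here the primitives are Conceptual Dependency (CD) acts such as PTRANS (change of location) and ATRANS (change of abstract possession). The method table and task names below are illustrative assumptions, not the paper's implementation.

```python
# Toy HTN-style decomposition into CD primitives (hypothetical sketch).
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    args: tuple


# CD primitive acts bottom out the decomposition.
PRIMITIVES = {"PTRANS", "ATRANS", "MTRANS", "PROPEL", "GRASP"}

# Decomposition methods: each compound task maps to an ordered subtask list.
METHODS = {
    "buy": lambda actor, obj, seller: [
        Task("PTRANS", (actor, "store")),          # actor moves to the store
        Task("ATRANS", (actor, "money", seller)),  # money changes possession
        Task("ATRANS", (seller, obj, actor)),      # object changes possession
    ],
}


def decompose(task: Task) -> list[Task]:
    """Recursively expand compound tasks until only CD primitives remain."""
    if task.name in PRIMITIVES:
        return [task]
    subtasks = METHODS[task.name](*task.args)
    return [p for t in subtasks for p in decompose(t)]


plan = decompose(Task("buy", ("John", "book", "clerk")))
print([t.name for t in plan])  # ['PTRANS', 'ATRANS', 'ATRANS']
```

A real system would attach preconditions and effects to each method and match them against story state; this sketch shows only the shared decomposition skeleton the abstract highlights.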
Towards Modeling Conceptual Dependency Primitives with Image Schema Logic
Conceptual Dependency (CD) primitives and Image Schemas (IS) share a common goal of grounding symbols of natural language in a representation that allows for automated semantic interpretation. Both seek to establish a connection between high-level conceptualizations in natural language and abstract cognitive building blocks. Some previous approaches have established a CD-IS correspondence. In this paper, we build on this correspondence in order to apply a logic designed for image schemas to selected CD primitives, with the goal of giving a formal account of the CD inventory. The logic draws from Region Connection Calculus (RCC-8), Qualitative Trajectory Calculus (QTC), Cardinal Directions, and Linear Temporal Logic (LTL). One of the primary premises of CD is a minimalist approach to its inventory of primitives; that is, it seeks to express natural language contents in an abstract manner with as few primitives as possible. In a formal analysis of the physical primitives of CD, we found a potential reduction, since some primitives can be expressed as special cases of others.
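As a hedged illustration of what such a reduction could look like (the notation here is our own sketch, not the paper's formalization), a translocation primitive can be written with an RCC-8 spatial relation under the LTL "eventually" operator, and an ingestion-style primitive then becomes a special case of it whose goal region is the actor's body:

```latex
% Assumed notation: DC ("disconnected") and NTPP ("non-tangential proper
% part") from RCC-8; \Diamond ("eventually") from LTL.
\mathrm{PTRANS}(o, r) \;\equiv\; \mathrm{DC}(o, r) \,\wedge\, \Diamond\,\mathrm{NTPP}(o, r)
\qquad
\mathrm{INGEST}(a, o) \;\equiv\; \mathrm{PTRANS}\bigl(o, \mathit{body}(a)\bigr)
```

Under a definition of this shape, the second primitive adds no expressive power beyond the first, which is the kind of inventory reduction the abstract reports.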
The Coherence of Conceptual Primitives
Intelligent systems that process natural language need representations of knowledge to support a human-like thought process, and they often use natural language words or phrases to name and represent concepts in a knowledge base. But some theories of cognition claim that language and thought are not the same thing, and that human thought processes occur at a deeper level of representation than words and phrases in language. In this paper we present results of a human subjects study of language-free primitive decomposition as a representation for commonsense knowledge. We found that our subjects could comprehend and use a primitive decomposition representation; they demonstrated a facile understanding of the physical primitives from Conceptual Dependency, matching them reliably to sentences in ways that agreed with our expectations. Our results also show that the set of conceptual primitives we used resembles real human conceptualizations of natural language in ways that are sharp and coherent. Because our human subjects were recruited through a crowdsourcing platform, we claim that crowdsourcing may provide a vast and inexpensive source of conceptual structures based on primitive decomposition.
Crowdsourcing Image Schemas
With their potential to map experiential structures from the sensorimotor to the abstract cognitive realm, image schemas are believed to provide an embodied grounding to our cognitive conceptual system, including natural language. Few empirical studies have evaluated humans’ intuitive understanding of image schemas or the coherence of image-schematic annotations of natural language. In this paper we present the results of a human-subjects study in which 100 participants annotate 12 simple English sentences with one or more image schemas. We find that human subjects recruited from a crowdsourcing platform can understand image schema descriptions and use them to perform annotations of texts, but also that in many cases multiple image schema annotations apply to the same simple sentence, a phenomenon we call image schema collocations. This study carries implications both for methodologies of future studies of image schemas, and for the inexpensive and efficient creation of large text corpora with image schema annotations.
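The collocation phenomenon the study reports lends itself to a simple analysis: a sentence exhibits a collocation when more than one schema is chosen by a sizable share of annotators. The threshold, schema names, and toy data below are illustrative assumptions, not the study's actual protocol or results.

```python
# Hypothetical sketch: detecting image schema collocations from
# per-sentence annotation labels.
from collections import Counter


def collocations(annotations: dict[str, list[str]],
                 n_annotators: int,
                 threshold: float = 0.25) -> dict[str, list[str]]:
    """Return, per sentence, schemas chosen by >= threshold of annotators,
    keeping only sentences where more than one schema clears the bar."""
    result = {}
    for sentence, labels in annotations.items():
        counts = Counter(labels)
        frequent = [s for s, c in counts.items() if c / n_annotators >= threshold]
        if len(frequent) > 1:
            result[sentence] = sorted(frequent)
    return result


# Toy data for 10 annotators; lists may exceed 10 because annotators could
# assign more than one schema to a sentence.
data = {
    "The ball rolled into the box.": ["PATH"] * 6 + ["CONTAINMENT"] * 5,
    "The bird flew.": ["PATH"] * 9 + ["VERTICALITY"] * 1,
}
print(collocations(data, n_annotators=10))
# {'The ball rolled into the box.': ['CONTAINMENT', 'PATH']}
```

Only the first sentence is flagged: both PATH and CONTAINMENT clear the 25% threshold there, while VERTICALITY does not in the second.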
Feasibility study of full-reactor gas core demonstration test
Separate studies of nuclear criticality, flow patterns, and thermodynamics for the gas core reactor concept have all given positive indications of its feasibility. However, before serious design for a full-scale gas core application can be made, feasibility must be shown for operation with full interaction of the nuclear, thermal, and hydraulic effects. A minimum-sized, and hence minimum-expense, test arrangement is considered for a full gas core configuration. It is shown that the hydrogen coolant scattering effects dominate the nuclear considerations at elevated temperatures. A cavity diameter of somewhat more than 4 ft (122 cm) will be needed if temperatures high enough to vaporize uranium are to be achieved.
Linguistic Variation and Anomalies in Comparisons of Human and Machine-Generated Image Captions
Describing the content of a visual image is a fundamental ability of human vision and language systems. Over the past several years, researchers have published major improvements in image captioning, largely due to the development of deep learning systems trained on large data sets of images and human-written captions. However, these systems have major limitations, and their development has been narrowly focused on improving scores on relatively simple “bag-of-words” metrics. Very little work has examined the overall complex patterns of the language produced by image-captioning systems and how it compares to captions written by humans. In this paper, we closely examine patterns in machine-generated captions and characterize how conventional metrics are inconsistent at penalizing them for nonhuman-like erroneous output. We also hypothesize that the complexity of a visual scene should be reflected in the linguistic variety of the captions and, in testing this hypothesis, we find that human-generated captions have a dramatically greater degree of lexical, syntactic, and semantic variation. These results have important implications for the design of performance metrics, gauging what deep learning captioning systems really understand in images, and the importance of the task of image captioning for cognitive systems research.
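One simple instance of the lexical variation contrast the abstract describes can be measured with a type-token ratio (distinct words divided by total words) over the set of captions for a single image. The caption data below is invented for illustration; the paper's actual analysis covers syntactic and semantic variation as well.

```python
# Minimal sketch of a lexical-variation measure over caption sets.
def type_token_ratio(captions: list[str]) -> float:
    """Fraction of distinct lowercase word tokens across all captions."""
    tokens = [w.lower() for c in captions for w in c.split()]
    return len(set(tokens)) / len(tokens)


# Humans rephrase the same scene; machine captions tend to reuse phrasing.
human = [
    "a man rides a horse along the beach",
    "someone gallops on horseback beside the ocean",
]
machine = [
    "a man riding a horse on the beach",
    "a man riding a horse on a beach",
]
assert type_token_ratio(human) > type_token_ratio(machine)
```

A robust study would tokenize properly and normalize for caption length (type-token ratio falls as token count grows), but the direction of the contrast survives such refinements.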
Novel Primitive Decompositions for Real-World Physical Reasoning
In this work, we are concerned with developing cognitive representations that may enhance the ability of self-supervised learning systems to learn language as part of their world explorations. We apply insights from in-depth language understanding systems to the problem, specifically representations which decompose language inputs into language-free structures that are complex combinations of primitives representing cognitive abstractions such as object permanence, movement, and spatial relationships. These decompositions, performed by a system traditionally called a conceptual analyzer, link words with complex non-linguistic structures that engender the rich relations between language expressions and world exploration that are a familiar aspect of intelligence.
We focus on improving and extending the Conceptual Dependency (CD) representation system, its primitive decompositions, and its conceptual analyzer, choosing as our corpus the ProPara (“Process Paragraphs”) dataset, which consists of paragraphs describing biological, chemical, and physical processes of the kind that appear in grade-school science textbooks (e.g., photosynthesis, erosion). In doing so, we avoid the significant challenges of decomposing concepts involving communication, thought, and complex social interactions. To meet the challenges of this dataset, we contribute a mental motion pictures representation system with important innovations, such as using image schemas in place of CD primitives and decoupling containment relationships into separate primitives.
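The two innovations named at the end of this abstract can be sketched as data: a "mental motion picture" becomes a time-indexed sequence of image-schema assertions, with containment stated as its own assertion rather than folded into a movement primitive. All field and schema names here are illustrative assumptions.

```python
# Hypothetical sketch of a mental-motion-pictures frame sequence.
from dataclasses import dataclass


@dataclass(frozen=True)
class SchemaAssertion:
    schema: str   # image schema name, e.g. "SOURCE_PATH_GOAL", "CONTAINMENT"
    roles: tuple  # participants filling the schema's roles
    time: int     # frame index within the motion picture


# "Water moves from the roots into the leaf" (ProPara-style process text),
# decomposed into a movement assertion and a *separate* containment
# assertion rather than one movement-with-containment primitive:
frames = [
    SchemaAssertion("SOURCE_PATH_GOAL", ("water", "roots", "leaf"), time=0),
    SchemaAssertion("CONTAINMENT", ("water", "leaf"), time=1),
]
```

Decoupling the two lets an inference engine reason about where an object ends up without re-deriving it from the semantics of a compound primitive.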
A Proposal for Primitive Decomposition of Spatial Orientation Relationships
This short paper continues work on primitive decomposition systems for meaning representation which combine image schemas and conceptual dependency primitive systems. An important thread of this research seeks small, abstract sets of conceptual primitives so that decompositions of imagery evoked by language give rise to rich sets of mappings between language and the language-free representations, reflecting the linguistic variation of human language behavior. In this brief paper, we present a proposal for novel primitive decompositions of positions, spatial relationships, and orientations of objects in space in a conceptual representation framework. As an abstract first approximation, we introduce a spatial primitive which represents that one object is positioned in between two other objects, and combine it with part-whole representations to decompose commonly referenced concepts and language expressions of the positions and orientations of objects in relation to their surroundings.
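The proposed combination can be sketched concretely: a ternary BETWEEN primitive plus binary part-whole relations can express a position like "near the left end of the shelf" without a dedicated left-of primitive. The relation names and the example decomposition are illustrative assumptions, not the paper's exact inventory.

```python
# Hypothetical sketch of the BETWEEN + part-whole decomposition.
from dataclasses import dataclass


@dataclass(frozen=True)
class Between:
    middle: str  # the object located between the two ends
    end1: str
    end2: str


@dataclass(frozen=True)
class PartOf:
    part: str
    whole: str


# "The book is near the left end of the shelf": the book lies between the
# shelf's left edge and its centre, both of which are parts of the shelf.
decomposition = [
    PartOf("left-edge", "shelf"),
    PartOf("centre", "shelf"),
    Between("book", "left-edge", "centre"),
]
```

Because the ends are themselves parts of a reference object, the same two primitives compose to cover many orientation expressions ("at the front of", "toward the top of") without adding a primitive per direction.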
Image Schemas and Conceptual Dependency Primitives: A Comparison
A major challenge in natural language understanding research in artificial intelligence (AI) has been and still is the grounding of symbols in a representation that allows for rich semantic interpretation, inference, and deduction. Across cognitive linguistics and other disciplines, a number of principled methods for meaning representation of natural language have been proposed that aim to emulate capacities of human cognition. However, little cross-fertilization among those methods has taken place. A joint effort of human-level meaning representation from AI research and from cognitive linguistics holds the potential of contributing new insights to this profound challenge. To this end, this paper presents a first comparison of image schemas to an AI meaning representation system called Conceptual Dependency (CD). Restricting our study to the domain of physical and spatial conceptual primitives, we find connections and mappings from a set of action primitives in CD to a remarkably similar set of image schemas. We also discuss important implications of this connection, from formalizing image schemas to improving meaning representation systems in AI.