
    The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants

    Reasoning is a crucial part of natural language argumentation. To comprehend an argument, one must analyze its warrant, which explains why its claim follows from its premises. Because arguments are highly contextualized, warrants are usually presupposed and left implicit. Comprehension therefore requires not only language understanding and logical skill but also common sense. In this paper we develop a methodology for reconstructing warrants systematically. We operationalize it in a scalable crowdsourcing process, resulting in a freely licensed dataset with warrants for 2k authentic arguments from news comments. On this basis, we present a new, challenging task: the argument reasoning comprehension task. Given an argument with a claim and a premise, the goal is to choose the correct implicit warrant from two options. Both warrants are plausible and lexically close, but they lead to contradictory claims. A solution to this task would be a substantial step towards automatic warrant reconstruction. However, experiments with several neural attention and language models reveal that current approaches do not suffice.
    Comment: Accepted as NAACL 2018 Long Paper; see details on the front page
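
    The task format is easy to illustrate in code. Below is a minimal sketch of how one instance might be represented, together with a deliberately shallow lexical-overlap baseline of the kind the abstract suggests should fail; the class name, field names, toy example, and baseline are illustrative assumptions, not the dataset's actual schema or the models evaluated in the paper.

        # A hypothetical encoding of one argument reasoning comprehension
        # instance; field names are assumptions, not the released schema.
        from dataclasses import dataclass

        @dataclass
        class ArgumentInstance:
            claim: str
            premise: str
            warrant0: str       # first candidate warrant
            warrant1: str       # second candidate warrant
            label: int          # 0 or 1: which warrant makes the claim follow

        def token_overlap(a: str, b: str) -> float:
            """Jaccard overlap of lowercased whitespace tokens."""
            sa, sb = set(a.lower().split()), set(b.lower().split())
            return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

        def predict(inst: ArgumentInstance) -> int:
            # Shallow baseline: pick the warrant sharing more tokens with
            # claim + premise. Since both warrants are plausible and
            # lexically close by construction, such surface cues should
            # not suffice, which is the point of the task.
            context = inst.claim + " " + inst.premise
            s0 = token_overlap(context, inst.warrant0)
            s1 = token_overlap(context, inst.warrant1)
            return 0 if s0 >= s1 else 1

        example = ArgumentInstance(
            claim="Comment sections should be moderated.",
            premise="Unmoderated comment threads often devolve into abuse.",
            warrant0="abusive threads drive readers away",
            warrant1="abusive threads attract more readers",
            label=0,
        )
        print(predict(example))  # decided only by lexical overlap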

    Knowledge will Propel Machine Understanding of Content: Extrapolating from Current Examples

    Machine learning has been a big success story during the AI resurgence. One particular standout success relates to learning from massive amounts of data. In spite of early assertions of the unreasonable effectiveness of data, there is increasing recognition of the value of utilizing knowledge whenever it is available or can be created purposefully. In this paper, we discuss the indispensable role of knowledge for deeper understanding of content where (i) large amounts of training data are unavailable, (ii) the objects to be recognized are complex (e.g., implicit entities and highly subjective content), and (iii) applications need to use complementary or related data in multiple modalities/media. What brings us to the cusp of rapid progress is our ability to (a) create relevant and reliable knowledge and (b) carefully exploit that knowledge to enhance ML/NLP techniques. Using diverse examples, we seek to foretell unprecedented progress in our ability to achieve deeper understanding and exploitation of multimodal data, and continued incorporation of knowledge into learning techniques.
    Comment: Pre-print of the paper accepted at the 2017 IEEE/WIC/ACM International Conference on Web Intelligence (WI). arXiv admin note: substantial text overlap with arXiv:1610.0770

    Reinventing College Physics for Biologists: Explicating an epistemological curriculum

    The University of Maryland Physics Education Research Group (UMd-PERG) carried out a five-year research project to rethink, observe, and reform introductory algebra-based (college) physics. This class is one of the Maryland Physics Department's large service courses, serving primarily life-science majors. After consultation with biologists, we refocused the class on helping the students learn to think scientifically: to build coherence, to think in terms of mechanism, and to follow the implications of assumptions. We designed the course to tap into students' productive conceptual and epistemological resources, based on a theoretical framework from research on learning. The reformed class retains its traditional structure in terms of time and instructional personnel, but we modified existing best-practices curricular materials, including Peer Instruction, Interactive Lecture Demonstrations, and Tutorials. We provided class-controlled spaces for student collaboration, which allowed us to observe and record student learning directly. We also scanned all written homework and examinations, and we administered pre-post conceptual and epistemological surveys. The reformed class enhanced the strong gains on pre-post conceptual tests produced by the best-practices materials while obtaining unprecedented pre-post gains on epistemological surveys instead of the traditional losses.
    Comment: 35 pages, including a 15-page appendix of supplementary material

    The Knowledge Level in Cognitive Architectures: Current Limitations and Possible Developments

    In this paper we identify and characterize two problematic aspects affecting the representational level of cognitive architectures (CAs), namely the limited size and the homogeneous typology of the encoded and processed knowledge. We argue that these aspects may constitute not only a technological problem that, in our opinion, should be addressed in order to build artificial agents able to exhibit intelligent behaviours in general scenarios, but also an epistemological one, since they limit the plausibility of comparing the CAs' knowledge representation and processing mechanisms with those employed by humans in their everyday activities. In the final part of the paper, further directions of research are explored that aim to address these current limitations and future challenges.

    Hearing meanings: the revenge of context

    According to the perceptual view of language comprehension, listeners typically recover high-level linguistic properties such as utterance meaning without inferential work. The perceptual view is subject to the Objection from Context: since utterance meaning is massively context-sensitive, and context-sensitivity requires cognitive inference, the perceptual view is false. In recent work, Berit Brogaard provides a challenging reply to this objection. She argues that in language comprehension, context-sensitivity is typically exercised not through inferences but rather through top-down perceptual modulations or perceptual learning. This paper provides a complete formulation of the Objection from Context and evaluates Brogaard's reply to it. Drawing on conceptual considerations and empirical examples, we argue that the exercise of context-sensitivity in language comprehension does, in fact, typically involve inference.