    You Are Only as Good as You Are Behind Closed Doors: The Stability of Virtuous Dispositions

    Virtues are standardly characterized as stable dispositions. A stable disposition implies that the virtuous actor must be disposed to act well in any domain required of them. For example, a politician is not virtuous if s/he is friendly in debate with an opponent but hostile at home with a partner or children. Some recent virtue-theoretic accounts focus on specific domains in which virtues can be exercised; I call these domain-variant accounts of virtue. This paper examines two such accounts: Randall Curren and Charles Dorn’s (2018) discussion of virtue in the civic sphere, and Michael Brady’s (2018) account of virtues of vulnerability. I argue that consistency with the standard characterization of virtue requires generalizing beyond a domain. I suggest four actions the authors could take to preserve their accounts while remaining consistent with the standard characterization. I also discuss how virtue education could be enhanced by domain-variant accounts.

    Collaborative Epistemic Discourse in Classroom Information Seeking Tasks

    We discuss the relationship between information seeking and epistemic beliefs – beliefs about the source, structure, complexity, and stability of knowledge – in the context of collaborative information seeking discourses. We further suggest that both the information seeking and epistemic cognition research agendas have suffered from a lack of attention to how information seeking as a collaborative activity is mediated by talk between partners – an area we seek to address in this paper. A small-scale observational study using sociocultural discourse analysis was conducted with eight eleven-year-old pupils who carried out search engine tasks in small groups. Qualitative and quantitative analyses were performed on their discussions using sociocultural discourse analytic techniques. Extracts of the dialogue are reported, informed by concordance analysis and quantitative coding of dialogue duration. We find that 1) discourse which could be characterised as ‘epistemic’ is identifiable in student talk, 2) it is possible to identify talk which is more or less productive, and 3) epistemic talk is associated with positive learning outcomes.
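    As a rough illustration of the concordance-style analysis mentioned above, the sketch below extracts keyword-in-context lines from a transcript. The marker list and helper function are illustrative assumptions, not the study's actual coding scheme.

```python
# Toy keyword-in-context (concordance) extractor over classroom talk.
# The epistemic marker list is an illustrative assumption, not the
# study's actual coding scheme.
EPISTEMIC_MARKERS = ["i think", "because", "how do you know", "maybe"]

def concordance(utterances: list[str], marker: str, width: int = 30) -> list[str]:
    """Return each occurrence of `marker` with up to `width` characters of context."""
    hits = []
    for u in utterances:
        start = u.lower().find(marker)
        if start != -1:
            left = max(0, start - width)
            hits.append(u[left:start + len(marker) + width])
    return hits

talk = [
    "I think it's the second link because the title matches",
    "How do you know that one is right?",
    "Maybe we should check another site",
]
for marker in EPISTEMIC_MARKERS:
    for line in concordance(talk, marker):
        print(f"{marker!r}: {line}")
```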

    A Review on Cooperative Question-Answering Systems

    Question-Answering (QA) systems fall within the study areas of Information Retrieval (IR) and Natural Language Processing (NLP). Given a set of documents, a QA system tries to obtain the correct answer to questions posed in Natural Language (NL). QA systems normally comprise three main components: question classification, information retrieval, and answer extraction. Question classification plays a major role in QA systems, since it categorizes questions according to the type of entity they ask about. Information retrieval techniques are used to obtain and extract relevant answers from the knowledge domain. Finally, the answer extraction component is an emerging topic in QA systems; this module classifies and validates the candidate answers. In this paper we present an overview of QA systems, focusing on mature work related to cooperative systems whose knowledge domain is the Semantic Web (SW). We also present our proposal for a cooperative QA system for the SW.
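    As a minimal sketch of the three-component pipeline described above (not any particular system's implementation), the Python below wires up hypothetical classify_question, retrieve_passages, and extract_answer stages:

```python
# Minimal sketch of the three-stage QA pipeline described above.
# All function bodies are illustrative stand-ins, not a real system's API.

def classify_question(question: str) -> str:
    """Map a question to a coarse expected-answer type (stage 1)."""
    if question.lower().startswith("who"):
        return "PERSON"
    if question.lower().startswith("where"):
        return "LOCATION"
    return "OTHER"

def retrieve_passages(question: str, documents: list[str]) -> list[str]:
    """Naive keyword-overlap retrieval over the document set (stage 2)."""
    terms = set(question.lower().split())
    return [d for d in documents if terms & set(d.lower().split())]

def extract_answer(answer_type: str, passages: list[str]) -> str | None:
    """Pick a candidate answer from the retrieved passages (stage 3).
    A real system would classify and validate candidates against answer_type."""
    return passages[0] if passages else None

def answer(question: str, documents: list[str]) -> str | None:
    answer_type = classify_question(question)
    passages = retrieve_passages(question, documents)
    return extract_answer(answer_type, passages)
```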

    Crowdsourcing Question-Answer Meaning Representations

    We introduce Question-Answer Meaning Representations (QAMRs), which represent the predicate-argument structure of a sentence as a set of question-answer pairs. We also develop a crowdsourcing scheme to show that QAMRs can be labeled with very little training, and gather a dataset with over 5,000 sentences and 100,000 questions. A detailed qualitative analysis demonstrates that the crowd-generated question-answer pairs cover the vast majority of predicate-argument relationships in existing datasets (including PropBank, NomBank, QA-SRL, and AMR) along with many previously under-resourced ones, including implicit arguments and relations. The QAMR data and annotation code are made publicly available to enable future work on how best to model these complex phenomena.
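    As an illustration of the representation (field names here are assumptions, not the released dataset's schema), a QAMR can be modeled as a tokenized sentence plus question-answer pairs whose answers point back to token spans:

```python
from dataclasses import dataclass

# Illustrative sketch of a QAMR-style record: a sentence whose
# predicate-argument structure is captured as question-answer pairs,
# with answers anchored to token spans. Field names are assumptions.

@dataclass
class QAPair:
    question: str
    answer_span: tuple[int, int]  # inclusive token indices into the sentence

@dataclass
class QAMR:
    tokens: list[str]
    qa_pairs: list[QAPair]

    def answer_text(self, pair: QAPair) -> str:
        start, end = pair.answer_span
        return " ".join(self.tokens[start:end + 1])

sentence = "The chef who ran the kitchen prepared dinner".split()
qamr = QAMR(
    tokens=sentence,
    qa_pairs=[
        QAPair("Who prepared dinner?", (0, 5)),        # "The chef who ran the kitchen"
        QAPair("What did the chef prepare?", (7, 7)),  # "dinner"
    ],
)
print(qamr.answer_text(qamr.qa_pairs[0]))
```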

    Discourse structure and information structure: interfaces and prosodic realization

    In this paper we review the current state of research on the discourse structure (DS) / information structure (IS) interface. This field has received considerable attention from discourse semanticists and pragmatists, and has made substantial progress in recent years. We summarize the relevant studies. In addition, we look at the issue of DS/IS interaction at a different level: that of phonetics. It is known that both information structure and discourse structure can be realized prosodically, but the issue of phonetic interaction between the prosodic devices they employ has hardly ever been discussed in this context. We think that a proper consideration of this aspect of DS/IS interaction would enrich our understanding of the phenomenon, and hence we formulate some related research-programmatic positions.

    Grounding or Guesswork? Large Language Models are Presumptive Grounders

    Effective conversation requires common ground: a shared understanding between the participants. Common ground, however, does not emerge spontaneously in conversation. Speakers and listeners work together to both identify and construct a shared basis while avoiding misunderstanding. To accomplish grounding, humans rely on a range of dialogue acts, like clarification (What do you mean?) and acknowledgment (I understand.). In domains like teaching and emotional support, carefully constructed grounding prevents misunderstanding. However, it is unclear whether large language models (LLMs) leverage these dialogue acts in constructing common ground. To this end, we curate a set of grounding acts and propose corresponding metrics that quantify attempted grounding. We study whether LLMs use these grounding acts by simulating them taking turns in several dialogue datasets and comparing the results to humans. We find that current LLMs are presumptive grounders, biased towards assuming common ground without using grounding acts. To understand the roots of this behavior, we examine the role of instruction tuning and reinforcement learning from human feedback (RLHF), finding that RLHF leads to less grounding. Altogether, our work highlights the need for more research investigating grounding in human-AI interaction.
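    As a minimal sketch of how grounding acts might be counted in model turns, the code below scores two acts via surface cues. The patterns and helper are illustrative assumptions, not the paper's curated act set or proposed metrics.

```python
import re

# Toy counter for two grounding acts, clarification and acknowledgment,
# using surface cues. The patterns are illustrative assumptions only.
GROUNDING_PATTERNS = {
    "clarification": re.compile(r"\b(what do you mean|could you clarify|do you mean)\b", re.I),
    "acknowledgment": re.compile(r"\b(i understand|i see|got it|that makes sense)\b", re.I),
}

def grounding_act_rate(turns: list[str]) -> dict[str, float]:
    """Fraction of turns containing each grounding act."""
    counts = {act: 0 for act in GROUNDING_PATTERNS}
    for turn in turns:
        for act, pattern in GROUNDING_PATTERNS.items():
            if pattern.search(turn):
                counts[act] += 1
    return {act: n / len(turns) for act, n in counts.items()}

turns = [
    "What do you mean by 'stable'?",
    "Here is the answer you asked for.",
    "I see, that makes sense now.",
]
print(grounding_act_rate(turns))  # {'clarification': 0.33..., 'acknowledgment': 0.33...}
```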