    Effective Mitigation of Anchoring Bias, Projection Bias, and Representativeness Bias from Serious Game-based Training

    Although human use of heuristics can result in ‘fast and frugal’ decision-making, those prepotent tendencies can also impair our ability to make optimal choices. Previous work had suggested that such cognitive biases are resistant to mitigation training. Serious games offer a method to incorporate desirable elements into a training experience, and allow the use of mechanisms that enhance learning and retention. We developed a game to train recognition and mitigation of three forms of cognitive bias: anchoring, a tendency to be inordinately influenced by one piece of information; projection, an implicit assumption that others think or know what you do; and representativeness, judging the likelihood of a hypothesis by how much the available data resemble it. Participants were randomly assigned to play the training game once, twice spaced by 10 to 12 days, or a control condition that used a training video. External questionnaire-based assessments were given immediately post-training and 12 weeks later. Superior training was seen from the game. An independent group using our training game with their own novel bias assessment instruments (to which the researchers and game developers had no access or content information) validated the key finding. These results demonstrate the viability and high value of using serious computer games to train mitigation of cognitive biases.

    Document Representation in Natural Language Text Retrieval

    In information retrieval, the content of a document may be represented as a collection of terms: words, stems, phrases, or other units derived or inferred from the text of the document. These terms are usually weighted to indicate their importance within the document, which can then be viewed as a vector in an N-dimensional space. In this paper we demonstrate that proper term weighting is at least as important as term selection, and that different types of terms (e.g., words, phrases, names), and terms derived by different means (e.g., statistical, linguistic), must be treated differently for maximum benefit in retrieval. We report some observations made during and after the second Text REtrieval Conference (TREC-2).
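    The weighted-term vector representation described above can be sketched with a standard tf-idf scheme. This is a minimal illustration, not the paper's actual TREC-2 weighting formulas; the toy documents and the helper name `tfidf` are assumptions for the example.

    ```python
    import math
    from collections import Counter

    # Toy corpus: each document becomes a bag of word terms.
    docs = [
        "information retrieval weights terms in documents",
        "terms may be words stems or phrases",
        "weighted terms form a vector in term space",
    ]
    tokenized = [d.split() for d in docs]

    # Document frequency: in how many documents each term occurs.
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))

    N = len(docs)

    def tfidf(doc_tokens):
        """Represent one document as a sparse term -> weight vector."""
        tf = Counter(doc_tokens)
        return {t: tf[t] * math.log(N / df[t]) for t in tf}

    vec = tfidf(tokenized[0])
    # A term occurring in every document gets idf = log(N/N) = 0,
    # so it contributes nothing to the vector's discriminating power.
    ```

    The dictionary `vec` is one row of the document-by-term matrix; each distinct term defines one axis of the N-dimensional space the abstract refers to.
    
    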

    From Discourse to Logic: A Compositional Approach to Discourse Semantics

    In this paper, we develop a system of rewriting rules, similar to Generalized Phrase Structure Grammar and Montague Grammar, that operate directly on fragments of written text, transforming them into well-formed expressions of a formal meaning representation language. We consider the task of translating a sentence into a formula of logic as being directly influenced by the context of the surrounding text. The resulting representation captures, besides the logical content of each proposition, also the various relations that hold among them.

    How To Invert A Natural Language Parser Into An Efficient Generator: An Algorithm For Logic Grammars

    The use of a single grammar in natural language parsing and generation is most desirable for a variety of reasons, including efficiency, perspicuity, integrity, robustness, and a certain amount of elegance. In this paper we present an algorithm for automated inversion of a PROLOG-coded unification parser into an efficient unification generator, using the collections of minimal sets of essential arguments (MSEA) for predicates.