
    Accumulation is late and brief in preferential choice

    Preferential choices are often explained with models from the evidence accumulation framework: value drives the drift rate at which evidence is accumulated until a threshold is reached and an option is chosen. Although rarely stated explicitly, almost all such models assume that decision makers know all available attributes and options at the onset of the choice. In reality, however, choice information is viewed piece by piece, and is often not completely acquired until late in the choice, if at all. Across four eye-tracking experiments, we show that whether information was acquired early or late is irrelevant in predicting choice: all that matters is whether it was acquired at all. We posited and tested models with alternative assumptions, such as 1) accumulation of only the instantaneously available information or 2) running estimates updated as information is acquired; these provided poor fits to the data. We are forced to conclude that participants either are clairvoyant, accumulating information before they have looked at it, or delay accumulating evidence until very late in the choice, so late that the majority of choice time is not spent accumulating evidence. Thus, although the evidence accumulation framework may still be useful in measurement models, it cannot account for the details of the processes involved in decision making.
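
    As a point of reference for the framework this abstract describes, the sketch below simulates one evidence-accumulation (drift-diffusion) trial in Python. The parameter names and values (drift, threshold, noise_sd) are illustrative assumptions, not the paper's fitted model.

        import numpy as np

        def simulate_accumulator(drift, threshold=1.0, noise_sd=0.1,
                                 dt=0.001, max_t=5.0, seed=None):
            """One evidence-accumulation trial: evidence starts at zero and
            drifts toward +threshold (choose A) or -threshold (choose B).
            The drift rate is assumed to reflect the relative value of the
            options; all parameter values here are illustrative."""
            rng = np.random.default_rng(seed)
            evidence, t = 0.0, 0.0
            while abs(evidence) < threshold and t < max_t:
                evidence += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
                t += dt
            if evidence >= threshold:
                return "A", t
            if evidence <= -threshold:
                return "B", t
            return None, t  # no boundary reached before max_t

        # A positive drift (higher relative value of A) yields faster and
        # more frequent A choices.
        choices = [simulate_accumulator(drift=0.5, seed=i) for i in range(100)]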

    EU accession and Poland's external trade policy

    No description supplied.

    Thinking about Attention in Games: Backward and Forward Induction

    Behavioral economics improves economic analysis by using psychological regularities to suggest limits on rationality and self-interest (e.g., Camerer and Loewenstein 2003). Expressing these regularities in formal terms permits productive theorizing, suggests new experiments, can contribute to psychology, and can be used to shape economic policies that make normal people better off.

    An empirical study evaluating depth of inheritance on the maintainability of object-oriented software

    This empirical research was undertaken as part of a multi-method programme of research to investigate unsupported claims made of object-oriented technology. A series of subject-based laboratory experiments, including an internal replication, tested the effect of inheritance depth on the maintainability of object-oriented software. Subjects were timed performing identical maintenance tasks on object-oriented software with a hierarchy of three levels of inheritance depth and on equivalent object-based software with no inheritance; this was then replicated with more experienced subjects. In a second experiment of similar design, subjects were timed performing identical maintenance tasks on object-oriented software with a hierarchy of five levels of inheritance depth and on the equivalent object-based software. The collected data showed that subjects maintaining object-oriented software with three levels of inheritance depth performed the maintenance tasks significantly more quickly than those maintaining equivalent object-based software with no inheritance. In contrast, subjects maintaining the object-oriented software with five levels of inheritance depth took longer, on average, than subjects maintaining the equivalent object-based software (although statistical significance was not obtained). Subjects' source code solutions and debriefing questionnaires provided some evidence that subjects began to experience difficulties with the deeper inheritance hierarchy. It is therefore not at all obvious that object-oriented software will be more maintainable in the long run. These findings are sufficiently important that independent researchers should attempt to verify them.
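
    To make the manipulated variable concrete, here is a hypothetical Python sketch (not the study's actual materials, which are not shown in the abstract) of a depth-three inheritance hierarchy and its flattened, object-based equivalent; all class names are invented for illustration.

        # A depth-3 inheritance chain of the kind the experiment manipulated:
        class Account:                                 # depth 1
            def __init__(self, balance=0.0):
                self.balance = balance

        class SavingsAccount(Account):                 # depth 2
            def __init__(self, balance=0.0, rate=0.02):
                super().__init__(balance)
                self.rate = rate

        class StudentSavingsAccount(SavingsAccount):   # depth 3
            def __init__(self, balance=0.0):
                super().__init__(balance, rate=0.01)

        # The "object-based" equivalent flattens the hierarchy into one class,
        # so a maintainer never has to trace behaviour up a chain of parents:
        class FlatStudentSavingsAccount:
            def __init__(self, balance=0.0, rate=0.01):
                self.balance = balance
                self.rate = rate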

    Introducing a framework to assess newly created questions with Natural Language Processing

    Statistical models such as those derived from Item Response Theory (IRT) enable the assessment of students on a specific subject, which can be useful for several purposes (e.g., learning path customization, drop-out prediction). However, the questions have to be assessed as well and, although IRT can estimate the characteristics of questions that have already been answered by several students, this technique cannot be used on newly generated questions. In this paper, we propose a framework to train and evaluate models for estimating the difficulty and discrimination of newly created Multiple Choice Questions by extracting meaningful features from the text of the question and of the possible choices. We implement one model using this framework and test it on a real-world dataset provided by CloudAcademy, showing that it outperforms previously proposed models, reducing the RMSE by 6.7% for difficulty estimation and by 10.8% for discrimination estimation. We also present the results of an ablation study performed to support our choice of features and to show the effects of different characteristics of the questions' text on difficulty and discrimination.
    Comment: Accepted at the International Conference on Artificial Intelligence in Education.
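
    The general estimate-from-text idea can be sketched as follows. This is an illustrative toy, with an invented feature set, invented data, and scikit-learn's RandomForestRegressor as the learner; it is not the paper's actual model or CloudAcademy's pipeline.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def text_features(question, choices):
            """Toy features; the paper's feature set is richer."""
            return [
                len(question.split()),                               # question length in words
                float(np.mean([len(c.split()) for c in choices])),   # mean choice length
                len(set(question.lower().split())),                  # lexical variety
            ]

        # Hypothetical items already calibrated with IRT: (question, choices,
        # difficulty b). In practice these come from questions answered by
        # many students.
        calibrated = [
            ("What does CPU stand for?",
             ["Central Processing Unit", "Computer Power Unit"], -1.2),
            ("Which cache coherence protocol adds an Owned state to MESI?",
             ["MOESI", "MSI", "Directory-based", "Snooping"], 1.5),
        ]
        X = np.array([text_features(q, ch) for q, ch, _ in calibrated])
        y = np.array([b for _, _, b in calibrated])

        # Train on calibrated items, then estimate a brand-new, never-answered
        # question from its text alone.
        model = RandomForestRegressor(random_state=0).fit(X, y)
        new_q = ("Which sorting algorithm is stable?",
                 ["Merge sort", "Quicksort", "Heapsort"])
        estimated_difficulty = model.predict([text_features(*new_q)])[0]

    The same regression setup, trained on calibrated discrimination values instead of difficulty, would produce the discrimination estimate.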