161 research outputs found

    Context effects on memory retrieval: Theory and applications

    The Discovery and Interpretation of Evidence Accumulation Stages

    To improve the understanding of cognitive processing stages, we combined two prominent traditions in cognitive science: evidence accumulation models and stage discovery methods. While evidence accumulation models have been applied to a wide variety of tasks, they are limited to tasks in which decision-making effects can be attributed to a single processing stage. Here, we propose a new method that first uses machine learning to discover processing stages in EEG data and then applies evidence accumulation models to characterize the duration effects in the identified stages. To evaluate this method, we applied it to a previously published associative recognition task (Application 1) and a previously published random dot motion task with a speed-accuracy trade-off manipulation (Application 2). In both applications, the evidence accumulation models accounted better for the data when we first applied the stage-discovery method, and the resulting parameter estimates were generally in line with psychological theories. In addition, in Application 1 the results shed new light on target-foil effects in associative recognition, while in Application 2 the stage discovery method identified an additional stage in the accuracy-focused condition, challenging standard evidence accumulation accounts. We conclude that the new framework provides a powerful new tool to investigate processing stages.
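The second step of this framework, modelling the durations of a discovered stage with an evidence accumulator, can be sketched as follows. This is an illustrative sketch, not the authors' code: a single-boundary accumulator with unit diffusion noise has inverse-Gaussian (Wald) first-passage times, so simulated stage durations can be fit by maximum likelihood to recover a drift and a threshold. All function names and parameter values here are invented.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

def simulate_stage_durations(drift, threshold, n):
    # First-passage times of a unit-variance diffusion to a single bound
    # follow an inverse-Gaussian with mean threshold/drift and shape threshold**2.
    mu, lam = threshold / drift, threshold ** 2
    return stats.invgauss.rvs(mu / lam, scale=lam, size=n, random_state=rng)

def neg_log_lik(params, durations):
    drift, threshold = params
    if drift <= 0 or threshold <= 0:
        return np.inf
    mu, lam = threshold / drift, threshold ** 2
    return -np.sum(stats.invgauss.logpdf(durations, mu / lam, scale=lam))

# Simulate one stage's durations and recover the generating parameters.
durations = simulate_stage_durations(drift=2.0, threshold=1.5, n=5000)
fit = optimize.minimize(neg_log_lik, x0=[1.0, 1.0], args=(durations,),
                        method="Nelder-Mead")
drift_hat, threshold_hat = fit.x
```

With a few thousand simulated durations the maximum-likelihood estimates land close to the generating drift and threshold, which is the kind of parameter-recovery check the paper's evaluation relies on.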

    On the necessity of integrating multiple levels of abstraction in a single computational framework

    We argue that it is imperative that modelers select the right, and potentially differing, levels of abstraction for different components of their computational models, as components that are too global or too specific will hinder scientific progress. We describe ACT-R from the perspective that it is a useful modeling architecture to support this process, and provide two examples in which mixing different levels of abstraction has provided us with new insights.

    Conceptually plausible Bayesian inference in interval timing

    In a world that is uncertain and noisy, perception makes use of optimization procedures that rely on the statistical properties of previous experiences. A well-known example of this phenomenon is the central tendency effect observed in many psychophysical modalities. For example, in interval timing tasks, previous experiences influence the current percept, pulling behavioural responses towards the mean. In Bayesian observer models, these previous experiences are typically modelled by unimodal statistical distributions, referred to as the prior. Here, we critically assess the validity of the assumptions underlying these models and propose a model that allows for more flexible, yet conceptually more plausible, modelling of empirical distributions. By representing previous experiences as a mixture of lognormal distributions, this model can be parametrized to mimic different unimodal distributions and thus extends previous instantiations of Bayesian observer models. We fit the mixture lognormal model to published interval timing data of healthy young adults and a clinical population of older patients with mild cognitive impairment and age-matched controls, and demonstrate that this model better explains behavioural data and provides new insights into the mechanisms that underlie the behaviour of a memory-affected clinical population.
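The core computation of such a Bayesian observer can be sketched on the log-duration scale, where each lognormal prior component becomes a Gaussian and combines conjugately with a Gaussian measurement likelihood. The sketch below is illustrative, with invented numbers rather than the paper's fits; with a single prior component it reproduces the classic central tendency effect.

```python
import numpy as np

def posterior_mean_log(m, weights, means, sds, noise_sd):
    """Posterior mean of log-duration given a noisy log measurement m,
    under a mixture-of-Gaussians prior (lognormal mixture on the raw scale)."""
    weights = np.asarray(weights, float)
    means = np.asarray(means, float)
    sds = np.asarray(sds, float)
    # Conjugate Gaussian update for each mixture component.
    post_var = 1.0 / (1.0 / sds**2 + 1.0 / noise_sd**2)
    post_mean = post_var * (means / sds**2 + m / noise_sd**2)
    # Responsibility of each component, from its predictive density at m.
    pred_var = sds**2 + noise_sd**2
    resp = weights * np.exp(-0.5 * (m - means)**2 / pred_var) / np.sqrt(pred_var)
    resp /= resp.sum()
    return float(np.sum(resp * post_mean))

# Single-component prior centered on 800 ms; measure a 1.2 s interval.
prior_mean = np.log(0.8)
est = posterior_mean_log(np.log(1.2), [1.0], [prior_mean], [0.3], 0.2)
# The estimate is pulled from the measurement toward the prior mean.
```

Passing several components in `weights`/`means`/`sds` gives the more flexible empirical priors the abstract describes, while the single-component case recovers the standard unimodal Bayesian observer.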

    Capturing Dynamic Performance in a Cognitive Model: Estimating ACT-R Memory Parameters With the Linear Ballistic Accumulator

    The parameters governing our behavior are in constant flux. Accurately capturing these dynamics in cognitive models poses a challenge to modelers. Here, we demonstrate a mapping of ACT-R's declarative memory onto the linear ballistic accumulator (LBA), a mathematical model describing a competition between evidence accumulation processes. We show that this mapping provides a method for inferring individual ACT-R parameters without requiring the modeler to build and fit an entire ACT-R model. Existing parameter estimation methods for the LBA can be used instead of the computationally expensive parameter sweeps that are traditionally required. We conduct a parameter recovery study to confirm that the LBA can recover ACT-R parameters from simulated data. Then, as a proof of concept, we use the LBA to estimate ACT-R parameters from an empirical dataset. The resulting parameter estimates provide a cognitively meaningful explanation for observed differences in behavior over time and between individuals. In addition, we find that the mapping between ACT-R and LBA lends a more concrete interpretation to ACT-R's latency factor parameter, namely as a measure of response caution. This work contributes to a growing movement towards integrating formal modeling approaches in cognitive science.
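The ACT-R quantities involved in such a mapping can be sketched directly from the architecture's standard equations. This is a minimal illustration with hypothetical numbers, not the paper's mapping code: base-level activation computed from the ages of past uses of a memory chunk, and the retrieval-latency equation in which the latency factor scales retrieval time (the quantity the paper reinterprets as response caution).

```python
import math

def base_level_activation(ages, decay=0.5):
    # ACT-R base-level learning: A = ln( sum_j t_j^(-d) ),
    # where t_j is the time since the j-th past use of the chunk.
    return math.log(sum(t ** (-decay) for t in ages))

def retrieval_time(activation, latency_factor=1.0):
    # ACT-R retrieval latency: F * exp(-A). Under the LBA mapping,
    # the latency factor F plays the role of a caution-like parameter.
    return latency_factor * math.exp(-activation)

ages = [10.0, 100.0, 1000.0]   # seconds since each previous use (invented)
a = base_level_activation(ages)
rt = retrieval_time(a, latency_factor=1.0)
```

Recent uses dominate the sum because of the power-law decay, so a chunk rehearsed recently is retrieved faster; with F = 1 the retrieval time is exactly the reciprocal of the summed decayed traces.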

    Undesirable biases in NLP: Averting a crisis of measurement

    As Natural Language Processing (NLP) technology rapidly develops and spreads into daily life, it becomes crucial to anticipate how its use could harm people. However, our ways of assessing the biases of NLP models have not kept up. While the detection of gender bias in English in particular has received increasing research attention, many of the proposed measures face serious problems, as it is often unclear what they actually measure and how much they are subject to measurement error. In this paper, we provide an interdisciplinary approach to discussing the issue of NLP model bias by adopting the lens of psychometrics -- a field specialized in the measurement of concepts like bias that are not directly observable. We pair an introduction of relevant psychometric concepts with a discussion of how they could be used to evaluate and improve bias measures. We also argue that adopting psychometric vocabulary and methodology can make NLP bias research more efficient and transparent.
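One concrete psychometric concept of the kind the paper discusses is reliability. The sketch below is a toy example, not taken from the paper: it estimates Cronbach's alpha for several noisy "items" of a hypothetical bias measure scored over simulated probes, showing how internal consistency can be quantified.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (n_probes, n_items) score matrix."""
    scores = np.asarray(scores, float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total score
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
true_bias = rng.normal(size=200)                              # latent trait per probe
items = true_bias[:, None] + 0.5 * rng.normal(size=(200, 4))  # 4 noisy items
alpha = cronbach_alpha(items)
```

Because the four simulated items share a common latent component with modest noise, alpha comes out high; items that measured unrelated things would drive it toward zero, flagging the measure as unreliable.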

    The impact of MRI scanner environment on perceptual decision-making

    Despite the widespread use of functional magnetic resonance imaging (fMRI), few studies have addressed scanner effects on performance. The studies that have examined this question show a wide variety of results. In this article we report analyses of three experiments in which participants performed a perceptual decision-making task both in a traditional setting as well as inside an MRI scanner. The results consistently show that response times increase inside the scanner. Error rates also increase, but to a lesser extent. To reveal the underlying mechanisms that drive the behavioral changes when performing a task inside the MRI scanner, the data were analyzed using the linear ballistic accumulator model of decision-making. These analyses show that, in the scanner, participants exhibit a slowdown of the motor component of the response and have less attentional focus on the task. However, the balance between focus and motor slowing depends on the specific task requirements.
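The linear ballistic accumulator decomposition described above can be sketched in simulation. The example is illustrative only, with invented parameter values: raising non-decision time stands in for motor slowing, and lowering the correct accumulator's drift stands in for reduced attentional focus; both lengthen response times, but only the latter increases errors.

```python
import numpy as np

rng = np.random.default_rng(2)

def lba_trials(n, drift_correct, drift_error, b=1.0, A=0.5, s=0.25, t0=0.2):
    """Simulate n LBA trials with two racing accumulators."""
    k = rng.uniform(0, A, size=(n, 2))                       # start points
    d = rng.normal([drift_correct, drift_error], s, size=(n, 2))
    d = np.clip(d, 1e-6, None)                               # crude guard against
                                                             # negative drifts (a
                                                             # simplification)
    t = (b - k) / d                                          # time to reach bound
    rt = t.min(axis=1) + t0                                  # winner's time + motor
    correct = t.argmin(axis=1) == 0
    return rt, correct

rt_lab, acc_lab = lba_trials(20000, drift_correct=1.2, drift_error=0.6)
rt_scan, acc_scan = lba_trials(20000, drift_correct=1.0, drift_error=0.6, t0=0.3)
```

Comparing the two conditions, the "scanner" parameterization produces longer mean RTs (from both changes) and somewhat lower accuracy (from the drift change alone), matching the qualitative pattern the abstract reports.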

    Uncovering the Structure of Semantic Representations Using a Computational Model of Decision-Making

    According to logical theories of meaning, the meaning of an expression can be formalized and encoded in truth conditions. Vagueness of language and individual differences between people are challenging to incorporate into meaning representations. In this paper, we propose a new approach to studying truth-conditional representations of vague concepts. As a case study, we selected two natural language quantifiers, most and more than half. We conducted two online experiments, each with 90 native English speakers. In the first experiment, we tested between-subjects variability in meaning representations. In the second experiment, we tested the stability of meaning representations over time by testing the same group of participants in two experimental sessions. In both experiments, participants performed a verification task: they verified a sentence with a quantifier (e.g., “Most of the gleerbs are feezda.”) based on the numerical information provided in a second sentence (e.g., “60% of the gleerbs are feezda”). To investigate between-subject and within-subject differences in meaning representations, we proposed an extended version of the Diffusion Decision Model with two parameters capturing truth conditions and vagueness. We fit the model to response and reaction time data. In the first experiment, we found substantial between-subject differences in representations of most, as reflected by the variability in the truth conditions. Moreover, we found that the verification of most is proportion-dependent, as reflected in the reaction time effect and the model parameter. In the second experiment, we showed that quantifier representations are stable over time, as reflected in stable model parameters across the two experimental sessions. These findings challenge semantic theories that assume the truth-conditional equivalence of most and more than half, and contribute to the representational theory of vague concepts. The current study presents a promising approach to studying semantic representations, which can have wide application in experimental linguistics.
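The extended diffusion model idea can be sketched in simulation. Parameter names and values below are assumptions for illustration, not the paper's estimates: a truth-condition parameter theta sets the proportion at which "most" begins to be judged true, and a vagueness parameter sigma sets how sharply the drift rate changes around that point.

```python
import numpy as np

rng = np.random.default_rng(3)

def drift(proportion, theta=0.55, sigma=0.05, v_max=2.0):
    # Drift is positive (toward "true") when the stated proportion exceeds
    # theta; sigma controls how vague that boundary is.
    return v_max * np.tanh((proportion - theta) / sigma)

def simulate_ddm(v, a=1.0, dt=0.001, s=1.0, t0=0.3, max_t=5.0):
    """One random walk between boundaries -a/2 ("false") and +a/2 ("true")."""
    x, t = 0.0, 0.0
    while abs(x) < a / 2 and t < max_t:
        x += v * dt + s * np.sqrt(dt) * rng.normal()
        t += dt
    return ("true" if x > 0 else "false"), t + t0

# Verifying "Most of the gleerbs are feezda" against "60%" vs "50%" statements:
p_true_60 = np.mean([simulate_ddm(drift(0.60))[0] == "true" for _ in range(200)])
p_true_50 = np.mean([simulate_ddm(drift(0.50))[0] == "true" for _ in range(200)])
```

Proportions just above theta yield weak drift and therefore slow, variable responses, which is one way a model of this shape can produce the proportion-dependent reaction time effect the abstract describes; individual differences would correspond to participants carrying different theta and sigma values.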