192 research outputs found
The Discovery and Interpretation of Evidence Accumulation Stages
To improve the understanding of cognitive processing stages, we combined two prominent traditions in cognitive science: evidence accumulation models and stage discovery methods. While evidence accumulation models have been applied to a wide variety of tasks, they are limited to tasks in which decision-making effects can be attributed to a single processing stage. Here, we propose a new method that first uses machine learning to discover processing stages in EEG data and then applies evidence accumulation models to characterize the duration effects in the identified stages. To evaluate this method, we applied it to a previously published associative recognition task (Application 1) and a previously published random dot motion task with a speed-accuracy trade-off manipulation (Application 2). In both applications, the evidence accumulation models accounted better for the data when we first applied the stage-discovery method, and the resulting parameter estimates were generally in line with psychological theories. In addition, in Application 1 the results shed new light on target-foil effects in associative recognition, while in Application 2 the stage discovery method identified an additional stage in the accuracy-focused condition, challenging standard evidence accumulation accounts. We conclude that the new framework provides a powerful new tool to investigate processing stages
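As a toy illustration of the two-step idea, and explicitly not the authors' implementation (which discovers stages with machine learning applied to EEG data), the sketch below locates a single stage boundary in a simulated trial signal with a crude mean-shift changepoint search, then fits the recovered stage durations with a Wald (inverse-Gaussian) distribution, the first-passage time of a single-boundary evidence accumulator. All function names and parameter values are illustrative assumptions.

```python
# Minimal sketch: (1) discover a stage boundary per trial, (2) characterize
# the second stage's durations with an evidence accumulation (Wald) model.
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(stage1_ms, drift, bound, dt=1.0):
    """Two-stage trial: flat baseline, then raised amplitude until 'decision'."""
    t, x = 0.0, 0.0
    while x < bound:                      # first passage of a diffusion process
        x += drift * dt + rng.normal(0.0, np.sqrt(dt))
        t += dt
    return np.concatenate([rng.normal(0.0, 1.0, int(stage1_ms)),   # stage 1
                           rng.normal(1.5, 1.0, int(t))])          # stage 2

def changepoint(sig):
    """Best single mean-shift split, scored by a scaled mean contrast."""
    best_k, best_score = 1, -np.inf
    for k in range(5, len(sig) - 5):
        a, b = sig[:k], sig[k:]
        score = abs(a.mean() - b.mean()) * np.sqrt(len(a) * len(b) / len(sig))
        if score > best_score:
            best_k, best_score = k, score
    return best_k

def fit_wald(d):
    """MLE for the Wald/inverse-Gaussian: mu = mean, 1/lam = mean(1/x - 1/mu)."""
    d = np.asarray(d, float)
    mu = d.mean()
    lam = 1.0 / np.mean(1.0 / d - 1.0 / mu)
    return mu, lam

stage2 = [len(s) - changepoint(s)
          for s in (simulate_trial(120, drift=0.05, bound=10.0) for _ in range(200))]
mu, lam = fit_wald(stage2)
print(f"stage-2 Wald fit: mu={mu:.0f} ms, lambda={lam:.0f}")
# In diffusion terms (unit noise): bound ~ sqrt(lam), drift ~ sqrt(lam)/mu.
print(f"implied drift ~ {np.sqrt(lam) / mu:.3f}, bound ~ {np.sqrt(lam):.1f}")
```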
On the necessity of integrating multiple levels of abstraction in a single computational framework
We argue that it is imperative that modelers select the right, and potentially differing, levels of abstraction for different components of their computational models, as components that are too global or too specific will hinder scientific progress. We describe ACT-R from the perspective that it is a useful modeling architecture to support this process, and provide two examples in which mixing different levels of abstraction has provided us with new insights
Conceptually plausible Bayesian inference in interval timing
In a world that is uncertain and noisy, perception makes use of optimization procedures that rely on the statistical properties of previous experiences. A well-known example of this phenomenon is the central tendency effect observed in many psychophysical modalities. For example, in interval timing tasks, previous experiences influence the current percept, pulling behavioural responses towards the mean. In Bayesian observer models, these previous experiences are typically modelled by unimodal statistical distributions, referred to as the prior. Here, we critically assess the validity of the assumptions underlying these models and propose a model that allows for more flexible, yet conceptually more plausible, modelling of empirical distributions. By representing previous experiences as a mixture of lognormal distributions, this model can be parametrized to mimic different unimodal distributions and thus extends previous instantiations of Bayesian observer models. We fit the mixture lognormal model to published interval timing data of healthy young adults and a clinical population of aged mild cognitive impairment patients and age-matched controls, and demonstrate that this model better explains behavioural data and provides new insights into the mechanisms that underlie the behaviour of a memory-affected clinical population
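A grid-based sketch of the kind of observer model described here: a mixture-of-lognormals prior combined with scalar (Weber-like) timing noise, with the estimate taken as the posterior mean. The component weights, noise level, and grid are illustrative assumptions, not values fitted in the paper.

```python
# Minimal sketch of a Bayesian observer with a mixture-of-lognormals prior.
import numpy as np

def lognorm_pdf(x, mu, sigma):
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))

def bayes_estimate(measurement, comps, weber=0.15):
    """Posterior-mean estimate of the interval from one noisy measurement."""
    t = np.linspace(200.0, 2000.0, 4000)               # hypothesis grid (ms)
    dt = t[1] - t[0]
    prior = sum(w * lognorm_pdf(t, mu, s) for w, mu, s in comps)
    # scalar timing noise: measurement ~ Normal(t, weber * t)
    like = np.exp(-(measurement - t) ** 2 / (2 * (weber * t) ** 2)) / (weber * t)
    post = prior * like
    post /= (post * dt).sum()                          # normalize on the grid
    return (t * post * dt).sum()                       # posterior mean

# two overlapping lognormal components mimicking a unimodal empirical prior
comps = [(0.6, np.log(600.0), 0.25), (0.4, np.log(900.0), 0.20)]

for m in (500, 700, 900, 1100):
    print(f"measured {m} ms -> estimated {bayes_estimate(m, comps):.0f} ms")
# Short intervals are overestimated and long ones underestimated: the
# central tendency effect falls out of the prior pulling estimates inward.
```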
Non-parametric mixture modeling of cognitive psychological data: A new method to disentangle hidden strategies
In a wide variety of cognitive domains, participants have access to several alternative strategies to perform a particular task and, on each trial, one specific strategy is selected and executed. Determining how many strategies a participant uses, and identifying which strategy is used on each trial, is a challenging problem for researchers. In the current paper, we propose a new method - the non-parametric mixture model - to efficiently disentangle hidden strategies in cognitive psychological data, based on observed response times. The developed method is derived from standard hidden Markov modeling. Importantly, we used a model-free approach in which a particular shape of the response time distribution does not need to be assumed. This has the considerable advantage of avoiding potentially unreliable results when an inappropriate response time distribution is assumed. Through three simulation studies and two applications to real data, we repeatedly demonstrated that the non-parametric mixture model is able to reliably recover hidden strategies present in the data as well as to accurately estimate the number of concurrent strategies. The results also showed that this new method is more efficient than a standard parametric approach. The non-parametric mixture model is therefore a useful statistical tool for strategy identification that can be applied in many areas of cognitive psychology. To this end, practical guidelines are provided for researchers wishing to apply the non-parametric mixture model to their own data sets
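The sketch below illustrates the general idea in miniature, without claiming to reproduce the authors' method: a two-component mixture of response-time distributions estimated by EM, where each strategy's density is a weighted kernel density estimate rather than an assumed parametric family. Bandwidth, initialization, and the toy data are assumptions; a leave-one-out density would reduce the self-smoothing bias of evaluating each KDE at its own data points.

```python
# Minimal sketch: non-parametric two-strategy mixture of response times via EM.
import numpy as np

rng = np.random.default_rng(1)

def wkde(x, data, w, bw):
    """Weighted Gaussian kernel density of `data`, evaluated at `x`."""
    z = (x[:, None] - data[None, :]) / bw
    k = np.exp(-0.5 * z ** 2) / (bw * np.sqrt(2 * np.pi))
    return k @ w / w.sum()

def np_mixture_em(rt, n_iter=50, bw=40.0):
    # break symmetry with a crude median split instead of random assignment
    resp = np.where(rt < np.median(rt), 0.9, 0.1)     # P(strategy A) per trial
    for _ in range(n_iter):
        fa = wkde(rt, rt, resp, bw)                   # density under strategy A
        fb = wkde(rt, rt, 1.0 - resp, bw)             # density under strategy B
        pi = resp.mean()                              # mixing weight
        resp = pi * fa / (pi * fa + (1.0 - pi) * fb + 1e-300)
    return resp, pi

# toy data: fast 'retrieval' trials mixed with slower 'calculation' trials
rt = np.concatenate([rng.normal(600.0, 80.0, 300), rng.normal(1100.0, 150.0, 200)])
resp, pi = np_mixture_em(rt)
print(f"estimated mixing weight: {pi:.2f}")           # true value: 0.6
print(f"P(strategy A) for a fast trial: {resp[0]:.2f}")
```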
Capturing Dynamic Performance in a Cognitive Model: Estimating ACT-R Memory Parameters With the Linear Ballistic Accumulator
The parameters governing our behavior are in constant flux. Accurately capturing these dynamics in cognitive models poses a challenge to modelers. Here, we demonstrate a mapping of ACT-R's declarative memory onto the linear ballistic accumulator (LBA), a mathematical model describing a competition between evidence accumulation processes. We show that this mapping provides a method for inferring individual ACT-R parameters without requiring the modeler to build and fit an entire ACT-R model. Existing parameter estimation methods for the LBA can be used, instead of the computationally expensive parameter sweeps that are traditionally done. We conduct a parameter recovery study to confirm that the LBA can recover ACT-R parameters from simulated data. Then, as a proof of concept, we use the LBA to estimate ACT-R parameters from an empirical dataset. The resulting parameter estimates provide a cognitively meaningful explanation for observed differences in behavior over time and between individuals. In addition, we find that the mapping between ACT-R and LBA lends a more concrete interpretation to ACT-R's latency factor parameter, namely as a measure of response caution. This work contributes to a growing movement towards integrating formal modeling approaches in cognitive science
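A minimal sketch of the flavor of this mapping, with made-up numbers: chunk activations set the mean drifts of an LBA race, and raising the response threshold (which the paper links to ACT-R's latency factor as a measure of caution) trades speed for accuracy. The logistic activation-to-drift squashing and all parameter values here are assumptions, not the mapping derived in the paper.

```python
# Minimal sketch: ACT-R chunk activations feeding an LBA race.
import numpy as np

rng = np.random.default_rng(2)

def lba_race(drifts, b=1.0, A=0.5, s=0.25, t0=0.2, n=10000):
    """Simulate an LBA race; returns chosen accumulator and RT per trial."""
    k = len(drifts)
    start = rng.uniform(0.0, A, (n, k))      # uniform start points
    v = rng.normal(drifts, s, (n, k))        # trial-to-trial drift variability
    v = np.clip(v, 1e-6, None)               # floor negative drifts (simplification)
    t = (b - start) / v                      # time for each unit to reach threshold
    choice = t.argmin(axis=1)                # first accumulator to finish wins
    rt = t.min(axis=1) + t0                  # add non-decision time
    return choice, rt

# chunk activations (ACT-R) mapped to mean drifts via a logistic squashing
activations = np.array([1.2, 0.4])           # target vs. competitor chunk
drifts = 1.0 / (1.0 + np.exp(-activations))

for b, label in [(0.8, "speed-focused"), (1.4, "accuracy-focused")]:
    choice, rt = lba_race(drifts, b=b)
    acc = (choice == 0).mean()
    print(f"{label}: accuracy={acc:.2f}, mean RT={rt.mean():.2f}s")
# Raising the threshold (cf. response caution) slows responses but raises accuracy.
```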
The Bayesian Mutation Sampler Explains Distributions of Causal Judgments
One consistent finding in the causal reasoning literature is that causal judgments are rather variable. In particular, distributions of probabilistic causal judgments tend not to be normal and are often not centered on the normative response. As an explanation for these response distributions, we propose that people engage in ‘mutation sampling’ when confronted with a causal query and integrate this information with prior information about that query. The Mutation Sampler model (Davis & Rehder, 2020) posits that we approximate probabilities using a sampling process, explaining the average responses of participants on a wide variety of tasks. Careful analysis, however, shows that its predicted response distributions do not match empirical distributions. We develop the Bayesian Mutation Sampler (BMS) which extends the original model by incorporating the use of generic prior distributions. We fit the BMS to experimental data and find that, in addition to average responses, the BMS explains multiple distributional phenomena including the moderate conservatism of the bulk of responses, the lack of extreme responses, and spikes of responses at 50%
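To make the core intuition concrete, here is a deliberately simplified sketch, not the published BMS: a judged probability is modelled as the posterior mean of a Beta-binomial update combining a handful of mental samples with a generic symmetric prior. Few samples plus a prior centred on 0.5 produce conservatism, suppress extreme responses, and generate a spike at exactly 50% on trials where no samples are drawn. The Poisson sample-count distribution and prior strength are illustrative assumptions.

```python
# Minimal sketch: sample-based probability judgment with a generic Beta prior.
import numpy as np

rng = np.random.default_rng(3)

def judged_probability(true_p, n_samples, a0=2.0, b0=2.0):
    """Posterior-mean judgment from n mental samples and a Beta(a0, b0) prior."""
    if n_samples == 0:
        return a0 / (a0 + b0)                  # no sampling: pure prior -> 0.5
    k = rng.binomial(n_samples, true_p)        # successes among mental samples
    return (a0 + k) / (a0 + b0 + n_samples)    # Beta-binomial posterior mean

true_p = 0.9
judgments = np.array([judged_probability(true_p, n) for n in rng.poisson(8, 5000)])
print(f"mean judgment: {judgments.mean():.2f} (true p = {true_p})")   # conservative
print(f"share at exactly 0.5: {(judgments == 0.5).mean():.2%}")       # 50% spike
print(f"share of extreme responses (>0.99): {(judgments > 0.99).mean():.2%}")
```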
Undesirable biases in NLP: Averting a crisis of measurement
As Natural Language Processing (NLP) technology rapidly develops and spreads
into daily life, it becomes crucial to anticipate how its use could harm
people. However, our ways of assessing the biases of NLP models have not kept
up. While especially the detection of English gender bias in such models has
enjoyed increasing research attention, many of the measures face serious
problems, as it is often unclear what they actually measure and how much they
are subject to measurement error. In this paper, we provide an
interdisciplinary approach to discussing the issue of NLP model bias by
adopting the lens of psychometrics -- a field specialized in the measurement of
concepts like bias that are not directly observable. We pair an introduction of
relevant psychometric concepts with a discussion of how they could be used to
evaluate and improve bias measures. We also argue that adopting psychometric
vocabulary and methodology can make NLP bias research more efficient and
transparent
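As one concrete example of a psychometric concept that transfers to bias measurement, the sketch below computes Cronbach's alpha, an internal-consistency reliability coefficient, over simulated per-template bias scores. The data layout (models as rows, test templates as columns) and all numbers are illustrative assumptions rather than anything from the paper.

```python
# Minimal sketch: reliability of a template-aggregated bias measure.
import numpy as np

rng = np.random.default_rng(4)

def cronbach_alpha(scores):
    """scores: (n_models, n_items) matrix of item-level bias scores."""
    n_items = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the total score
    return n_items / (n_items - 1) * (1 - item_var / total_var)

# simulate 30 models x 20 templates: a shared 'true bias' signal per model
# plus template-specific measurement noise
true_bias = rng.normal(0.0, 1.0, (30, 1))
for noise in (0.5, 4.0):
    scores = true_bias + rng.normal(0.0, noise, (30, 20))
    print(f"noise sd={noise}: alpha={cronbach_alpha(scores):.2f}")
# Low alpha signals that template-level scores disagree, i.e. the measure is
# dominated by measurement error rather than by the construct 'bias'.
```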
- …