
    On the ability to inhibit thought and action: General and special theories of an act of control.

    types: Journal Article

    This is a postprint of an article published in Psychological Review © 2014 American Psychological Association. This article may not exactly replicate the final version published in the APA journal and is not the copy of record. Psychological Review is available online at: http://www.apa.org/pubs/journals/rev/

    Response inhibition is an important act of control in many domains of psychology and neuroscience. It is often studied in a stop-signal task that requires subjects to inhibit an ongoing action in response to a stop signal. Performance in the stop-signal task is understood as a race between a go process that underlies the action and a stop process that inhibits the action. Responses are inhibited if the stop process finishes before the go process. The finishing time of the stop process is not directly observable; a mathematical model is required to estimate its duration. Logan and Cowan (1984) developed an independent race model that is widely used for this purpose. We present a general race model that extends the independent race model to account for the role of choice in go and stop processes, and a special race model that assumes each runner is a stochastic accumulator governed by a diffusion process. We apply the models to 2 data sets to test assumptions about the selective influence of capacity limitations on drift rates and of strategies on thresholds, which are largely confirmed. The model provides estimates of distributions of stop-signal response times, which previous models could not estimate. We discuss implications of viewing cognitive control as the result of a repertoire of acts of control tailored to different tasks and situations. (PsycINFO Database Record (c) 2014 APA, all rights reserved)
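    The race logic above can be sketched with a minimal Monte Carlo simulation. This is an illustration of the independent race assumption only, with made-up Gaussian finishing-time distributions and parameter values; the paper's special model instead treats each runner as a diffusion-based stochastic accumulator.

```python
import random

def race_trial(ssd, go_mu=500.0, go_sd=100.0, stop_mu=200.0, stop_sd=40.0):
    """One simulated stop-signal trial (times in ms; parameters illustrative).

    The go runner starts at stimulus onset; the stop runner starts at the
    stop signal, presented ssd ms later. The response is inhibited if the
    stop runner finishes first.
    """
    go_finish = random.gauss(go_mu, go_sd)
    stop_finish = ssd + random.gauss(stop_mu, stop_sd)
    return stop_finish < go_finish  # True: response successfully inhibited

def p_inhibit(ssd, n=20_000):
    """Estimate the probability of successful inhibition at a given delay."""
    random.seed(0)
    return sum(race_trial(ssd) for _ in range(n)) / n
```

    Because the stop runner starts later as the stop-signal delay grows, p_inhibit decreases with ssd, tracing out the inhibition function that race models are fit to.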

    The sense of agency as tracking control

    Does the sense of agency (SoA) arise merely from action-outcome associations, or does an additional real-time process track each step along the chain? Tracking control predicts that deviant intermediate steps between action and outcome should reduce SoA. In two experiments, participants learned mappings between two finger actions and two tones. In later test blocks, actions triggered a robot hand moving either the same or a different finger, and also triggered tones that were congruent or incongruent with the learned mapping. The perceived delay between actions and tones provided a proxy measure for SoA. Action-tone binding was stronger for congruent than for incongruent tones, but only when the robot movement was also congruent. Congruent tones also had reduced N amplitudes, but again only when the robot movement was congruent. We suggest that SoA partly depends on a real-time tracking control mechanism, since deviant intermediate action of the robot reduced SoA over the tone.

    Reproducibility of preclinical animal research improves with heterogeneity of study samples

    Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources on inconclusive research.
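    Why multi-laboratory designs improve coverage can be illustrated with a toy simulation. This is not the paper's resampling of 440 studies; the effect size and variance components below are invented purely for illustration.

```python
import random
import statistics

def coverage(n_labs, n_per_lab, true_effect=0.5, between_sd=0.3,
             within_sd=1.0, sims=2000):
    """Fraction of simulated studies whose naive 95% CI covers the true effect.

    Each lab draws its own mean effect (between-lab heterogeneity), then
    samples animals within that lab. The CI ignores the lab structure, as a
    standard single-study analysis would.
    """
    random.seed(1)
    hits = 0
    for _ in range(sims):
        data = []
        for _ in range(n_labs):
            lab_mean = random.gauss(true_effect, between_sd)
            data += [random.gauss(lab_mean, within_sd) for _ in range(n_per_lab)]
        m = statistics.mean(data)
        se = statistics.stdev(data) / len(data) ** 0.5
        hits += (m - 1.96 * se) <= true_effect <= (m + 1.96 * se)
    return hits / sims
```

    At the same total sample size, a four-laboratory design (coverage(4, 10)) covers the true effect more often than one large laboratory (coverage(1, 40)): a single lab's data cannot reveal the between-lab variance, so enlarging its sample only narrows an already miscalibrated interval.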

    Statistical learning leads to persistent memory: evidence for one-year consolidation

    Statistical learning is a robust mechanism of the brain that enables the extraction of environmental patterns, which is crucial in perceptual and cognitive domains. However, the dynamics of the processes underlying long-term statistical memory formation have not been tested in an appropriately controlled design. Here we show that a memory trace acquired by statistical learning is resistant to interference as well as to forgetting after one year. Participants performed a statistical learning task and were retested one year later without further practice. The acquired statistical knowledge was resistant to interference: after one year, participants showed similar memory performance on the previously practiced statistical structure even after being tested on a new statistical structure. These results could be key to understanding the stability of long-term statistical knowledge.

    Smart Phone, Smart Science: How the Use of Smartphones Can Revolutionize Research in Cognitive Science

    Investigating human cognitive faculties such as language, attention, and memory most often relies on testing small and homogeneous groups of volunteers coming to research facilities where they are asked to participate in behavioral experiments. We show that this limitation and sampling bias can be overcome by using smartphone technology to collect data in cognitive science experiments from thousands of subjects from all over the world. This mass-coordinated use of smartphones creates a novel and powerful scientific “instrument” that yields the data necessary to test universal theories of cognition. This increase in power represents a potential revolution in cognitive science.

    Bayes Factors for Mixed Models: a Discussion

    van Doorn et al. (2021) outlined various questions that arise when conducting Bayesian model comparison for mixed effects models. Seven response articles offered their own perspectives on the preferred setup for mixed model comparison, on the most appropriate specification of prior distributions, and on the desirability of default recommendations. This article presents a round-table discussion that aims to clarify outstanding issues, explore common ground, and outline practical considerations for any researcher wishing to conduct a Bayesian mixed effects model comparison.

    Delays without Mistakes: Response Time and Error Distributions in Dual-Task

    BACKGROUND: When two tasks are presented within a short interval, a delay in the execution of the second task has been systematically observed. Psychological theorizing has argued that while sensory and motor operations can proceed in parallel, the coordination between these modules establishes a processing bottleneck. This model predicts that the timing, but not the characteristics (duration, precision, variability...), of each processing stage is affected by interference. Thus, a critical test of this hypothesis is to explore whether the quality of the decision is unaffected by a concurrent task. METHODOLOGY/PRINCIPAL FINDINGS: In number comparison--as in most comparison tasks with a scalar measure of the evidence--the extent to which two stimuli can be discriminated is determined by their ratio, referred to as the Weber fraction. We investigated performance in a rapid succession of two non-symbolic comparison tasks (number comparison and tone discrimination) in which error rates in both tasks could be manipulated parametrically from chance to almost perfect. We observed that dual-task interference has a massive effect on RT but affects neither the error rates nor the distribution of errors as a function of the evidence. CONCLUSIONS/SIGNIFICANCE: Our results imply that while the decision process itself is delayed during multiple task execution, its workings are unaffected by task interference, providing strong evidence in favor of a sequential model of task execution.
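    The ratio dependence described above can be written down in the standard log-Gaussian form of the Weber model. The Weber fraction value used here is illustrative, not one fitted to the paper's data.

```python
import math

def p_correct(n1, n2, w=0.15):
    """Probability of correctly judging which of two magnitudes is larger.

    Internal representations are taken to be Gaussian on a log scale with
    standard deviation w (the Weber fraction), so accuracy depends only on
    the ratio n2/n1, not on the absolute magnitudes.
    """
    z = abs(math.log(n2 / n1)) / (w * math.sqrt(2))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
```

    Under this model p_correct(8, 16) equals p_correct(16, 32) (same 2:1 ratio), while p_correct(8, 9) sits much closer to chance, which is how error rates can be swept parametrically from chance to near-perfect by varying the ratio.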