Sequential Bayesian updating for Big Data
The velocity, volume, and variety of big data present both challenges and opportunities for cognitive science. We introduce sequential Bayesian updating as a tool to mine these three core properties. In the Bayesian approach, we summarize the current state of knowledge regarding parameters in terms of their posterior distributions, and use these as prior distributions when new data become available. Crucially, we construct posterior distributions in such a way that we avoid recomputing the likelihood of old data as new data become available, allowing the propagation of information without great computational demand. As a result, these Bayesian methods allow continuous inference on voluminous information streams in a timely manner. We illustrate the advantages of sequential Bayesian updating with data from the MindCrowd project, in which crowd-sourced data are used to study Alzheimer's Dementia. We fit an extended LATER (Linear Approach to Threshold with Ergodic Rate) model to reaction time data from the project in order to separate two distinct aspects of cognitive functioning: speed of information accumulation and caution.
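The core idea, that the posterior after one batch becomes the prior for the next so that old likelihoods never need recomputing, can be sketched with a conjugate model. The Beta-Binomial model, batch sizes, and true rate below are illustrative assumptions, not the MindCrowd/LATER model the paper actually fits.

```python
import random

def update_beta(alpha, beta, successes, failures):
    """Conjugate update: Beta(alpha, beta) prior + Binomial batch -> Beta posterior."""
    return alpha + successes, beta + failures

random.seed(1)
true_p = 0.7
alpha, beta = 1.0, 1.0            # flat Beta(1, 1) prior

for batch in range(5):            # five arriving batches of 200 trials each
    data = [random.random() < true_p for _ in range(200)]
    s = sum(data)
    # Only the new batch enters the likelihood; old data are summarized
    # entirely by the current (alpha, beta).
    alpha, beta = update_beta(alpha, beta, s, len(data) - s)

posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 3))   # close to true_p
```

Because the sufficient statistics (here, counts) carry all the information from past batches, each update costs the same regardless of how much data has already streamed past.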
The RWiener Package: an R Package Providing Distribution Functions for the Wiener Diffusion Model
Abstract We present the RWiener package that provides R functions for the Wiener diffusion model. The core of the package is the four distribution functions dwiener, pwiener, qwiener and rwiener, which use up-to-date methods, implemented in C, and provide fast and accurate computation of the density, distribution, and quantile function, as well as a random number generator for the Wiener diffusion model. We used the typical Wiener diffusion model with four parameters: boundary separation, non-decision time, initial bias, and drift rate. Beyond the distribution functions, we provide extended likelihood-based functions that can be used for parameter estimation and model selection. The package can be obtained via CRAN.
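To make the quantity concrete, here is a hedged plain-Python sketch of the Wiener first-passage-time density at the lower boundary, via the standard small-time series representation. This illustrates what a function like dwiener computes; the parameter names (a = boundary separation, v = drift, w = relative starting point, t0 = non-decision time) follow common diffusion-model usage and are not taken from the package's internals.

```python
import math

def dwiener_lower(t, a, v, w, t0=0.0, K=10):
    """Density of hitting the lower boundary at time t (diffusion coefficient 1),
    using the small-time series with 2K+1 terms."""
    t = t - t0                      # remove non-decision time
    if t <= 0:
        return 0.0
    u = t / a**2                    # time in units of squared boundary separation
    s = sum((w + 2 * k) * math.exp(-((w + 2 * k) ** 2) / (2.0 * u))
            for k in range(-K, K + 1))
    f0 = s / math.sqrt(2.0 * math.pi * u**3)   # zero-drift, unit-boundary density
    # Exponential tilt introduces the drift; 1/a**2 rescales time back.
    return math.exp(-v * a * w - v**2 * t / 2.0) * f0 / a**2

print(round(dwiener_lower(0.5, a=2.0, v=1.0, w=0.5), 4))
```

As a sanity check, numerically integrating this density over t recovers the analytic probability of absorption at the lower boundary, (e^(-2va) - e^(-2vaw)) / (e^(-2va) - 1).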
Model Comparison and the Principle of Parsimony
At its core, the study of psychology is concerned with the discovery of plausible explanations for human behavior. For instance, one may observe that "practice makes perfect": as people become more familiar with a task, they tend to execute it more quickly and with fewer errors. More interesting is the observation that practice tends to improve …
A diffusion model account of the relationship between the emotional flanker task and rumination and depression.
Although there exists a consensus that depression is characterized by preferential processing of negative information, empirical findings to support the association between depression and rumination on the one hand and selective attention for negative stimuli on the other hand have been elusive. We argue that one of the reasons for the inconsistent findings may be the use of aggregate measures of response times and accuracies to measure attentional bias. Diffusion model analysis makes it possible to partial out the information processing component from the other components that comprise the decision-making process. In this study, we applied a diffusion model to an emotional flanker task. Results revealed that when focusing on a negative target, both rumination and depression were associated with facilitated processing due to negative distracters, whereas only rumination was associated with less interference by positive distracters. After controlling for depression scores, rumination still predicted attentional bias for negative information, but depression scores were no longer predictive after controlling for rumination. Consistent with the elusive findings in the literature, we did not find this pattern of results when using accuracy scores or mean response times. Our results suggest that rumination accounts for the attentional bias for negative information found in depression.
The EZ diffusion model provides a powerful test of simple empirical effects
Over the last four decades, sequential accumulation models for choice response times have spread through cognitive psychology like wildfire. The most popular style of accumulator model is the diffusion model (Ratcliff Psychological Review, 85, 59–108, 1978), which has been shown to account for data from a wide range of paradigms, including perceptual discrimination, letter identification, lexical decision, recognition memory, and signal detection. Since its inception, the model has become increasingly complex in order to account for subtle, but reliable, data patterns. The additional complexity of the diffusion model renders it a tool that is only for experts. In response, Wagenmakers et al. (Psychonomic Bulletin & Review, 14, 3–22, 2007) proposed that researchers could use a more basic version of the diffusion model, the EZ diffusion. Here, we simulate experimental effects on data generated from the full diffusion model and compare the power of the full diffusion model and EZ diffusion to detect those effects. We show that the EZ diffusion model, by virtue of its relative simplicity, will sometimes be better able to detect experimental effects than the data-generating full diffusion model.
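The EZ diffusion referred to above (Wagenmakers et al., 2007) maps three summary statistics, proportion correct Pc, variance of correct response times VRT, and mean correct response time MRT, onto drift rate v, boundary separation a, and non-decision time Ter in closed form. A sketch of those closed-form equations, with the conventional scaling parameter s = 0.1:

```python
import math

def ez_diffusion(Pc, VRT, MRT, s=0.1):
    """EZ diffusion: (Pc, VRT, MRT) -> (drift v, boundary a, non-decision time Ter)."""
    if Pc <= 0.0 or Pc >= 1.0 or Pc == 0.5:
        raise ValueError("Pc must lie in (0, 1) and differ from 0.5 "
                         "(apply an edge correction first)")
    L = math.log(Pc / (1.0 - Pc))                     # logit of accuracy
    x = L * (L * Pc**2 - L * Pc + Pc - 0.5) / VRT
    v = math.copysign(s * x**0.25, Pc - 0.5)          # drift rate
    a = s**2 * L / v                                  # boundary separation
    y = -v * a / s**2
    MDT = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))  # mean decision time
    Ter = MRT - MDT                                   # non-decision time
    return v, a, Ter

# Worked example with the summary statistics Pc = .802, VRT = .112, MRT = .723:
v, a, Ter = ez_diffusion(Pc=0.802, VRT=0.112, MRT=0.723)
print(round(v, 3), round(a, 3), round(Ter, 3))        # 0.1 0.14 0.3
```

The transform is self-consistent by construction: plugging the recovered v and a back into the accuracy equation, 1 / (1 + exp(-va/s^2)), returns the input Pc.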
Promoting Handwashing and Sanitation Behaviour Change in Low- and Middle-Income Countries: A Mixed-Method Systematic Review
This systematic review shows which promotional approaches are effective in changing handwashing and sanitation behaviour and which implementation factors affect the success or failure of such interventions. The authors find that promotional approaches can be effective in promoting handwashing with soap, latrine use, and safe faeces disposal, and in reducing open defecation. No one specific approach is most effective. However, several promotional elements do induce behaviour change. The different barriers and facilitators that influence the implementation of promotional approaches should be carefully considered when developing new policy, programming, practice, or research in this area.
An Experimentation Infrastructure for Quantitative Measurements of Cyber Resilience
The vulnerability of cyber-physical systems to cyber attack is well known, and the requirement to build cyber resilience into these systems has been firmly established. The key challenge this paper addresses is that maturing this discipline requires the development of techniques, tools, and processes for objectively, rigorously, and quantitatively measuring the attributes of cyber resilience. Researchers and program managers need to be able to determine whether the implementation of a resilience solution actually increases the resilience of the system. In previous work, a tabletop exercise was conducted using a notional heavy vehicle on a fictitious military mission while under cyber attack. While this exercise provided some useful data, more and higher-fidelity data are required to refine the measurement methodology. This paper details the efforts made to construct a cost-effective experimentation infrastructure to provide such data. It also presents a case study using some of the data generated by the infrastructure.
Comment: 6 pages, 2022 IEEE Military Communications Conference, pp. 855-86
Quantitative Measurement of Cyber Resilience: Modeling and Experimentation
Cyber resilience is the ability of a system to resist and recover from a cyber attack, thereby restoring the system's functionality. Effective design and development of a cyber-resilient system requires experimental methods and tools for quantitative measurement of cyber resilience. This paper describes an experimental method and test bed for obtaining resilience-relevant data as a system (in our case, a truck) traverses its route, in repeatable, systematic experiments. We model a truck that is equipped with an autonomous cyber-defense system and that also includes inherent physical resilience features. When attacked by malware, this ensemble of cyber-physical features (i.e., "bonware") strives to resist and recover from the performance degradation caused by the malware's attack. We propose parsimonious mathematical models to aid in quantifying systems' resilience to cyber attacks. Using the models, we identify quantitative characteristics obtainable from experimental data, and show that these characteristics can serve as useful quantitative measures of cyber resilience.
Comment: arXiv admin note: text overlap with arXiv:2302.04413, arXiv:2302.0794
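The paper's own parsimonious models are not reproduced here, but one widely used quantitative resilience measure of the kind described is the area under the system's functionality curve F(t), normalized by nominal functionality over the mission window. The sketch below uses an invented attack time, malware degradation rate, and bonware recovery rate purely for illustration.

```python
import math

def functionality(t, t_attack=2.0, degrade=1.5, recover=0.8):
    """Toy functionality curve F(t) in [0, 1]: nominal before the attack,
    then malware-driven exponential loss offset by bonware-driven recovery."""
    if t < t_attack:
        return 1.0
    dt = t - t_attack
    remaining = math.exp(-degrade * dt)           # what the malware has not yet degraded
    recovered = 1.0 - math.exp(-recover * dt)     # what the bonware has restored
    return min(1.0, remaining + recovered)

def resilience(T=10.0, n=10000, **kw):
    """Area-under-the-curve resilience: R = (1/T) * integral_0^T F(t) dt,
    computed by the midpoint rule; R = 1 means no effective functionality loss."""
    h = T / n
    return sum(functionality((i + 0.5) * h, **kw) for i in range(n)) * h / T

print(round(resilience(), 3))
```

A stronger bonware recovery rate raises R toward 1, while setting recover = 0 reduces the curve to pure degradation, which is one way such a scalar can discriminate between systems with and without a resilience solution.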
Metastudies for Robust Tests of Theory
We describe and demonstrate an empirical strategy useful for discovering and replicating empirical effects in psychological science. The method involves the design of a metastudy, in which many independent experimental variables, each a potential moderator of an empirical effect, are indiscriminately randomized. Radical randomization yields rich datasets that can be used to test the robustness of an empirical claim to some of the vagaries and idiosyncrasies of experimental protocols, and enhances the generalizability of these claims. The strategy is made feasible by advances in hierarchical Bayesian modeling that allow for the pooling of information across unlike experiments and designs, and is proposed here as a gold standard for both replication and exploratory research. The practical feasibility of the strategy is demonstrated with a replication of a study on subliminal priming.
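To illustrate the kind of pooling across unlike experiments the strategy relies on, here is a sketch of a DerSimonian-Laird random-effects meta-analysis, a moment-based stand-in for the full hierarchical Bayesian models the paper uses. The effect sizes and standard errors below are invented for the example.

```python
import math

def random_effects_pool(effects, ses):
    """Return (pooled effect, between-study variance tau^2) under a
    random-effects model, estimated by the DerSimonian-Laird method."""
    w = [1.0 / se**2 for se in ses]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # heterogeneity statistic
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - df) / c)                         # between-study variance estimate
    w_star = [1.0 / (se**2 + tau2) for se in ses]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

effects = [0.42, 0.10, 0.25, 0.31, 0.18]   # per-experiment priming effects (invented)
ses = [0.08, 0.12, 0.10, 0.09, 0.15]       # their standard errors (invented)
pooled, tau2 = random_effects_pool(effects, ses)
print(round(pooled, 3), round(tau2, 4))
```

A positive tau^2 signals that the effect genuinely varies across experimental protocols, which is exactly the moderator structure a metastudy is designed to expose rather than average away.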