bridgesampling: An R Package for Estimating Normalizing Constants
Statistical procedures such as Bayes factor model selection and Bayesian model averaging require the computation of normalizing constants (e.g., marginal likelihoods). These normalizing constants are notoriously difficult to obtain, as they usually involve high-dimensional integrals that cannot be solved analytically. Here we introduce an R package that uses bridge sampling (Meng & Wong, 1996; Meng & Schilling, 2002) to estimate normalizing constants in a generic and easy-to-use fashion. For models implemented in Stan, the estimation procedure is automatic. We illustrate the functionality of the package with three examples.
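As a hedged sketch of the advertised workflow (the toy model, data, and sampler settings below are our own invention, not one of the package's three examples): fit a Stan model with rstan and pass the resulting stanfit object to bridge_sampler(). Note that the package documentation recommends the target += notation so that all constants remain in the target density, which bridge sampling needs to recover the marginal likelihood.

```r
library(rstan)           # for fitting the model
library(bridgesampling)  # for bridge_sampler() and error_measures()

# Toy model (our own example): normal likelihood with unknown mean.
# target += (rather than ~) keeps all constants in the target density.
model_code <- "
data { int<lower=1> N; vector[N] y; }
parameters { real mu; }
model {
  target += normal_lpdf(mu | 0, 1);   // prior
  target += normal_lpdf(y | mu, 1);   // likelihood
}
"
set.seed(1)
y <- rnorm(20)
fit <- stan(model_code = model_code, data = list(N = length(y), y = y),
            iter = 5000, warmup = 1000, chains = 4)

bridge <- bridge_sampler(fit, silent = TRUE)  # works directly on a stanfit
print(bridge)            # estimated log marginal likelihood
error_measures(bridge)   # approximate error of the estimate
```

Given two such objects for competing models, bf(bridge1, bridge2) returns the corresponding Bayes factor estimate.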
New normative standards of conditional reasoning and the dual-source model
There has been a major shift in research on human reasoning toward Bayesian and probabilistic approaches, which has been called a new paradigm. The new paradigm sees most everyday and scientific reasoning as taking place in a context of uncertainty, with inference proceeding from uncertain beliefs rather than from arbitrary assumptions. In this manuscript we present an empirical test of normative standards in the new paradigm using a novel probabilized conditional reasoning task. Our results indicated that for everyday conditionals with at least a weak causal connection between antecedent and consequent, only the conditional probability of the consequent given the antecedent contributed unique variance to predicting the probability of the conditional; neither the probability of the conjunction nor the probability of the material conditional did. Regarding normative accounts of reasoning, we found significant evidence that participants' responses were confidence preserving (i.e., p-valid in the sense of Adams, 1998) for MP inferences, but not for MT inferences. Additionally, only for MP inferences, and to a lesser degree for DA inferences, did the rate of responses inside the coherence intervals defined by mental probability logic (Pfeifer and Kleiter, 2005, 2010) exceed chance levels. In contrast to the normative accounts, the dual-source model (Klauer et al., 2010) is a descriptive model. It posits that participants integrate their background knowledge (i.e., the type of information primary to the normative approaches) with their subjective probability that a conclusion is warranted based on its logical form. Model fits showed that the dual-source model, which used participants' responses to a deductive task with abstract contents to estimate the form-based component, accounted for the data as well as a model that relied solely on the probabilized conditional reasoning task.
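For readers unfamiliar with coherence intervals, the sketch below computes the standard probabilistic modus ponens interval from mental probability logic; the function name and example values are ours, but the interval itself is the textbook result: given P(A) and P(C|A), coherence requires P(C) to lie between P(A)P(C|A) and P(A)P(C|A) + 1 - P(A).

```r
# Coherence interval for MP: with premise probabilities P(A) = pa and
# P(C|A) = pc_a, every coherent P(C) satisfies
#   pa * pc_a <= P(C) <= pa * pc_a + (1 - pa).
# mp_interval() is our own helper name, used only for illustration.
mp_interval <- function(pa, pc_a) {
  c(lower = pa * pc_a, upper = pa * pc_a + (1 - pa))
}
mp_interval(pa = 0.8, pc_a = 0.9)   # lower = 0.72, upper = 0.92
```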
Forgetting emotional material in working memory
Proactive interference (PI) is the tendency for information learned earlier to interfere with more recently learned information. In the present study, we induced PI by presenting items from the same category over several trials. This results in a build-up of PI and reduces the discriminability of the items in each subsequent trial. We introduced emotional (e.g. disgust) and neutral (e.g. furniture) categories and examined how increasing levels of PI affected performance for both stimulus types. Participants were scanned using functional magnetic resonance imaging (fMRI) while performing a 5-item probe recognition task. We modeled responses and corresponding response times with a hierarchical diffusion model. Results showed that PI effects on latent processes (i.e. reduced drift rate) were similar for both stimulus types, but the effect of PI on drift rate was less pronounced for emotional compared to neutral stimuli. The decline in the drift rate was accompanied by an increase in neural activation in parahippocampal regions, and this relationship was observed more strongly for neutral than for emotional stimuli.
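The hierarchical model itself is beyond a short sketch, but the direction of the reported drift-rate effect can be illustrated with a plain (non-hierarchical) diffusion simulation; we assume the rtdists R package here, and all parameter values are invented for illustration, not taken from the paper:

```r
library(rtdists)  # provides rdiffusion(); assumed installed from CRAN
set.seed(1)

# Invented parameters: identical boundary separation (a) and
# non-decision time (t0), but a lower drift rate (v) under high PI.
low_pi  <- rdiffusion(n = 5000, a = 1.2, v = 2.0, t0 = 0.3)
high_pi <- rdiffusion(n = 5000, a = 1.2, v = 1.0, t0 = 0.3)

# A reduced drift rate yields fewer upper-boundary (correct)
# responses and slower responses overall.
c(acc_low_pi  = mean(low_pi$response == "upper"),
  acc_high_pi = mean(high_pi$response == "upper"))
c(mrt_low_pi = mean(low_pi$rt), mrt_high_pi = mean(high_pi$rt))
```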
Algorithmic complexity for psychology: A user-friendly implementation of the coding theorem method
Kolmogorov-Chaitin complexity has long been believed to be impossible to approximate when it comes to short sequences (e.g. of length 5-50). However, with the newly developed coding theorem method, the complexity of strings of length 2-11 can now be numerically estimated. We present the theoretical basis of algorithmic complexity for short strings (ACSS) and describe an R package providing functions based on ACSS that will cover psychologists' needs and improve upon previous methods in three ways: (1) ACSS is now available not only for binary strings, but for strings based on up to 9 different symbols, (2) ACSS no longer requires time-consuming computing, and (3) a new approach based on ACSS gives access to an estimation of the complexity of strings of any length. Finally, three illustrative examples show how these tools can be applied to psychology.
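As a brief, hedged illustration of how such functions are typically called (the package is acss on CRAN; argument names should be checked against its documentation, and the example strings are our own):

```r
library(acss)  # pulls in the precomputed complexity tables (acss.data)

# Complexity of short strings via the coding theorem method: acss()
# returns the complexity estimate K and the algorithmic probability D
# for the chosen alphabet size (2 to 9 symbols).
acss("0011", alphabet = 2)
acss("abcab", alphabet = 9)

# For longer strings, a sliding-window approach estimates local
# complexity; span sets the window size.
local_complexity("aababbabbbaabbab", alphabet = 2, span = 5)
```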
Concerns with the SDT approach to causal conditional reasoning: a comment on Trippas, Handley, Verde, Roser, McNair, and Evans (2014)
Probabilistic conditional reasoning: disentangling form and content with the dual-source model
The present research examines descriptive models of probabilistic conditional reasoning, that is, of reasoning from uncertain conditionals with contents about which reasoners have rich background knowledge. According to our dual-source model, two types of information shape such reasoning: knowledge-based information elicited by the contents of the material and content-independent information derived from the form of inferences. Two experiments implemented manipulations that selectively influenced the model parameters for the knowledge-based information, the relative weight given to form-based versus knowledge-based information, and the parameters for the form-based information, validating the psychological interpretation of these parameters. We apply the model to classical suppression effects, dissecting them into effects on background knowledge and effects on form-based processes (Exp. 3), and we use it to reanalyse previous studies manipulating reasoning instructions. In a model-comparison exercise based on data from seven studies, the dual-source model outperformed three Bayesian competitor models. Overall, our results support the view that people make use of background knowledge in line with current Bayesian models, but they also suggest that the form of the conditional argument, irrespective of its content, plays a substantive, yet smaller, role.
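For concreteness, the weighted-mixture idea can be sketched numerically. One common statement of the model's prediction, as we read Klauer et al. (2010), is lambda * (tau + (1 - tau) * xi) + (1 - lambda) * xi, where lambda weights the form-based source, tau(x) is the probability that inference form x is seen as warranted, and xi(C, x) is the knowledge-based estimate for content C. The helper below is our own illustrative rendering of that equation, not the authors' code:

```r
# Dual-source prediction for one inference (illustrative rendering):
#   lambda : weight on the form-based source (0-1)
#   tau    : probability the inference form is seen as warranted
#   xi     : knowledge-based estimate for this content and inference
dual_source <- function(lambda, tau, xi) {
  lambda * (tau + (1 - tau) * xi) + (1 - lambda) * xi
}

# A strong form-based MP component raises the predicted response
# above the purely knowledge-based estimate:
dual_source(lambda = 0.5, tau = 0.9, xi = 0.6)   # 0.78 vs. xi = 0.6
```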
Gene-Gene Interaction between APOA5 and USF1: Two Candidate Genes for the Metabolic Syndrome
Objective: The metabolic syndrome, a major cluster of risk factors for cardiovascular diseases, shows increasing prevalence worldwide. Several studies have established associations of both apolipoprotein A5 (APOA5) gene variants and upstream stimulatory factor 1 (USF1) gene variants with blood lipid levels and the metabolic syndrome. USF1 is a transcription factor for APOA5. Methods: We investigated a possible interaction between these two genes on the risk for the metabolic syndrome, using data from the German population-based KORA survey 4 (1,622 men and women aged 55-74 years). Seven APOA5 single nucleotide polymorphisms (SNPs) were analyzed in combination with six USF1 SNPs, applying logistic regression in an additive model, adjusting for age and sex, and using the National Cholesterol Education Program's Adult Treatment Panel III (NCEP ATP III) definition of the metabolic syndrome, including medication. Results: The overall prevalence of the metabolic syndrome was 41%. Two SNP combinations showed a nominal gene-gene interaction (p values 0.024 and 0.047). The effect of one SNP was modified by the other SNP, with a lower risk for the metabolic syndrome, with odds ratios (ORs) between 0.33 (95% CI = 0.13-0.83) and 0.40 (95% CI = 0.15-1.12), when the other SNP was homozygous for the minor allele. Nevertheless, none of the associations remained significant after correction for multiple testing. Conclusion: There is thus an indication of an interaction between APOA5 and USF1 on the risk for the metabolic syndrome.
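A schematic of the described analysis on simulated data (variable names and allele frequencies are invented; the outcome is simulated independently of genotype, so the fitted interaction is null here, unlike in the study):

```r
set.seed(1)
n <- 1622
d <- data.frame(
  snp_apoa5 = rbinom(n, 2, 0.3),          # additive 0/1/2 coding
  snp_usf1  = rbinom(n, 2, 0.3),
  age       = sample(55:74, n, replace = TRUE),
  sex       = rbinom(n, 1, 0.5)
)
d$mets <- rbinom(n, 1, 0.41)              # 41% prevalence, as reported

# Logistic regression with a SNP x SNP interaction, adjusted for
# age and sex; the interaction row is the gene-gene test.
m <- glm(mets ~ snp_apoa5 * snp_usf1 + age + sex,
         family = binomial, data = d)
summary(m)$coefficients
exp(coef(m))                               # coefficients as odds ratios
```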
An introduction to mixed models for experimental psychology
This chapter describes a class of statistical models that can account for most of the cases of nonindependence typically encountered in psychological experiments: linear mixed-effects models, or mixed models for short. It introduces the concepts underlying mixed models and shows how they account for the different types of nonindependence that can occur in psychological data. The chapter discusses how to set up a mixed model and how to perform statistical inference with it. The most important concept for understanding how to estimate and how to interpret mixed models is the distinction between fixed and random effects. One important characteristic of mixed models is that they allow random effects for multiple, possibly independent, random-effects grouping factors. Mixed models are a modern class of statistical models that extend regular regression models by including random-effects parameters to account for dependencies among related data points.
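A minimal sketch of such a model in R, using the lme4 package and its bundled sleepstudy data (our choice of example, not the chapter's):

```r
library(lme4)

# Reaction time as a function of days of sleep deprivation, with
# by-subject random intercepts and random slopes for Days. A second
# grouping factor (e.g., items) would enter as another (1 | item) term.
m <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
summary(m)   # fixed effect of Days plus random-effect variances
```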
The intersection between Descriptivism and Meliorism in reasoning research: further proposals in support of 'soft normativism'
The rationality paradox centres on the observation that people are highly intelligent, yet show evidence of errors and biases in their thinking when measured against normative standards. Elqayam and Evans (e.g., 2011) reject normative standards in the psychological study of thinking, reasoning and deciding in favour of a 'value-free' descriptive approach to studying high-level cognition. In reviewing Elqayam and Evans' position, we defend an alternative to descriptivism in the form of 'soft normativism', which allows for normative evaluations alongside the pursuit of descriptive research goals. We propose that normative theories have considerable value provided that researchers: (1) are alert to the philosophical quagmire of strong relativism; (2) are mindful of the biases that can arise from utilising normative benchmarks; and (3) engage in a focused analysis of the processing approach adopted by individual reasoners. We address the controversial 'is-ought' inference in this context and appeal to a 'bridging solution' to this contested inference that is based on the concept of 'informal reflective equilibrium'. Furthermore, we draw on Elqayam and Evans' recognition of a role for normative benchmarks in research programmes that are devised to enhance reasoning performance, and we argue that such Meliorist research programmes have a valuable reciprocal relationship with descriptivist accounts of reasoning. In sum, we believe that descriptions of reasoning processes are fundamentally enriched by evaluations of reasoning quality, and argue that if such standards are discarded altogether then our explanations and descriptions of reasoning processes are severely undermined.
A New Probabilistic Explanation of the Modus Ponens–Modus Tollens Asymmetry
A consistent finding in research on conditional reasoning is that individuals are more likely to endorse the valid modus ponens (MP) inference than the equally valid modus tollens (MT) inference. This pattern holds for both abstract and probabilistic tasks. The existing explanation for this phenomenon within a Bayesian framework (e.g., Oaksford & Chater, 2008) accounts for this asymmetry by assuming separate probability distributions for MP and MT. We propose a novel explanation within a computational-level Bayesian account of reasoning according to which "argumentation is learning". We show that the asymmetry must appear for certain prior probability distributions, under the assumption that the conditional inference provides the agent with new information that is integrated into the existing knowledge by minimizing the Kullback-Leibler divergence between the posterior and prior probability distributions. We also show under which conditions we would expect the opposite pattern, an MT-MP asymmetry.
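A hedged numeric sketch of this updating mechanism (the prior and the learned conditional probability below are invented, and the paper's formal treatment may differ in details): represent the joint distribution over antecedent A and consequent C as four probabilities, constrain the posterior so that P(C|A) equals the learned value, and minimize the KL divergence from the prior over the remaining free parameters. MP endorsement, P(C|A), then matches the learned value, while MT endorsement, P(not-A|not-C), can lag behind it, which is the asymmetry.

```r
# KL divergence D(q || p) between two discrete distributions
kl <- function(q, p) sum(ifelse(q > 0, q * log(q / p), 0))

# Invented prior joint over (A & C, A & !C, !A & C, !A & !C)
prior  <- c(0.40, 0.10, 0.20, 0.30)
target <- 0.95   # learned constraint: P(C | A) = 0.95

# Posterior parameterized by P(A) and P(C | !A) on the logit scale;
# P(C | A) is pinned to the learned value, the rest stays free.
make_post <- function(par) {
  pa <- plogis(par[1]); pc_na <- plogis(par[2])
  c(pa * target, pa * (1 - target),
    (1 - pa) * pc_na, (1 - pa) * (1 - pc_na))
}
fit  <- optim(c(0, 0), function(par) kl(make_post(par), prior))
post <- make_post(fit$par)

# MP = P(C | A); MT = P(!A | !C): MP_post equals the learned value
# by construction, while MT_post stays below it for this prior.
c(MP_prior = prior[1] / (prior[1] + prior[2]),
  MT_prior = prior[4] / (prior[2] + prior[4]),
  MP_post  = post[1] / (post[1] + post[2]),
  MT_post  = post[4] / (post[2] + post[4]))
```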
