305 research outputs found

    Algorithmic complexity for psychology: A user-friendly implementation of the coding theorem method

    Full text link
    Kolmogorov-Chaitin complexity has long been believed to be impossible to approximate when it comes to short sequences (e.g. of length 5-50). However, with the newly developed coding theorem method the complexity of strings of length 2-11 can now be numerically estimated. We present the theoretical basis of algorithmic complexity for short strings (ACSS) and describe an R package providing functions based on ACSS that will cover psychologists' needs and improve upon previous methods in three ways: (1) ACSS is now available not only for binary strings, but for strings based on up to 9 different symbols, (2) ACSS no longer requires time-consuming computing, and (3) a new approach based on ACSS gives access to an estimation of the complexity of strings of any length. Finally, three illustrative examples show how these tools can be applied to psychology. Comment: to appear in "Behavior Research Methods", 14 pages in journal format, R package at http://cran.r-project.org/web/packages/acss/index.htm
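    The coding theorem method itself relies on precomputed output frequencies of small Turing machines, which is beyond a short sketch. But the problem it solves — that the standard compression-based proxy for algorithmic complexity breaks down on short strings — can be illustrated with a quick check using Python's zlib (an illustration of the motivation, not of the ACSS method):

    ```python
    import zlib

    # Compressed size is a common proxy for algorithmic complexity, but for
    # strings this short the compressor's format overhead dominates: every
    # "compressed" output is longer than the 8-byte input, and the sizes
    # barely distinguish a constant string from an irregular one.
    for s in [b"00000000", b"01010101", b"11010010"]:
        print(s.decode(), "->", len(zlib.compress(s)), "bytes")
    ```

    This is why, for strings of length 2-11, a method grounded in the coding theorem rather than in compression is needed.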

    An introduction to mixed models for experimental psychology

    Get PDF
    This chapter describes a class of statistical models, linear mixed-effects models (or mixed models for short), that can account for most of the cases of nonindependence typically encountered in psychological experiments. It introduces the concepts underlying mixed models and how they allow accounting for different types of nonindependence that can occur in psychological data. The chapter discusses how to set up a mixed model and how to perform statistical inference with a mixed model. The most important concept for understanding how to estimate and how to interpret mixed models is the distinction between fixed and random effects. One important characteristic of mixed models is that they allow random effects for multiple, possibly independent, grouping factors. Mixed models are a modern class of statistical models that extend regular regression models by including random-effects parameters to account for dependencies among related data points.
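    The model class described above is commonly written in the following form (a standard textbook formulation, not a quotation from the chapter), where the fixed effects enter through $X\beta$ and the random effects through $Zu$:

    ```latex
    % Linear mixed-effects model: y is the response vector, X and Z are
    % design matrices for the fixed and random effects respectively.
    y = X\beta + Zu + \varepsilon, \qquad
    u \sim \mathcal{N}(0, G), \qquad
    \varepsilon \sim \mathcal{N}(0, R)
    ```

    The distinction the chapter emphasizes is visible here: $\beta$ is a vector of fixed, population-level parameters, while $u$ is itself a random draw, one per level of each grouping factor (e.g. participants and items).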

    New normative standards of conditional reasoning and the dual-source model

    Get PDF
    There has been a major shift in research on human reasoning toward Bayesian and probabilistic approaches, which has been called a new paradigm. The new paradigm sees most everyday and scientific reasoning as taking place in a context of uncertainty, and inference is from uncertain beliefs and not from arbitrary assumptions. In this manuscript we present an empirical test of normative standards in the new paradigm using a novel probabilized conditional reasoning task. Our results indicated that for everyday conditionals with at least a weak causal connection between antecedent and consequent, only the conditional probability of the consequent given the antecedent contributed unique variance to predicting the probability of the conditional; neither the probability of the conjunction nor the probability of the material conditional did. Regarding normative accounts of reasoning, we found significant evidence that participants' responses were confidence preserving (i.e., p-valid in the sense of Adams, 1998) for MP inferences, but not for MT inferences. Additionally, only for MP inferences, and to a lesser degree for DA inferences, did the rate of responses inside the coherence intervals defined by mental probability logic (Pfeifer and Kleiter, 2005, 2010) exceed chance levels. In contrast to the normative accounts, the dual-source model (Klauer et al., 2010) is a descriptive model. It posits that participants integrate their background knowledge (i.e., the type of information primary to the normative approaches) with their subjective probability that a conclusion is warranted based on its logical form. Model fits showed that the dual-source model, which employed participants' responses to a deductive task with abstract contents to estimate the form-based component, provided as good an account of the data as a model that solely used data from the probabilized conditional reasoning task.
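    For modus ponens, the coherence interval mentioned above has a simple closed form that follows from the law of total probability (a standard result in this literature; the function name is ours): given P(A) = a and P(B|A) = b, coherence constrains P(B) to [ab, ab + (1 − a)].

    ```python
    def mp_coherence_interval(p_a: float, p_b_given_a: float) -> tuple:
        """Coherence interval for P(B) under modus ponens.

        By total probability, P(B) = P(B|A)P(A) + P(B|not-A)(1 - P(A)),
        and since the premises leave P(B|not-A) unconstrained on [0, 1],
        the coherent values of P(B) form an interval.
        """
        low = p_b_given_a * p_a          # P(B|not-A) = 0
        high = low + (1.0 - p_a)         # P(B|not-A) = 1
        return (low, high)

    # e.g. with P(A) = 0.8 and P(B|A) = 0.9, P(B) must lie in [0.72, 0.92]
    ```

    A response counts as "inside the coherence interval" when the probability a participant assigns to the conclusion falls within these bounds; note that for high P(A) the interval is narrow, which is part of what makes MP diagnostic.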

    Forgetting emotional material in working memory

    Get PDF
    Proactive interference (PI) is the tendency for information learned earlier to interfere with more recently learned information. In the present study, we induced PI by presenting items from the same category over several trials. This results in a build-up of PI and reduces the discriminability of the items in each subsequent trial. We introduced emotional (e.g. disgust) and neutral (e.g. furniture) categories and examined how increasing levels of PI affected performance for both stimulus types. Participants were scanned using functional magnetic resonance imaging (fMRI) while performing a 5-item probe recognition task. We modeled responses and corresponding response times with a hierarchical diffusion model. Results showed that PI effects on latent processes (i.e. reduced drift rate) were similar for both stimulus types, but the effect of PI on drift rate was less pronounced for emotional compared to neutral stimuli. The decline in the drift rate was accompanied by an increase in neural activation in parahippocampal regions, and this relationship was observed more strongly for neutral than for emotional stimuli.
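    The hierarchical estimation used in the study is beyond a short sketch, but the core of the diffusion model — noisy evidence accumulating toward one of two response boundaries — can be simulated in a few lines (parameter names and values here are illustrative, not the study's estimates):

    ```python
    import math
    import random

    def diffusion_trial(drift: float, boundary: float = 1.0,
                        dt: float = 0.001, noise_sd: float = 1.0,
                        rng: random.Random = random) -> tuple:
        """Simulate one trial; returns (response, reaction_time_in_s).

        Evidence starts at 0 and accumulates at rate `drift` with Gaussian
        noise until it crosses +boundary ("old") or -boundary ("new").
        """
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise_sd * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            t += dt
        return ("old" if x > 0 else "new", t)
    ```

    In this framework, the build-up of PI corresponds to a shrinking drift rate: the same simulation run with a lower `drift` produces slower and less accurate responses, which is exactly the latent signature the study links to parahippocampal activation.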

    A New Probabilistic Explanation of the Modus Ponens–Modus Tollens Asymmetry

    Get PDF
    A consistent finding in research on conditional reasoning is that individuals are more likely to endorse the valid modus ponens (MP) inference than the equally valid modus tollens (MT) inference. This pattern holds for both abstract and probabilistic tasks. The existing explanation for this phenomenon within a Bayesian framework (e.g., Oaksford & Chater, 2008) accounts for this asymmetry by assuming separate probability distributions for MP and MT. We propose a novel explanation within a computational-level Bayesian account of reasoning according to which “argumentation is learning”. We show that the asymmetry must appear for certain prior probability distributions, under the assumption that the conditional inference provides the agent with new information that is integrated into the existing knowledge by minimizing the Kullback-Leibler divergence between the posterior and prior probability distribution. We also show under which conditions we would expect the opposite pattern, an MT-MP asymmetry.
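    The quantity being minimized above is, for discrete distributions, the familiar sum (a generic implementation of the divergence itself, not of the paper's updating model):

    ```python
    import math

    def kl_divergence(p, q):
        """KL(p || q) = sum_i p_i * log(p_i / q_i).

        Measures the information lost when q is used in place of p; it is
        zero iff the distributions coincide, and it is asymmetric in its
        arguments, which matters when using it as an updating criterion.
        """
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    ```

    On this account, the reasoner picks the posterior that satisfies the new constraint (e.g. the stated conditional probability) while staying as close as possible, in KL terms, to the prior — and the MP-MT asymmetry falls out of which priors make that minimal revision easy or hard.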

    Bias in Confidence: A Critical Test for Discrete-State Models of Change Detection

    Get PDF
    Ongoing discussions on the nature of storage in visual working memory have mostly focused on 2 theoretical accounts: On one hand we have a discrete-state account, postulating that information in working memory is maintained with high fidelity for a limited number of discrete items held in a given number of "slots," with no information being retained beyond these. In contrast with this all-or-nothing view, we have a continuous account arguing that information can be degraded in a continuous manner, reflecting the amount of resources dedicated to each item. It turns out that the core tenets of the discrete-state account constrain the way individuals can express confidence in their judgments, excluding the possibility of biased confidence judgments. Importantly, such biased judgments are expected when assuming a continuous degradation of information. We report 2 studies showing that biased confidence judgments can be reliably observed, a behavioral signature that rejects a large number of discrete-state models. Finally, complementary modeling analyses support the notion of a mixture account, according to which memory-based confidence judgments (in contrast with guesses) are based on a comparison between graded, fallible representations and response criteria.

    The effects of refreshing and elaboration on working memory performance, and their contributions to long-term memory formation

    Full text link
    Refreshing and elaboration are cognitive processes assumed to underlie verbal working-memory maintenance and to support long-term memory formation. Whereas refreshing refers to attentional focusing on representations, elaboration refers to linking representations in working memory into existing semantic networks. We measured the impact of instructed refreshing and elaboration on working and long-term memory separately, and investigated to what extent the two processes are distinct in their contributions to working as well as long-term memory. Compared with a no-processing baseline, immediate memory was improved by repeating the items, but not by refreshing them. There was no credible effect of elaboration on working memory, except when items were repeated at the same time. Long-term memory benefited from elaboration, but not from refreshing the words. The results replicate the long-term memory benefit of elaboration, but do not support its beneficial role for working memory. Further, refreshing preserves immediate memory, but does not improve it beyond the level achieved without any processing.

    Probabilistic conditional reasoning: disentangling form and content with the dual-source model

    Get PDF
    The present research examines descriptive models of probabilistic conditional reasoning, that is, of reasoning from uncertain conditionals with contents about which reasoners have rich background knowledge. According to our dual-source model, two types of information shape such reasoning: knowledge-based information elicited by the contents of the material and content-independent information derived from the form of the inferences. Two experiments implemented manipulations that selectively influenced the model parameters for the knowledge-based information, the relative weight given to form-based versus knowledge-based information, and the parameters for the form-based information, validating the psychological interpretation of these parameters. We apply the model to classical suppression effects, dissecting them into effects on background knowledge and effects on form-based processes (Exp. 3), and we use it to reanalyse previous studies manipulating reasoning instructions. In a model-comparison exercise based on data from seven studies, the dual-source model outperformed three Bayesian competitor models. Overall, our results support the view that people make use of background knowledge in line with current Bayesian models, but they also suggest that the form of the conditional argument, irrespective of its content, plays a substantive, yet smaller, role.
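    The integration of the two information sources described above can be sketched as a weighted mixture (a deliberately simplified rendering with parameter names of our own; see Klauer et al., 2010, for the full model and its parameterization):

    ```python
    def dual_source_response(form_based: float, knowledge_based: float,
                             weight_form: float) -> float:
        """Weighted integration of form-based and knowledge-based evidence.

        Both evidence inputs are probabilities in [0, 1] that the
        conclusion is endorsed; `weight_form` is the relative weight
        given to the logical form of the inference.
        """
        return (weight_form * form_based
                + (1.0 - weight_form) * knowledge_based)
    ```

    The experiments described above manipulate these components selectively: changing the plausibility of the content should move only `knowledge_based`, while instructional manipulations should shift `weight_form` — which is what validates the parameters' psychological interpretation.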