
    Models of verbal working memory capacity: What does it take to make them work?

    Theories of working memory (WM) capacity limits will be more useful when we know what aspects of performance are governed by the limits and what aspects are governed by other memory mechanisms. Whereas considerable progress has been made on models of WM capacity limits for visual arrays of separate objects, less progress has been made in understanding verbal materials, especially when words are mentally combined to form multiword units or chunks. Toward a more comprehensive theory of capacity limits, we examined models of forced-choice recognition of words within printed lists, using materials designed to produce multiword chunks in memory (e.g., leather brief case). Several simple models were tested against data from a variety of list lengths and potential chunk sizes, with test conditions that only imperfectly elicited the interword associations. According to the most successful model, participants retained about 3 chunks on average in a capacity-limited region of WM, with some chunks being only subsets of the presented associative information (e.g., leather brief case retained with leather as one chunk and brief case as another). The model also required the addition of an activated long-term memory component unlimited in capacity. A fixed-capacity limit appears critical to account for immediate verbal recognition and other forms of WM. We advance a model-based approach that allows capacity to be assessed despite other important processing contributions. Starting with a psychological-process model of WM capacity developed to understand visual arrays, we arrive at a more unified and complete model.

    Teaching Bayes' Theorem: strength of evidence as predictive accuracy

    Although teaching Bayes’ theorem is popular, the standard approach—targeting posterior distributions of parameters—may be improved. We advocate teaching Bayes’ theorem in a ratio form, in which posterior beliefs relative to prior beliefs equal the conditional probability of the data relative to the marginal probability of the data. This form leads to the interpretation that the strength of evidence is relative predictive accuracy. With this approach, students are encouraged to view Bayes’ theorem as an updating mechanism, to obtain a deeper appreciation of the role of the prior and of the marginal probability of the data, and to view estimation and model comparison from a unified perspective.
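
    In symbols, with H a hypothesis and D the data, the ratio form described in this abstract reads

        P(H | D) / P(H) = P(D | H) / P(D)

    so the factor by which beliefs are updated equals how much better H predicted the data than the marginal (average) prediction did, i.e. the hypothesis's relative predictive accuracy.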

    Effects of Violent Video Game Exposure on Aggressive Behavior, Aggressive thought Accessibility, and Aggressive Affect among Adults with and without Autism Spectrum Disorder

    Recent mass shootings have prompted the idea among some members of the public that exposure to violent video games can have a pronounced effect on individuals with autism spectrum disorder (ASD). Empirical evidence for or against this claim is currently absent. To address this issue, adults with and without ASD were assigned to play a violent or nonviolent version of a customized first-person shooter video game, after which responses on three aggression-related outcome variables (aggressive behavior, aggressive thought accessibility, and aggressive affect) were assessed. Results showed strong evidence that adults with ASD are not differentially affected by acute exposure to violent video games compared to typically developing adults. Moreover, model comparisons showed modest evidence against any effect of violent game content whatsoever. Findings from the current experiment suggest that societal concerns over whether violent game exposure has a unique effect on adults with autism are not supported by evidence.

    Using Bayes to get the most out of non-significant results

    No scientific conclusion follows automatically from a statistically non-significant result, yet people routinely use non-significant results to guide conclusions about the status of theories (or the effectiveness of practices). To know whether a non-significant result counts against a theory, or whether it just indicates data insensitivity, researchers must use one of three tools: power, intervals (such as confidence or credibility intervals), or an indicator of the relative evidence for one theory over another, such as a Bayes factor. I argue that Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches. Specifically, Bayes factors use the data themselves to determine their sensitivity in distinguishing theories (unlike power), and they make use of those aspects of a theory’s predictions that are often easiest to specify (unlike power and intervals, which require specifying the minimal interesting value in order to address theory). Bayes factors provide a coherent approach to determining whether non-significant results support a null hypothesis over a theory, or whether the data are just insensitive. They allow accepting and rejecting the null hypothesis to be put on an equal footing. Concrete examples indicate the range of application of a simple online Bayes calculator and reveal both the strengths and weaknesses of Bayes factors.
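
    As a minimal sketch of the logic behind such a calculator (the function name and the half-normal model of the theory's predictions are illustrative assumptions, not the online calculator's actual interface):

        # Sketch of a Bayes factor of the kind described above: a normal
        # likelihood for the observed effect, and a half-normal model of
        # H1's predictions whose scale is set by the effect the theory
        # predicts. Illustrative only, not the calculator's own code.
        import numpy as np
        from scipy import stats
        from scipy.integrate import quad

        def bayes_factor(mean_diff, se, predicted_effect):
            def integrand(delta):
                prior = 2 * stats.norm.pdf(delta, 0, predicted_effect)  # half-normal on [0, inf)
                return prior * stats.norm.pdf(mean_diff, delta, se)     # times likelihood
            m1, _ = quad(integrand, 0, np.inf)         # P(data | H1)
            m0 = stats.norm.pdf(mean_diff, 0, se)      # P(data | H0: no effect)
            return m1 / m0

        # Observed difference of 5 (SE 4) against a theory predicting effects near 10:
        print(bayes_factor(5.0, 4.0, 10.0))

    A value near 1 signals insensitive data; values above 3 or below 1/3 are conventionally read as evidence for H1 or H0, respectively.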

    There Is No Pure Empirical Reasoning

    The justificatory force of empirical reasoning always depends upon the existence of some synthetic, a priori justification. The reasoner must begin with justified, substantive constraints on both the prior probability of the conclusion and certain conditional probabilities; otherwise, all possible degrees of belief in the conclusion are left open given the premises. Such constraints cannot in general be empirically justified, on pain of infinite regress. Nor does subjective Bayesianism offer a way out for the empiricist. Despite often-cited convergence theorems, subjective Bayesians cannot hold that any empirical hypothesis is ever objectively justified in the relevant sense. Rationalism is thus the only alternative to an implausible skepticism.

    Absolute identification by relative judgment

    In unidimensional absolute identification tasks, participants identify stimuli that vary along a single dimension. Performance is surprisingly poor compared with discrimination of the same stimuli. Existing models assume that identification is achieved using long-term representations of absolute magnitudes. The authors propose an alternative relative judgment model (RJM), in which the elemental perceptual units are representations of the differences between the current and previous stimuli. These differences are used, together with the feedback from the previous trial, to generate a response. Without using long-term representations of absolute magnitudes, the RJM accounts for (a) information transmission limits, (b) bowed serial position effects, and (c) sequential effects, in which responses are biased toward immediately preceding stimuli but away from more distant ones (assimilation and contrast).
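
    As a rough illustration of that relative-judgment cycle (the noise level and the decision rule here are illustrative assumptions, not the authors' fitted model):

        # Relative-judgment sketch: respond with the previous trial's
        # feedback plus the noisily judged difference between the current
        # and previous stimulus; no long-term magnitude representations.
        import numpy as np

        rng = np.random.default_rng(0)
        n_levels = 10                                     # stimuli 1..10 on one dimension
        stimuli = rng.integers(1, n_levels + 1, size=20)

        prev_stim, prev_feedback = stimuli[0], stimuli[0]
        for stim in stimuli[1:]:
            judged_diff = (stim - prev_stim) + rng.normal(0, 1.0)  # noisy difference
            response = int(np.clip(np.rint(prev_feedback + judged_diff), 1, n_levels))
            print(f"stimulus {stim:2d} -> response {response:2d}")
            prev_stim, prev_feedback = stim, stim         # feedback gives the correct label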

    How Bayes factors change scientific practice

    Bayes factors provide a symmetrical measure of evidence for one model versus another (e.g. H1 versus H0) in order to relate theory to data. These properties help solve some (but not all) of the problems underlying the credibility crisis in psychology. The symmetry of the measure of evidence means that there can be evidence for H0 just as much as for H1; or the Bayes factor may indicate insufficient evidence either way. P-values cannot make this three-way distinction. Thus, Bayes factors indicate when the data count against a theory (and when they count for nothing); and thus they indicate when replications actually support H0 or H1 (in ways that power cannot). There is every reason to publish evidence supporting the null as much as evidence going against it, because the evidence can be measured to be just as strong either way (thus the published record can be more balanced). Bayes factors can be B-hacked, but they mitigate the problem because (a) they allow evidence in either direction, so people will be less tempted to hack in just one direction; (b) as a measure of evidence they are insensitive to the stopping rule; (c) families of tests cannot be arbitrarily defined; and (d) falsely implying a contrast is planned rather than post hoc becomes irrelevant (though the value of pre-registration is not mitigated).

    Using Bayes Factors for testing hypotheses about intervention effectiveness in addictions research

    Background and aims: It has been proposed that more use should be made of Bayes factors in hypothesis testing in addiction research. Bayes factors are ratios of the likelihood of the data under a specified hypothesis (e.g. an intervention effect within a given range) to the likelihood under another hypothesis (e.g. no effect). They are particularly important for differentiating lack of strong evidence for an effect from evidence for lack of an effect. This paper reviewed randomized trials reported in Addiction between January and June 2013 to assess how far Bayes factors might improve the interpretation of the data. Methods: Seventy-five effect sizes and their standard errors were extracted from 12 trials. Seventy-three per cent (n = 55) of these were non-significant (i.e. P > 0.05). For each non-significant finding a Bayes factor was calculated using a population effect derived from previous research. In sensitivity analyses, a further two Bayes factors were calculated assuming clinically meaningful and plausible ranges around this population effect. Results: Twenty per cent (n = 11) of the non-significant Bayes factors were below ⅓ (evidence for no effect) and 3.6% (n = 2) were above 3 (evidence for an effect). The other 76.4% (n = 42) of Bayes factors were between ⅓ and 3. Of these, 26 were in the direction of there being an effect (Bayes factor > 1 and < 3); 12 were in the direction of there being no effect (Bayes factor < 1 and > ⅓); and for four there was no evidence either way (Bayes factor = 1). In sensitivity analyses, 13.3% of Bayes factors were below ⅓, showing good concordance with the main results. Conclusions: Use of Bayes factors when analysing data from randomized trials of interventions in addiction research can provide important information that would lead to more precise conclusions than those typically obtained using currently prevailing methods.
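
    The three evidential bands used in these results are easy to reproduce; a small sketch with made-up Bayes factors (the cutoffs ⅓ and 3 follow the convention stated above):

        # Partition Bayes factors into the conventional bands used above:
        # < 1/3 evidence for no effect, 1/3 to 3 insensitive data, > 3
        # evidence for an effect. The example values are made up.
        def classify(bf):
            if bf < 1 / 3:
                return "evidence for no effect"
            if bf > 3:
                return "evidence for an effect"
            return "data insensitive either way"

        for bf in [0.25, 0.8, 1.0, 2.5, 4.2]:
            print(f"BF = {bf:4.2f}: {classify(bf)}")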

    Is There a Free Lunch in Inference?

    The field of psychology, including cognitive science, is vexed by a crisis of confidence. Although the causes and solutions are varied, we focus here on a common logical problem in inference. The default mode of inference is significance testing, which has a free-lunch property: researchers need not make detailed assumptions about the alternative in order to test the null hypothesis. We present the argument that there is no free lunch; that is, valid testing requires that researchers test the null against a well-specified alternative. We show how this requirement follows from the basic tenets of conventional and Bayesian probability. Moreover, we show in both the conventional and Bayesian frameworks that not specifying the alternative may lead to rejections of the null hypothesis with scant evidence. We review both frequentist and Bayesian approaches to specifying alternatives, and we show how such specifications improve inference. The field of cognitive science will benefit because consideration of reasonable alternatives will undoubtedly sharpen the intellectual underpinnings of research.
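
    A small numerical illustration of a rejection carrying scant evidence (the normal model of the alternative and its scale are illustrative assumptions):

        # With a large sample, a just-significant result can actually favor
        # the null once a specific alternative is written down. Normal
        # likelihood; the alternative models the effect as Normal(0, 1).
        from scipy import stats

        se = 0.05                 # small standard error (large sample)
        m = 1.96 * se             # observed effect, just significant at p = .05
        prior_sd = 1.0            # scale of the alternative: an assumption

        m0 = stats.norm.pdf(m, 0, se)                          # P(data | H0)
        m1 = stats.norm.pdf(m, 0, (se**2 + prior_sd**2)**0.5)  # P(data | H1)
        print(f"BF01 = {m0 / m1:.1f}")                         # about 3: data favor the null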