453 research outputs found
Multiverse analyses in the classroom
Multivariate analysis of psychological data
Representational shifts during category learning
Prototype and exemplar models form two extremes in a class of mixture model accounts of human category learning. This class of models allows flexible representations that can interpolate from simple prototypes to highly differentiated exemplar accounts. We apply one such framework to data that afford an insight into the nature of representational changes during category learning. While generally supporting the notion of a prototype-to-exemplar shift during learning, the detailed analysis suggests that the nature of the changes is considerably more complex than previous work suggests.
Wolf Vanpaemel and Daniel J. Navarro
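As a minimal sketch of the idea (hypothetical stimulus values and a generic Shepard-style exponential similarity rule, not the specific framework fitted in the paper), category evidence can be computed as summed similarity to a set of representation points; that set can range from a single averaged prototype to every stored exemplar, with partially merged exemplars giving the intermediate representations.

    import numpy as np

    def category_evidence(probe, representation, c=1.0):
        """Summed exponential similarity of a probe to a category's
        representation points (Shepard/GCM-style similarity)."""
        dists = np.abs(representation - probe).sum(axis=1)  # city-block distance
        return np.exp(-c * dists).sum()

    # Hypothetical training exemplars of one category on two dimensions.
    exemplars = np.array([[0.2, 0.9], [0.3, 0.8], [0.1, 1.0], [0.4, 0.7]])
    probe = np.array([0.25, 0.85])

    # The two extremes of the representational continuum:
    prototype_rep = exemplars.mean(axis=0, keepdims=True)  # one averaged point
    exemplar_rep = exemplars                                # every training item

    print(category_evidence(probe, prototype_rep))
    print(category_evidence(probe, exemplar_rep))
    # Representations built by partially merging exemplars interpolate
    # between these two extremes.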
How do I know what my theory predicts?
To get evidence for or against a theory relative to the null hypothesis, one needs to know what the theory predicts. The amount of evidence can then be quantified by a Bayes factor. Specifying the size of the effect one’s theory predicts may not come naturally, but I show some ways of thinking about the problem: simple heuristics that are often useful when one has little relevant prior information. These heuristics include the room-to-move heuristic (for comparing mean differences), the ratio-of-scales heuristic (for regression slopes), the ratio-of-means heuristic (for regression slopes), the basic-effect heuristic (for analysis of variance effects), and the total-effect heuristic (for mediation analysis).
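As a rough illustration of how a predicted effect size enters such a calculation (a sketch, not the author's calculator: it assumes a normal likelihood for an observed mean difference, models H1 as a half-normal whose scale is set by one of the heuristics, and uses a simple grid approximation with made-up numbers):

    import numpy as np
    from scipy import stats

    def bayes_factor_half_normal(obs_diff, se, h1_scale):
        """BF10 for a point null versus a half-normal model of H1,
        with the marginal likelihood approximated on a grid."""
        deltas = np.linspace(0.0, 10 * h1_scale, 2000)           # candidate true effects
        prior = stats.halfnorm.pdf(deltas, scale=h1_scale)       # H1: half-normal prior
        likelihood = stats.norm.pdf(obs_diff, loc=deltas, scale=se)
        p_data_h1 = np.sum(prior * likelihood) * (deltas[1] - deltas[0])
        p_data_h0 = stats.norm.pdf(obs_diff, loc=0.0, scale=se)  # point null at zero
        return p_data_h1 / p_data_h0

    # E.g. room-to-move reasoning might suggest the effect could plausibly be
    # anywhere up to about 10 units, motivating a half-normal with scale 5.
    print(bayes_factor_half_normal(obs_diff=4.0, se=2.0, h1_scale=5.0))
    # Roughly 4 here: moderate support for H1 over H0, given this model of H1.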
A probabilistic threshold model: Analyzing semantic categorization data with the Rasch model
According to Threshold Theory (Hampton, 1995, 2007), semantic categorization decisions come about through the placement of a threshold criterion along a dimension that represents items' similarity to the category representation. The adequacy of this theory is assessed by applying a formalization of it, known as the Rasch model (Rasch, 1960; Thissen & Steinberg, 1986), to categorization data for eight natural language categories and subjecting it to a formal test. In validating the model, special care is given to its ability to account for inter- and intra-individual differences in categorization and their relationship with item typicality. Extensions of the Rasch model that can be used to uncover the nature of category representations and the sources of categorization differences are discussed.
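For reference, the standard Rasch form on which this reading of Threshold Theory builds can be written as (with the parameter interpretation described in the abstract):

    \Pr(Y_{pi} = 1) = \frac{\exp(\theta_p - \beta_i)}{1 + \exp(\theta_p - \beta_i)}

where Y_{pi} = 1 means that participant p endorses item i as a category member, \beta_i reflects the item's position on the underlying similarity dimension (lower for more typical items), and \theta_p reflects how liberal participant p's threshold is.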
Using Bayes to get the most out of non-significant results
No scientific conclusion follows automatically from a statistically non-significant result, yet people routinely use non-significant results to guide conclusions about the status of theories (or the effectiveness of practices). To know whether a non-significant result counts against a theory, or whether it just indicates that the data are insensitive, researchers must use one of: power, intervals (such as confidence or credibility intervals), or an indicator of the relative evidence for one theory over another, such as a Bayes factor. I argue that Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches. Specifically, Bayes factors use the data themselves to determine their sensitivity in distinguishing theories (unlike power), and they make use of those aspects of a theory’s predictions that are often easiest to specify (unlike power and intervals, which require specifying the minimal interesting value in order to address the theory). Bayes factors provide a coherent approach to determining whether non-significant results support a null hypothesis over a theory, or whether the data are just insensitive. They allow accepting and rejecting the null hypothesis to be put on an equal footing. Concrete examples are provided to indicate the range of application of a simple online Bayes calculator, and these reveal both the strengths and weaknesses of Bayes factors.
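Using the hypothetical bayes_factor_half_normal sketch given under "How do I know what my theory predicts?" above, the distinction drawn here can be made concrete: the same small, non-significant mean difference is about equally likely under H0 and a half-normal H1 when the standard error is large, but favours H0 when the standard error is small (illustrative numbers only).

    # Both results are non-significant, but only the second is sensitive:
    print(bayes_factor_half_normal(obs_diff=0.5, se=10.0, h1_scale=5.0))  # near 1: data insensitive
    print(bayes_factor_half_normal(obs_diff=0.5, se=0.5, h1_scale=5.0))   # below 1/3: favours H0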
Depressive Symptoms and Category Learning: A Preregistered Conceptual Replication Study
We present a fully preregistered, high-powered conceptual replication of Experiment 1 by Smith, Tracy, and Murray (1993). They observed a cognitive deficit in people with elevated depressive symptoms in a task requiring flexible analytic processing and deliberate hypothesis testing, but no deficit in a task assumed to require more automatic, holistic processing. Specifically, they found that individuals with depressive symptoms showed impaired performance on a criterial-attribute classification task, requiring flexible analysis of the attributes and deliberate hypothesis testing, but not on a family-resemblance classification task, assumed to rely on holistic processing. While deficits in tasks requiring flexible hypothesis testing are commonly observed in people diagnosed with a major depressive disorder, these deficits are much less commonly observed in people with merely elevated depressive symptoms, and therefore Smith et al.’s (1993) finding deserves further scrutiny. We observed no deficit in performance on the criterial-attribute task in people with above-average depressive symptoms. Rather, we found a similar difference in performance on the criterial-attribute versus family-resemblance task between people with high and low depressive symptoms. The absence of a deficit in people with elevated depressive symptoms is consistent with previous findings focusing on different tasks.
[37th] ANNUAL REPORT OF THE FACULTY OF THE COLLEGE OF THE CITY OF NEW YORK TO THE BOARD OF TRUSTEES, FOR THE YEAR ENDING JUNE 21, 1888.
Report fourteen from the sixth bound volume of ten, which documents in part the first nineteen years of The Free Academy, the predecessor of the educational institution, City College of New York. COLLEGE OF THE CITY OF NEW YORK, 1856-96, REPORTS OF THE FACULTY II includes 21 individual reports. At a time when municipal education constituted primary schooling, citizens united in response to arguments presented by a merchant and Board of Education President, Townsend Harris, for the necessity of an institution that would provide advanced training for future generations of citizens to fully engage in the professions advantageous to an expanding urban center. Includes preliminary reports that commented on the application of resources for the creation of the institution and the annual reports of the faculty, demonstrating accountability to the Board of Education with regard to the operation of the facility. [6 pages ([325]-330), 1888]
Four reasons to prefer Bayesian analyses over significance testing
Inference using significance testing and Bayes factors is compared and contrasted in five case studies based on real research. The first study illustrates that the two methods will often agree, both in motivating researchers to conclude that H1 is supported better than H0 and, the other way round, that H0 is supported better than H1. The next four, however, show that the methods will also often disagree. In these cases, the aim of the paper is to motivate the sensible evidential conclusion and then see which approach matches those intuitions. Specifically, it is shown that a high-powered non-significant result is consistent with no evidence for H0 over H1 worth mentioning, which a Bayes factor can show, and, conversely, that a low-powered non-significant result is consistent with substantial evidence for H0 over H1, again indicated by Bayesian analyses. The fourth study illustrates that a high-powered significant result may not amount to any evidence for H1 over H0, matching the Bayesian conclusion. Finally, the fifth study illustrates that different theories can be evidentially supported to different degrees by the same data, a fact that P-values cannot reflect but Bayes factors can. It is argued that appropriate conclusions match the Bayesian inferences, but not those based on significance testing, where the two disagree.
Measuring category intuitiveness in unconstrained categorization tasks
What makes a category seem natural or intuitive? In this paper, an unsupervised categorization task was employed to examine observer agreement concerning the categorization of nine different stimulus sets. The stimulus sets were designed to capture different intuitions about classification structure. The main empirical index of category intuitiveness was the frequency of the preferred classification for each stimulus set. With 169 participants in a within-participants design, the most frequent classification was produced over 50 times for some stimulus sets but no more than two or three times for others. The main empirical finding was that cluster tightness was more important than cluster separation in determining category intuitiveness. The results were considered in relation to the following models of unsupervised categorization: DIVA, the rational model, the simplicity model, SUSTAIN, an unsupervised version of the Generalized Context Model (UGCM), and a simple geometric model based on similarity. DIVA, the geometric approach, SUSTAIN, and the UGCM provided good, though not perfect, fits. Overall, the present work highlights several theoretical and practical issues regarding unsupervised categorization and reveals weaknesses in some of the corresponding formal models.
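A rough sketch of how such an agreement index might be computed (hypothetical data; it assumes two classifications count as the same whenever they agree up to relabelling of the clusters):

    from collections import Counter

    def canonical(assignment):
        """Relabel clusters by order of first appearance, so that partitions
        that are identical up to label permutation compare equal."""
        mapping, labels = {}, []
        for a in assignment:
            if a not in mapping:
                mapping[a] = len(mapping)
            labels.append(mapping[a])
        return tuple(labels)

    # Hypothetical classifications of six items by five participants
    # (each number is the cluster a participant assigned that item to).
    classifications = [
        [0, 0, 1, 1, 2, 2],
        [1, 1, 0, 0, 2, 2],   # same partition as the first, different labels
        [0, 0, 0, 1, 1, 1],
        [2, 2, 0, 0, 1, 1],   # the first partition again
        [0, 1, 0, 1, 0, 1],
    ]

    counts = Counter(canonical(c) for c in classifications)
    partition, freq = counts.most_common(1)[0]
    print(partition, freq)   # the preferred classification and how often it occurred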