
    Insight Problem Solving: A Critical Examination of the Possibility of Formal Theory

    This paper provides a critical examination of the current state and future possibility of formal cognitive theory for insight problem solving and its associated “aha!” experience. Insight problems are contrasted with move problems, which have been formally defined and studied extensively by cognitive psychologists since the pioneering work of Allen Newell and Herbert Simon. To facilitate our discussion, a number of classical brainteasers are presented along with their solutions and some conclusions derived from observing the behavior of many students trying to solve them. Some of these problems are interesting in their own right, and many of them have not been discussed before in the psychological literature. The main purpose of presenting the brainteasers is to assist in discussing the status of formal cognitive theory for insight problem solving, which is argued to be considerably weaker than that found in other areas of higher cognition such as human memory, decision-making, categorization, and perception. We discuss theoretical barriers that have plagued the development of successful formal theory for insight problem solving, and a few suggestions are made that might serve to advance the field.

    Hierarchical Paired Comparison Modeling, A Cultural Consensus Theory Approach

    We introduce a set of models designed to analyze datasets involving responses from multiple subjects on pairwise comparisons from a fixed discrete set of alternatives. These models are part of a larger body of work known as Cultural Consensus Theory (CCT). Like other CCT models, they simultaneously infer each individual's tendency toward aligning with the group consensus, the level of agreement on each item, and a latent consensus value for each alternative. Two primary models are discussed, referred to as the Strong and Weak Consensus Paired-Comparison Models (SCPCM and WCPCM, respectively). The SCPCM assumes that all individuals answer in accordance with the latent consensus values but with varying degrees of accuracy, while the WCPCM relaxes this assumption and allows minor deviations from the latent consensus values in people's average valuation of the alternatives. The WCPCM also includes inferences on participants' individual tendencies toward self-consistency (related to their tendencies to commit violations of transitivity), as well as on the tendency of each item to be evaluated consistently across individuals. The Case III Thurstonian model is used as the backbone for both CPCMs, and inference is conducted in a hierarchical Bayesian framework. Model checks, along with applications to simulated and real data, are presented.
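
    As a rough illustration of the kind of structure the CPCMs build on, the sketch below simulates paired-comparison data from a Thurstone Case III style model with person-level competence parameters. The variable names (consensus, item_sd, competence) and the values are illustrative assumptions, not the paper's notation, and the sketch does not attempt the hierarchical Bayesian inference described in the abstract.

```python
# Minimal sketch of a Thurstone Case III style paired-comparison generator,
# loosely in the spirit of the SCPCM; names and values are illustrative only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

n_items, n_subjects = 5, 20
consensus = rng.normal(0.0, 1.0, n_items)       # latent consensus value of each alternative
item_sd = rng.uniform(0.5, 1.5, n_items)        # Case III: unequal discriminal dispersions
competence = rng.uniform(0.5, 2.0, n_subjects)  # each subject's tendency to match the consensus

def p_prefer(i, j, s):
    """Probability that subject s prefers item i over item j."""
    scale = np.sqrt(item_sd[i] ** 2 + item_sd[j] ** 2)
    return norm.cdf(competence[s] * (consensus[i] - consensus[j]) / scale)

# Simulate one round of all pairwise judgments for every subject
data = {
    (s, i, j): rng.random() < p_prefer(i, j, s)
    for s in range(n_subjects)
    for i in range(n_items)
    for j in range(i + 1, n_items)
}
print(sum(data.values()), "of", len(data), "comparisons favored the lower-indexed item")
```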

    The statistical analysis of general processing tree models with the EM algorithm

    Multinomial processing tree models assume that an observed behavior category can arise from one or more processing sequences represented as branches in a tree. These models form a subclass of parametric, multinomial models, and they provide a substantively motivated alternative to loglinear models. We consider the usual case where branch probabilities are products of nonnegative integer powers in the parameters, 0 ≤ θ_s ≤ 1, and their complements, 1 − θ_s. A version of the EM algorithm is constructed that has very strong properties. First, the E-step and the M-step are both analytic and computationally easy; therefore, a fast PC program can be constructed for obtaining MLEs for large numbers of parameters. Second, a closed form expression for the observed Fisher information matrix is obtained for the entire class. Third, it is proved that the algorithm necessarily converges to a local maximum, and this is a stronger result than for the exponential family as a whole. Fourth, we show how the algorithm can handle quite general hypothesis tests concerning restrictions on the model parameters. Fifth, we extend the algorithm to handle the Read and Cressie power divergence family of goodness-of-fit statistics. The paper includes an example to illustrate some of these results. © 1994 The Psychometric Society
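
    To make the analytic E- and M-steps concrete, here is a minimal sketch of the EM updates for a toy two-parameter MPT: a one-high-threshold recognition tree in which an "old" response to a studied item can arise either from detection (D) or from guessing (g). The counts and starting values are made up for illustration, and the tree is far simpler than the general class treated in the paper.

```python
import numpy as np

# Observed counts: hypothetical data, not from the paper
n_hit, n_miss = 60, 40    # old items: "old" vs "new" responses
n_fa, n_cr = 20, 80       # new items: "old" vs "new" responses
n_old, n_new = n_hit + n_miss, n_fa + n_cr

D, g = 0.5, 0.5  # starting values for detection and guessing parameters
for _ in range(200):
    # E-step: split the hit count over its two branches (detected vs. guessed "old")
    p_hit = D + (1 - D) * g
    e_detect = n_hit * D / p_hit
    e_guess_old = n_hit * (1 - D) * g / p_hit
    # M-step: closed-form updates; each parameter is a ratio of expected counts
    D = e_detect / n_old
    g = (e_guess_old + n_fa) / ((n_old - e_detect) + n_new)

print(f"MLEs: D = {D:.3f}, g = {g:.3f}")
```

    For this toy tree the fixed point matches the closed-form MLEs (g equals the false-alarm rate and D is recovered from the hit rate), which illustrates why the analytic E- and M-steps make the general algorithm fast.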

    Measuring Memory Factors in Source Monitoring: Reply to Kinchla

    Kinchla criticizes Batchelder and Riefer's multinomial model for source monitoring, primarily its high-threshold assumptions, and he advocates an approach based on statistical decision theory (SDT). In this reply, the authors lay out some of the considerations that led to their model and then raise some specific concerns with Kinchla's critique. The authors point out that most of his criticisms are drawn from contrasting the high threshold and the Gaussian, equal-variance SDT models on receiver operating characteristic (ROC) curves for yes-no recognition memory. They indicate how source monitoring is more complicated than yes-no recognition and question the validity of standard ROC analyses in source monitoring. The authors argue that their model is a good approximation for measuring differences between sources on old-new detection and that it has the ability to measure source discrimination as well as detection. The authors also explore a low-threshold multinomial model and discuss the application of SDT models to source monitoring.
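
    The contrast at the center of the exchange can be illustrated with the yes-no ROC predictions of the two model families: the high-threshold model implies a linear ROC, whereas the equal-variance Gaussian SDT model implies a curved one. The sketch below uses illustrative parameter values that are not taken from the article.

```python
import numpy as np
from scipy.stats import norm

# Illustrative parameter values, not taken from the article
D = 0.5        # high-threshold detection probability
d_prime = 1.2  # equal-variance SDT sensitivity
fa = np.linspace(0.01, 0.99, 99)   # false-alarm rate swept by guessing / criterion

# High-threshold model: hit = D + (1 - D) * g with guessing rate g = fa  -> linear ROC
ht_hit = D + (1 - D) * fa

# Equal-variance Gaussian SDT: criterion chosen to give the same false-alarm rate -> curved ROC
sdt_hit = norm.cdf(d_prime + norm.ppf(fa))

for f, h_ht, h_sdt in zip(fa[::24], ht_hit[::24], sdt_hit[::24]):
    print(f"FA={f:.2f}  HT hit={h_ht:.2f}  SDT hit={h_sdt:.2f}")
```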

    A measurement-theoretic analysis of the fuzzy logic model of perception

    The fuzzy logic model of perception (FLMP) is analyzed from a measurement-theoretic perspective. FLMP has an impressive history of fitting factorial data, suggesting that its probabilistic form is valid. The authors raise questions about the underlying processing assumptions of FLMP. Although FLMP parameters are interpreted as fuzzy logic truth values, the authors demonstrate that for several factorial designs widely used in choice experiments, most desirable fuzzy truth value properties fail to hold under permissible rescalings, suggesting that the fuzzy logic interpretation may be unwarranted. The authors show that FLMP's choice rule is equivalent to a version of G. Rasch's (1960) item response theory model, and the nature of FLMP measurement scales is transparent when stated in this form. Statistical inference theory exists for the Rasch model and its equivalent forms. In fact, FLMP can be reparameterized as a simple 2-category logit model, thereby facilitating interpretation of its measurement scales and allowing access to commercially available software for performing statistical inference.
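
    The equivalence mentioned in the abstract is easy to verify numerically: applying the logit to the FLMP choice rule makes it additive in the logits of the two truth values, which is the Rasch-type, 2-category logit form. The truth values below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def flmp_prob(a, b):
    """FLMP choice rule: fuzzy truth values a, b for the two feature dimensions."""
    return a * b / (a * b + (1 - a) * (1 - b))

def logit(p):
    return np.log(p / (1 - p))

def logit_model_prob(a, b):
    """Equivalent additive (Rasch-type / 2-category logit) form of the same rule."""
    eta = logit(a) + logit(b)
    return 1 / (1 + np.exp(-eta))

a_levels = np.array([0.2, 0.5, 0.8])   # illustrative truth values for factor 1
b_levels = np.array([0.3, 0.6, 0.9])   # illustrative truth values for factor 2
for a in a_levels:
    for b in b_levels:
        assert np.isclose(flmp_prob(a, b), logit_model_prob(a, b))
print("FLMP choice rule and additive logit form agree on all factorial cells")
```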

    Multinomial processing tree models for discrete choice

    This paper shows how to develop new multinomial processing tree (MPT) models for discrete choice, and in particular binary choice. First, it reviews the history of discrete choice with special attention to Duncan Luce's book Individual Choice Behavior. Luce's choice axiom leads to the Bradley-Terry-Luce (BTL) paired-comparison model, which is the basis of logit models of discrete choice used throughout the social and behavioral sciences. It is shown that a reparameterization of the BTL model is represented by choice probabilities generated from a finite-state Markov chain, and this representation is closely related to the rooted tree structure of MPT models. New MPT models of binary choice can be obtained by placing restrictions on this representation of the BTL model. Several of these new MPT models for paired comparisons are described, compared to the BTL model, and applied to data from a replicated round-robin data structure. © 2009 Hogrefe Publishing
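
    For reference, the BTL rule underlying these models, applied to a simulated replicated round-robin design, can be sketched as follows. The scale values are illustrative, and the sketch does not reproduce the paper's Markov-chain reparameterization or its restricted MPT models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative BTL scale values for four alternatives (not from the paper)
v = np.array([1.0, 2.0, 3.0, 4.0])

def btl_prob(i, j):
    """Bradley-Terry-Luce probability that alternative i is chosen over alternative j."""
    return v[i] / (v[i] + v[j])

# Simulate a replicated round-robin design: every pair judged n_rep times
n_rep, n_alt = 50, len(v)
wins = np.zeros((n_alt, n_alt), dtype=int)
for i in range(n_alt):
    for j in range(i + 1, n_alt):
        wins[i, j] = rng.binomial(n_rep, btl_prob(i, j))
        wins[j, i] = n_rep - wins[i, j]

print("Observed choice proportions (row chosen over column):")
print(np.round(wins / n_rep, 2))
```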

    Response Strategies in Source Monitoring

    This article examines the role that response strategies play in a memory paradigm known as source monitoring. It is argued that several different response biases can interact to confound the interpretation of source-monitoring data. This problem is illustrated with 2 empirical examples, taken from the psychological literature, which examine the role of source monitoring in the generation effect and the picture superiority effect. To resolve this problem, a new multinomial model for source monitoring is presented that is capable of measuring memory factors separately from response-bias factors. When applied to the results of 2 new experiments, the model yields a clearer picture of which source-monitoring variables are instrumental in the generation effect and the picture superiority effect.
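
    A sketch of the category probabilities for a generic two-source monitoring tree of this kind is given below. The parameter names follow common MPT conventions (item detection, source discrimination, and separate guessing biases) and are assumptions here rather than the article's exact specification.

```python
# Sketch of category probabilities for a generic two-source monitoring tree.
# Parameters: D = item detection, d = source discrimination, a = source guess
# when detected but not discriminated, b = bias to guess "old" for undetected
# items, g = source guess for items judged old without detection.
# These names are illustrative, not the article's exact model.
def source_a_probs(D, d, a, b, g):
    p_a = D * d + D * (1 - d) * a + (1 - D) * b * g       # respond "source A"
    p_b = D * (1 - d) * (1 - a) + (1 - D) * b * (1 - g)   # respond "source B"
    p_new = (1 - D) * (1 - b)                              # respond "new"
    return p_a, p_b, p_new

def new_item_probs(b, g):
    return b * g, b * (1 - g), 1 - b

probs = source_a_probs(D=0.7, d=0.5, a=0.5, b=0.4, g=0.5)
print("Source-A item category probabilities:", [round(p, 3) for p in probs])
assert abs(sum(probs) - 1.0) < 1e-12   # branch probabilities within a tree sum to 1
```

    Because memory parameters (D, d) and bias parameters (a, b, g) enter the category probabilities separately, fitting such a tree is one way to disentangle the memory and response-strategy contributions that the article argues are confounded in raw source-monitoring data.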