What does the mind learn? A comparison of human and machine learning representations
We present a brief review of modern machine learning techniques and their use in models of human mental representations, detailing three notable branches: spatial methods, logical methods and artificial neural networks. Each of these branches contains an extensive set of systems, and demonstrates accurate emulations of human learning of categories, concepts and language, despite substantial differences in operation. We suggest that continued applications will allow cognitive researchers to model the complex real-world problems where machine learning has recently been successful, providing more complete behavioural descriptions. This will, however, also require careful consideration of appropriate algorithmic constraints alongside these methods, in order to find a combination that captures both the strengths and weaknesses of human cognition.
A rational approach to stereotype change
Existing theories of stereotype change have often made use of categorisation principles in order to provide qualitative explanations for both the revision and maintenance of stereotypical beliefs. The present paper examines the quantitative methods underlying these explanations, contrasting both rational and heuristic models of stereotype change using participant data and model fits. In a comparison of three models, each simulating existing descriptions of stereotype change, both empirical data and model fits suggest that stereotypes are updated using rational categorisation processes. This presents stereotype use as a more rational behaviour than may commonly be assumed, and provides new avenues for encouraging stereotype change according to rational principles.
Form and function: assessing the impact of mental representation on behaviour using computational models
This thesis presents three studies examining the methods used by human learners to construct mental representations that reflect external data patterns, and the impact the form of these representations has on subsequent behaviour. This involves three varied tasks in which representations are built and updated from experience: stereotype change, numerical estimation and learning consolidation. Each of these studies uses computational models of these processes to offer potential descriptions of the mechanisms used to construct our representations, and assesses the accuracy of these descriptions using both qualitative and quantitative comparisons with human behaviour. Such contrasts reveal the importance of the form of our mental representations for related actions: stereotypical beliefs are coloured by the organisation of group members, numerical expectations depend on the assumed format of numerical information, and stimulus choices are influenced by connections forged through experience. This then provides insight into the mechanisms used by human learners in these tasks, and the specific impacts of such mechanisms on related behaviour. We do, however, also note questions raised by the use of such methods: namely, how accurately such highly complex systems describe human behaviour, and which algorithms might implement them in real life.
Probabilistic biases meet the Bayesian brain
Bayesian cognitive science sees the mind as a spectacular probabilistic inference machine. But Judgment and Decision Making (JDM) research has spent half a century uncovering how dramatically and systematically people depart from rational norms. This paper outlines recent research that opens up the possibility of an unexpected reconciliation. The key hypothesis is that the brain neither represents nor calculates with probabilities, but instead approximates probabilistic calculations by drawing samples from memory or mental simulation. Sampling models diverge from perfect probabilistic calculations in ways that capture many classic JDM findings, and offer the hope of an integrated explanation of classic heuristics and biases, including availability, representativeness, and anchoring and adjustment.
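The core sampling idea can be illustrated with a minimal sketch (the function names and parameters below are illustrative assumptions, not a model from the paper): an agent that estimates an event's probability from only a handful of mental samples produces widely scattered judgments, while many samples converge on the true value.

```python
import random

random.seed(0)  # fixed seed so the demonstration is reproducible

def sample_based_estimate(p_true, n_samples):
    """Estimate an event's probability by drawing a small number of
    binary samples, as a sampling account of judgment suggests."""
    hits = sum(random.random() < p_true for _ in range(n_samples))
    return hits / n_samples

def spread(xs):
    return max(xs) - min(xs)

# Few-sample estimates scatter widely around the true value (0.7),
# mimicking systematic-looking variability in human judgments;
# large-sample estimates cluster tightly around it.
few = [sample_based_estimate(0.7, 5) for _ in range(1000)]
many = [sample_based_estimate(0.7, 500) for _ in range(1000)]
print(spread(few), spread(many))
```

The point of the sketch is only that sample-based approximation, not miscalculation, suffices to generate variable and biased-looking probability judgments.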
Sampling as a resource-rational constraint
Resource rationality is useful for choosing between models with the same cognitive constraints, but cannot settle fundamental disagreements about what those constraints are. We argue that sampling is an especially compelling constraint, as optimizing the accumulation of evidence or hypotheses minimizes the cost of time, and there are well-established models for doing so that have had tremendous success explaining human behavior.
Perceptual and cognitive judgments show both anchoring and repulsion
One of the most robust effects in cognitive psychology is anchoring: judgments show a bias towards previously viewed values. However, in what is essentially the same task, a perceptual illusion demonstrates the opposite effect of repulsion. Here we unite these two literatures, testing in two experiments with adults (total N=200) whether prior comparative decisions bias cognitive and perceptual judgments in opposing directions, or whether anchoring and repulsion are two domain-general biases whose co-occurrence has so far gone undetected. We find that anchoring and repulsion co-occur in both perceptual and cognitive tasks, with the direction of the bias depending on the comparison value: distant values attract judgments, while nearby values repel them. As none of the leading theories for either effect accounts for both biases, theoretical integration is needed. As a starting point, we describe one such integration based on sampling models of cognition.
The autocorrelated Bayesian sampler: a rational process for probability judgments, estimates, confidence intervals, choices, confidence judgments, and response times
Normative models of decision-making that optimally transform noisy (sensory) information into categorical decisions qualitatively mismatch human behavior. Indeed, leading computational models have only achieved high empirical corroboration by adding task-specific assumptions that deviate from normative principles. In response, we offer a Bayesian approach that implicitly produces a posterior distribution of possible answers (hypotheses) in response to sensory information. We assume, however, that the brain has no direct access to this posterior, and can only sample hypotheses according to their posterior probabilities. Accordingly, we argue that the primary problem of normative concern in decision-making is integrating stochastic hypotheses, rather than stochastic sensory information, to make categorical decisions. This implies that human response variability arises mainly from posterior sampling rather than sensory noise. Because human hypothesis generation is serially correlated, hypothesis samples will be autocorrelated. Guided by this new problem formulation, we develop a new process, the Autocorrelated Bayesian Sampler (ABS), which grounds autocorrelated hypothesis generation in a sophisticated sampling algorithm. The ABS provides a single mechanism that qualitatively explains many empirical effects in probability judgments, estimates, confidence intervals, choices, confidence judgments, response times, and their relationships. Our analysis demonstrates the unifying power of a perspective shift in the exploration of normative models. It also exemplifies the proposal that the “Bayesian brain” operates using samples not probabilities, and that variability in human behavior may primarily reflect computational rather than sensory noise.
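Why local sampling yields autocorrelated hypotheses can be seen in a minimal sketch. The toy Gaussian posterior and the random-walk Metropolis sampler below are illustrative assumptions of ours; the ABS itself grounds hypothesis generation in a more sophisticated algorithm that we do not reproduce here.

```python
import math
import random

random.seed(1)  # fixed seed for a reproducible chain

def posterior_logpdf(x, mu=0.0, sigma=1.0):
    # Stand-in posterior over hypotheses: a simple unit Gaussian.
    return -0.5 * ((x - mu) / sigma) ** 2

def metropolis_samples(n, step=0.5):
    """Draw hypotheses with a local random-walk Metropolis sampler.
    Because each proposal is a small step from the current hypothesis,
    successive samples are serially correlated."""
    x, out = 0.0, []
    for _ in range(n):
        proposal = x + random.gauss(0.0, step)
        accept = posterior_logpdf(proposal) - posterior_logpdf(x)
        if math.log(random.random()) < accept:
            x = proposal
        out.append(x)
    return out

def lag1_autocorr(xs):
    """Lag-1 autocorrelation of a sequence."""
    m = sum(xs) / len(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den

chain = metropolis_samples(5000)
print(lag1_autocorr(chain))  # clearly positive: hypotheses are autocorrelated
```

Even this crude chain produces strongly positive lag-1 autocorrelation, the signature of serially correlated hypothesis generation that the abstract describes.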
Noise in cognition: bug or feature?
Noise in behavior is often viewed as a nuisance: while the mind aims to take the best possible action, it is let down by unreliability in the sensory and response systems. How researchers study cognition reflects this viewpoint: averaging over trials and participants to discover the deterministic relationships between experimental manipulations and their behavioral consequences, with noise represented as additive, often Gaussian, and independent. Yet a careful look at behavioral noise reveals rich structure that defies easy explanation. First, both perceptual and preferential judgments show that sensory and response noise may play only minor roles, with most noise arising in the cognitive computations. Second, the functional form of the noise is both non-Gaussian and non-independent, with the distribution of noise better characterized as heavy-tailed and as having substantial long-range autocorrelations. It is possible that this structure results from brains that are, for some reason, bedeviled by a fundamental design flaw, albeit one with intriguingly distinctive characteristics. Alternatively, noise might not be a bug but a feature: indeed, we suggest that noise is fundamental to how cognition works. Specifically, we propose that the brain approximates probabilistic inference with a local sampling algorithm, one that uses randomness to drive its exploration of alternative hypotheses. Reframing cognition in this way explains the rich structure of noise and leads to a surprising conclusion: noise is not a symptom of cognitive malfunction but plays a central role in underpinning human intelligence.
Using Occam’s razor and Bayesian modelling to compare discrete and continuous representations in numerosity judgements
Previous research has suggested that numerosity judgements are based not just on perceptual data but also on past experience, and so may be influenced by the form of this stored information. The representation of such experience is unclear, however: numerical data can be represented by either continuous or discrete systems, each predicting different generalisation effects. This study therefore contrasts discrete and continuous prior formats within numerical estimation, using both direct comparisons of computational models employing these representations and empirical contrasts exploiting the different predicted reactions of these formats to uncertainty via Occam’s razor. Both computational and empirical results indicate that numerosity judgements rely on a continuous prior format, mirroring the analogue approximate number system, or number sense. This implies a preference for the use of continuous numerical representations even where both stimuli and responses are discrete, with learners seemingly relying on innate number systems rather than symbolic forms acquired in later life.