The development of path integration: combining estimations of distance and heading
Efficient daily navigation is underpinned by path integration, the mechanism by which we use self-movement information to update our position in space. This process is well understood in adulthood, but there has been relatively little study of path integration in childhood, leading to its underrepresentation in accounts of navigational development. Previous research has shown that calculation of both distance and heading tends to be less accurate in children than in adults, although there have been no studies of the combined calculation of distance and heading that typifies naturalistic path integration. In the present study, 5-year-olds and 7-year-olds took part in a triangle-completion task in which they were required to return to the start point of a multi-element path using only idiothetic information. Performance was compared with that of a sample of adult participants, who were found to be more accurate than children on measures of landing error, heading error, and distance error. 7-year-olds were significantly more accurate than 5-year-olds on measures of landing error and heading error, although the difference between groups was much smaller for distance error. All measures were reliably correlated with age, demonstrating a clear development of path integration abilities within the age range tested. Taken together, these data make a strong case for the inclusion of path integration within developmental models of spatial navigational processing.
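The three dependent measures named in the abstract have simple geometric definitions. The sketch below is an illustration only, not the authors' analysis code; it shows one way to compute landing, heading, and distance error for a single triangle-completion response, with the coordinate convention and function name being assumptions.

```python
import numpy as np

# Illustrative sketch (not the study's analysis code) of the three error
# measures for one triangle-completion trial. The start point defaults to
# the origin; `turn_point` is where the outbound path ended.
def completion_errors(turn_point, response_point, start_point=(0.0, 0.0)):
    turn = np.asarray(turn_point, dtype=float)
    resp = np.asarray(response_point, dtype=float)
    start = np.asarray(start_point, dtype=float)

    # Landing error: straight-line distance between the response and the start.
    landing_error = np.linalg.norm(resp - start)

    # Heading error: angle between the correct homing direction (turn -> start)
    # and the direction actually produced (turn -> response).
    correct_vec = start - turn
    produced_vec = resp - turn
    heading_error = np.degrees(
        np.arctan2(produced_vec[1], produced_vec[0])
        - np.arctan2(correct_vec[1], correct_vec[0])
    )
    heading_error = (heading_error + 180.0) % 360.0 - 180.0  # wrap to [-180, 180]

    # Distance error: produced homing distance minus required homing distance.
    distance_error = np.linalg.norm(produced_vec) - np.linalg.norm(correct_vec)

    return landing_error, heading_error, distance_error
```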
Modelling human visual navigation using multi-view scene reconstruction
It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as these, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
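As a rough illustration of the likelihood-map idea described above (not the paper's photogrammetric pipeline), the sketch below scores a grid of candidate observer positions by how well the bearings they predict to known landmark locations match the bearings actually observed. The Gaussian bearing-noise model and the `sigma_deg` parameter are assumptions introduced for the example.

```python
import numpy as np

# Illustrative sketch only: a likelihood map over candidate observer positions,
# built by comparing predicted landmark bearings at each candidate position with
# the bearings the observer actually reported (both in degrees).
def bearing_likelihood_map(landmarks, observed_bearings, grid_x, grid_y, sigma_deg=5.0):
    landmarks = np.asarray(landmarks, dtype=float)          # (n_landmarks, 2) positions
    observed = np.radians(np.asarray(observed_bearings))    # (n_landmarks,) bearings
    sigma = np.radians(sigma_deg)                           # assumed bearing noise

    log_map = np.zeros((len(grid_y), len(grid_x)))
    for iy, y in enumerate(grid_y):
        for ix, x in enumerate(grid_x):
            predicted = np.arctan2(landmarks[:, 1] - y, landmarks[:, 0] - x)
            err = np.angle(np.exp(1j * (predicted - observed)))  # wrapped difference
            log_map[iy, ix] = -0.5 * np.sum((err / sigma) ** 2)

    log_map -= log_map.max()
    likelihood = np.exp(log_map)
    return likelihood / likelihood.sum()   # normalised map over the grid
```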
A nonlinear updating algorithm captures suboptimal inference in the presence of signal-dependent noise
Bayesian models have advanced the idea that humans combine prior beliefs and sensory observations to optimize behavior. How the brain implements Bayes-optimal inference, however, remains poorly understood. Simple behavioral tasks suggest that the brain can flexibly represent probability distributions. An alternative view is that the brain relies on simple algorithms that can implement Bayes-optimal behavior only when the computational demands are low. To distinguish between these alternatives, we devised a task in which Bayes-optimal performance could not be matched by simple algorithms. We asked subjects to estimate and reproduce a time interval by combining prior information with one or two sequential measurements. In the domain of time, measurement noise increases with duration. This property takes the integration of multiple measurements beyond the reach of simple algorithms. We found that subjects were able to update their estimates using the second measurement, but their performance was suboptimal, suggesting that they were unable to update full probability distributions. Instead, subjects’ behavior was consistent with an algorithm that predicts upcoming sensory signals and applies a nonlinear function to errors in prediction to update estimates. These results indicate that the inference strategies employed by humans may deviate from Bayes-optimal integration when the computational demands are high.
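For a concrete sense of the ideal-observer benchmark implied here, the sketch below computes a Bayes least-squares estimate of a duration from one or two measurements whose noise scales with the hypothesized duration. The uniform prior support and Weber fraction are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hedged sketch of a Bayes least-squares (posterior mean) estimator for a duration,
# combining a prior with one or two measurements whose standard deviation grows
# with the candidate duration (sigma = weber * t). Prior range and Weber fraction
# are assumptions for illustration.
def bls_estimate(measurements, weber=0.15, prior_range=(0.6, 1.0), n_grid=2000):
    t = np.linspace(prior_range[0], prior_range[1], n_grid)  # candidate durations (s)
    log_post = np.zeros_like(t)                              # flat prior on the support
    for m in measurements:
        sigma = weber * t                                    # signal-dependent noise
        log_post += -0.5 * ((m - t) / sigma) ** 2 - np.log(sigma)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return float(np.sum(t * post))                           # posterior mean (BLS)

# Example: estimate from one measurement vs. from two sequential measurements.
print(bls_estimate([0.85]), bls_estimate([0.85, 0.78]))
```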
A Rescorla-Wagner Drift-Diffusion Model of Conditioning and Timing
Computational models of classical conditioning have made significant contributions to the theoretical understanding of associative learning, yet they still struggle when the temporal aspects of conditioning are taken into account. Interval timing models have contributed a rich variety of time representations and provided accurate predictions for the timing of responses, but they usually have little to say about associative learning. In this article we present a unified model of conditioning and timing that is based on the influential Rescorla-Wagner conditioning model and the more recently developed Timing Drift-Diffusion model. We test the model by simulating 10 experimental phenomena and show that it can provide an adequate account of 8 and a partial account of the other 2. We argue that the model can account for more phenomena in the chosen set than other models of similar scope: CSC-TD, MS-TD, Learning to Time, and Modular Theory. A comparison and analysis of the mechanisms in these models is provided, with a focus on the types of time representation and associative learning rule used.
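For reference, the associative-learning half of the unified model builds on the classic Rescorla-Wagner error-correction rule. The minimal sketch below shows that rule on its own (the drift-diffusion timing component is omitted), with illustrative learning-rate values.

```python
# Minimal sketch of the Rescorla-Wagner update: on each trial the associative
# strength of every present CS moves toward the outcome value (lambda) in
# proportion to the shared prediction error. Parameter values are illustrative.
def rescorla_wagner_trial(V, present_cues, outcome_lambda, alpha=0.3, beta=1.0):
    prediction = sum(V[cs] for cs in present_cues)   # summed associative strength
    error = outcome_lambda - prediction              # prediction error
    for cs in present_cues:
        V[cs] += alpha * beta * error                # shared error drives learning
    return V

# Example: acquisition of a single CS over 20 reinforced trials.
V = {"light": 0.0}
for _ in range(20):
    rescorla_wagner_trial(V, ["light"], outcome_lambda=1.0)
print(round(V["light"], 3))   # approaches the asymptote of 1.0
```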
Frequency-specific hippocampal-prefrontal interactions during associative learning
Much of our knowledge of the world depends on learning associations (for example, face-name), for which the hippocampus (HPC) and prefrontal cortex (PFC) are critical. HPC-PFC interactions have rarely been studied in monkeys, whose cognitive and mnemonic abilities are akin to those of humans. We found functional differences and frequency-specific interactions between HPC and PFC of monkeys learning object pair associations, an animal model of human explicit memory. PFC spiking activity reflected learning in parallel with behavioral performance, whereas HPC neurons reflected feedback about whether trial-and-error guesses were correct or incorrect. Theta-band HPC-PFC synchrony was stronger after errors, was driven primarily by PFC-to-HPC directional influences, and decreased with learning. In contrast, alpha/beta-band synchrony was stronger after correct trials, was driven more by HPC, and increased with learning. Rapid object associative learning may occur in PFC, whereas HPC may guide neocortical plasticity by signaling success or failure via oscillatory synchrony in different frequency bands.
Funding: National Institute of Mental Health (U.S.), Conte Center Grant P50-MH094263-03; National Institute of Mental Health (U.S.), Fellowship F32-MH081507; Picower Foundation.
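One standard way to quantify the kind of band-specific HPC-PFC synchrony reported here is a phase-locking value computed on band-pass-filtered signals. The sketch below is illustrative only and is not the study's analysis pipeline; the band edges and sampling rate are placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Illustrative sketch: phase-locking value (PLV) between a hippocampal and a
# prefrontal signal in a chosen frequency band (defaults to a theta-like band).
def phase_locking_value(sig_hpc, sig_pfc, fs=1000.0, band=(4.0, 8.0)):
    # Band-pass both signals, then extract instantaneous phase via the Hilbert transform.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_hpc = np.angle(hilbert(filtfilt(b, a, sig_hpc)))
    phase_pfc = np.angle(hilbert(filtfilt(b, a, sig_pfc)))
    # PLV: magnitude of the mean phase-difference vector (0 = no locking, 1 = perfect).
    return np.abs(np.mean(np.exp(1j * (phase_hpc - phase_pfc))))
```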
Chinese characters reveal impacts of prior experience on very early stages of perception
Visual perception is strongly determined by accumulated experience with the world, which has been shown for shape, color, and position perception, in the field of visuomotor learning, and in neural computation. In addition, visual perception is tuned to the statistics of natural scenes. Such prior experience is modulated by neuronal top-down control, the temporal properties of which have been the subject of recent studies. Here, we deal with these temporal properties and address the question of how early in time accumulated past experience can modulate visual perception.
Whisker Movements Reveal Spatial Attention: A Unified Computational Model of Active Sensing Control in the Rat
Spatial attention is most often investigated in the visual modality through measurement of eye movements, with primates, including humans, a widely studied model. Its study in laboratory rodents, such as mice and rats, requires different techniques, owing to the lack of a visual fovea and the particular ethological relevance of orienting movements of the snout and the whiskers in these animals. In recent years, several reliable relationships have been observed between environmental and behavioural variables and movements of the whiskers, but the function of these responses, as well as how they integrate, remains unclear. Here, we propose a unifying abstract model of whisker movement control that has as its key variable the region of space that is the animal's current focus of attention, and demonstrate, using computer-simulated behavioural experiments, that the model is consistent with a broad range of experimental observations. A core hypothesis is that the rat explicitly decodes the location in space of whisker contacts and that this representation is used to regulate whisker drive signals. This proposition stands in contrast to earlier proposals that the modulation of whisker movement during exploration is mediated primarily by reflex loops. We go on to argue that the superior colliculus is a candidate neural substrate for the siting of a head-centred map guiding whisker movement, in analogy to current models of visual attention. The proposed model has the potential to offer a more complete understanding of whisker control, as well as to highlight the potential of the rodent and its whiskers as a tool for the study of mammalian attention.
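To make the control idea concrete, the heavily simplified sketch below treats the focus of attention as a single head-centred angle that sets the whisker protraction set-point and is pulled toward decoded contact locations. This is an assumption-laden toy for illustration, not the published model, and all gains are arbitrary.

```python
# Toy sketch (not the published model): a head-centred "focus of attention" angle
# regulates the whisker protraction set-point; a decoded contact location pulls
# the focus toward it on the next cycle, and the focus relaxes toward a default
# resting direction when no contact occurs. Gains are arbitrary assumptions.
def attention_step(focus_angle, contact_angle=None, rest_angle=0.0,
                   contact_gain=0.6, relax_gain=0.1):
    if contact_angle is not None:
        # Shift attention toward the decoded head-centred contact location.
        focus_angle += contact_gain * (contact_angle - focus_angle)
    else:
        # No contact: let attention drift back toward the resting direction.
        focus_angle += relax_gain * (rest_angle - focus_angle)
    whisker_setpoint = focus_angle   # drive signal tracks the attended region
    return focus_angle, whisker_setpoint
```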
A framework for the first‑person internal sensation of visual perception in mammals and a comparable circuitry for olfactory perception in Drosophila
Perception is a first-person internal sensation induced within the nervous system at the time of arrival of sensory stimuli from objects in the environment. Lack of access to these first-person properties has limited the treatment of perception as an emergent property, and it is currently studied using third-person observations from various levels. One feasible approach to understanding its mechanism is to build a hypothesis about the specific conditions and required circuit features of the nodal points where the mechanistic operation of perception takes place for one type of sensation in one species, and to verify the presence of comparable circuit properties for perceiving a different sensation in a different species. The present work explains visual perception in the mammalian nervous system from a first-person frame of reference and provides explanations for the homogeneity of perception of visual stimuli above the flicker fusion frequency, the perception of objects at locations different from their actual position, smooth pursuit and saccadic eye movements, the perception of object borders, and the perception of pressure phosphenes. Using results from temporal resolution studies and the known details of visual cortical circuitry, explanations are provided for (a) the perception of rapidly changing visual stimuli, (b) how objects are perceived in the correct orientation even though, according to the third-person view, activity from the visual stimulus reaches the cortices in an inverted manner, and (c) the functional significance of the well-conserved columnar organization of the visual cortex. A comparable circuitry detected in a different nervous system in a remote species, the olfactory circuitry of the fruit fly Drosophila melanogaster, provides an opportunity to explore circuit functions using genetic manipulations, which, along with high-resolution microscopic techniques and lipid membrane interaction studies, will be able to verify the structure-function details of the presented mechanism of perception.
How Much Does Effortful Thinking Underlie Observers’ Reactions to Victimization?
From blaming to helping innocent victims, just-world research has revealed that observers react to victimization in a variety of ways. Recent research suggests that such responses to victimization require effortful thought, whereas other research has shown that people can react to these situations intuitively. Across seven experiments, along with manipulating just-world threat, we manipulated or measured participants’ level of mental processing before assessing judgments of victim derogation, blame, willingness to help, and ultimate justice reasoning. The effect of just-world threat on these responses held constant over a range of manipulations/measures, suggesting that the processes involved in maintaining a belief in a just world are not restricted to the rational, deliberative level of mental processing but also occur intuitively.
Human spatial representation: what we cannot learn from the studies of rodent navigation
Studies of human and rodent navigation often reveal a remarkable cross-species similarity between the cognitive and neural mechanisms of navigation. Such cross-species resemblance often overshadows some critical differences between how humans and nonhuman animals navigate. In this review, I first argue that a navigation system requires both a storage system (i.e., representing spatial information) and a positioning system (i.e., sensing spatial information) to operate. I then argue that the way humans represent spatial information differs from that inferred from the cellular activity observed during rodent navigation. This difference spans the whole hierarchy of spatial representation, from representing the structure of the environment to representing sub-regions of an environment, routes and paths, and the distance and direction relative to a goal location. These cross-species inconsistencies suggest that what we learn from rodent navigation is not always transferable to human navigation. Finally, I argue for closing the loop on the dominant, unidirectional animal-to-human approach in navigation research, so that insights from behavioral studies of human navigation may also flow back to shed light on the cellular mechanisms of navigation in both humans and other mammals (i.e., a human-to-animal approach).
