
    Reference frames in allocentric representations are invariant across static and active encoding

    An influential model of spatial memory, the so-called reference systems account, proposes that the representation of relationships between objects is biased by salient axes ("frames of reference") provided by environmental cues, such as the geometry of a room. In this study, we examined the extent to which a salient environmental feature influences the formation of spatial memories when learning occurs from a single, static viewpoint and via active navigation, where information has to be integrated across multiple viewpoints. Participants learned the spatial layout of an object array that was arranged with respect to a prominent environmental feature within a virtual arena. Location memory was tested using judgments of relative direction. Experiment 1A employed a design similar to previous studies, whereby learning of object-location information occurred from a single, static viewpoint. Consistent with previous studies, spatial judgments were significantly more accurate when made from an orientation that was aligned, as opposed to misaligned, with the salient environmental feature. In Experiment 1B, a new group of participants learned the same object-location information through active exploration, which required integration of spatial information over time from a ground-level perspective. As in Experiment 1A, object-location information was organized around the salient environmental cue. Taken together, the findings suggest that the learning condition (static vs. active) does not affect the reference system employed to encode object-location information. Spatial reference systems appear to be a ubiquitous property of spatial representations, and may serve to reduce the cognitive demands of spatial processing.
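The alignment effect reported in the abstract is typically quantified as the mean absolute angular error of judgments of relative direction (JRD), compared between imagined headings aligned versus misaligned with the salient axis. A minimal sketch of that computation, with entirely hypothetical trial data (the heading and bearing values below are illustrative, not from the study):

```python
import math

def angular_error(judged_deg, correct_deg):
    """Absolute angular difference on a circle, in degrees, in [0, 180]."""
    diff = abs(judged_deg - correct_deg) % 360
    return min(diff, 360 - diff)

# Hypothetical JRD trials: (imagined heading, judged bearing, correct bearing).
# Headings of 0/180 deg are aligned with the salient axis; 90/270 are misaligned.
trials = [
    (0,   32,  30), (180, 118, 120),   # aligned trials
    (90,  75,  45), (270, 200, 170),   # misaligned trials
]

aligned    = [angular_error(j, c) for h, j, c in trials if h % 180 == 0]
misaligned = [angular_error(j, c) for h, j, c in trials if h % 180 != 0]
print(sum(aligned) / len(aligned), sum(misaligned) / len(misaligned))  # → 2.0 30.0
```

In this toy data set, aligned headings yield a much smaller mean error, mirroring the pattern the study reports.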

    Neural Representations for Sensory-Motor Control, II: Learning a Head-Centered Visuomotor Representation of 3-D Target Position

    A neural network model is described for how an invariant head-centered representation of 3-D target position can be autonomously learned by the brain in real time. Once learned, such a target representation may be used to control both eye and limb movements. The target representation is derived from the positions of both eyes in the head, and the locations which the target activates on the retinas of both eyes. A Vector Associative Map, or VAM, learns the many-to-one transformation from multiple combinations of eye-and-retinal position to invariant 3-D target position. Eye position is derived from outflow movement signals to the eye muscles. Two successive stages of opponent processing convert these corollary discharges into a head-centered representation that closely approximates the azimuth, elevation, and vergence of the eyes' gaze position with respect to a cyclopean origin located between the eyes. VAM learning combines this cyclopean representation of present gaze position with binocular retinal information about target position into an invariant representation of 3-D target position with respect to the head. VAM learning can use a teaching vector that is externally derived from the positions of the eyes when they foveate the target. A VAM can also autonomously discover and learn the invariant representation, without an explicit teacher, by generating internal error signals from environmental fluctuations in which these invariant properties are implicit. VAM error signals are computed by Difference Vectors, or DVs, that are zeroed by the VAM learning process. VAMs may be organized into VAM Cascades for learning and performing both sensory-to-spatial maps and spatial-to-motor maps. These multiple uses clarify why DV-type properties are computed by cells in the parietal, frontal, and motor cortices of many mammals. VAMs are modulated by gating signals that express different aspects of the will-to-act. These signals transform a single invariant representation into movements of different speed (GO signal) and size (GRO signal), and thereby enable VAM controllers to match a planned action sequence to variable environmental conditions.
    National Science Foundation (IRI-87-16960, IRI-90-24877); Office of Naval Research (N00014-92-J-1309)
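The core VAM idea — a Difference Vector between the teaching signal and the map's current output drives weight changes until the DV is zeroed — can be sketched as delta-rule learning of a linear map. This is a deliberately simplified illustration, not the paper's full opponent-processing architecture; the linear `W_true` map and the dimensions are assumptions standing in for the unknown eye-plus-retinal-to-head-centered transform:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "true" transform from (eye position, retinal position) features
# to head-centered 3-D target position; the map must discover it.
n_in, n_out = 6, 3
W_true = rng.normal(size=(n_out, n_in))

W = np.zeros((n_out, n_in))   # adaptive weights, initially untrained
lr = 0.1

for _ in range(2000):
    x = rng.normal(size=n_in)     # one eye-plus-retinal input pattern
    target = W_true @ x           # teaching vector (eyes foveating the target)
    dv = target - W @ x           # Difference Vector (DV)
    W += lr * np.outer(dv, x)     # learning drives the DV toward zero

print(np.linalg.norm(W - W_true))  # residual mismatch after training
```

As training proceeds the DV shrinks toward zero and the learned weights converge on the true transform, which is the sense in which the DV is "zeroed by the VAM learning process."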

    Monitoring wild animal communities with arrays of motion sensitive camera traps

    Studying animal movement and distribution is of critical importance to addressing environmental challenges including invasive species, infectious diseases, and climate and land-use change. Motion-sensitive camera traps offer a visual sensor to record the presence of a broad range of species, providing location-specific information on movement and behavior. Modern digital camera traps that record video present new analytical opportunities, but also new data management challenges. This paper describes our experience with a terrestrial animal monitoring system at Barro Colorado Island, Panama. Our camera network captured the spatio-temporal dynamics of terrestrial bird and mammal activity at the site, data relevant to immediate science questions and long-term conservation issues. We believe that the experience gained and lessons learned during our year-long deployment and testing of the camera traps, as well as the solutions developed, are applicable to broader sensor network applications and are valuable for the advancement of sensor network research. We suggest that the continued development of these hardware, software, and analytical tools, in concert, offers an exciting sensor-network solution for monitoring animal populations that could realistically scale over larger areas and time spans.
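A common first step in the kind of data management the abstract describes is collapsing raw camera triggers into independent detection events per camera, species, and day, the usual input to occupancy and activity analyses. A minimal sketch with hypothetical records (the camera IDs, species, and dates below are illustrative, not from the Barro Colorado deployment):

```python
from collections import defaultdict
from datetime import date

# Hypothetical camera-trap records: (camera_id, species, detection date).
records = [
    ("cam01", "agouti", date(2010, 3, 1)),
    ("cam01", "agouti", date(2010, 3, 1)),   # duplicate trigger, same day
    ("cam01", "ocelot", date(2010, 3, 2)),
    ("cam02", "agouti", date(2010, 3, 2)),
]

# One detection per (camera, day) for each species: repeated triggers of the
# same animal on the same camera and day are counted once.
detections = defaultdict(set)
for cam, species, day in records:
    detections[species].add((cam, day))

for species, events in sorted(detections.items()):
    print(species, len(events))
# → agouti 2
# → ocelot 1
```

Deduplicating at this stage keeps a burst of triggers from one passing animal from inflating apparent activity, a point that matters once the network scales to many cameras.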

    Towards general spatial intelligence

    The goal of General Spatial Intelligence is to present a unified theory to support the various aspects of spatial experience, whether physical or cognitive. We acknowledge the fact that GIScience has to assume a particular worldview, resulting from specific positions regarding metaphysics, ontology, epistemology, mind, language, cognition, and representation. Implicit positions regarding these domains may allow solutions to isolated problems, but often hamper a more encompassing approach. We argue that explicitly defining a worldview allows the grounding and derivation of multi-modal models, establishing precise problems and allowing falsifiability. We present an example of such a theory, founded on process metaphysics, where the ontological elements are called differences. We show that a worldview has implications regarding the nature of space and, in the case of the chosen metaphysical layer, favours a model of space as true spacetime, i.e. four-dimensionality. Finally, we illustrate the approach using a scenario from psychology and AI-based planning.

    Prisoner's Dilemma cellular automata revisited: evolution of cooperation under environmental pressure

    We propose an extension of the evolutionary Prisoner's Dilemma cellular automata, introduced by Nowak and May \cite{nm92}, in which the pressure of the environment is taken into account. This is implemented by requiring that individuals need to collect a minimum score $U_{min}$, representing indispensable resources (nutrients, energy, money, etc.), to prosper in this environment. So the agents, instead of evolving just by adopting the behaviour of the most successful neighbour (who obtained $U^{msn}$), also take into account whether $U^{msn}$ is above or below the threshold $U_{min}$. If $U^{msn} < U_{min}$, an individual has a probability of adopting the opposite behaviour from the one used by its most successful neighbour. This modification allows the evolution of cooperation for payoffs for which defection was the rule (as happens, for example, when the sucker's payoff is much worse than the punishment for mutual defection). We also analyse a more sophisticated version of this model in which the selective rule is supplemented with a "win-stay, lose-shift" criterion. The cluster structure is analyzed and, for this more complex version, we find power-law scaling in a restricted region of the parameter space.
    Comment: 15 pages, 8 figures; added figures and revised text
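The modified update rule can be sketched on a Nowak-May lattice: each cell plays the weak dilemma with its 8 neighbours, imitates its most successful neighbour, and, if even that neighbour scored below $U_{min}$, flips to the opposite strategy with some probability. The parameter values (`b`, `U_min`, `p_flip`, grid size) are illustrative assumptions, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
b = 1.8        # temptation payoff in the weak dilemma (C-C pays 1, D-C pays b)
U_min = 6.0    # minimum score needed to "prosper" (illustrative value)
p_flip = 0.5   # chance of opposing an unsuccessful best neighbour

grid = rng.integers(0, 2, size=(N, N))   # 1 = cooperate, 0 = defect

def payoffs(g):
    """Total score of each cell against its 8 neighbours, periodic boundaries."""
    score = np.zeros_like(g, dtype=float)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            nb = np.roll(np.roll(g, dx, 0), dy, 1)
            # Cooperator earns 1 per cooperating neighbour; defector earns b.
            score += np.where(g == 1, nb * 1.0, nb * b)
    return score

for _ in range(50):
    u = payoffs(grid)
    best_u, best_s = u.copy(), grid.copy()   # best performer seen so far (self)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            nu = np.roll(np.roll(u, dx, 0), dy, 1)
            ns = np.roll(np.roll(grid, dx, 0), dy, 1)
            better = nu > best_u
            best_u = np.where(better, nu, best_u)
            best_s = np.where(better, ns, best_s)
    # Environmental pressure: if even the most successful neighbour scored
    # below U_min, adopt the opposite strategy with probability p_flip.
    flip = (best_u < U_min) & (rng.random(grid.shape) < p_flip)
    grid = np.where(flip, 1 - best_s, best_s)

print(grid.mean())   # fraction of cooperators after 50 updates
```

With `U_min = 0` this reduces to the deterministic imitate-the-best rule of the original model; raising the threshold injects the anti-imitation noise that lets cooperation survive in payoff regions where pure imitation locks in defection.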