Stereo and motion parallax cues in human 3D vision: can they vanish without a trace?
In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the "correct" size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. With feedback, observers adjusted their responses such that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size, or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues.
Differential electrophysiological response during rest, self-referential, and non-self-referential tasks in human posteromedial cortex
The electrophysiological basis for higher brain activity during rest and internally directed cognition within the human default mode network
(DMN) remains largely unknown. Here we use intracranial recordings in
the human posteromedial cortex (PMC), a core node within the DMN,
during conditions of cued rest, autobiographical judgments, and
arithmetic processing. We found a heterogeneous profile of PMC
responses in functional, spatial, and temporal domains. Although the
majority of PMC sites showed increased broad gamma band activity
(30-180 Hz) during rest, some PMC sites, proximal to the retrosplenial
cortex, responded selectively to autobiographical stimuli. However, no
site responded to both conditions, even though they were located within
the boundaries of the DMN identified with resting-state functional
imaging and similarly deactivated during arithmetic processing. These
findings, which provide electrophysiological evidence for heterogeneity
within the core of the DMN, will have important implications for
neuroimaging studies of the DMN.
Topological Evolution of Dynamical Networks: Global Criticality from Local Dynamics
We evolve the network topology of an asymmetrically connected threshold network
by a simple local rewiring rule: quiet nodes grow links, active nodes lose
links. This leads to convergence of the average connectivity of the network
towards the critical value in the limit of large system size. How
this principle could generate self-organization in natural complex systems is
discussed for two examples: neural networks and regulatory networks in the
genome.
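The rewiring rule described above can be sketched in a few lines of Python. The following is a simplified illustration, not the authors' code: it classifies a node as "active" if its state changed at any point during a finite run (the paper uses behaviour on the attractor), then removes one incoming link from active nodes and adds one to quiet nodes, tracking the average connectivity.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rewiring(n=64, k_init=4.0, epochs=200, t_run=50):
    """Toy version of the local rewiring rule: after letting the
    threshold network run, one randomly chosen node grows an in-link
    if it stayed quiet and loses one if it was active."""
    # asymmetric +/-1 couplings with average in-degree ~ k_init
    w = np.where(rng.random((n, n)) < k_init / n,
                 rng.choice([-1.0, 1.0], size=(n, n)), 0.0)
    np.fill_diagonal(w, 0.0)
    for _ in range(epochs):
        s = rng.choice([-1.0, 1.0], size=n)          # random initial state
        changed = np.zeros(n, dtype=bool)
        for _ in range(t_run):                       # threshold dynamics
            s_new = np.where(w @ s >= 0, 1.0, -1.0)
            changed |= (s_new != s)
            s = s_new
        i = rng.integers(n)                          # pick one node
        if changed[i]:                               # active -> lose a link
            nz = np.flatnonzero(w[i])
            if nz.size:
                w[i, rng.choice(nz)] = 0.0
        else:                                        # quiet -> grow a link
            z = np.flatnonzero((w[i] == 0) & (np.arange(n) != i))
            if z.size:
                w[i, rng.choice(z)] = rng.choice([-1.0, 1.0])
    return np.count_nonzero(w) / n                   # average connectivity

k_final = simulate_rewiring()
```

With many more epochs and larger n, the abstract's claim is that this average connectivity self-organizes towards the critical value without any global tuning parameter; the short run above only illustrates the mechanism.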
Recognizing Speech in a Novel Accent: The Motor Theory of Speech Perception Reframed
The motor theory of speech perception holds that we perceive the speech of
another in terms of a motor representation of that speech. However, when we
have learned to recognize a foreign accent, it seems plausible that recognition
of a word rarely involves reconstruction of the speech gestures of the speaker
rather than the listener. To better assess the motor theory and this
observation, we proceed in three stages. Part 1 places the motor theory of
speech perception in a larger framework based on our earlier models of the
adaptive formation of mirror neurons for grasping, and for viewing extensions
of that mirror system as part of a larger system for neuro-linguistic
processing, augmented by the present consideration of recognizing speech in a
novel accent. Part 2 then offers a novel computational model of how a listener
comes to understand the speech of someone speaking the listener's native
language with a foreign accent. The core tenet of the model is that the
listener uses hypotheses about the word the speaker is currently uttering to
update probabilities linking the sound produced by the speaker to phonemes in
the native language repertoire of the listener. This, on average, improves the
recognition of later words. This model is neutral regarding the nature of the
representations it uses (motor vs. auditory). It serves as a reference point for
the discussion in Part 3, which proposes a dual-stream neuro-linguistic
architecture to revisit claims for and against the motor theory of speech
perception and the relevance of mirror neurons, and extracts some implications
for the reframing of the motor theory.
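The core tenet of the model in Part 2 can be sketched as a simple count-based estimator. This is an illustrative sketch, not the paper's implementation: a hypothetical `AccentAdapter` accumulates counts linking each accented sound to native phonemes whenever a word-level hypothesis supplies the intended phoneme, so that later sound-to-phoneme recognition improves on average.

```python
from collections import defaultdict

class AccentAdapter:
    """Hypothetical sketch of the listener's adaptation: counts link
    each accented sound to native phonemes, updated whenever a word
    hypothesis reveals which phoneme the speaker intended."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, heard_sound, hypothesized_phoneme):
        # the word-level hypothesis supplies the intended native phoneme
        self.counts[heard_sound][hypothesized_phoneme] += 1

    def prob(self, heard_sound, phoneme):
        # empirical probability of a native phoneme given the sound
        total = sum(self.counts[heard_sound].values())
        return self.counts[heard_sound][phoneme] / total if total else 0.0

    def recognize(self, heard_sound):
        # most probable native phoneme for this accented sound, if any
        options = self.counts[heard_sound]
        return max(options, key=options.get) if options else None

adapter = AccentAdapter()
# e.g. a speaker who mostly realizes the native phoneme /i/ as the sound [e]
for _ in range(3):
    adapter.update("e", "i")
adapter.update("e", "e")
```

Note that nothing in this sketch commits to motor or auditory representations: the "sounds" and "phonemes" are opaque labels, which mirrors the abstract's point that the model is neutral on that question.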
A demonstration of 'broken' visual space
It has long been assumed that there is a distorted mapping between real and ‘perceived’ space, based on demonstrations of systematic errors in judgements of slant, curvature, direction and separation. Here, we have applied a direct test to the notion of a coherent visual space. In an immersive virtual environment, participants judged the relative distance of two squares displayed in separate intervals. On some trials, the virtual scene expanded by a factor of four between intervals although, in line with recent results, participants did not report any noticeable change in the scene. We found that there was no consistent depth ordering of objects that can explain the distance matches participants made in this environment (e.g. A > B > D yet also A < C < D) and hence no single one-to-one mapping between participants’ perceived space and any real 3D environment. Instead, factors that affect pairwise comparisons of distances dictate participants’ performance. These data contradict, more directly than previous experiments, the idea that the visual system builds and uses a coherent 3D internal representation of a scene.
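The key test here is whether a set of pairwise "farther than" judgements admits any single consistent depth ordering, which is equivalent to asking whether the directed graph of comparisons is acyclic. The helper below is an illustrative sketch (not the authors' analysis code) using standard depth-first cycle detection.

```python
def consistent_ordering(comparisons):
    """Return True if the pairwise (farther, nearer) judgements admit a
    single consistent depth ordering, i.e. the comparison graph is acyclic.
    Illustrative helper, not the study's analysis code."""
    graph = {}
    for farther, nearer in comparisons:
        graph.setdefault(farther, set()).add(nearer)
        graph.setdefault(nearer, set())

    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {v: WHITE for v in graph}

    def has_cycle(v):
        color[v] = GREY
        for u in graph[v]:
            if color[u] == GREY or (color[u] == WHITE and has_cycle(u)):
                return True
        color[v] = BLACK
        return False

    return not any(color[v] == WHITE and has_cycle(v) for v in graph)

# the pattern reported in the abstract: A > B > D yet also A < C < D
judgements = [("A", "B"), ("B", "D"), ("C", "A"), ("D", "C")]
ok = consistent_ordering(judgements)   # no single depth ordering exists
```

Finding such a cycle in participants' matches is what rules out any one-to-one mapping from perceived to real space: no assignment of depths to A, B, C, D can satisfy all four judgements at once.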
The University of California San Francisco, Brain Metastases Stereotactic Radiosurgery (UCSF-BMSR) MRI Dataset
The University of California San Francisco Brain Metastases Stereotactic
Radiosurgery (UCSF-BMSR) dataset is a public, clinical, multimodal brain MRI
dataset consisting of 560 brain MRIs from 412 patients with expert annotations
of 5136 brain metastases. Data consists of registered and skull stripped T1
post-contrast, T1 pre-contrast, FLAIR and subtraction (T1 pre-contrast - T1
post-contrast) images and voxelwise segmentations of enhancing brain metastases
in NIfTI format. The dataset also includes patient demographics, surgical
status and primary cancer types. The UCSF-BMSR dataset has been made publicly available
in the hope that researchers will use these data to push the boundaries of AI
applications for brain metastases.
Issues of geologically-focused situational awareness in robotic planetary missions: lessons from an analogue mission at Mistastin Lake impact structure, Labrador, Canada
Remote robotic data provides different information than that obtained from immersion in the field. This significantly affects the geological situational awareness experienced by members of a mission control science team. In order to optimize science return from planetary robotic missions, these limitations must be understood and their effects mitigated to fully leverage the field experience of scientists at mission control.
Results from a 13-day analogue deployment at the Mistastin Lake impact structure in Labrador, Canada suggest that scale, relief, geological detail, and time are intertwined issues that impact the mission control science team's effectiveness in interpreting the geology of an area. These issues are evaluated and several mitigation options are suggested. Scale was found to be difficult to interpret without the reference of known objects, even when numerical scale data were available. For this reason, embedding intuitive scale-indicating features into image data is recommended. Since relief is not conveyed in 2D images, both 3D data and observations from
multiple angles are required. Furthermore, the 3D data must be observed in animation or as anaglyphs, since without such assistance much of the relief information in 3D data is not
communicated. Geological detail may also be missed due to the time required to collect, analyze, and request data.
We also suggest that these issues can be addressed, in part, by an improved understanding of the operational time costs and benefits of scientific data collection. Robotic activities operate on inherently slow time-scales. This fact needs to be embraced and accommodated. Instead of focusing too quickly on the details of a target of interest, thereby potentially minimizing science return, time should be allocated at first to more broad data collection at that target, including
preliminary surveys, multiple observations from various vantage points, and progressively smaller scale of focus. This operational model more closely follows techniques employed by
field geologists and is fundamental to the geologic interpretation of an area. Even so, an operational time cost/benefit analysis should be carefully considered in each situation, to determine when such comprehensive data collection would maximize the science return.
Finally, it should be recognized that analogue deployments cannot faithfully model the time scales of robotic planetary missions. Analogue missions are limited by the difficulty and expense of fieldwork. Thus, analogue deployments should focus on smaller aspects of robotic missions and test components in a modular way (e.g., dropping communications constraints, limiting mission scope, focusing on a specific problem, spreading the mission over several field seasons,
etc.)
A Postnatal Critical Period for Orientation Plasticity in the Cat Visual Cortex
Orientation selectivity of primary visual cortical neurons is an important requisite for shape perception. Although numerous studies have been previously devoted to the question of how orientation selectivity is established and elaborated in early life, how the susceptibility of orientation plasticity to visual experience changes in time remains unclear. In the present study, we showed a postnatal sensitive period profile for the modifiability of orientation selectivity in the visual cortex of kittens reared with head-mounted goggles for stable single-orientation exposure. When goggle rearing (GR) started at P16-P30, 2 weeks of GR induced a marked over-representation of the exposed orientation, and 2 more weeks of GR consolidated the altered orientation maps. GR that started later than P50, in turn, induced the under-representation of the exposed orientation. Orientation plasticity in the most sensitive period was markedly suppressed by cortical infusion of an NMDAR antagonist. The present study reveals that the plasticity and consolidation of orientation selectivity in early life are dynamically regulated in an experience-dependent manner.
MEG in the macaque monkey and human: distinguishing cortical fields in space and time.
Magnetoencephalography (MEG) is an increasingly popular non-invasive tool used to record, on a millisecond timescale, the magnetic field changes generated by cortical neural activity. MEG has the advantage, over fMRI for example, that it is a direct measure of neural activity. In the current investigation we used MEG to measure cortical responses to tactile and auditory stimuli in the macaque monkey. We had two aims. First, we sought to determine whether MEG, a technique that may have low spatial accuracy, could be used to distinguish the location and organization of sensory cortical fields in macaque monkeys, a species with a relatively small brain compared to that of the human. Second, we wanted to examine the temporal dynamics of cortical responses in the macaque monkey relative to the human. We recorded MEG data from anesthetized monkeys and, for comparison, from awake humans who were presented with simple tactile and auditory stimuli. Neural source reconstruction of MEG data showed that primary somatosensory and auditory cortex could be differentiated and, further, that separate representations of the digit and lip within somatosensory cortex could be identified in macaque monkeys as well as humans. We compared the latencies of activity from monkey and human data for the three stimulation types and proposed a correspondence between the neural responses of the two species. We thus demonstrate the feasibility of using MEG in the macaque monkey and provide a non-human primate model for examining the relationship between external evoked magnetic fields and their underlying neural sources.
Rubber Hands Feel Touch, but Not in Blind Individuals
Psychology and neuroscience have a long-standing tradition of studying blind individuals to investigate how visual experience shapes perception of the external world. Here, we study how blind people experience their own body by exposing them to a multisensory body illusion: the somatic rubber hand illusion. In this illusion, healthy blindfolded participants experience that they are touching their own right hand with their left index finger, when in fact they are touching a rubber hand with their left index finger while the experimenter touches their right hand in a synchronized manner (Ehrsson et al. 2005). We compared the strength of this illusion in a group of blind individuals (n = 10), all of whom had experienced severe visual impairment or complete blindness from birth, and a group of age-matched blindfolded sighted participants (n = 12). The illusion was quantified subjectively using questionnaires and behaviorally by asking participants to point to the felt location of the right hand. The results showed that the sighted participants experienced a strong illusion, whereas the blind participants experienced no illusion at all, a difference that was evident in both tests employed. A further experiment testing the participants' basic ability to localize the right hand in space without vision (proprioception) revealed no difference between the two groups. Taken together, these results suggest that blind individuals with impaired visual development have a more veridical percept of self-touch and a less flexible and dynamic representation of their own body in space compared to sighted individuals. We speculate that the multisensory brain systems that re-map somatosensory signals onto external reference frames are less developed in blind individuals and therefore do not allow efficient fusion of tactile and proprioceptive signals from the two upper limbs into a single illusory experience of self-touch as in sighted individuals.