55 research outputs found

    Echolocation in humans: an overview

    Bats and dolphins are known for their ability to use echolocation: they emit bursts of sound and listen to the echoes that bounce back to detect objects in their environment. What is less well known is that some blind people have learned to do the same thing, making mouth clicks, for example, and using the returning echoes from those clicks to sense obstacles and objects of interest in their surroundings. The current review explores some of the research that has examined human echolocation and the changes that have been observed in the brains of echolocation experts. We also discuss potential applications and assistive technology based on echolocation. Blind echolocation experts can sense small differences in the location of objects, differentiate between objects of various sizes and shapes, and even between objects made of different materials, just by listening to the echoes reflected from mouth clicks. Echolocation may thus enable some blind people to do things that are otherwise thought to be impossible without vision, potentially providing them with a high degree of independence in their daily lives and demonstrating that echolocation can serve as an effective mobility strategy in the blind. Neuroimaging has shown that the processing of echoes activates brain regions in blind echolocators that would normally support vision in the sighted brain, and that the patterns of these activations are modulated by the information carried by the echoes. This work is shedding new light on just how plastic the human brain is.

    FMRI Reveals a Dissociation between Grasping and Perceiving the Size of Real 3D Objects

    Background: Almost 15 years after its formulation, evidence for the neuro-functional dissociation between a dorsal action stream and a ventral perception stream in the human cerebral cortex is still based largely on neuropsychological case studies. To date, there is no unequivocal evidence for separate visual computations of object features for the performance of goal-directed actions versus perceptual tasks in the neurologically intact human brain. We used functional magnetic resonance imaging to test explicitly whether brain areas mediating size computation for grasping are distinct from those mediating size computation for perception.

    Methodology/Principal Findings: Subjects were presented with the same real graspable 3D objects and were required to perform a number of different tasks: grasping, reaching, size discrimination, pattern discrimination, or passive viewing. As in prior studies, the anterior intraparietal area (AIP) in the dorsal stream was more active during grasping, when object size was relevant for planning the grasp, than during reaching, when object properties were irrelevant for movement planning (grasping > reaching). Activity in AIP showed no modulation, however, when size was computed in the context of a purely perceptual task (size = pattern discrimination). Conversely, the lateral occipital (LO) cortex in the ventral stream was modulated when size was computed for perception (size > pattern discrimination) but not for action (grasping = reaching).

    Conclusions/Significance: While areas in both the dorsal and ventral streams responded to the simple presentation of 3D objects (passive viewing), these areas were differentially activated depending on whether the task was grasping or perceptual discrimination, respectively. The demonstration of dual coding of an object for the purposes of action on the one hand and perception on the other in the same healthy brains offers a substantial contribution to the current debate about the nature of the neural coding that takes place in the dorsal and ventral streams.
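
    The contrast logic in this abstract (e.g. grasping > reaching, size = pattern discrimination) can be made concrete with a small sketch. The following is a hypothetical Python illustration, not the study's analysis pipeline: the condition names, beta values, and contrast() helper are all invented for the example, which simply assumes one beta estimate per condition for each voxel.

        # Hypothetical sketch of fMRI contrast logic (invented data, not the
        # study's pipeline): one beta estimate per condition for each voxel.
        import numpy as np

        conditions = ["grasping", "reaching", "size", "pattern", "passive"]
        betas = np.array([
            [2.0, 0.5, 0.6, 0.6, 0.4],  # AIP-like voxel: grasping > reaching
            [0.7, 0.7, 1.8, 0.6, 0.5],  # LO-like voxel: size > pattern
        ])

        def contrast(betas, plus, minus):
            # Positive values mean more activity for `plus` than for `minus`.
            i, j = conditions.index(plus), conditions.index(minus)
            return betas[:, i] - betas[:, j]

        print("grasping > reaching:", contrast(betas, "grasping", "reaching"))
        print("size > pattern:     ", contrast(betas, "size", "pattern"))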

    Shape-specific activation of occipital cortex in an early blind echolocation expert

    We have previously reported that an early-blind echolocating individual (EB) showed robust occipital activation when he identified distant, silent objects based on echoes from his tongue clicks (Thaler, Arnott, & Goodale, 2011). In the present study we investigated the extent to which echolocation activation in EB's occipital cortex reflected general echolocation processing per se versus feature-specific processing. In the first experiment, echolocation audio sessions were captured with in-ear microphones in an anechoic chamber or hallway alcove as EB produced tongue clicks in front of a concave or flat object covered in aluminum foil or a cotton towel. All eight echolocation sessions (2 shapes × 2 surface materials × 2 environments) were then randomly presented to him during a sparse-temporal-scanning fMRI session. While fMRI contrasts of chamber- versus alcove-recorded echolocation stimuli underscored the importance of auditory cortex for extracting echo information, main task comparisons demonstrated a prominent role of occipital cortex in shape-specific echo processing, in a manner consistent with latent, multisensory cortical specialization. Specifically, relative to surface-composition judgments, shape judgments elicited greater BOLD activity in ventrolateral occipital areas and the bilateral occipital pole. A second echolocation experiment involving shape judgments of objects located 20° to the left or right of straight ahead activated more rostral areas of EB's calcarine cortex relative to location judgments of those same objects, and, as we previously reported, such calcarine activity was largest when the object was located in contralateral hemispace. Interestingly, other echolocating experts (a congenitally blind individual in Experiment 1 and a late-blind individual in Experiment 2) did not show the same pattern of feature-specific echo processing in calcarine cortex as EB, suggesting the possible significance of early visual experience and early echolocation training. Together, our findings indicate that the echolocation activation in EB's occipital cortex is feature-specific, and that these object representations appear to be organized in a topographic manner.

    Parahippocampal cortex is involved in material processing via echoes in blind echolocation experts

    Some blind humans use sound to navigate by emitting mouth clicks and listening to the echoes that reflect from silent objects and surfaces in their surroundings. These echoes contain information about the size, shape, location, and material properties of objects. Here we present results from an fMRI experiment that investigated the neural activity underlying the processing of materials through echolocation. Three blind echolocation experts (as well as three blind and three sighted non-echolocating control participants) took part in the experiment. First, we made binaural sound recordings in the ears of each echolocator while he produced clicks in the presence of one of three different materials (fleece, synthetic foliage, or whiteboard), or while he made clicks in an empty room. During fMRI scanning these recordings were played back to participants. Remarkably, all participants were able to reliably identify each of the three materials, as well as the empty room. Furthermore, a whole-brain analysis, in which we isolated the processing of just the reflected echoes, revealed a material-related increase in BOLD activation in a region of left parahippocampal cortex in the echolocating participants, but not in the blind or sighted control participants. Our results, in combination with previous findings about brain areas involved in material processing, are consistent with the idea that material processing by means of echolocation relies on a multi-modal material-processing area in parahippocampal cortex.

    Patient DF's visual brain in action: visual feedforward control in a patient with visual form agnosia

    Patient DF, who developed visual form agnosia following ventral-stream damage, is unable to discriminate the width of objects, performing at chance, for example, when asked to open her thumb and forefinger a matching amount. Remarkably, however, DF adjusts her hand aperture to accommodate the width of objects when reaching out to pick them up (grip scaling). While this spared ability to grasp objects is presumed to be mediated by visuomotor modules in her relatively intact dorsal stream, it is possible that it relies abnormally on online visual or haptic feedback. We report here that DF's grip scaling remained intact when her vision was completely suppressed during grasp movements, and that it still dissociated sharply from her poor perceptual estimates of target size. We then tested whether providing trial-by-trial haptic feedback after making such perceptual estimates might improve DF's performance, but found that her estimates remained significantly impaired. In a final experiment, we re-examined whether DF's grip scaling depends on receiving veridical haptic feedback during grasping. In one condition, the haptic feedback was identical to the visual targets; in a second, the feedback was of a constant intermediate width while the visual target varied trial by trial. Despite such false feedback, DF still scaled her grip aperture to the visual widths of the target blocks, showing only normal adaptation to the false haptically experienced width. Taken together, these results strengthen the view that DF's spared grasping relies on a normal mode of dorsal-stream functioning, based chiefly on visual feedforward processing.
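
    One common way to quantify the "grip scaling" described here is the slope of peak grip aperture regressed on target width. The sketch below is a hypothetical illustration with invented numbers, not the authors' analysis code.

        # Hypothetical sketch: grip scaling as the slope of peak grip aperture
        # against target width (all numbers invented for illustration).
        import numpy as np

        width_mm = np.array([30, 40, 50, 30, 40, 50, 30, 40, 50])
        peak_aperture_mm = np.array([58, 67, 75, 60, 66, 77, 57, 69, 74])

        slope, intercept = np.polyfit(width_mm, peak_aperture_mm, 1)
        # A slope reliably above zero means the hand opens wider for wider
        # targets, i.e. grip aperture tracks the visual width of the object.
        print(f"grip-scaling slope = {slope:.2f} mm aperture per mm width")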

    DF's visual brain in action: the role of tactile cues

    Patient DF, an extensively tested woman with visual form agnosia from ventral-stream damage, is able to scale her grip aperture to match a goal object's geometry when reaching out to pick it up, despite being unable to explicitly distinguish amongst objects on the basis of their different geometries. Using evidence from a range of sources, including functional MRI, we have proposed that she does this through a functionally intact visuomotor system housed within the dorsal stream of the posterior parietal lobe. More recently, however, Schenk (2012a, The Journal of Neuroscience, 32(6), 2013–2017; 2012b, Trends in Cognitive Sciences, 16(5), 258–259) has argued that DF performs well in visually guided grasping not through spared and functioning visuomotor networks in the dorsal stream, but because haptic feedback about the locations of the edges of the target is available to calibrate her grasps in such tasks, whereas it is not available in standard visual perceptual tasks. We have tested this 'calibration hypothesis' directly, by presenting DF with a grasping task in which the visible width of a target varied from trial to trial while its actual width remained the same. According to the calibration hypothesis, because haptic feedback was completely uninformative, DF should be unable to calibrate her grip aperture in this task. Contrary to this prediction, we found that DF continued to scale her grip aperture to the visual width of the targets, and did so well within the range of healthy controls. We also found that DF's inability to distinguish shapes perceptually was not improved by providing haptic feedback. These findings strengthen the notion that DF's spared visuomotor abilities are driven largely by visual feedforward processing of the geometric properties of the target. Crucially, they also indicate that simple tactile contact with an object is needed for the visuomotor dorsal stream to be engaged, which in turn enables DF to execute visually guided grasping successfully. This need for actions to have a tangible endpoint provides an important new modification of the Two Visual Systems theory.

    Behavioural and neuroimaging evidence for a contribution of color and texture information to scene classification in a patient with visual form agnosia

    A common notion is that object perception is a necessary precursor to scene perception. Behavioral evidence suggests, however, that scene perception can operate independently of object perception. Further, neuroimaging has revealed a specialized human cortical area for viewing scenes that is anatomically distinct from areas activated by viewing objects. Here we show that an individual with visual form agnosia, D.F., who has a profound deficit in object recognition but spared color and visual-texture perception, could still classify scenes, and that she was fastest when the scenes were presented in the appropriate color. When scenes were presented as black-and-white images, she made a large number of classification errors. Functional magnetic resonance imaging revealed selective activation in the parahippocampal place area (PPA) when D.F. viewed scenes. Unlike control observers, D.F. demonstrated higher activation in the PPA for scenes presented in the appropriate color than for black-and-white versions. The results demonstrate that an individual with profound form-vision deficits can still use visual texture and color to classify scenes, and that this intact ability is reflected in differential activation of the PPA by colored versions of scenes.

    A Turing-Like Handshake Test for Motor Intelligence

    In the Turing test, a computer model is deemed to “think intelligently” if it can generate answers that are not distinguishable from those of a human. This test is limited to the linguistic aspects of machine intelligence. A salient function of the brain is the control of movement, and human hand movement is a sophisticated demonstration of this function. We therefore propose a Turing-like handshake test for machine motor intelligence. We administer the test through a telerobotic system in which the interrogator holds a robotic stylus and interacts with another party (human, artificial, or a linear combination of the two). Instead of asking the interrogator whether the other party is a person or a computer program, we employ a forced-choice method and ask which of two systems is more humanlike. By comparing a given model with a weighted sum of human and artificial systems, we fit a psychometric curve to the answers of the interrogator and extract a quantitative measure of the computer model's similarity to the human handshake.
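
    The quantitative procedure described here (forced-choice judgments against mixes of human and artificial motion, summarized by a fitted psychometric curve) can be sketched as follows. This is a hypothetical illustration with invented data, not the authors' code: the logistic form, the mixing weight lam, and the extracted point of subjective equality (PSE) are assumptions made for the example.

        # Hypothetical sketch: fit a logistic psychometric curve to the rate at
        # which a mixed handshake (lam*human + (1-lam)*model) is judged more
        # humanlike, then read off the point of subjective equality (PSE).
        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(lam, pse, slope):
            # P(judged more humanlike) as a function of the human weight lam.
            return 1.0 / (1.0 + np.exp(-(lam - pse) / slope))

        lam = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])                # human weight
        p_humanlike = np.array([0.10, 0.25, 0.45, 0.70, 0.85, 0.95])  # invented rates

        (pse, slope), _ = curve_fit(logistic, lam, p_humanlike, p0=[0.5, 0.1])
        # A lower PSE means less "human" is needed in the mix before the
        # interrogator judges it more humanlike, i.e. a more humanlike model.
        print(f"PSE = {pse:.2f}, slope = {slope:.2f}")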

    Human Rights and the Pink Tide in Latin America: Which Rights Matter?

    Latin America witnessed the election of ‘new Left’ governments in the early 21st century that, in different ways, sought to open a debate about alternatives to paradigms of neoliberal development. What has this meant for the way that human rights are understood and for patterns of human rights compliance? Using qualitative and quantitative evidence, this article discusses how human rights are imagined, and the compliance records of new Left governments, through the lens of the three ‘generations’ of human rights: political and civil, social and economic, and cultural and environmental. The authors draw in particular on evidence from the Andean countries and the Southern Cone. While basic civil and individual liberties are still far from guaranteed, especially in the Andean region, new Left countries show better overall performance on socio-economic rights compared with the past and with other Latin American countries. All new Left governments also demonstrate an increasing interest in ‘third generation’ (cultural and environmental) rights, though this is especially marked in the Andean Left. The authors discuss the tensions around interpretations and categories of human rights, reflect on the stagnation of first-generation rights, and note the difficulties of translating second- and third-generation rights into policy.