
    Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research.
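    To make pillar (iii) concrete, here is a minimal sketch of a representational similarity analysis (RSA) comparison in Python. It assumes two condition-by-feature response matrices (for example, MEG sensor patterns at one time point and fMRI voxel patterns in one region); the data and variable names are illustrative placeholders, not material from the paper.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm(responses):
        # Representational dissimilarity matrix in condensed form:
        # 1 - Pearson correlation between each pair of condition patterns.
        return pdist(responses, metric="correlation")

    # Illustrative data: 20 stimulus conditions measured in two "modalities".
    rng = np.random.default_rng(0)
    meg_patterns = rng.standard_normal((20, 306))   # e.g., MEG sensors at one time point
    fmri_patterns = rng.standard_normal((20, 500))  # e.g., voxels in one region of interest

    # Compare the two representational geometries with a rank correlation,
    # which abstracts away from each method's measurement-specific format.
    rho, p = spearmanr(rdm(meg_patterns), rdm(fmri_patterns))
    print(f"RSA correlation: rho={rho:.3f}, p={p:.3f}")

    Repeating this comparison across MEG/EEG time points and across fMRI regions or model layers is the kind of method- and model-integration step the authors advocate.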

    Human Echolocation

    The use of active natural echolocation as a mobility aid for blind humans has received increased scientific and popular attention in recent years (Engber, 2006; Kreiser, 2006; NPR, 2011), in part due to a focus on several blind individuals who have developed remarkable expertise. However, perhaps surprisingly, the history of empirical human echolocation research is not much younger than that of echolocation research more broadly (cf. Griffin, 1958). Nevertheless, compared to its bat and cetacean counterparts (Thomas et al., 2004), the field today remains in a state of comparative infancy. Until quite recently, nearly the entire body of human echolocation research was behavioral in nature, with little insight into perceptual and neural mechanisms. Thus, the goal of this manuscript is to broadly integrate research findings in human echolocation across time, levels of analysis, and methodology. We will define human echolocation as it has been operationalized in research and practice, review behavioral goals served by echolocation, and identify putative auditory cues and neural mechanisms underpinning human echolocation. We examine some individual differences in echolocation performance, particularly involving blind compared to sighted persons. We present two studies in detail, addressing the spatial acuity of echolocation skills in sighted volunteers and blind experts. Throughout, we identify outstanding theoretical and applied questions that may form the basis for ongoing and future research. Taken together, we conclude that echolocation can serve behaviorally relevant perceptual goals; that spatial echolocation tasks such as size discrimination can be learned by sighted subjects, not just the blind; that the spatial resolution of echolocation can rival that of peripheral vision; that the variegated cues driving echolocation performance are processed at multiple levels of the auditory system; and that blindness likely plays an important role in shaping individual differences in echo processing.

    Crossmodal Transfer of Object Information in Human Echolocation

    In active echolocation, reflections from self-generated acoustic pulses are used to represent the external environment. This ability has been described in some blind humans as an aid to navigation and obstacle perception [1-4]. Echoic object representation has been described in echolocating bats and dolphins [5,6], but most prior work in humans has focused on navigation or other basic spatial tasks [4,7,8]. Thus, the nature of echoic object information received by human practitioners remains poorly understood. In two match-to-sample experiments, we tested the ability of five experienced blind echolocators to haptically identify objects that they had previously sampled only echoically. In each trial, a target object was presented on a platform and subjects sampled it using echolocation clicks. The target object was then removed and re-presented along with a distractor object. Only tactile sampling was allowed in identifying the target. Subjects were able to identify targets at greater-than-chance levels among both common household objects (p < .001) and novel objects constructed from plastic blocks (p = .018). While overall accuracy was indicative of high task difficulty, our results suggest that objects sampled by echolocation are recognizable by shape, and that this representation is available across sensory modalities.
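    The above-chance comparisons reported here can be illustrated with a simple binomial test against the 50% guessing rate of a two-alternative match-to-sample task. The trial counts below are invented for illustration and are not the study's data.

    from scipy.stats import binomtest

    # Hypothetical example: 70 correct responses out of 100 two-alternative trials.
    n_correct, n_trials, chance = 70, 100, 0.5

    # One-sided test of whether accuracy exceeds the guessing rate.
    result = binomtest(n_correct, n_trials, p=chance, alternative="greater")
    print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.4f}")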

    Visual experience is not necessary for the development of face-selectivity in the lateral fusiform gyrus

    The fusiform face area responds selectively to faces and is causally involved in face perception. How does face-selectivity in the fusiform arise in development, and why does it develop so systematically in the same location across individuals? Preferential cortical responses to faces develop early in infancy, yet evidence is conflicting on the central question of whether visual experience with faces is necessary. Here, we revisit this question by scanning congenitally blind individuals with fMRI while they haptically explored 3D-printed faces and other stimuli. We found robust face-selective responses in the lateral fusiform gyrus of individual blind participants during haptic exploration of stimuli, indicating that neither visual experience with faces nor fovea-biased inputs is necessary for face-selectivity to arise in the lateral fusiform gyrus. Our results instead suggest a role for long-range connectivity in specifying the location of face-selectivity in the human brain.
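    Face-selectivity of the kind reported here is often summarized with a simple contrast or selectivity index over per-condition response estimates in a region of interest. The sketch below uses made-up beta weights and placeholder category names; it is not the paper's analysis pipeline.

    import numpy as np

    # Hypothetical per-run response estimates (beta weights) from a fusiform ROI.
    # Category names are placeholders for the face and non-face stimulus sets.
    betas = {
        "faces":     np.array([1.2, 1.0, 1.4, 1.1]),
        "nonface_a": np.array([0.4, 0.5, 0.3, 0.6]),
        "nonface_b": np.array([0.2, 0.3, 0.1, 0.4]),
    }

    face = betas["faces"].mean()
    other = np.mean([betas["nonface_a"].mean(), betas["nonface_b"].mean()])

    # Positive values indicate stronger responses to faces than to the other categories.
    selectivity = (face - other) / (face + other)
    print(f"faces = {face:.2f}, other = {other:.2f}, selectivity index = {selectivity:.2f}")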

    Enabling independent navigation for visually impaired people through a wearable vision-based feedback system

    This work introduces a wearable system that provides situational awareness for blind and visually impaired people. The system includes a camera, an embedded computer, and a haptic device that delivers feedback when an obstacle is detected. The system uses techniques from computer vision and motion planning to (1) identify walkable space, (2) plan a safe motion trajectory through that space step by step, and (3) recognize and locate certain types of objects, for example an empty chair. These descriptions are communicated to the person wearing the device through vibrations. We present results from user studies with low- and high-level tasks, including walking through a maze without collisions, locating a chair, and walking through a crowded environment while avoiding people.
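    A minimal sketch of the three-stage pipeline described above (walkable-space detection, path planning, haptic feedback) is given below, with toy stand-ins for the vision components. All function names, thresholds, and the grid example are illustrative assumptions, not the authors' implementation.

    import heapq
    import numpy as np

    def walkable_space(height_map, floor_height=0.1):
        # Toy stand-in for the vision stage: a cell is walkable if the
        # estimated surface height stays close to the floor plane.
        return height_map < floor_height

    def plan_path(walkable, start, goal):
        # A* search on a 4-connected grid of walkable cells.
        h = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
        frontier = [(h(start, goal), start)]
        came_from, cost = {start: None}, {start: 0}
        while frontier:
            _, cur = heapq.heappop(frontier)
            if cur == goal:
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = came_from[cur]
                return path[::-1]
            for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + d[0], cur[1] + d[1])
                if (0 <= nxt[0] < walkable.shape[0] and 0 <= nxt[1] < walkable.shape[1]
                        and walkable[nxt] and cost[cur] + 1 < cost.get(nxt, np.inf)):
                    cost[nxt] = cost[cur] + 1
                    came_from[nxt] = cur
                    heapq.heappush(frontier, (cost[nxt] + h(nxt, goal), nxt))
        return []

    def haptic_command(path):
        # Map the first step of the planned path to a vibration cue.
        if len(path) < 2:
            return "stop"
        step = (path[1][0] - path[0][0], path[1][1] - path[0][1])
        return {(1, 0): "forward", (-1, 0): "back", (0, 1): "right", (0, -1): "left"}[step]

    # Toy example: a 6x6 height map with an obstacle block in the middle.
    heights = np.zeros((6, 6))
    heights[2:4, 2:5] = 0.5                    # obstacle raised above the floor
    route = plan_path(walkable_space(heights), start=(0, 0), goal=(5, 5))
    print(route)
    print("vibration cue:", haptic_command(route))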

    Object recognition via echoes: quantifying the crossmodal transfer of three-dimensional shape information between echolocation, vision, and haptics

    Active echolocation allows blind individuals to explore their surroundings via self-generated sounds, similarly to dolphins and other echolocating animals. Echolocators emit sounds, such as finger snaps or mouth clicks, and parse the returning echoes for information about their surroundings, including the location, size, and material composition of objects. Because a crucial function of perceiving objects is to enable effective interaction with them, it is important to understand the degree to which three-dimensional shape information extracted from object echoes is useful in the context of other modalities such as haptics or vision. Here, we investigated the resolution of crossmodal transfer of object-level information between acoustic echoes and other senses. First, in a delayed match-to-sample task, blind expert echolocators and sighted control participants inspected common (everyday) and novel target objects using echolocation, then distinguished the target object from a distractor using only haptic information. For blind participants, discrimination accuracy was overall above chance and similar for both common and novel objects, whereas as a group, sighted participants performed above chance for the common, but not the novel, objects, suggesting that some coarse object information (a) is available to both expert blind and novice sighted echolocators, (b) transfers from auditory to haptic modalities, and (c) may be facilitated by prior object familiarity and/or material differences, particularly for novice echolocators. Next, to estimate an equivalent resolution in visual terms, we briefly presented blurred images of the novel stimuli to sighted participants (N = 22), who then performed the same haptic discrimination task. We found that visuo-haptic discrimination performance approximately matched echo-haptic discrimination for a Gaussian blur kernel σ of ~2.5°. In this way, by matching visual and echo-based contributions to object discrimination, we can estimate the quality of echoacoustic information that transfers to other sensory modalities, predict theoretical bounds on perception, and inform the design of assistive techniques and technology available for blind individuals.
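    The blur-matching step can be illustrated by converting a Gaussian blur width specified in degrees of visual angle into pixels for an assumed viewing geometry, then filtering the stimulus image. The viewing distance, pixel density, and image below are illustrative assumptions, not the study's parameters.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def deg_to_px(sigma_deg, viewing_distance_cm, px_per_cm):
        # Convert a blur sigma in degrees of visual angle to pixels for
        # the assumed viewing geometry.
        sigma_cm = 2 * viewing_distance_cm * np.tan(np.radians(sigma_deg) / 2)
        return sigma_cm * px_per_cm

    # Assumed setup: 57 cm viewing distance (1 cm on screen ~ 1 deg), 40 px per cm.
    sigma_px = deg_to_px(sigma_deg=2.5, viewing_distance_cm=57.0, px_per_cm=40.0)

    image = np.random.default_rng(1).random((256, 256))   # placeholder stimulus image
    blurred = gaussian_filter(image, sigma=sigma_px)
    print(f"a blur sigma of 2.5 deg corresponds to {sigma_px:.1f} px in this setup")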