
    Sensory augmentation: Integration of an auditory compass signal into human perception of space

    Bio-mimetic approaches to restoring sensory function show great promise in that they rapidly produce perceptual experience, but they have the disadvantage of being invasive. In contrast, sensory substitution approaches are non-invasive, but may lead to cognitive rather than perceptual experience. Here we introduce a new non-invasive approach that, like bio-mimetic techniques, leads to fast and truly perceptual experience. Instead of building on existing circuits at the neural level, as done in bio-mimetics, we piggy-back on sensorimotor contingencies at the stimulus level. We convey head orientation relative to geomagnetic North, a reliable spatial relation not normally sensed by humans, by mimicking the sensorimotor contingencies of distal sounds via head-related transfer functions. We demonstrate rapid and long-lasting integration into the perception of self-rotation. Short training with amplified or reduced rotation gain in the magnetic signal can expand or compress the perceived extent of vestibular self-rotation, even when the magnetic signal is absent in the test. We argue that it is the reliability of the magnetic signal that allows vestibular spatial recalibration, and the coding scheme mimicking the sensorimotor contingencies of distal sounds that permits fast integration. Hence we propose that contingency-mimetic feedback has great potential for creating sensory augmentation devices that achieve fast and genuinely perceptual experiences.
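The scheme above renders North as a distal sound source whose direction changes with head rotation. As a rough illustrative sketch only (the study uses head-related transfer functions, not the crude constant-power stereo panning below, and these function names are invented for illustration), the orientation-to-audio mapping could look like:

```python
import math

def north_azimuth(heading_deg):
    """Azimuth of magnetic North relative to the listener's head, in degrees.
    heading_deg: compass heading of the head (0 = facing North)."""
    return (-heading_deg) % 360

def pan_gains(azimuth_deg):
    """Crude constant-power stereo panning as a stand-in for HRTF rendering:
    azimuth 0 is centered; 90 deg is fully right, 270 deg fully left."""
    az = math.radians(azimuth_deg)
    # project the source direction onto the interaural (left-right) axis
    x = max(-1.0, min(1.0, math.sin(az)))
    theta = (x + 1) * math.pi / 4  # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)  # (left, right) gains
```

For example, a listener facing West (heading 270) hears North fully on the right, and the sound recenters as the head turns back toward North, which is the contingency the device exploits.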

    On the use of the auditory pathway to represent image scenes in real-time

    The See ColOr interface transforms a small portion of a colored video image into sound sources represented by spatialized musical instruments. Basically, the conversion of colors into sounds is achieved by quantization of the HSL color system. Our purpose is to provide visually impaired individuals with the capability of perceiving the environment in real time. In this work we present the system's design principles and several experiments carried out by blindfolded participants with See ColOr prototypes, involving static pictures on a tablet and simple video images. The goal of the first experiment was to identify the colors of the main features of static pictures and then to interpret the image scenes. Although learning all instrument sounds in a single training session was too difficult, participants found that colors were helpful to limit the possible image interpretations. The experiments on the analysis of static pictures suggested that the slowdown factor incurred by using the auditory channel instead of the visual channel could be of the same order of magnitude as the ratio of visual channel capacity to auditory channel capacity. Afterwards, two experiments based on a head-mounted camera were performed. The first experiment, pertaining to object manipulation, is based on the pairing of colored socks, while the second is related to outdoor navigation with the goal of following a colored serpentine painted on the ground. The "socks" experiment demonstrated that blindfolded individuals were able to accurately match pairs of colored socks. The same participants, with the addition of a blind individual, successfully followed a red serpentine painted on the ground for more than 80 m. According to task durations, the slowdown factor for the "socks" and "serpentine" experiments could be of order one. From a cognitive perspective this would be consistent with the fact that these two tasks are simpler than the interpretation of image scenes.
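The color-to-sound conversion by HSL quantization could be sketched as follows. This is a minimal illustration, not the actual See ColOr mapping: the instrument palette, the grey-pixel handling, and the number of hue and lightness bins are all assumptions made for the example.

```python
import colorsys

# Illustrative instrument palette (one per hue band); See ColOr's actual
# assignment of instruments to hues differs.
HUE_INSTRUMENTS = ["oboe", "viola", "pizzicato violin", "flute",
                   "trumpet", "piano", "saxophone"]

def color_to_sound(r, g, b):
    """Quantize an RGB pixel (0-255 per channel) in HSL space:
    hue selects the instrument, lightness selects a coarse pitch level."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    if s < 0.1:  # near-grey pixels carry no reliable hue information
        instrument = "double bass" if l < 0.5 else "harp"
    else:
        instrument = HUE_INSTRUMENTS[int(h * len(HUE_INSTRUMENTS))
                                     % len(HUE_INSTRUMENTS)]
    pitch_level = min(int(l * 4), 3)  # 4 lightness bands -> 4 pitch levels
    return instrument, pitch_level
```

A pure red pixel, for instance, falls in the first hue band and a middle lightness band, so it always maps to the same instrument and pitch, which is what makes colors learnable as sounds.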

    Artificial vision for the blind: a bio-inspired approach to pattern recognition

    More than 315 million people worldwide suffer from visual impairments, and several studies suggest that this number will double by 2030 due to the ageing of the population. Current approaches to compensating for the loss of sight consist of either specific aids designed to answer particular needs or generic systems such as neuroprostheses and sensory substitution devices. These holistic approaches, which try to restore vision as a whole, have been shown to be very inefficient in real-life situations given the low resolution of their output interfaces. To overcome these obstacles we propose the use of artificial vision to pre-process visual scenes and provide the user with only the relevant information. We have validated this approach through the development of a novel assistive device for the blind called Navig. Through shape recognition and spatialized sound synthesis, this system allows users to locate and grab objects of interest. It also features navigational aids based on a new positioning method combining GPS, inertial sensors and the visual detection of geolocalized landmarks. To enhance the performance of the visual module we further developed, as part of this thesis, a bio-inspired pattern recognition algorithm that uses latency-based coding of visual information, oriented-edge representations, and a cascaded architecture combining detections at different resolutions.
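The latency-based coding mentioned above follows the general rank-order idea that stronger responses fire earlier, so the order of the first spikes already carries shape information. A minimal sketch of that principle (illustrative function names; not the thesis algorithm, which applies this to oriented-edge responses in a cascade):

```python
def intensities_to_latencies(values, t_max=1.0):
    """Latency coding: stronger responses fire earlier.
    values: responses in [0, 1]; returns spike latencies in [0, t_max]."""
    return [t_max * (1.0 - v) for v in values]

def rank_order(latencies):
    """Decode by rank order: unit indices sorted by firing time,
    earliest (i.e. strongest) first."""
    return sorted(range(len(latencies)), key=lambda i: latencies[i])
```

The appeal of this scheme for a real-time assistive device is that a downstream classifier can act on the first few spikes without waiting for the full response.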