
    Sensorial substitution system with encoding of visual objects into sounds

    Visual and auditory prostheses involve surgeries that are complex, expensive, and invasive; they are limited to a small number of electrodes and can only be used when the impairment is peripheral. The Vibe and the PSVA instead encode the entire image into a single complex sound. The PSVA associates a frequency with each pixel, increasing from left to right and from bottom to top of the image, while the Vibe splits the image into several regions that are equivalent to receptive fields. The challenge in this project lies in designing an encoding of the visual scene into auditory stimuli such that the sound carries the most important characteristics of the scene. These sounds should be shaped so that the subject can build mental representations of visual scenes even though the information carrier is the auditory pathway. The complex sound is the sum of all single sounds from each segment; one complex sound is generated for the right ear and another for the left.
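
    To make the encoding principle concrete, the following Python sketch illustrates a PSVA-like mapping under stated assumptions: each pixel is assigned a sinusoid whose frequency increases from left to right and from bottom to top, and the complex sound is the intensity-weighted sum of those sinusoids. The left/right split by image half, the frequency range, duration, and sample rate are illustrative assumptions, not part of the published PSVA or Vibe specifications.

        import numpy as np

        def psva_like_encoding(image, fs=44100, duration=1.0,
                               f_min=200.0, f_max=5000.0):
            """Minimal PSVA-style sketch: each pixel drives a sinusoid whose
            frequency rises left-to-right and bottom-to-top; the complex sound
            is the intensity-weighted sum of all pixel sinusoids. The left/right
            split by image half is an assumption for illustration only."""
            img = np.array(image, dtype=float)
            img /= (img.max() + 1e-9)                     # normalise intensities to [0, 1]
            rows, cols = img.shape
            t = np.linspace(0.0, duration, int(fs * duration), endpoint=False)

            # Frequency grid: lowest at bottom-left, highest at top-right.
            freqs = np.linspace(f_min, f_max, rows * cols).reshape(rows, cols)
            freqs = freqs[::-1, :]                        # flip so frequency rises bottom-to-top

            left = np.zeros_like(t)
            right = np.zeros_like(t)
            for r in range(rows):
                for c in range(cols):
                    tone = img[r, c] * np.sin(2 * np.pi * freqs[r, c] * t)
                    if c < cols // 2:                     # assumed: left image half -> left ear
                        left += tone
                    else:
                        right += tone
            stereo = np.stack([left, right], axis=1)
            return stereo / (np.abs(stereo).max() + 1e-9)  # normalise to avoid clipping

        # Example: encode a tiny 8x8 grey-level image into a 1 s stereo signal.
        signal = psva_like_encoding(np.random.rand(8, 8))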

    Implementation of a vision-to-audition sensory substitution system

    This research project was carried out within the NECOTIS research group (Neurosciences Computationnelles et Traitement Intelligent du Signal), which works mainly in image and audio processing using bio-inspired signal-processing methods; applications have been developed in speech recognition, sound source separation, and image recognition. Although they have existed for more than forty years, assistive systems for people with visual impairments, whether visual prostheses (invasive) or sensory substitution systems (non-invasive), have not gained traction in the disability community. It would be difficult to attribute this to technological limitations: since the earliest approaches, visual prostheses and sensory substitution systems have kept improving and diversifying. However, while the question of how to transmit the signal is well documented, the question of what signal to transmit has been addressed far more rarely. Various systems have been developed, and the most striking evidence comes from the accounts of their users. It is heartening to read that the artist Neil Harbisson, who sees no colour, explains how a camera attached to his head allows him to hear colours and thus to paint [Montandon, 2004]. Another equally impressive example is the scientist Wanda Díaz-Merced, who works for xSonify and explains how she analyses different data by encoding them as sound [Feder, 2012]. It is in this context that this vision-to-audition sensory substitution project was developed. We used bio-inspired signal processing to extract characteristics representative of vision, and we sought to generate a sound that is pleasant to the ear and representative of the environment in which the person moves. The project therefore focused primarily on the nature of the signal transmitted to the person with visual impairments.

    Digitizing the chemical senses: possibilities & pitfalls

    Many people are understandably excited by the suggestion that the chemical senses can be digitized, be it to deliver ambient fragrances (e.g., in virtual reality or health-related applications) or to transmit flavour experiences via the internet. However, to date, progress in this area has been surprisingly slow, and the majority of attempts at commercialization have failed, often in the face of consumer ambivalence over the perceived benefits/utility. In this review, with the focus squarely on the domain of Human-Computer Interaction (HCI), we summarize the state of the art in the area. We highlight the key possibilities and pitfalls as far as stimulating the so-called ‘lower’ senses of taste, smell, and the trigeminal system are concerned. Ultimately, we suggest that mixed reality solutions are currently the most plausible as far as delivering (or rather modulating) flavour experiences digitally is concerned. The key problems with digital fragrance delivery relate to attention and attribution: people often fail to detect fragrances when they are concentrating on something else, and even when they detect that their chemical senses have been stimulated, there is always a danger that they attribute their experience (e.g., pleasure) to one of the other senses – this is what we call ‘the fundamental attribution error’. We conclude with an outlook on digitizing the chemical senses and summarize a set of open-ended questions that the HCI community has to address in future explorations of smell and taste as interaction modalities.

    Haptic and Audio-visual Stimuli: Enhancing Experiences and Interaction


    How input modality and visual experience affect the representation of categories in the brain

    The general aim of the present dissertation was to contribute to our understanding of how sensory input and sensory experience shape the way the human brain implements categorical knowledge. The goal was twofold: (1) to understand whether there are brain regions that encode information about different categories regardless of input modality and sensory experience (study 1); and (2) to deepen the investigation of the mechanisms that drive crossmodal and intramodal plasticity following early blindness and the way they express themselves during the processing of different categories presented as real-world sounds (study 2). To address these questions, we used fMRI to characterize the brain responses to different conceptual categories presented acoustically to sighted and early blind (EB) individuals, and visually to a separate sighted group. In study 1, we observed that the right posterior middle temporal gyrus (rpMTG) was the region that most reliably decoded categories and selectively correlated with conceptual models of our stimulus space, independently of input modality and visual experience. However, this region keeps the representational formats from the different modalities separate, revealing a multimodal rather than an amodal nature. In addition, the ventral occipito-temporal cortex (VOTC) showed distinct functional profiles in the two hemispheres. The left VOTC was involved in acoustical categorization to the same degree in sighted and blind individuals; we propose that this involvement might reflect an engagement of the left VOTC in more semantic/linguistic processing of the stimuli, potentially supported by its enhanced connection with the language system. However, paralleling our observation in rpMTG, the representations from different modalities remain segregated in VOTC, showing little evidence of sensory abstraction. The right VOTC, in contrast, emerged as a sensory-related visual region in sighted individuals, with the ability to rewire itself toward acoustical stimulation in case of early visual deprivation. In study 2, we observed opposite effects of early visual deprivation on auditory decoding in occipital and temporal regions: while occipital regions contained more information about sound categories in the blind, the temporal cortex showed higher decoding in the sighted. This imbalance was stronger in the right hemisphere, where we also observed a negative correlation between occipital and temporal decoding of sound categories in EB individuals. These results suggest that the intramodal and crossmodal reorganizations might be interconnected. We therefore propose that the extension of non-visual functions into the occipital cortex of EB individuals may trigger a network-level reorganization that reduces the computational load of the regions typically coding for the remaining senses, because part of that computation is taken over by occipital regions.
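
    As a rough illustration of the kind of analysis described above (not the authors' exact pipeline), the following Python sketch shows a representational-similarity comparison: the dissimilarity structure of neural response patterns across categories is rank-correlated with the dissimilarity structure of a conceptual model of the stimulus space. The array shapes, the correlation-distance metric, and the Spearman comparison are assumptions made for illustration.

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        def rsa_model_correlation(neural_patterns, model_features):
            """Minimal representational-similarity sketch: correlate the
            dissimilarity structure of neural response patterns
            (conditions x voxels) with that of a conceptual model
            (conditions x features)."""
            neural_rdm = pdist(neural_patterns, metric='correlation')  # condition-wise dissimilarities
            model_rdm = pdist(model_features, metric='correlation')
            rho, p = spearmanr(neural_rdm, model_rdm)                   # rank correlation of the two RDMs
            return rho, p

        # Hypothetical example: 8 sound categories, 200 voxels, 10 model dimensions.
        rho, p = rsa_model_correlation(np.random.rand(8, 200), np.random.rand(8, 10))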

    Space and time in the human brain


    Synesthetic hallucinations induced by psychedelic drugs in a congenitally blind man

    This case report offers rare insights into crossmodal responses to psychedelic drug use in a congenitally blind (CB) individual as a form of synthetic synesthesia. BP's personal experience provides us with a unique report on the psychological and sensory alterations induced by hallucinogenic drugs, including an account of the absence of visual hallucinations, and a compelling look at the relationship between LSD-induced synesthesia and crossmodal correspondences. The hallucinatory experiences reported by BP are of particular interest in light of the observation that rates of psychosis within the CB population are extremely low. The phenomenology of the induced hallucinations suggests that experiences acquired through other means might not give rise to "visual" experiences in the phenomenological sense, but instead give rise to novel experiences in the other functioning senses.

    How Do We Experience Crossmodal Correspondent Mulsemedia Content?

    Sensory studies have emerged as a significant influence upon Human-Computer Interaction and traditional multimedia. Mulsemedia is an area that extends multimedia, addressing multisensorial response through the combination of at least three media, typically non-traditional media together with traditional audio-visual content. In this paper, we explore the concepts of Quality of Experience and crossmodal correspondences through a case study of different mulsemedia setups. The content is designed following principles of crossmodal correspondence between different sensory dimensions and delivered through olfactory, auditory, and vibrotactile displays. Quality of Experience is evaluated through both subjective (questionnaire) and objective means (eye gaze and heart rate). Results show that the auditory experience influences the olfactory sensory responses and lessens the perception of lingering odor. Heat maps of eye gaze suggest that the crossmodality between olfactory and visual content leads to increased visual attention on the factors of the employed crossmodal correspondence (e.g., color, brightness, shape).
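
    The gaze heat maps mentioned above can be approximated with a simple accumulate-and-smooth procedure; the Python sketch below is illustrative only and does not reproduce the study's actual tooling. The screen resolution, smoothing width, and fixation format are assumed values.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def gaze_heat_map(fixations, width=1920, height=1080, sigma=40):
            """Illustrative sketch: accumulate fixation points on the screen
            grid and blur them to obtain the kind of heat map used to compare
            visual attention across conditions. `fixations` is an iterable of
            (x, y) pixel coordinates."""
            heat = np.zeros((height, width))
            for x, y in fixations:
                if 0 <= x < width and 0 <= y < height:
                    heat[int(y), int(x)] += 1.0             # count fixations per pixel
            heat = gaussian_filter(heat, sigma=sigma)        # smooth into a density map
            return heat / (heat.max() + 1e-9)                # normalise for display

        # Hypothetical example: 500 random fixations on a Full HD screen.
        rng = np.random.default_rng(0)
        heat = gaze_heat_map(np.c_[rng.uniform(0, 1920, 500), rng.uniform(0, 1080, 500)])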

    Marketing sonified fragrance: Designing soundscapes for scent

    Auditory branding is undoubtedly becoming more important across a range of sectors. One area in particular that has recently seen significant growth concerns the introduction of music and soundscapes specifically designed to match a particular scent (what one might think of as “audio scents” or “sonic scents”). This represents an exciting new approach to the sensory marketing of fragrance and to industries with strategic sensory goals, such as cosmetics. Crucially, techniques such as the semantic differential technique, together with the emerging literature on crossmodal correspondences, offer both a mechanistic understanding of, and a practical framework for, rigorously aligning the connotative meaning and the conceptual/emotional/sensory associations of sound and scent. These developments have enabled those working in the creative industries to move beyond previously popular approaches to matching, or translating between, the senses, which were often based on the idiosyncratic phenomenon of synaesthesia, toward a more scientific approach, while still enabling (and requiring) a healthy dose of artistic inspiration. In this narrative historical review, we highlight the various approaches to the systematic matching of sound with scent and review the marketing activations that have recently appeared in this space.
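
    One simple way to operationalise the semantic differential technique mentioned above is to rate both the scent and each candidate soundscape on the same set of bipolar adjective scales and select the soundscape whose rating profile correlates most strongly with the scent's. The Python sketch below is a toy illustration of that idea; the scale names, soundscape names, and ratings are invented for the example.

        import numpy as np

        def best_matching_soundscape(scent_profile, soundscape_profiles):
            """Toy semantic-differential matching sketch: the scent and each
            candidate soundscape are rated on the same bipolar adjective
            scales, and the soundscape whose rating profile correlates most
            strongly with the scent's profile is selected."""
            scores = {name: np.corrcoef(scent_profile, ratings)[0, 1]
                      for name, ratings in soundscape_profiles.items()}
            return max(scores, key=scores.get), scores

        # Hypothetical ratings on scales such as warm-cold, light-heavy, smooth-rough, ...
        scent = np.array([6.1, 2.3, 5.4, 4.8, 3.2])
        candidates = {"soundscape_A": np.array([5.8, 2.9, 5.1, 4.5, 3.6]),
                      "soundscape_B": np.array([2.2, 6.0, 3.1, 2.7, 5.9])}
        best, scores = best_matching_soundscape(scent, candidates)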
