5,229 research outputs found

    Haptic Wave

    We present the Haptic Wave, a device that allows cross-modal mapping of digital audio to the haptic domain, intended for use by audio producers/engineers with visual impairments. We describe a series of participatory design activities adapted to non-sighted users where the act of prototyping facilitates dialog. A series of workshops scoping user needs, and testing a technology mock up and lo-fidelity prototype fed into the design of a final high-spec prototype. The Haptic Wave was tested in the laboratory, then deployed in real world settings in recording studios and audio production facilities. The cross-modal mapping is kinesthetic and allows the direct manipulation of sound without the translation of an existing visual interface. The research gleans insight into working with users with visual impairments, and transforms perspective to think of them as experts in non-visual interfaces for all users. This received the Best Paper Award at CHI 2016, the most prestigious human-computer interaction conference and one of the top-ranked conferences in computer science

    Mixed Reality Browsers and Pedestrian Navigation in Augmented Cities

    In this paper we use a declarative format for positional audio, with synchronization between audio chunks expressed in SMIL. This format has been specifically designed for the type of audio used in AR applications. The audio engine associated with this format runs on mobile platforms (iOS, Android). Our MRB browser, called IXE, uses a format based on volunteered geographic information (OpenStreetMap), and OSM documents for IXE can be fully authored inside OSM editors like JOSM. This is in contrast with other AR browsers such as Layar, Junaio, and Wikitude, which use a Point of Interest (POI) based format with no notion of ways. This introduces a fundamental difference, and in some sense a duality relation, between IXE and the other AR browsers. In IXE, Augmented Virtuality (AV) navigation along a route (composed of ways) is central, and AR interaction with objects is delegated to associated 3D activities. In AR browsers, navigation along a route is delegated to associated map activities, and AR interaction with objects is central. IXE supports multiple tracking technologies and therefore allows both indoor navigation in buildings and outdoor navigation at the level of sidewalks. A first Android version of the IXE browser will be released at the end of 2013. Being based on volunteered geographic information, it will allow building accessible pedestrian networks in augmented cities
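    As a rough illustration of the way-based (rather than POI-based) representation described above, the following Python sketch parses pedestrian ways and their node coordinates out of an OSM XML document using only the standard library. The element names (node, way, nd, tag) are standard OSM XML; the file name and the footway filter are illustrative assumptions, not part of the IXE format itself.

        import xml.etree.ElementTree as ET

        def load_pedestrian_ways(osm_path):
            """Parse an OSM XML file and return ways usable by pedestrians,
            each as an ordered list of (lat, lon) coordinates."""
            root = ET.parse(osm_path).getroot()

            # Index node coordinates by id.
            nodes = {
                n.get("id"): (float(n.get("lat")), float(n.get("lon")))
                for n in root.findall("node")
            }

            routes = []
            for way in root.findall("way"):
                tags = {t.get("k"): t.get("v") for t in way.findall("tag")}
                # Keep only pedestrian-friendly ways (this filter is an assumption).
                if tags.get("highway") in ("footway", "pedestrian", "path"):
                    coords = [nodes[nd.get("ref")] for nd in way.findall("nd")
                              if nd.get("ref") in nodes]
                    routes.append({"name": tags.get("name", "unnamed"), "coords": coords})
            return routes

        if __name__ == "__main__":
            for route in load_pedestrian_ways("sidewalks.osm"):  # hypothetical file
                print(route["name"], len(route["coords"]), "points")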

    Adapted materials for teaching English to blind and visually impaired students

    During our training process as future English teachers, it was possible to observe that there are many challenges teachers face in the classrooms they are assigned to. One of them arose in the English classes at Uniminuto, where students with diverse abilities and regular students attend the same course in the same classroom: it was hard for students with visual difficulties to do the activities proposed for the classes. This was because the book was an essential part of the classes and, because of their visual limitation, these students could not follow the class in the same way as their sighted peers and therefore could not do the activities in the same way. This research project was carried out with the aim of examining the contribution that using adapted materials made to teaching English to blind and visually impaired students. To that end, activities were created according to the topics to be taught in the course, and materials were designed for all students to help them carry out the activities proposed for the classes

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) further weight is given to the argument that objects may be stored in and retrieved from a pre-attentional store during this task

    Multisensory learning in adaptive interactive systems

    The main purpose of my work is to investigate multisensory perceptual learning and sensory integration in the design and development of adaptive user interfaces for educational purposes. To this aim, starting from a renewed understanding in neuroscience and cognitive science of multisensory perceptual learning and sensory integration, I developed a theoretical computational model for designing multimodal learning technologies that takes these results into account. The main theoretical foundations of my research are multisensory perceptual learning theories, research on sensory processing and integration, embodied cognition theories, computational models of non-verbal and emotion communication in full-body movement, and human-computer interaction models. Finally, the computational model was applied in two case studies based on two EU ICT-H2020 projects, "weDRAW" and "TELMI", on which I worked during my PhD

    “Give me happy pop songs in C major and with a fast tempo”: A vocal assistant for content-based queries to online music repositories

    This paper presents an Internet of Musical Things system devised to support recreational music-making, improvisation, composition, and music learning via vocal queries to an online music repository. The system involves a commercial voice-based interface and the Jamendo cloud-based repository of Creative Commons music content. Thanks to the system, the user can query the Jamendo music repository by six content-based features, and any combination thereof: mood, genre, tempo, chords, key, and tuning. Such queries differ from the conventional methods for music retrieval, which are based on the piece's title and the artist's name. These features were identified following a survey with 112 musicians, which preliminarily validated the concept underlying the proposed system. A user study with 20 musicians showed that the system was deemed usable, able to provide a satisfactory user experience, and useful in a variety of musical activities. Differences in the participants' needs were identified, which highlighted the need for personalization mechanisms based on the expertise level of the user. Importantly, the system was seen as a concrete solution to physical encumbrances that arise from the concurrent use of the instrument and of devices providing interactive media resources. Finally, the system offers benefits to visually impaired musicians
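    As a hedged sketch of how such a content-based query might be composed programmatically, the Python snippet below maps mood, genre, and tempo onto a request to the public Jamendo API. The endpoint and parameter names (client_id, fuzzytags, speed) follow the public Jamendo API v3.0 but are assumptions here, and the paper's voice-based system may map its six features differently; key, chords, and tuning are omitted because they are not covered by this sketch.

        import requests

        JAMENDO_TRACKS = "https://api.jamendo.com/v3.0/tracks/"
        CLIENT_ID = "your_client_id"  # placeholder credential

        def search_tracks(mood=None, genre=None, speed=None, limit=10):
            """Query Jamendo for tracks matching mood/genre tags and a tempo class."""
            params = {"client_id": CLIENT_ID, "format": "json", "limit": limit}
            # Mood and genre are matched as fuzzy tags; tempo maps to 'speed'.
            tags = [t for t in (mood, genre) if t]
            if tags:
                params["fuzzytags"] = "+".join(tags)
            if speed:
                params["speed"] = speed  # e.g. "high" for a fast tempo
            response = requests.get(JAMENDO_TRACKS, params=params, timeout=10)
            response.raise_for_status()
            return response.json().get("results", [])

        if __name__ == "__main__":
            # Roughly: "give me happy pop songs with a fast tempo"
            for track in search_tracks(mood="happy", genre="pop", speed="high"):
                print(track.get("artist_name"), "-", track.get("name"))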

    Haptic Wave: A Cross-Modal Interface for Visually Impaired Audio Producers

    We present the Haptic Wave, a device that allows cross-modal mapping of digital audio to the haptic domain, intended for use by audio producers/engineers with visual impairments. We describe a series of participatory design activities adapted to non-sighted users where the act of prototyping facilitates dialog. A series of workshops scoping user needs, and testing a technology mock up and lo-fidelity prototype fed into the design of a final high-spec prototype. The Haptic Wave was tested in the laboratory, then deployed in real world settings in recording studios and audio production facilities. The cross-modal mapping is kinesthetic and allows the direct manipulation of sound without the translation of an existing visual interface. The research gleans insight into working with users with visual impairments, and transforms perspective to think of them as experts in non-visual interfaces for all users
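    A minimal sketch, assuming a mono 16-bit WAV input, of one way a waveform could be reduced to a kinaesthetic representation: the timeline is split into windows, and each window's RMS level could drive the height of a motorised fader as the user scrubs through the audio. The windowing and normalisation choices below are illustrative assumptions, not the Haptic Wave's actual signal path.

        import wave
        import numpy as np

        def amplitude_envelope(wav_path, n_segments=200):
            """Reduce a mono 16-bit WAV file to n_segments RMS values in [0, 1],
            one candidate haptic fader position per timeline window."""
            with wave.open(wav_path, "rb") as wf:
                frames = wf.readframes(wf.getnframes())
            samples = np.frombuffer(frames, dtype=np.int16).astype(np.float64)
            samples /= np.iinfo(np.int16).max  # normalise to [-1, 1]

            # Split the timeline into equal windows and take the RMS of each.
            windows = np.array_split(samples, n_segments)
            rms = np.array([np.sqrt(np.mean(w ** 2)) for w in windows])

            # Scale so the loudest window maps to full fader travel.
            peak = rms.max()
            return rms / peak if peak > 0 else rms

        if __name__ == "__main__":
            envelope = amplitude_envelope("mix.wav")  # hypothetical file
            print(len(envelope), "haptic positions, loudest at window", int(envelope.argmax()))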

    Electrophysiological correlates and psychoacoustic characteristics of hearing-motion synaesthesia

    People with hearing-motion synaesthesia experience sounds from moving or changing (e.g. flickering) visual stimuli. This phenomenon may be one of the most common forms of synaesthesia, but it has rarely been studied and there are no studies of its neural basis. We screened for it in a sample of over 200 individuals and estimated a prevalence of 4.2%. We also document its characteristics: it tends to be induced by physically moving stimuli (more so than by static stimuli that imply motion or trigger illusory motion), and its psychoacoustic features are simple (e.g. “whooshing”) with some systematic correspondences to vision (e.g. faster movement maps to higher pitch). We demonstrate using event-related potentials that it emerges from early perceptual processing of vision. The synaesthetes have a higher-amplitude motion-evoked N2 (165-185 msec), with some evidence of group differences as early as 55-75 msec. We discuss similarities between hearing-motion synaesthesia and previous observations that visual motion triggers auditory activity in the congenitally deaf. It is possible that both conditions reflect the maintenance of multisensory pathways found in early development that most people lose but that can be retained in certain people in response to sensory deprivation (in the deaf) or, in people with normal hearing, as a result of other differences (e.g. genes predisposing to synaesthesia)
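    For readers unfamiliar with how a component amplitude such as the motion-evoked N2 is quantified, the numpy sketch below averages epoched EEG data across trials and then across a fixed latency window (here 165-185 ms). The array layout, sampling rate, and epoch start time are assumptions for illustration and are not taken from the study's analysis pipeline.

        import numpy as np

        def mean_window_amplitude(epochs, sfreq, tmin_epoch, win_start, win_end):
            """Mean ERP amplitude per channel inside a latency window.

            epochs     : array (n_trials, n_channels, n_samples), in volts
            sfreq      : sampling rate in Hz (assumed, e.g. 500.0)
            tmin_epoch : time of the first sample relative to stimulus onset (s)
            win_start, win_end : component window in seconds (e.g. 0.165, 0.185)
            """
            start = int(round((win_start - tmin_epoch) * sfreq))
            end = int(round((win_end - tmin_epoch) * sfreq))
            erp = epochs.mean(axis=0)              # average trials -> (channels, samples)
            return erp[:, start:end].mean(axis=1)  # one value per channel

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            fake_epochs = rng.normal(0.0, 1e-6, size=(40, 64, 450))  # 40 trials, 64 channels
            n2 = mean_window_amplitude(fake_epochs, sfreq=500.0, tmin_epoch=-0.1,
                                       win_start=0.165, win_end=0.185)
            print("N2 mean amplitude for first 5 channels:", n2[:5])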

    From signal to substance and back: Insights from environmental sound research to auditory display design

    Presented at the 15th International Conference on Auditory Display (ICAD2009), Copenhagen, Denmark, May 18-22, 2009.
    A persistent concern in the field of auditory display design has been how to effectively use environmental sounds, which are naturally occurring, familiar non-speech, non-musical sounds. Environmental sounds represent physical events in the everyday world, and thus they have a semantic content that enables learning and recognition. However, unless used appropriately, their functions in auditory displays may cause problems. One of the main considerations in using environmental sounds as auditory icons is how to ensure the identifiability of the sound sources. The identifiability of an auditory icon depends both on the intrinsic acoustic properties of the sound it represents and on the semantic fit of the sound to its context, i.e., whether the context is one in which the sound naturally occurs or would be unlikely to occur. Relatively recent research has yielded some insights into both of these factors. A second major consideration is how to use the source properties to represent events in the auditory display. This entails parameterizing the environmental sounds so the acoustics will both relate to source properties familiar to the user and convey meaningful new information to the user. Finally, particular considerations come into play when designing auditory displays for special populations, such as hearing-impaired listeners who may not have access to all the acoustic information available to a normal-hearing listener, or elderly or other individuals whose cognitive resources may be diminished. Some guidelines for designing displays for these populations are outlined
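    As a toy illustration of the parameterisation idea above, the Python sketch below re-times an environmental sound so that a continuous data value (a hypothetical "urgency" in [0, 1]) changes the icon's playback rate, conveying information through a familiar source property rather than an arbitrary tone. The 0.75-1.5x rate range and the linear-interpolation resampling are assumptions, not guidelines from the paper.

        import numpy as np

        def parameterize_icon(samples, urgency):
            """Re-time an auditory icon so that higher urgency plays it faster
            (and hence shorter and higher in perceived pitch).

            samples : 1-D numpy array holding the icon's waveform
            urgency : value in [0, 1]
            """
            rate = 0.75 + 0.75 * float(np.clip(urgency, 0.0, 1.0))
            n_out = int(len(samples) / rate)
            # Linear-interpolation resampling: read the original waveform at a
            # faster (or slower) rate to shorten (or lengthen) the icon.
            positions = np.linspace(0, len(samples) - 1, n_out)
            return np.interp(positions, np.arange(len(samples)), samples)

        if __name__ == "__main__":
            icon = np.sin(2 * np.pi * 440 * np.arange(0, 0.5, 1 / 44100))  # stand-in sound
            calm = parameterize_icon(icon, urgency=0.1)
            urgent = parameterize_icon(icon, urgency=0.9)
            print(len(icon), len(calm), len(urgent))  # the urgent version is shortest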