
    Novel Multimodal Feedback Techniques for In-Car Mid-Air Gesture Interaction

    This paper presents an investigation into the effects of different feedback modalities on mid-air gesture interaction with in-car infotainment systems. Driver distraction is the most common cause of car crashes and near-crash events. Mid-air interaction can reduce driver distraction by reducing the visual demand of infotainment systems. Despite the range of available modalities, feedback in mid-air gesture systems is generally provided through visual displays. We conducted a simulated driving study to investigate how different types of multimodal feedback can support in-air gestures, considering the effects of each feedback modality on eye-gaze behaviour and on the driving and gesturing tasks. We found that feedback modality influenced gesturing behaviour; however, drivers corrected falsely executed gestures more often in the non-visual conditions. Our findings show that non-visual feedback can significantly reduce visual distraction.

    "Spindex" (speech index) enhances menu navigation user experience of touch screen devices in various input gestures: tapping, wheeling, and flicking

    In a large number of electronic devices, users interact with the system by navigating through various menus. Auditory menus can complement or even replace visual menus, so research on auditory menus has recently increased for mobile devices as well as desktop computers. Despite the potential importance of auditory displays on touch screen devices, little research has attempted to enhance the effectiveness of auditory menus for those devices. In the present study, I investigated how advanced auditory cues enhance auditory menu navigation on a touch screen smartphone, especially for new input gestures such as tapping, wheeling, and flicking when navigating a one-dimensional menu. Moreover, I examined whether advanced auditory cues improve user experience not only in visuals-off situations but also in visuals-on contexts. To this end, I used a novel auditory menu enhancement called a "spindex" (i.e., speech index), in which brief audio cues inform users of where they are in a long menu. In this study, each item in a menu was preceded by a sound based on the item's initial letter. One hundred and twenty-two undergraduates navigated through an alphabetized list of 150 song titles. The study was a split-plot design with auditory cue type (text-to-speech (TTS) alone vs. TTS plus spindex), visual mode (on vs. off), and input gesture style (tapping, wheeling, and flicking) as manipulated factors. Target search time and subjective workload were lower for TTS plus spindex than for TTS alone across all input gesture types, regardless of visual mode. On subjective rating scales, participants also rated the TTS plus spindex condition higher than plain TTS on being 'effective' and 'functionally helpful'. The interaction between input methods and output modes (i.e., auditory cue types) and its effects on navigation behaviors were also analyzed based on the two-stage navigation strategy model used in auditory menus. Results are discussed in analogy with visual search theory and in terms of practical applications of spindex cues.
    M.S. Committee Chair: Bruce N. Walker; Committee Member: Frank Durso; Committee Member: Gregory M. Cors
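    The spindex mechanism described above (a brief cue derived from an item's initial letter, played before the full TTS readout) is simple enough to illustrate in code. The sketch below is a minimal, hypothetical Python rendering of that idea; the injected play_audio and speak_tts callables and the cue filenames are assumptions for illustration, not the thesis's actual implementation.

    ```python
    # Minimal sketch of a spindex-style auditory menu, assuming hypothetical
    # play_audio() and speak_tts() backends; the cue filenames ("a.wav" ... "z.wav")
    # are placeholders.

    class SpindexMenu:
        """Auditory menu that prefixes each item's readout with a brief
        'spindex' cue derived from the item's initial letter."""

        def __init__(self, items, play_audio, speak_tts):
            self.items = sorted(items)       # e.g. an alphabetized list of song titles
            self.play_audio = play_audio     # plays a short pre-recorded cue
            self.speak_tts = speak_tts       # reads the full item via text-to-speech
            self.index = 0

        def _spindex_cue(self, item):
            # Map the item's initial letter to a short audio cue; non-letter
            # initials fall back to a generic tick sound.
            first = item[0].lower()
            return f"{first}.wav" if first.isalpha() else "tick.wav"

        def scroll(self, step):
            # While tapping, wheeling, or flicking through the list, only the
            # brief spindex cue plays, so users can skim to the right letter
            # region without waiting for full TTS on every item.
            self.index = max(0, min(len(self.items) - 1, self.index + step))
            self.play_audio(self._spindex_cue(self.items[self.index]))

        def dwell(self):
            # Once the user pauses on an item, the full title is spoken.
            item = self.items[self.index]
            self.play_audio(self._spindex_cue(item))
            self.speak_tts(item)

    # Usage with stand-in backends:
    menu = SpindexMenu(["Yesterday", "Imagine", "Hey Jude"],
                       play_audio=lambda f: print("cue:", f),
                       speak_tts=lambda t: print("tts:", t))
    menu.scroll(+2)
    menu.dwell()
    ```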

    May the Force Be with You: Ultrasound Haptic Feedback for Mid-Air Gesture Interaction in Cars

    The use of ultrasound haptic feedback for mid-air gestures in cars has been proposed to give users a sense of control over their intended actions and to add touch to an otherwise touchless interaction. However, the impact of ultrasound feedback to the gesturing hand on lane deviation, eyes-off-the-road time (EORT) and perceived mental demand has not yet been measured. This paper investigates the impact of uni- and multimodal presentation of ultrasound feedback on the primary driving task and the secondary gesturing task in a simulated driving environment. The multimodal combinations paired ultrasound with visual, auditory, and peripheral-light feedback. We found that ultrasound feedback presented uni-modally and bi-modally resulted in significantly less EORT than visual feedback. Our results suggest that multimodal ultrasound feedback for mid-air interaction decreases EORT without compromising driving performance or mental demand, and can thus increase safety while driving.

    Making Spatial Information Accessible on Touchscreens for Users who are Blind and Visually Impaired

    Touchscreens have become the de facto standard of input for mobile devices because they make optimal use of the limited input and output space imposed by their form factor. In recent years, people who are blind and visually impaired have increasingly adopted smartphones and touchscreens. Although basic access is available, many accessibility issues remain before this population is fully included. One important challenge lies in accessing and creating spatial information on touchscreens. The work presented here provides three new techniques, using three different modalities, for accessing spatial information on touchscreens. The first system makes geometry and diagram creation accessible on a touchscreen through text-to-speech and gestural input; it is informed by a qualitative study of how people who are blind and visually impaired currently access and create graphs and diagrams. The second system makes directions through maps accessible using multiple vibration sensors, without any sound or visual output. The third system investigates the use of binaural sound on a touchscreen to make various types of applications accessible, such as physics simulations, astronomy, and video games.
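    As a rough illustration of how non-visual spatial feedback on a touchscreen can work, the sketch below maps a finger's position to stereo panning and loudness. This is an assumed simplification for illustration only: the dissertation's third system uses binaural sound, which would rely on head-related transfer functions rather than the plain constant-power pan shown here, and none of the function names or parameters come from that work.

    ```python
    import math

    # Assumed, simplified stand-in for spatial audio feedback on a touchscreen:
    # pan follows the horizontal finger position, and loudness grows as the
    # finger approaches a (hypothetical) target's vertical position.

    def pan_gains(touch_x: float, screen_width: float) -> tuple[float, float]:
        """Constant-power pan: returns (left_gain, right_gain) for a touch at
        touch_x on a screen that is screen_width pixels wide."""
        p = max(0.0, min(1.0, touch_x / screen_width))   # 0 = far left, 1 = far right
        angle = p * math.pi / 2
        return math.cos(angle), math.sin(angle)

    def proximity_gain(touch_y: float, target_y: float, screen_height: float) -> float:
        """Overall loudness increases as the finger nears the target row."""
        distance = abs(touch_y - target_y) / screen_height
        return 1.0 - min(1.0, distance)

    # Example: a target near the top-right of a 1080x1920 screen.
    left, right = pan_gains(touch_x=900, screen_width=1080)
    gain = proximity_gain(touch_y=300, target_y=250, screen_height=1920)
    print(f"left={left * gain:.2f}, right={right * gain:.2f}")
    ```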

    Shared Input Multimodal Mobile Interfaces: Interaction Modality Effects on Menu Selection in Single-task and Dual-task Environments

    Audio and visual modalities are two common output channels in the user interfaces embedded in today's mobile devices. However, these user interfaces typically center on the visual modality as the primary output channel, with audio output serving a secondary role. This paper argues for an increased need for shared input multimodal user interfaces for mobile devices. A shared input multimodal interface can be operated independently through a specific output modality, leaving users free to choose their preferred method of interaction in different scenarios. We evaluate the value of a shared input multimodal menu system both in a single-task desktop setting and in a dynamic dual-task setting, in which the user interacted with the menu system while driving a simulated vehicle. Results indicate that users were faster at locating a target item in the menu when visual feedback was provided in the single-task desktop setting, but in the dual-task driving setting visual output was a significant source of visual distraction that interfered with driving performance. In contrast, auditory output mitigated some of the risk associated with menu selection while driving. A shared input multimodal interface allows users to take advantage of whichever feedback modality suits the situation, providing a better overall experience.
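    The core architectural idea, one menu model and one set of input actions with interchangeable output renderers, can be sketched briefly. The Python below is a minimal, hypothetical illustration of that separation, not the paper's implementation; the class name and the renderer functions are assumptions.

    ```python
    # Minimal sketch of the "shared input" idea: the menu logic and input
    # handling are identical, and only the output channel is swapped per
    # context (e.g. desktop vs. driving). All names here are hypothetical.

    from typing import Callable, List

    class SharedInputMenu:
        def __init__(self, items: List[str], outputs: List[Callable[[str, int], None]]):
            self.items = items
            self.outputs = outputs   # e.g. [draw_on_screen], [speak_item], or both
            self.cursor = 0

        def _present(self):
            for output in self.outputs:
                output(self.items[self.cursor], self.cursor)

        # The same input actions drive the menu regardless of output modality.
        def move(self, step: int):
            self.cursor = (self.cursor + step) % len(self.items)
            self._present()

        def select(self) -> str:
            return self.items[self.cursor]

    # Hypothetical output backends the caller would supply:
    def draw_on_screen(item, index): print(f"[screen] {index}: {item}")
    def speak_item(item, index):     print(f"[speech] {item}")

    # Desktop (visual) and driving (auditory) configurations share input code:
    desktop = SharedInputMenu(["Radio", "Navigation", "Phone"], [draw_on_screen])
    driving = SharedInputMenu(["Radio", "Navigation", "Phone"], [speak_item])
    desktop.move(1)
    driving.move(1)
    ```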
