291 research outputs found

    Non-speech auditory output

    No abstract available

    "Spindex" (speech index) enhances menu navigation user experience of touch screen devices in various input gestures: tapping, wheeling, and flicking

    In a large number of electronic devices, users interact with the system by navigating through various menus. Auditory menus can complement or even replace visual menus, so research on auditory menus has recently increased for mobile devices as well as desktop computers. Despite the potential importance of auditory displays on touch screen devices, little research has attempted to enhance the effectiveness of auditory menus for those devices. In the present study, I investigated how advanced auditory cues enhance auditory menu navigation on a touch screen smartphone, especially for new input gestures such as tapping, wheeling, and flicking for navigating a one-dimensional menu. Moreover, I examined whether advanced auditory cues improve user experience, not only in visuals-off situations, but also in visuals-on contexts. To this end, I used a novel auditory menu enhancement called a "spindex" (i.e., speech index), in which brief audio cues inform users of where they are in a long menu. In this study, each item in a menu was preceded by a sound based on the item's initial letter. One hundred and twenty-two undergraduates navigated through an alphabetized list of 150 song titles. The study used a split-plot design manipulating auditory cue type (text-to-speech (TTS) alone vs. TTS plus spindex), visual mode (on vs. off), and input gesture style (tapping, wheeling, and flicking). Target search time and subjective workload for TTS + spindex were lower than for TTS alone across all input gesture types, regardless of visual mode. On subjective rating scales, participants also rated the TTS + spindex condition higher than plain TTS on being 'effective' and 'functionally helpful'. The interaction between input methods and output modes (i.e., auditory cue types), and its effects on navigation behavior, was also analyzed using the two-stage navigation strategy model for auditory menus. Results are discussed in analogy with visual search theory and in terms of practical applications of spindex cues.
    M.S. thesis. Committee Chair: Bruce N. Walker; Committee Member: Frank Durso; Committee Member: Gregory M. Cors
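    The spindex mechanism described above is simple enough to sketch in code. The following is a minimal, hypothetical illustration rather than the thesis implementation: each menu item is announced by first playing a brief cue derived from its initial letter, then speaking the full item with TTS. The names (SONG_TITLES, play_cue, speak_tts, announce) are placeholders.

```python
# Hypothetical sketch of a spindex-enhanced auditory menu.
SONG_TITLES = ["Across the Universe", "Blackbird", "Come Together", "Yesterday"]

def spindex_cue(item: str) -> str:
    """Return the brief spoken cue for a menu item: its initial letter."""
    first = next((ch for ch in item if ch.isalpha()), "")
    return first.upper()

def play_cue(letter: str) -> None:
    # Placeholder: a real system would play a short pre-recorded letter sound.
    print(f"[cue] {letter}")

def speak_tts(item: str) -> None:
    # Placeholder for a text-to-speech call.
    print(f"[tts] {item}")

def announce(index: int) -> None:
    """Announce the menu item at `index`: brief spindex cue, then full TTS."""
    item = SONG_TITLES[index]
    play_cue(spindex_cue(item))  # tells the user roughly where they are in the list
    speak_tts(item)              # full title; in a real menu this can be interrupted

if __name__ == "__main__":
    for i in range(len(SONG_TITLES)):
        announce(i)
```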

    Using Sound to Represent Uncertainty in Spatial Data

    There is a limit to the amount of spatial data that can be shown visually in an effective manner, particularly when the data sets are extensive or complex. Using sound to represent some of these data (sonification) is a way of avoiding visual overload. This thesis creates a conceptual model showing how sonification can be used to represent spatial data and evaluates a number of elements within the conceptual model. These are examined in three different case studies to assess the effectiveness of the sonifications. Current methods of using sonification to represent spatial data have been restricted by the technology available and have had very limited user testing. While existing research shows that sonification can be done, it does not show whether it is an effective and useful method of representing spatial data to the end user. A number of prototypes show how spatial data can be sonified, but only a small handful of these have performed any user testing beyond the authors' immediate colleagues (where n > 4). This thesis creates and evaluates sonification prototypes which represent uncertainty, using three different case studies of spatial data. Each case study is evaluated by a significant user group (between 45 and 71 individuals) who completed a task-based evaluation with the sonification tool, as well as reporting qualitatively their views on the effectiveness and usefulness of the sonification method. For all three case studies, using sound to reinforce information shown visually results in more effective performance from the majority of the participants than traditional visual methods. Participants who were familiar with the dataset were much more effective at using the sonification than those who were not, and an interactive sonification, which required significant involvement from the user, was much more effective than a static sonification, which did not provide significant user engagement. Using sounds with a clear and easily understood scale (such as piano notes) was important to achieve an effective sonification. These findings are used to improve the conceptual model developed earlier in this thesis and to highlight areas for future research.
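    As a rough illustration of the kind of mapping this thesis evaluates, the sketch below maps a normalized uncertainty value onto a clear, easily understood musical scale (piano notes, given here as MIDI note numbers). The scale choice, value range and cell names are assumptions for illustration, not the actual case-study sonifications.

```python
# Illustrative uncertainty-to-pitch mapping (assumed design, not the thesis tool).
C_MAJOR_MIDI = [60, 62, 64, 65, 67, 69, 71, 72]  # C4..C5: a clear, familiar scale

def uncertainty_to_midi(u: float) -> int:
    """Map an uncertainty value (clamped to [0, 1]) onto a MIDI note number."""
    u = min(max(u, 0.0), 1.0)
    index = round(u * (len(C_MAJOR_MIDI) - 1))
    return C_MAJOR_MIDI[index]

# Example: sonify the uncertainty attached to a few spatial cells (invented values).
cells = {"cell_A": 0.05, "cell_B": 0.40, "cell_C": 0.93}
for name, u in cells.items():
    print(name, "-> MIDI note", uncertainty_to_midi(u))
```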

    Creating and evaluating embodied interactive experiences: case studies of full-body, sonic and tactile enaction.

    This thesis contributes to the field of embodied and multimodal interaction by presenting the development of several original interactive systems. Using a constructive approach, a variety of real-time user interaction situations were designed and tested: two cases of human-virtual character bodily interaction, two interactive sonifications of trampoline jumping, collaborative interaction in mobile music performance, and tangible and tactile interaction with virtual sounds. While diverse in terms of application, all the explored interaction techniques belong to the context of augmentation and are grounded in the theory of embodiment and strategies for natural human-computer interaction (HCI). The cases have been contextualized under the umbrella of enaction, a paradigm of cognitive science that addresses the user as an embodied agent situated in an environment and coupled to it through sensorimotor activity. This activity of sensing and action is studied through different modalities: auditory, tactile and visual, and combinations of these. The designed applications aim at natural interaction with the system, being full-body, tangible and spatially aware. Sonic interaction in particular has been explored in the contexts of music creation, sports and auditory display. These technology-mediated scenarios are evaluated in order to understand what the adopted interaction techniques can bring to the user experience and how they modify impressions and enjoyment. The publications also discuss the enabling technologies used for the development, including motion tracking and programmed hardware for the tactile-sonic and the sonic-tangible interaction. Results show that combining full-body interaction with auditory augmentation and sonic interaction can modify perception, observed behavior and emotion during the experience. Using spatial interaction together with tangible interaction or tactile feedback provides a multimodal experience of exploring a mixed-reality environment where audio can be accessed and manipulated with natural interaction. Embodied and spatial interaction brings playfulness to a mobile music improvisation, shifting the focus of the experience from music-making towards movement-based gaming. Finally, two novel implementations of full-body interaction based on the enactive paradigm are presented. In these designed scenarios of enaction the participant is motion tracked and a virtual character, rendered as a stick figure, is displayed in front of her on a screen. Results from the user studies show how the involvement of the body is crucial in understanding the behavior of a virtual character or a digital representation of the self in a gaming scenario.
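    One of the cases above, the interactive sonification of trampoline jumping, can be hinted at with a toy mapping from a tracked movement parameter to a sound parameter. The sketch below is purely illustrative: the linear mapping, the ranges and the names are assumptions, not the published system.

```python
# Assumed sketch: map tracked jump height to pitch for a movement sonification.
def jump_height_to_pitch_hz(height_m: float,
                            min_hz: float = 220.0,
                            max_hz: float = 880.0,
                            max_height_m: float = 2.0) -> float:
    """Map a tracked jump height (metres) linearly onto a pitch range (Hz)."""
    h = min(max(height_m, 0.0), max_height_m)
    return min_hz + (h / max_height_m) * (max_hz - min_hz)

# A short list of height samples stands in for a real-time motion-tracking stream.
for sample_m in [0.0, 0.6, 1.3, 1.9, 0.4]:
    print(f"{sample_m:.1f} m -> {jump_height_to_pitch_hz(sample_m):.0f} Hz")
```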

    INPUT TECHNOLOGIES AND TECHNIQUES


    A toolkit of resource-sensitive, multimodal widgets

    This thesis describes an architecture for a toolkit of user interface components which allows the presentation of the widgets to use multiple output modalities - typically, audio and visual. Previously there was no toolkit of widgets which would use the most appropriate presentational resources according to their availability and suitability. Typically, the use of different forms of presentation was limited to graphical feedback, with other forms of presentation, such as sound, being added in an ad hoc fashion with only limited scope for managing the use of the different resources. A review of existing auditory interfaces provided some requirements that the toolkit would need to fulfil for it to be effective. In addition, it was found that a strand of research in this area required further investigation to ensure that a full set of requirements was captured: no formal evaluation of audio being used to provide background information had been undertaken. A sonically-enhanced progress indicator was therefore designed and evaluated, showing that audio feedback could be used as a replacement for visual feedback rather than simply as an enhancement. The experiment also completed the requirements capture for the design of the toolkit of multimodal widgets. A review of existing user interface architectures and systems, with particular attention paid to the way they manage multiple output modalities, presented some design guidelines for the architecture of the toolkit. Building on these guidelines, a design for the toolkit which fulfils all the previously captured requirements is presented. An implementation of this design is given, with an evaluation of the implementation showing that it fulfils all the requirements of the design.
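    The core idea of a resource-sensitive widget, choosing the most appropriate available presentational resource rather than hard-coding graphical feedback, can be sketched as follows. This is a minimal illustration under assumed names (ResourceManager, ProgressIndicator); it is not the toolkit's actual architecture or API.

```python
# Minimal sketch of resource-sensitive output selection for a widget.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ResourceManager:
    """Tracks which presentational resources are currently available and suitable."""
    available: Dict[str, bool] = field(
        default_factory=lambda: {"visual": True, "audio": True})

    def choose(self, preferred: List[str]) -> Optional[str]:
        # Return the first preferred modality that is currently available, if any.
        return next((m for m in preferred if self.available.get(m)), None)

@dataclass
class ProgressIndicator:
    manager: ResourceManager
    preferred: tuple = ("visual", "audio")

    def update(self, fraction: float) -> None:
        modality = self.manager.choose(list(self.preferred))
        if modality == "visual":
            print(f"[visual] progress bar at {fraction:.0%}")
        elif modality == "audio":
            # e.g. background tones whose character changes with progress
            print(f"[audio] tone for {fraction:.0%} complete")
        else:
            print("no suitable output resource available")

rm = ResourceManager()
indicator = ProgressIndicator(rm)
indicator.update(0.25)          # visual resource available: draw the bar
rm.available["visual"] = False  # e.g. the display is unavailable or unsuitable
indicator.update(0.50)          # the widget falls back to sonified feedback
```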

    Crossmodal audio and tactile interaction with mobile touchscreens

    Touchscreen mobile devices often use cut-down versions of desktop user interfaces, placing high demands on the visual sense that may prove awkward in mobile settings. The research in this thesis addresses the problems encountered by situationally impaired mobile users by using crossmodal interaction to exploit the abundant similarities between the audio and tactile modalities. By making information available to both senses, users can receive the information in the most suitable way, without having to abandon their primary task to look at the device. This thesis begins with a literature review of related work, followed by a definition of crossmodal icons: two icons may be considered to be crossmodal if and only if they provide a common representation of data which is accessible interchangeably via different modalities. Two experiments investigated possible parameters for use in crossmodal icons, with results showing that rhythm, texture and spatial location are effective. A third experiment focused on learning multi-dimensional crossmodal icons and the extent to which this learning transfers between modalities. The results showed identification rates of 92% for three-dimensional audio crossmodal icons when trained in the tactile equivalents, and identification rates of 89% for tactile crossmodal icons when trained in the audio equivalents. Crossmodal icons were then incorporated into a mobile touchscreen QWERTY keyboard. Experiments showed that keyboards with audio or tactile feedback produce fewer errors and greater speeds of text entry compared to standard touchscreen keyboards. The next study examined how environmental variables affect user performance with the same keyboard. The data showed that each modality performs differently with varying levels of background noise or vibration, and the exact levels at which these performance decreases occur were established. The final study involved a longitudinal evaluation of a touchscreen application, CrossTrainer, focusing on longitudinal effects on performance with audio and tactile feedback, the impact of context on performance, and personal modality preference. The results show that crossmodal audio and tactile icons are a valid method of presenting information to situationally impaired mobile touchscreen users, with recognition rates of 100% over time. This thesis concludes with a set of guidelines on the design and application of crossmodal audio and tactile feedback to enable application and interface designers to employ such feedback in all systems.
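    The definition of a crossmodal icon above, a common representation of data accessible interchangeably via different modalities, suggests a simple data structure. The sketch below is a hypothetical illustration built around the three parameters the thesis found effective (rhythm, texture and spatial location); the rendering functions are placeholders rather than the prototypes' actual audio or vibrotactile output.

```python
# Assumed sketch of a crossmodal icon: one specification, two presentations.
from dataclasses import dataclass

@dataclass(frozen=True)
class CrossmodalIcon:
    rhythm: tuple    # pulse durations in ms, e.g. (100, 100, 300)
    texture: str     # e.g. "smooth" vs "rough" (timbre or vibration waveform)
    location: str    # spatial position, e.g. "left", "centre", "right"

def render_audio(icon: CrossmodalIcon) -> str:
    # Placeholder for playing an earcon with the given rhythm, timbre and panning.
    return f"audio: {icon.texture} timbre, rhythm {icon.rhythm}, panned {icon.location}"

def render_tactile(icon: CrossmodalIcon) -> str:
    # Placeholder for driving a vibrotactile actuator with the same parameters.
    return f"tactile: {icon.texture} waveform, rhythm {icon.rhythm}, actuator {icon.location}"

new_message = CrossmodalIcon(rhythm=(100, 100, 300), texture="smooth", location="left")
print(render_audio(new_message))    # same data, presented in audio
print(render_tactile(new_message))  # same data, presented tactilely
```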

    An analysis of interaction in the context of wearable computers

    The focus of this thesis is on the evaluation of input modalities for generic input tasks, such as inputting text and pointer-based interaction. In particular, input systems that can be used within a wearable computing system are examined in terms of human-wearable computer interaction. The literature identified a lack of empirical research into the use of input devices for text input and pointing when used as part of a wearable computing system. The research carried out within this thesis took an approach that acknowledged the movement condition of the user of a wearable system, and evaluated the wearable input devices while the participants were mobile and stationary. Each experiment was based on the user's time on task, their accuracy, and a NASA TLX assessment, which provided the participant's subjective workload. The input devices assessed were 'off the shelf' systems. These were chosen as they are readily available to a wider range of users than bespoke input systems. Text-based input was examined first. The text input systems evaluated were a keyboard, an on-screen keyboard, a handwriting recognition system, a voice recognition system and a wrist-keyboard (sometimes known as a wrist-worn keyboard). It was found that the most appropriate text input system to use overall was the handwriting recognition system. (This is further explored in the discussion of Chapters three and seven.) The text input evaluations were followed by a series of four experiments that examined pointing devices and assessed their appropriateness as part of a wearable computing system. The devices were an off-table mouse, a speech recognition system, a stylus and a track-pad. These were assessed in relation to the following generic pointing tasks: target acquisition, dragging and dropping, and trajectory-based interaction. Overall, the stylus was found to be the most appropriate input device for use with a wearable system when used as a pointing device. (This is further covered in Chapters four to six.) By completing this series of experiments, evidence has been scientifically established that can support both a wearable computer designer's and a wearable user's choice of input device. These choices can be made in regard to generic interface task activities such as inputting text, target acquisition, dragging and dropping, and trajectory-based interaction.
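    The subjective workload measure mentioned above, the NASA TLX, is commonly reported as a raw (unweighted) score: the mean of six subscale ratings on 0-100 scales. The sketch below computes that raw score; the example ratings are invented for illustration and are not data from the thesis.

```python
# Raw (unweighted) NASA TLX: the mean of the six subscale ratings.
TLX_SUBSCALES = ("mental_demand", "physical_demand", "temporal_demand",
                 "performance", "effort", "frustration")

def raw_tlx(ratings: dict) -> float:
    """Return the raw NASA TLX score: the mean of the six subscale ratings (0-100)."""
    missing = [s for s in TLX_SUBSCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return sum(ratings[s] for s in TLX_SUBSCALES) / len(TLX_SUBSCALES)

# Invented example ratings, not data from the thesis.
example = {"mental_demand": 55, "physical_demand": 30, "temporal_demand": 40,
           "performance": 25, "effort": 60, "frustration": 35}
print(f"raw TLX workload: {raw_tlx(example):.1f}")
```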