
    Multidimensional tactons for non-visual information presentation in mobile devices

    Tactons are structured vibrotactile messages that can be used for non-visual information presentation when visual displays are limited, unavailable or inappropriate, for example in mobile phones and other mobile devices. Little is yet known about how to design them effectively. Previous studies have investigated the perception of Tactons that encode two dimensions of information using two different vibrotactile parameters (rhythm and roughness) and found recognition rates of around 70%. When more dimensions of information are required, it may be necessary to extend the parameter space of these Tactons. This study therefore investigates recognition rates for Tactons that encode a third dimension of information using spatial location. The results show that the identification rate for three-parameter Tactons is just 48%, but that this can be increased to 81% by reducing the number of values of one of the parameters. These results will help designers select suitable Tactons when designing mobile displays.
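    As a concrete illustration (not taken from the paper), the sketch below enumerates a hypothetical three-parameter Tacton set in Python; the specific rhythm, roughness and location values are assumptions.

```python
from itertools import product

# Hypothetical parameter values for a three-parameter Tacton set:
# rhythm (temporal pattern), roughness (amplitude-modulation rate),
# and spatial location (which actuator on the body fires).
RHYTHMS = ["short-short", "long", "short-long-short"]
ROUGHNESS_HZ = [0, 30, 50]          # AM frequency; 0 = smooth
LOCATIONS = ["wrist", "forearm", "elbow"]

# Each Tacton encodes one value per dimension, giving 3*3*3 = 27 messages.
tactons = list(product(RHYTHMS, ROUGHNESS_HZ, LOCATIONS))
print(len(tactons))  # 27 distinct three-dimensional messages

# The paper's result suggests shrinking one dimension (e.g. two roughness
# values instead of three) to trade message count for recognition rate.
```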

    An Empirical Evaluation On Vibrotactile Feedback For Wristband System

    With the rapid development of mobile computing, wearable wrist-worn devices are becoming more and more popular, but the vibrotactile feedback patterns of most current wrist-worn devices are too simple to enable effective interaction in non-visual scenarios. In this paper, we propose a wristband system with four vibrating motors placed at different positions in the wristband, providing multiple vibration patterns that transmit multi-semantic information to users in eyes-free scenarios. After a comparative analysis of nine patterns in a pilot experiment, five vibrotactile patterns were used in the main experiments: positional up and down, horizontal diagonal, clockwise circular, and total vibration. Two experiments with the same 12 participants followed the same procedure, one in the lab and one outdoors. The results show that users can effectively distinguish the five patterns both in the lab and outside, with approximately 90% accuracy (except for the clockwise circular vibration in the outdoor experiment), demonstrating that these five vibration patterns can be used to convey multi-semantic information. The system can be applied to eyes-free interaction scenarios for wrist-worn devices.
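    A minimal sketch of how such patterns might be represented in software, assuming four motors indexed around the band; the motor positions, step timings and the drive_motor stand-in are all hypothetical, not taken from the paper.

```python
import time

# Hypothetical sketch: four motors indexed 0..3 around the wristband
# (0 = top, 1 = right, 2 = bottom, 3 = left). Each pattern is a list of
# (motor_ids, duration_s) steps; the actual timings may differ.
PATTERNS = {
    "up":        [([0], 0.3)],
    "down":      [([2], 0.3)],
    "diagonal":  [([1], 0.15), ([3], 0.15)],          # horizontal diagonal
    "clockwise": [([0], 0.1), ([1], 0.1), ([2], 0.1), ([3], 0.1)],
    "total":     [([0, 1, 2, 3], 0.3)],               # all motors at once
}

def play(pattern, drive_motor=lambda ids, on: None):
    """Step through a pattern; drive_motor is a stand-in for real hardware I/O."""
    for ids, duration in PATTERNS[pattern]:
        drive_motor(ids, True)   # switch the listed motors on
        time.sleep(duration)
        drive_motor(ids, False)  # and off again
```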

    Design of a serious game for learning vibrotactile messages

    To prevent accidental falls, we have designed an augmented shoe aimed at assisting the user when walking. The risk level represented by the current situation (low, medium, high or very high) is conveyed to the user through vibrotactile messages. In this paper, we describe the design of a serious game dedicated to learning these signals. The game is centered on a virtual maze whose parts are associated with the four risk levels. To explore this maze while wearing a pair of the augmented shoes, the user is invited to walk around a completely empty room whose dimensions are mapped to those of the virtual maze. As the user moves, the signal corresponding to each explored area is delivered through the augmented shoes. An initial experiment supported the idea that vibrotactile messages can serve to communicate the level of risk.
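    One way the risk-to-signal mapping might look in code; the pulse counts and timings below are hypothetical, as the abstract does not specify the actual signals.

```python
# Hypothetical mapping from the four risk levels to shoe vibration
# patterns: higher risk gets more pulses delivered at a faster rate.
RISK_PATTERNS = {
    "low":       {"pulses": 1, "interval_s": 1.00},
    "medium":    {"pulses": 2, "interval_s": 0.60},
    "high":      {"pulses": 3, "interval_s": 0.30},
    "very high": {"pulses": 5, "interval_s": 0.15},
}

def signal_for(risk_level):
    """Return the pulse pattern the augmented shoe would play."""
    return RISK_PATTERNS[risk_level]
```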

    A first investigation into the effectiveness of Tactons

    This paper reports two experiments relating to the design of Tactons (or tactile icons). The first experiment investigated the perception of vibrotactile "roughness" (created using amplitude-modulated sinusoids); the results indicated that roughness could be used as a parameter for constructing Tactons. The second experiment is the first full evaluation of Tactons, using three values of roughness identified in the first experiment along with three rhythms to create a set of Tactons. The results showed that Tactons could be a successful means of communicating information in user interfaces, with an overall recognition rate of 71%, and recognition rates of 93% for rhythm and 80% for roughness.
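    The roughness stimuli described here are amplitude-modulated sinusoids; below is a sketch of how one might be generated, assuming a 250 Hz carrier (near the skin's peak vibrotactile sensitivity) and illustrative modulation frequencies, not the paper's exact values.

```python
import numpy as np

def rough_signal(carrier_hz=250.0, mod_hz=30.0, depth=1.0,
                 duration_s=1.0, rate=44100):
    """Amplitude-modulated sinusoid: a carrier near the skin's most
    sensitive frequency (~250 Hz), modulated at mod_hz. Deeper/faster
    modulation is perceived as 'rougher'. Values are illustrative."""
    t = np.arange(int(duration_s * rate)) / rate
    am = (1.0 + depth * np.sin(2 * np.pi * mod_hz * t)) / (1.0 + depth)
    return am * np.sin(2 * np.pi * carrier_hz * t)

# Three roughness levels of the kind compared in the experiment might be
# approximated with, e.g., modulation at 0 Hz (smooth), 30 Hz and 50 Hz.
smooth, mid, rough = (rough_signal(mod_hz=f) for f in (0.0, 30.0, 50.0))
```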

    Perspectives on the Evolution of Tactile, Haptic, and Thermal Displays


    Investigating perceptual congruence between information and sensory parameters in auditory and vibrotactile displays

    A fundamental interaction between a computer and its user(s) is the transmission of information between the two, and there are many situations where it is necessary for this interaction to occur non-visually, such as using sound or vibration. To design successful interactions in these modalities, it is necessary to understand how users perceive mappings between information and acoustic or vibration parameters, so that these parameters can be designed to be perceived as congruent. This thesis investigates several data-sound and data-vibration mappings by using psychophysical scaling to understand how users perceive the mappings. It also investigates the impact that using these methods during design has when they are integrated into an auditory or vibrotactile display. To investigate acoustic parameters that may provide more perceptually congruent data-sound mappings, Experiments 1 and 2 explored several psychoacoustic parameters for use in a mapping. These studies found that applying amplitude modulation (roughness) or broadband noise to a signal resulted in performance similar to conducting the task visually. Experiments 3 and 4 used scaling methods to map how a user perceived a change in an information parameter for a given change in an acoustic or vibrotactile parameter. Experiment 3 showed that increases in acoustic parameters generally considered undesirable in music were perceived as congruent with information parameters of negative valence, such as stress or danger. Experiment 4 found that data-vibration mappings were more generalised: a given increase in a vibrotactile parameter was almost always perceived as an increase in an information parameter, regardless of the valence of the information parameter. Experiments 5 and 6 investigated the impact that using the scaling results from Experiments 3 and 4 had on users' performance with an auditory or vibrotactile display. These experiments also explored the impact of the complexity of the context in which the display was placed on user performance. These studies found that using mappings based on scaling results did not significantly affect users' performance with a simple auditory display, but it did reduce response times in a more complex use case.
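    Psychophysical scaling of this kind is often summarised with Stevens' power law, P = k * S^a; the sketch below fits such a law to hypothetical magnitude-estimation data. It illustrates the general method only and is not the thesis's actual model or data.

```python
import numpy as np

# Stevens' power law: perceived magnitude P = k * S**a for stimulus S.
# Fitting log(P) = log(k) + a*log(S) by least squares yields the exponent
# a that characterises a data-sound or data-vibration mapping.
def fit_power_law(stimulus, perceived):
    a, log_k = np.polyfit(np.log(stimulus), np.log(perceived), 1)
    return np.exp(log_k), a

# Hypothetical magnitude-estimation data: modulation depth vs. rated urgency.
S = np.array([0.1, 0.2, 0.4, 0.8])
P = np.array([12.0, 21.0, 35.0, 60.0])
k, a = fit_power_law(S, P)   # a < 1: compressive; a > 1: expansive
```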

    Crossmodal audio and tactile interaction with mobile touchscreens

    Touchscreen mobile devices often use cut-down versions of desktop user interfaces, placing high demands on the visual sense that may prove awkward in mobile settings. The research in this thesis addresses the problems encountered by situationally impaired mobile users by using crossmodal interaction to exploit the abundant similarities between the audio and tactile modalities. By making information available to both senses, users can receive the information in the most suitable way, without having to abandon their primary task to look at the device. This thesis begins with a literature review of related work, followed by a definition of crossmodal icons: two icons may be considered crossmodal if and only if they provide a common representation of data that is accessible interchangeably via different modalities. Two experiments investigated possible parameters for use in crossmodal icons, with results showing that rhythm, texture and spatial location are effective. A third experiment focused on learning multi-dimensional crossmodal icons and the extent to which this learning transfers between modalities. The results showed identification rates of 92% for three-dimensional audio crossmodal icons when trained with the tactile equivalents, and identification rates of 89% for tactile crossmodal icons when trained with the audio equivalents. Crossmodal icons were then incorporated into a mobile touchscreen QWERTY keyboard. Experiments showed that keyboards with audio or tactile feedback produce fewer errors and greater text-entry speeds than standard touchscreen keyboards. The next study examined how environmental variables affect user performance with the same keyboard. The data showed that each modality performs differently under varying levels of background noise or vibration, and the exact levels at which these performance decreases occur were established. The final study involved a longitudinal evaluation of a touchscreen application, CrossTrainer, focusing on longitudinal effects on performance with audio and tactile feedback, the impact of context on performance, and personal modality preference. The results show that crossmodal audio and tactile icons are a valid method of presenting information to situationally impaired mobile touchscreen users, with recognition rates of 100% over time. This thesis concludes with a set of guidelines on the design and application of crossmodal audio and tactile feedback to enable application and interface designers to employ such feedback in all systems.
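    A sketch of the crossmodal-icon idea under the definition above: one abstract message, rendered interchangeably in audio or touch across the three effective parameters. The concrete renderings chosen here (stereo pan, actuator choice) are assumptions for illustration, not the thesis's designs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CrossmodalIcon:
    """One abstract message defined by the three parameters found
    effective in the thesis: rhythm, texture and spatial location."""
    rhythm: str          # e.g. "short-short-long"
    texture: str         # e.g. "rough" / "smooth"
    location: str        # e.g. "left" / "centre" / "right"

    def as_audio(self):
        # Assumption: texture maps to timbre, location to stereo pan.
        return {"pattern": self.rhythm, "timbre": self.texture,
                "pan": self.location}

    def as_tactile(self):
        # Assumption: texture maps to AM roughness, location to actuator.
        return {"pattern": self.rhythm, "roughness": self.texture,
                "actuator": self.location}

msg = CrossmodalIcon("short-short-long", "rough", "left")
# The same message can be delivered via msg.as_audio() or msg.as_tactile().
```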

    Developing an interactive overview for non-visual exploration of tabular numerical information

    This thesis investigates the problem of obtaining overview information from complex tabular numerical data sets non-visually. Blind and visually impaired people need to access and analyse numerical data, both in education and in professional occupations. Obtaining an overview is a necessary first step in data analysis, for which current non-visual data accessibility methods offer little support. This thesis describes a new interactive parametric sonification technique called High-Density Sonification (HDS), which facilitates the process of extracting overview information from the data easily and efficiently by rendering multiple data points as single auditory events. Beyond obtaining an overview of the data, experimental studies showed that the capabilities of human auditory perception and cognition to extract meaning from HDS representations could be used to reliably estimate relative arithmetic mean values within large tabular data sets. Following a user-centred design methodology, HDS was implemented as the primary form of overview information display in a multimodal interface called TableVis. This interface supports the active process of interactive data exploration non-visually, making use of proprioception to maintain contextual information during exploration (non-visual focus+context), vibrotactile data annotations (EMA-Tactons) that can be used as external memory aids to prevent high mental workload levels, and speech synthesis to access detailed information on demand. A series of empirical studies was conducted to quantify the performance attained in the exploration of tabular data sets for overview information using TableVis. This was done by comparing HDS with the main current non-visual accessibility technique (speech synthesis) and by quantifying the effect of different data set sizes on user performance; the results showed that HDS led to better performance than speech, and that this performance was not heavily dependent on the size of the data set. In addition, levels of subjective workload during exploration tasks using TableVis were investigated, resulting in the proposal of EMA-Tactons, vibrotactile annotations that the user can add to the data in order to prevent working-memory saturation in the most demanding data exploration scenarios. An experimental evaluation found that EMA-Tactons significantly reduced mental workload in data exploration tasks. Thus, the work described in this thesis provides a basis for the interactive non-visual exploration of numerical data tables across a broad range of sizes, offering techniques to extract overview information quickly, perform perceptual estimations of data descriptors (relative arithmetic mean) and manage demands on mental workload through vibrotactile data annotations, while seamlessly linking with explorations at different levels of detail and preserving spatial data representation metaphors to support collaboration with sighted users.
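    A minimal sketch of the core idea behind HDS, assuming a simple linear value-to-pitch mapping (the thesis's actual rendering may differ): many cells collapse into one auditory event whose pitch tracks their mean, so rows can be compared by ear.

```python
import numpy as np

def hds_event(cells, lo_hz=120.0, hi_hz=880.0, lo_val=0.0, hi_val=100.0):
    """Render one row/column of a table as a single auditory event:
    pitch is proportional to the cells' arithmetic mean, so comparing
    rows by ear approximates comparing their means. Ranges are assumed."""
    mean = float(np.mean(cells))
    frac = (mean - lo_val) / (hi_val - lo_val)
    return lo_hz + frac * (hi_hz - lo_hz)   # frequency of the event, in Hz

table = np.array([[10, 20, 30], [60, 70, 80]])
pitches = [hds_event(row) for row in table]  # higher-valued row -> higher pitch
```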

    Multicode Vibrotactile Displays to Support Multitasking Performance in Complex Domains

    The task sets for operators in many data-rich domains are characterized by high mental workload and the need for effective attention management, so the ability to divide attention effectively among multiple tasks and sources of task-relevant data is essential. With continuing technological advances, more and more sources of task-relevant data are being introduced into these already complex domains, raising the risk of "data overload", a cognitive burden that can lead to a substantial decline in operator performance. To combat this risk, it is important to consider how best to display the information for more efficient attention allocation and task management, and thus improved overall multitask performance. A great deal of display design research has centered on redundancy in multisensory information presentation, i.e., the presentation of identical information via two or more sensory channels, as a means of better supporting multitasking performance. One example is a display that delivers the same message via auditory speech and visual text. This redundant display of information may allow a multitasking operator to access the message via either channel, presumably the one less loaded at the time. However, models of human information processing (such as multiple resource theory, MRT) as well as prior studies demonstrate a need to consider not only the sensory modality, but also the working memory functions engaged to interpret the encoded message. This dissertation expounds the concept of multi-processing-code redundancy, which makes use of both spatial and nonspatial working memory functions to deliver information. The primary aim of this research is to investigate how the introduction of a multicode vibrotactile display (one that presents identical information using two dimensions of tactile display) affects overall multitasking performance when processing demands for concurrent tasks vary over time. Three studies were performed to gain an understanding of the benefits and limitations of a discrete and a continuously-informing multicode display when concurrent tasks have changing processing demands. The findings of this dissertation illustrate that multicode redundancy shows promise for combating the processing-code interference described by MRT (by allowing either processing code to be engaged in message interpretation) and may prove beneficial in complex domains that involve concurrent tasks with competing working memory resources.
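    A sketch of what a multicode encoding might look like, with the same message carried by both a spatial code (actuator position) and a nonspatial code (rhythm), so either working-memory code can decode it. The message set and mappings here are hypothetical.

```python
# "Multicode" redundancy sketch: each message is encoded twice, once
# spatially (which actuator fires) and once nonspatially (temporal rhythm).
MESSAGES = {
    "alert":  {"actuator": "left",  "rhythm_s": [0.1, 0.1, 0.1]},  # fast triple
    "update": {"actuator": "right", "rhythm_s": [0.4]},            # single long
}

def render(message):
    """Return the redundant encoding; a real display would drive the named
    actuator with pulses of the listed durations."""
    m = MESSAGES[message]
    return m["actuator"], m["rhythm_s"]
```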

    Somatic ABC's: A Theoretical Framework for Designing, Developing and Evaluating the Building Blocks of Touch-Based Information Delivery

    Situations of sensory overload are steadily becoming more frequent as technology approaches ubiquity, particularly with the advent of socio-communicative smartphone applications and pervasive, high-speed wireless networks. Although the ease of accessing information has improved our communication effectiveness and efficiency, our visual and auditory modalities, the modalities that today's computerized devices and displays largely engage, have become overloaded, creating possibilities for distraction, delay and high cognitive load, which in turn can lead to a loss of situational awareness and increase the chances of life-threatening situations such as texting while driving. Surprisingly, alternative modalities for information delivery have seen little exploration. Touch, in particular, is a promising candidate, given that it is our largest sensory organ, with impressive spatial and temporal acuity. Although some approaches have been proposed for touch-based information delivery, they are not without limitations, including high learning curves, limited applicability and/or limited expression. This is largely due to the lack of a versatile, comprehensive design theory, specifically a theory that addresses the design of touch-based building blocks for expandable, efficient, rich and robust touch languages that are easy to learn and use. Moreover, beyond design, there is a lack of implementation and evaluation theories for such languages. To overcome these limitations, a unified theoretical framework, inspired by natural spoken language, is proposed, called Somatic ABC's, for Articulating (designing), Building (developing) and Confirming (evaluating) touch-based languages. To evaluate the usefulness of Somatic ABC's, its design, implementation and evaluation theories were applied to create communication languages for two very different application areas: audio-described movies and motor learning. These applications were chosen because they presented opportunities to complement communication by offloading information, typically conveyed visually and/or aurally, to the skin. For both studies, it was found that Somatic ABC's aided the design, development and evaluation of rich somatic languages with distinct and natural communication units.