A Novel Skin-Stretch Haptic Device for Intuitive Control of Robotic Prostheses and Avatars
Without proprioception, i.e., the intrinsic capability of a body to perceive its own limb position, completing daily life activities would require constant visual attention and would be challenging or even impossible. This situation is similar to the one experienced after limb amputation and in robotic tele-operation, where the natural sensory-motor loop is broken. While some promising solutions based on skin stretch sensory substitution have been proposed to restore tactile properties in these conditions, there is still room for enhancing the intuitiveness of stimulus delivery and the integration of haptic feedback devices within the user's body. To contribute to this goal, here we propose a wearable device based on skin stretch stimulation, the Stretch-Pro, which can provide proprioceptive information on artificial hand aperture. This system can be suitably integrated in a prosthetic socket or can be easily worn by a user controlling remote robots. The system can imitate the stretching of the skin that would naturally occur on the intact limb when it is used to accomplish motor tasks. Two versions of the system are presented, with one and two actuators, respectively, which deliver the stretch stimulus in different ways. Experiments with able-bodied participants and a preliminary test with one prosthesis user are reported. Results suggest that Stretch-Pro could be a viable solution to convey proprioceptive cues to upper limb prosthesis users, opening promising perspectives for tele-robotics applications.
Optimizing The Design Of Multimodal User Interfaces
Due to a current lack of principle-driven multimodal user interface design guidelines, designers may encounter difficulties when choosing the most appropriate display modality for given users or specific tasks (e.g., verbal versus spatial tasks). The development of multimodal display guidelines from both a user and task domain perspective is thus critical to the achievement of successful human-system interaction. Specifically, there is a need to determine how to design task information presentation (e.g., via which modalities) to capitalize on an individual operator's information processing capabilities and the inherent efficiencies associated with redundant sensory information, thereby alleviating information overload. The present effort addresses this issue by proposing a theoretical framework (Architecture for Multi-Modal Optimization, AMMO) from which multimodal display design guidelines and adaptive automation strategies may be derived. The foundation of the proposed framework is based on extending, at a functional working memory (WM) level, existing information processing theories and models with the latest findings in cognitive psychology, neuroscience, and other allied sciences. The utility of AMMO lies in its ability to provide designers with strategies for directing system design, as well as dynamic adaptation strategies (i.e., multimodal mitigation strategies) in support of real-time operations. In an effort to validate specific components of AMMO, a subset of AMMO-derived multimodal design guidelines was evaluated in a simulated weapons control system multitasking environment. The results of this study demonstrated significant performance improvements in user response time and accuracy when multimodal display cues were used (i.e., auditory and tactile, individually and in combination) to augment the visual display of information, thereby distributing human information processing resources across multiple sensory and WM resources.
These results provide initial empirical support for validation of the overall AMMO model and a subset of the principle-driven multimodal design guidelines derived from it. The empirically validated multimodal design guidelines may be applicable to a wide range of information-intensive computer-based multitasking environments.
Rhythmic Haptic Cueing for Gait Rehabilitation of Hemiparetic Stroke and Brain Injury Survivors
This thesis explores the gait rehabilitation of hemiparetic stroke and brain injury survivors by a process of haptic entrainment to rhythmic cues.
Entrainment to auditory metronomes is known to improve gait; this thesis presents the first systematic study of entrainment for gait rehabilitation via the haptic modality.
To investigate this approach, a multi-limb metronome capable of delivering a steady, isochronous haptic rhythm to alternating legs was developed, purpose-built for gait rehabilitation, together with appropriate software for monitoring and assessing gait.
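The metronome's core behaviour, a steady isochronous rhythm delivered to alternating legs, can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the `pulse` driver callback, channel numbering, and timing values are assumptions.

```python
import time

def haptic_metronome(pulse, period_s=1.0, pulse_ms=100, n_steps=8):
    """Deliver an isochronous vibrotactile rhythm to alternating legs.

    `pulse(channel, duration_ms)` is a hypothetical actuator driver call;
    channel 0 = left leg, 1 = right leg.
    """
    for step in range(n_steps):
        channel = step % 2           # alternate legs on each beat
        pulse(channel, pulse_ms)     # fire one actuator
        time.sleep(period_s)         # keep a steady inter-beat interval

# Usage with a stub driver that just records which leg was cued:
events = []
haptic_metronome(lambda ch, ms: events.append(ch), period_s=0.0, n_steps=4)
# events now alternates [0, 1, 0, 1]
```

In a real device the inter-beat interval would be set to the patient's target cadence and the sleep-based loop replaced by a drift-free timer.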
A formative observational study, carried out at a specialised neurological centre, supplemented by discussions with physiotherapists and neuropsychologists, was used to focus the scope on hemiparetic stroke and brain injury. A second formative study used a technology probe approach to explore the behaviour of hemiparetic participants under haptic cueing using a pre-existing prototype. Qualitative data was collected by observation of, and discussion with, participants and health professionals.
In preparation for a quantitative gait study, a formal experiment was carried out to identify a workable range for haptic entrainment. This led to the creation of a procedure to screen out those with cognitive difficulties entraining to a rhythm, regardless of their walking ability.
The final study was a quantitative gait study combining temporal and spatial data on haptically cued participants with hemiparetic stroke and brain injury. Gait characteristics were measured before, during and after cueing. All successfully screened participants were able to synchronise their steps to a haptically presented rhythm. For a substantial proportion of participants, an immediate (though not necessarily lasting) improvement of temporal gait characteristics was found during cueing. Some improvements over baseline occurred immediately after haptic cueing, rather than during it.
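Synchronisation of steps to a rhythmic cue is commonly quantified as the signed asynchrony between each step and the nearest cue onset. The sketch below shows one such measure; it is a generic illustration under assumed names, not the analysis used in the thesis.

```python
def mean_asynchrony(step_times, cue_period, cue_phase=0.0):
    """Mean signed offset (seconds) between steps and the nearest cue onset.

    Negative values mean steps anticipate the cue; positive values lag it.
    Assumes an isochronous cue train: cue_phase, cue_phase + cue_period, ...
    """
    def nearest_cue(t):
        k = round((t - cue_phase) / cue_period)  # index of closest cue
        return cue_phase + k * cue_period

    offsets = [t - nearest_cue(t) for t in step_times]
    return sum(offsets) / len(offsets)

# Steps landing 50 ms after each beat of a 1 Hz cue:
lag = mean_asynchrony([1.05, 2.05, 3.05], cue_period=1.0)  # ≈ 0.05
```

A small, stable mean asynchrony during cueing is one simple indicator that a participant has entrained to the rhythm.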
Design issues and trade-offs are identified, and interactions between perception, sensory deficit, attention, memory, cognitive load and haptic entrainment are noted.
Somatic ABC's: A Theoretical Framework for Designing, Developing and Evaluating the Building Blocks of Touch-Based Information Delivery
Situations of sensory overload are steadily becoming more frequent as the ubiquity of technology approaches reality--particularly with the advent of socio-communicative smartphone applications, and pervasive, high speed wireless networks. Although the ease of accessing information has improved our communication effectiveness and efficiency, our visual and auditory modalities--those modalities that today's computerized devices and displays largely engage--have become overloaded, creating possibilities for distractions, delays and high cognitive load; which in turn can lead to a loss of situational awareness, increasing chances for life threatening situations such as texting while driving. Surprisingly, alternative modalities for information delivery have seen little exploration. Touch, in particular, is a promising candidate given that it is our largest sensory organ with impressive spatial and temporal acuity. Although some approaches have been proposed for touch-based information delivery, they are not without limitations including high learning curves, limited applicability and/or limited expression. This is largely due to the lack of a versatile, comprehensive design theory--specifically, a theory that addresses the design of touch-based building blocks for expandable, efficient, rich and robust touch languages that are easy to learn and use. Moreover, beyond design, there is a lack of implementation and evaluation theories for such languages. To overcome these limitations, a unified, theoretical framework, inspired by natural, spoken language, is proposed called Somatic ABC's for Articulating (designing), Building (developing) and Confirming (evaluating) touch-based languages. To evaluate the usefulness of Somatic ABC's, its design, implementation and evaluation theories were applied to create communication languages for two very unique application areas: audio described movies and motor learning.
These applications were chosen as they presented opportunities for complementing communication by offloading information, typically conveyed visually and/or aurally, to the skin. For both studies, it was found that Somatic ABC's aided the design, development and evaluation of rich somatic languages with distinct and natural communication units.
Examining the sense of agency in human-computer interaction
Humans are agents: we feel that we control the course of events in our everyday life. This refers to the Sense of Agency (SoA). This experience is not only crucial in our daily life, but also in our interaction with technology. When we manipulate a user interface (e.g., computer, smartphone, etc.), we expect that the system responds to our input commands with feedback, as we desire to feel that we are in charge of the interaction. If this interplay elicits an SoA, then the user will perceive an instinctive feeling of "I am controlling this". Although research in Human-Computer Interaction (HCI) pursues the design of intuitive and responsive systems, most current studies have focused mainly on interaction techniques (e.g., software-hardware) and User Experience (UX) (e.g., comfort, usability, etc.), and very little has been investigated in terms of the SoA, i.e., the conscious experience of being in control of the interaction. In this thesis, we present an experimental exploration of the role of the SoA in interaction paradigms typical of HCI. After two chapters of introduction and related work, we describe a series of studies that explore agency in interaction with systems through human senses such as vision, hearing, touch and smell. Chapter 3 explores the SoA in mid-air haptic interaction through touchless actions. Then, Chapter 4 examines agency modulation through smell and its application for olfactory interfaces. Chapter 5 describes two novel timing techniques based on auditory and haptic cues that provide alternative timing methods to the traditional Libet clock. Finally, we conclude with a discussion chapter that highlights the importance of our SoA during interactions with technology, as well as the implications of the results for the design of user interfaces.
Principles and Guidelines for Advancement of Touchscreen-Based Non-visual Access to 2D Spatial Information
Graphical materials such as graphs and maps are often inaccessible to millions of blind and visually-impaired (BVI) people, which negatively impacts their educational prospects, ability to travel, and vocational opportunities. To address this longstanding issue, a three-phase research program was conducted that builds on and extends previous work establishing touchscreen-based haptic cuing as a viable alternative for conveying digital graphics to BVI users. Although promising, this approach poses unique challenges that can only be addressed by schematizing the underlying graphical information based on perceptual and spatio-cognitive characteristics pertinent to touchscreen-based haptic access. Towards this end, this dissertation empirically identified a set of design parameters and guidelines through a logical progression of seven experiments.
Phase I investigated perceptual characteristics related to touchscreen-based graphical access using vibrotactile stimuli, with results establishing three core perceptual guidelines: (1) a minimum line width of 1mm should be maintained for accurate line-detection (Exp-1), (2) a minimum interline gap of 4mm should be used for accurate discrimination of parallel vibrotactile lines (Exp-2), and (3) a minimum angular separation of 4mm should be used for accurate discrimination of oriented vibrotactile lines (Exp-3). Building on these parameters, Phase II studied the core spatio-cognitive characteristics pertinent to touchscreen-based non-visual learning of graphical information, with results leading to the specification of three design guidelines: (1) a minimum width of 4mm should be used for supporting tasks that require tracing of vibrotactile lines and judging their orientation (Exp-4), (2) a minimum width of 4mm should be maintained for accurate line tracing and learning of complex spatial path patterns (Exp-5), and (3) vibrotactile feedback should be used as a guiding cue to support the most accurate line tracing performance (Exp-6). Finally, Phase III demonstrated that schematizing line-based maps based on these design guidelines leads to the development of an accurate cognitive map. Results from Experiment-7 provide theoretical evidence in support of learning from vision and touch as leading to the development of functionally equivalent amodal spatial representations in memory. Findings from all seven experiments contribute to new theories of haptic information processing that can guide the development of new touchscreen-based non-visual graphical access solutions.
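The numeric minimums reported above can be collected into a simple validation helper for a vibrotactile rendering pipeline. This is a hypothetical sketch: the dictionary keys, function name, and parameters are mine, and only the millimetre values come from the abstract.

```python
# Minimum parameters reported in Phases I and II (values from the abstract):
GUIDELINES = {
    "line_width_mm": 1.0,      # Exp-1: accurate line detection
    "interline_gap_mm": 4.0,   # Exp-2: discriminating parallel lines
    "tracing_width_mm": 4.0,   # Exp-4/5: line tracing and path learning
}

def meets_guidelines(line_width_mm, interline_gap_mm, for_tracing=False):
    """Check a vibrotactile line rendering against the minimums above.

    Tracing tasks require the wider 4mm lines; pure detection only needs 1mm.
    """
    width_min = (GUIDELINES["tracing_width_mm"] if for_tracing
                 else GUIDELINES["line_width_mm"])
    return (line_width_mm >= width_min
            and interline_gap_mm >= GUIDELINES["interline_gap_mm"])

ok_detect = meets_guidelines(1.0, 4.0)                    # True: detectable
ok_trace = meets_guidelines(1.0, 4.0, for_tracing=True)   # False: too thin
```

A renderer could run such a check before schematizing a map, widening lines or increasing gaps where a target task demands it.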
Wearable haptic systems for the fingertip and the hand: taxonomy, review and perspectives
In the last decade, we have witnessed a drastic change in the form factor of audio and vision technologies, from heavy and grounded machines to lightweight devices that naturally fit our bodies. However, only recently have haptic systems started to be designed with wearability in mind. The wearability of haptic systems enables novel forms of communication, cooperation, and integration between humans and machines. Wearable haptic interfaces are capable of communicating with their human wearers during interaction with the environment they share, in a natural and yet private way. This paper presents a taxonomy and review of wearable haptic systems for the fingertip and the hand, focusing on those systems directly addressing wearability challenges. The paper also discusses the main technological and design challenges for the development of wearable haptic interfaces, and it reports on the future perspectives of the field. Finally, the paper includes two tables summarizing the characteristics and features of the most representative wearable haptic systems for the fingertip and the hand.
Multisensory interactive technologies for primary education: From science to technology
While technology is increasingly used in the classroom, we observe at the same time that making teachers and students accept it is more difficult than expected. In this work, we focus on multisensory technologies and we argue that the intersection between current challenges in pedagogical practices and recent scientific evidence opens novel opportunities for these technologies to bring a significant benefit to the learning process. In our view, multisensory technologies are ideal for effectively supporting an embodied and enactive pedagogical approach, exploiting the best-suited sensory modality to teach a concept at school. This represents a great opportunity for designing technologies that are both grounded in robust scientific evidence and tailored to the actual needs of teachers and students. Based on our experience in technology-enhanced learning projects, we propose six golden rules we deem important for seizing and fully exploiting this opportunity.