Using Sound to Enhance Users’ Experiences of Mobile Applications
The latest smartphones with GPS, electronic compass, directional audio, touch screens, etc. hold potential for location-based services that are easier to use than traditional tools. Rather than interpreting maps, users may focus on their activities and the environment around them. Interfaces may be designed that let users search for information by simply pointing in a direction. Database queries can be created from GPS location and compass direction data. Users can get guidance to locations through pointing gestures, spatial sound, and simple graphics. This article describes two studies testing prototype applications with multimodal user interfaces built on spatial audio, graphics, and text. Tests show that users appreciated the applications for their ease of use, for being fun and effective to use, and for allowing users to interact directly with the environment rather than with abstractions of it. The multimodal user interfaces contributed significantly to the overall user experience.
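The abstract's core idea, building a database query from a GPS fix plus a compass bearing, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the POI list, function names, and the 15° sector width are all hypothetical choices.

```python
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def pointing_query(user_lat, user_lon, compass_deg, pois, sector_deg=15):
    """Return names of POIs whose bearing from the user lies within
    +/- sector_deg of the direction the user is pointing (hypothetical query)."""
    hits = []
    for name, lat, lon in pois:
        b = bearing_to(user_lat, user_lon, lat, lon)
        # Shortest signed angular difference, folded into [0, 180]
        diff = abs((b - compass_deg + 180) % 360 - 180)
        if diff <= sector_deg:
            hits.append(name)
    return hits

# Example: user at the origin pointing due north (0 degrees)
pois = [("cafe_north", 1.0, 0.0), ("museum_east", 0.0, 1.0)]
print(pointing_query(0.0, 0.0, 0.0, pois))  # → ['cafe_north']
```

A real service would run this filter server-side (e.g. as a spatial-database query) and rank hits by distance, but the angular-sector test above captures the point-to-search interaction the article describes.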
Do That, There: An Interaction Technique for Addressing In-Air Gesture Systems
When users want to interact with an in-air gesture system, they must first address it. This involves finding where to gesture so that their actions can be sensed, and how to direct their input towards that system so that they do not also affect others or cause unwanted effects. This is an important problem [6] which lacks a practical solution. We present an interaction technique which uses multimodal feedback to help users address in-air gesture systems. The feedback tells them how ("do that") and where ("there") to gesture, using light, audio and tactile displays. By doing that there, users can direct their input to the system they wish to interact with, in a place where their gestures can be sensed. We discuss the design of our technique and three experiments investigating its use, finding that users can "do that" well (93.2%–99.9%) while accurately (51mm–80mm) and quickly (3.7s) finding "there".
Testing Two Tools for Multimodal Navigation
The latest smartphones with GPS, electronic compasses, directional audio, touch screens, and so forth hold potential for location-based services that are easier to use and that let users focus on their activities and the environment around them. Rather than interpreting maps, users can search for information by pointing in a direction, and database queries can be created from GPS location and compass data. Users can also get guidance to locations through point and sweep gestures, spatial sound, and simple graphics. This paper describes two studies testing two applications with multimodal user interfaces for navigation and information retrieval. The applications allow users to search for information and get navigation support using combinations of point and sweep gestures, nonspeech audio, graphics, and text. Tests show that users appreciated both applications for their ease of use and for allowing users to interact directly with the surrounding environment.
Virtual reality: Theoretical basis, practical applications
Virtual reality (VR) is a powerful multimedia visualization technique offering a range of mechanisms by which many new experiences can be made available. This paper deals with the basic nature of VR, the technologies needed to create it, and its potential, especially for helping disabled people. It also offers an overview of some examples of existing VR systems.
The development of conversational and communication skills
This thesis investigates the development of children's conversational and communication skills. This is done by investigating both communicative process and outcome in two communication media: face-to-face interaction and audio-only interaction. Communicative outcome is objectively measured by assessing accuracy of performance of communication tasks. A multi-level approach to the assessment of communicative process is taken. The non-verbal aspects of process investigated are gaze and gesture. Verbal aspects of process range from global linguistic assessments, such as length of conversational turn, to a detailed coding of utterance function according to Conversational Games analysis.
The results show that children of 6 years and under do not adapt to the loss of visual signals in audio-only communication, and their performance suffers. Both the structure of children's dialogues and their use of visual signals were found to differ from those of adults. It is concluded that both verbal and non-verbal communication strategies develop into adulthood. Successful integration of these different aspects of communication is central to being an effective communicator.
Accessibility of 3D Game Environments for People with Aphasia: An Exploratory Study
People with aphasia experience difficulties with all aspects of language, and this can mean that their access to technology is substantially reduced. We report a study undertaken to investigate the issues that confront people with aphasia when interacting with technology, specifically 3D game environments. Five people with aphasia were observed and interviewed in twelve workshop sessions. We report the key themes that emerged from the study, such as the importance of direct mappings between users' interactions and actions in a virtual environment. The results of the study provide some insight into the challenges, but also the opportunities, these mainstream technologies offer to people with aphasia. We discuss how these technologies could be more supportive and inclusive for people with language and communication difficulties.
Multisensory learning in adaptive interactive systems
The main purpose of my work is to investigate multisensory perceptual learning and sensory integration in the design and development of adaptive user interfaces for educational purposes. To this aim, starting from renewed understanding from neuroscience and cognitive science on multisensory perceptual learning and sensory integration, I developed a theoretical computational model for designing multimodal learning technologies that takes these results into account. The main theoretical foundations of my research are multisensory perceptual learning theories and the research on sensory processing and integration, embodied cognition theories, computational models of non-verbal and emotion communication in full-body movement, and human-computer interaction models. Finally, the computational model was applied in two case studies, based on two EU ICT-H2020 projects, "weDRAW" and "TELMI", on which I worked during the PhD.