3,658 research outputs found

    ICANDO: Intellectual Computer AssistaNt for Disabled Operators

    Publication in the conference proceedings of EUSIPCO, Florence, Italy, 200

    Interactive voice response system and eye-tracking interface in assistive technology for disabled

    Abstract. The development of ICT has been very fast in the last few decades, and it is important that everyone can benefit from this progress. User interface design must keep up with this progress and ensure the usability and accessibility of new innovations. The purpose of this academic literature review is to study the basics of multimodal interaction, with an emphasis on multimodal assistive technology for disabled people. Of the various modalities, interactive voice response and eye-tracking were chosen for analysis. The motivation for this work is to study how technology can be harnessed to assist disabled people in daily life.

    Ambient Gestures

    We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous ‘in the environment’ interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without reliance on a graphical user interface, and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system is composed of a vision recognition software application, a set of gestures to be processed by a scripting application, and a navigation and selection application that is controlled by the gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.
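
    To make the three-part architecture described in the abstract concrete, here is a minimal sketch of how a recognized gesture might flow from a vision front end through a scripting layer to a navigation and selection application. The gesture names, mapping, and functions are illustrative assumptions, not the actual Ambient Gestures implementation.

        # Illustrative gesture-to-command mapping (hypothetical names).
        GESTURE_ACTIONS = {
            "swipe_left":  "previous_item",
            "swipe_right": "next_item",
            "push":        "select_item",
        }

        def dispatch(gesture, navigate):
            """Scripting layer: map a recognized gesture to a command for
            the navigation/selection application and give audio feedback."""
            action = GESTURE_ACTIONS.get(gesture)
            if action is None:
                return  # unrecognized gestures are ignored
            navigate(action)
            print("audio feedback:", action.replace("_", " "))

        # Stand-in for the navigation and selection application.
        dispatch("swipe_right", navigate=lambda cmd: print("navigate:", cmd))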

    Multimodal Uncertainty Reduction for Intention Recognition in Human-Robot Interaction

    Assistive robots can potentially improve the quality of life and personal independence of elderly people by supporting everyday life activities. To guarantee a safe and intuitive interaction between human and robot, human intentions need to be recognized automatically. As humans communicate their intentions multimodally, the use of multiple modalities for intention recognition may not only increase robustness against the failure of individual modalities but also reduce the uncertainty about the intention to be predicted. This is desirable because, particularly in direct interaction between robots and potentially vulnerable humans, minimal uncertainty about the situation, as well as knowledge of that actual uncertainty, is necessary. Thus, in contrast to existing methods, this work introduces a new approach to multimodal intention recognition that focuses on uncertainty reduction through classifier fusion. For each of the four considered modalities (speech, gestures, gaze directions, and scene objects), an individual intention classifier is trained, and each classifier outputs a probability distribution over all possible intentions. By combining these output distributions using the Bayesian method Independent Opinion Pool, the uncertainty about the intention to be recognized can be decreased. The approach is evaluated in a collaborative human-robot interaction task with a 7-DoF robot arm. The results show that fused classifiers which combine multiple modalities outperform the respective individual base classifiers with respect to increased accuracy, robustness, and reduced uncertainty.
    Comment: Submitted to IROS 201
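
    The fusion rule named in the abstract can be illustrated with a short sketch of the Independent Opinion Pool: each modality's classifier outputs a distribution over the same set of intentions, and the fused distribution is their normalized element-wise product. This is a minimal sketch; the intention labels and example probabilities below are invented for illustration and are not taken from the paper.

        import numpy as np

        # Hypothetical intention set; the paper's actual intentions differ.
        INTENTIONS = ["hand over tool", "point at object", "request help"]

        def independent_opinion_pool(distributions):
            """Fuse per-modality intention distributions by taking their
            normalized element-wise product (Independent Opinion Pool)."""
            fused = np.ones(len(INTENTIONS))
            for p in distributions:
                fused *= np.asarray(p, dtype=float)
            return fused / fused.sum()

        # Example per-modality posteriors (illustrative values only).
        speech  = [0.70, 0.20, 0.10]
        gaze    = [0.50, 0.30, 0.20]
        gesture = [0.60, 0.25, 0.15]

        fused = independent_opinion_pool([speech, gaze, gesture])
        print(dict(zip(INTENTIONS, fused.round(3))))
        # The fused distribution is sharper than any single modality's,
        # which is the uncertainty reduction the abstract describes.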