
    Multimodal Signal Processing and Learning Aspects of Human-Robot Interaction for an Assistive Bathing Robot

    We explore new aspects of assistive living based on smart human-robot interaction (HRI), involving automatic recognition and online validation of speech and gestures in a natural interface and providing social features for HRI. We introduce a complete framework and resources for a real-life scenario in which elderly subjects are supported by an assistive bathing robot, addressing health and hygiene care issues. We contribute a new dataset, a suite of tools used for data acquisition, and a state-of-the-art pipeline for multimodal learning within the framework of the I-Support bathing robot, with emphasis on audio and RGB-D visual streams. We address privacy concerns by evaluating the depth visual stream alongside the RGB stream, using Kinect sensors. The audio-gestural recognition task on this new dataset achieves up to 84.5% accuracy, while online validation of the I-Support system with elderly users reaches up to 84% accuracy when the two modalities are fused. Given the difficulty of the specific task, these results are promising enough to support further research in multimodal recognition for assistive social HRI. Upon acceptance of the paper, part of the data will be made publicly available.
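
    The abstract reports improved accuracy "when the two modalities are fused" but does not specify the fusion scheme. A common baseline is late (score-level) fusion, where per-class scores from independently trained audio and visual recognizers are combined with a weighted sum; the sketch below illustrates that baseline only. The gesture labels, score arrays, and weight `w_audio` are hypothetical placeholders, not the paper's actual setup.

    ```python
    import numpy as np

    # Hypothetical per-class posterior scores from two independently trained
    # recognizers (audio and RGB-D visual), for one test command.
    GESTURES = ["wash_back", "scrub_legs", "stop", "come_closer"]
    audio_scores = np.array([0.55, 0.20, 0.15, 0.10])   # from the audio model
    visual_scores = np.array([0.30, 0.45, 0.15, 0.10])  # from the visual model

    def late_fusion(audio, visual, w_audio=0.6):
        """Weighted score-level fusion of two modality posteriors."""
        fused = w_audio * audio + (1.0 - w_audio) * visual
        return fused / fused.sum()  # renormalize to a distribution

    fused = late_fusion(audio_scores, visual_scores)
    print("fused scores:", dict(zip(GESTURES, fused.round(3))))
    print("decision:", GESTURES[int(np.argmax(fused))])
    ```

    The modality weight would normally be tuned on held-out validation data, since audio and visual recognizers rarely contribute equally reliable evidence.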

    Gestural product interaction: development and evaluation of an emotional vocabulary

    This research explores emotional response to gesture in order to inform future product interaction design. After describing the emergence and likely role of full-body interfaces with devices and systems, the importance of emotional reaction to the necessary movements and gestures is outlined. A gestural vocabulary for the control of a web page is then presented, along with a semantic differential questionnaire for its evaluation. An experiment is described in which users undertook a series of web navigation tasks using the gestural vocabulary and then recorded their reactions to the experience. A number of insights were drawn concerning the context, precision, distinction, repetition, and scale of gestures when used to control or activate a product. These insights will be of help in interaction design and provide a basis for further development of the gestural vocabulary.
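
    The abstract names a semantic differential questionnaire as the evaluation instrument but gives no scale details. A minimal sketch of how such ratings are typically scored is shown below, assuming hypothetical bipolar adjective pairs, a 7-point scale, and invented participant ratings; none of these come from the paper.

    ```python
    import numpy as np

    # Hypothetical bipolar adjective pairs on a 7-point semantic differential
    # scale (1 = left-hand adjective, 7 = right-hand adjective).
    PAIRS = [("frustrating", "satisfying"),
             ("awkward", "natural"),
             ("tiring", "effortless")]

    # Hypothetical ratings: one row per participant, one column per pair.
    ratings = np.array([
        [5, 6, 4],
        [6, 5, 3],
        [4, 6, 5],
    ])

    # Mean and spread per pair give the emotional profile of the gesture set.
    for (neg, pos), col in zip(PAIRS, ratings.T):
        print(f"{neg} <-> {pos}: mean={col.mean():.2f}, sd={col.std(ddof=1):.2f}")
    ```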

    Physicality in technological interface design

    A game as a tool for empirical research on the shamanic interface concept

    Paper presented at SciTecIn15 - Conferência Ciências e Tecnologias da Interação (Conference on Interaction Sciences and Technologies), held in Coimbra, 12-13 November 2015. The Shamanic Interface is a recent concept positing that acknowledging culture in gestural commands may contribute to richer and more powerful user interaction with abstract concepts and complexity, but it lacks empirical validation. Hence, this paper presents a game developed as an empirical research tool for data collection and testing on shamanic interfaces. The game is a small maze in which users perform gestures to steer a character to the end of each level. The control gestures performed by each user are captured with a Leap Motion controller and recognized through Hidden Markov Models. Three command sets were implemented: Portuguese cultural gestures, Dutch cultural gestures, and a generic set. This paper evaluates the game with different users to check its playability. We conclude that the game can be used as a research data-collection tool as is, but we also identify several playability-related improvement recommendations.
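
    The abstract states only that gestures are captured with a Leap Motion controller and recognized through Hidden Markov Models. The standard classify-by-likelihood approach that description implies is sketched below: one HMM is trained per gesture class, and a new sequence is assigned to the model with the highest log-likelihood. The use of the hmmlearn library, the feature layout, and the training data are all assumptions for illustration, not the paper's implementation.

    ```python
    import numpy as np
    from hmmlearn import hmm  # pip install hmmlearn

    def train_gesture_models(training_data, n_states=5):
        """Fit one Gaussian HMM per gesture class.

        training_data: dict mapping gesture name -> list of sequences,
        each an (n_frames, n_features) array of per-frame hand features
        (e.g., fingertip coordinates; the layout here is hypothetical).
        """
        models = {}
        for name, seqs in training_data.items():
            X = np.vstack(seqs)               # concatenate all frames
            lengths = [len(s) for s in seqs]  # sequence boundaries
            m = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=50)
            m.fit(X, lengths)
            models[name] = m
        return models

    def classify(models, seq):
        """Label a sequence by the HMM with the highest log-likelihood."""
        return max(models, key=lambda name: models[name].score(seq))

    # Hypothetical data: random stand-ins for recorded gesture sequences.
    rng = np.random.default_rng(0)
    data = {g: [rng.normal(i, 1.0, size=(30, 6)) for _ in range(10)]
            for i, g in enumerate(["swipe_left", "swipe_right", "circle"])}
    models = train_gesture_models(data)
    print(classify(models, rng.normal(2, 1.0, size=(30, 6))))  # likely "circle"
    ```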

    Freeform User Interfaces for Graphical Computing

    Report number: 甲15222; Date of degree conferral: 2000-03-29; Degree category: Doctorate by coursework; Degree: Doctor of Engineering; Diploma number: 博工第4717号; Graduate school / Department: Graduate School of Engineering, Information Engineering

    Context-aware gestural interaction in the smart environments of the ubiquitous computing era

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Technology is becoming pervasive, and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have begun to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interaction. Many issues remain open in this emerging domain; in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces. This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organization of context information, conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity, shrink the number of gestures in taxonomies, and improve usability. To validate this framework, a proof-of-concept prototype was developed, implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests were conducted to assess the gesture recognition accuracy and the usability of the interfaces developed following the proposed framework. The results show that the method provides robust gesture recognition from very different viewpoints, and the usability tests yielded high scores. Context information was investigated further by tackling the problem of user status, understood here as human activity, for which a technique based on an innovative application of electromyography is proposed; tests show that this technique achieves good activity-recognition accuracy. Context is also treated as system status: in ubiquitous computing, a system can adopt wearable, environmental, or pervasive paradigms. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms; it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.
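
    The abstract does not detail the electromyography technique. A generic baseline for EMG-based activity recognition is to slide a window over the signal, extract simple time-domain features, and feed them to a classifier; the sketch below illustrates only that baseline. The window size, feature set, classifier, and synthetic recordings are assumptions, not the thesis's method.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def emg_features(window):
        """Classic time-domain EMG features for one analysis window."""
        return np.array([
            np.mean(np.abs(window)),                 # mean absolute value
            np.sqrt(np.mean(window ** 2)),           # root mean square
            np.sum(np.abs(np.diff(window))),         # waveform length
            np.mean(np.diff(np.sign(window)) != 0),  # zero-crossing rate
        ])

    def windows(signal, size=200, step=100):
        """Slide a fixed-size analysis window over a 1-D EMG channel."""
        for start in range(0, len(signal) - size + 1, step):
            yield signal[start:start + size]

    # Hypothetical recordings: random stand-ins for two activities' EMG.
    rng = np.random.default_rng(1)
    recordings = {"rest": rng.normal(0, 0.1, 20_000),
                  "effort": rng.normal(0, 0.8, 20_000)}

    X, y = [], []
    for label, signal in enumerate(recordings.values()):
        for w in windows(signal):
            X.append(emg_features(w))
            y.append(label)

    clf = RandomForestClassifier(random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```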

    Exploring haptic interfacing with a mobile robot without visual feedback

    Search and rescue scenarios are often complicated by low or no visibility conditions. The lack of visual feedback hampers orientation and causes significant stress for human rescue workers. The Guardians project [1] pioneered a group of autonomous mobile robots assisting a human rescue worker operating within close range. Trials were held with fire fighters of South Yorkshire Fire and Rescue. It became clear that the subjects were by no means prepared to give up their procedural routines and the sense of security these provide: they simply ignored instructions that contradicted their routines.

    Usability of vision-based interfaces

    Vision-based interfaces employ gestures to interact with an interactive system without touching it. Gestures are frequently modelled in laboratory settings, and usability testing should be carried out. However, these interfaces often present usability issues, and the great diversity of their uses and of the applications in which they appear makes it difficult to decide which factors to take into account in a usability test. In this paper, we review the literature to compile and analyze the usability factors and metrics used for vision-based interfaces.

    Phrasing Bimanual Interaction for Visual Design

    Architects and other visual thinkers create external representations of their ideas to support early-stage design. They compose visual imagery through sketching to form abstract diagrams as representations. When working with digital media, they apply various visual operations to transform representations, often engaging in complex sequences. This research investigates how to build interactive capabilities that support designers in putting together, that is, phrasing, sequences of operations using both hands. In particular, we examine how phrasing interactions with pen and multi-touch input can support modal switching among different visual operations that, in many commercial design tools, require menus and tool palettes: techniques originally designed for the mouse, not pen and touch. We develop an interactive bimanual pen+touch diagramming environment and study its use in landscape architecture design studio education. We observe the interesting forms of interaction that emerge and how our bimanual interaction techniques support visual design processes. Based on the needs of architects, we develop LayerFish, a new bimanual technique for layering overlapping content, and conduct a controlled experiment to evaluate its efficacy. We explore the use of wearables to identify which user, and which hand, is touching, to support phrasing together direct-touch interactions on large displays. From the design and development of the environment and from both field and controlled studies, we derive a set of methods, based on human bimanual specialization theory, for phrasing modal operations through bimanual interactions without menus or tool palettes.
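
    The phrasing idea described here, where the non-dominant hand holds a mode while the dominant hand acts with the pen, can be reduced to a small event-driven pattern. The sketch below is a minimal illustration of that pattern under stated assumptions: the mode table, event names, and dispatch logic are hypothetical and do not reproduce the system in the abstract.

    ```python
    # Minimal sketch of bimanual phrasing: touches by the non-dominant hand
    # hold a mode (a "frame"), and pen strokes by the dominant hand are
    # interpreted within that mode. The mode ends when the touch lifts, so
    # no menu or tool palette is needed. All names here are hypothetical.

    class BimanualCanvas:
        MODES = {0: "ink", 1: "select", 2: "layer"}  # fingers down -> mode

        def __init__(self):
            self.fingers_down = 0

        @property
        def mode(self):
            # The non-dominant hand's touches phrase the current operation.
            return self.MODES.get(self.fingers_down, "ink")

        def touch_down(self):            # non-dominant hand
            self.fingers_down += 1

        def touch_up(self):
            self.fingers_down = max(0, self.fingers_down - 1)

        def pen_stroke(self, points):    # dominant hand
            print(f"{self.mode}: stroke with {len(points)} points")

    canvas = BimanualCanvas()
    canvas.pen_stroke([(0, 0), (1, 1)])  # ink: stroke with 2 points
    canvas.touch_down()                  # one finger held -> select mode
    canvas.pen_stroke([(2, 2), (3, 3)])  # select: stroke with 2 points
    canvas.touch_up()                    # mode released without any menu
    ```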