
    A Review of Multimodal Interaction Technique in Augmented Reality Environment

    Research in Augmented Reality (AR) has proposed several types of interaction techniques, such as 3D interaction, natural interaction, tangible interaction, spatial-awareness interaction and multimodal interaction. Interaction in AR is usually unimodal, allowing the user to interact with AR content through only one modality such as gesture, speech or click. The combination of more than one modality is called multimodal interaction. Multimodal interaction can make human-computer interaction more efficient and can improve the user experience, because many issues arise when users rely on a unimodal technique in an AR environment, such as the fat-finger problem. Recent research shows that multimodal interfaces (MMI) have been explored in AR environments and applied in various domains. This paper presents an empirical study of some of the key aspects and issues in multimodal interaction for augmented reality, touching on interaction techniques and system frameworks. We review two questions: which interaction techniques have been used to perform multimodal interaction in AR environments, and which integrated components are applied in multimodal interaction AR frameworks. Analysing these two questions to identify trends in the multimodal field is the main contribution of this paper. We found that gesture, speech and touch are the modalities most frequently used to manipulate virtual objects. Most of the integrated components in MMI AR frameworks are discussed only at the level of the framework concept or the information-centred design between the components. Finally, we conclude the paper with ideas for future work in this field.
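
    As a purely illustrative aside (not taken from any of the reviewed frameworks), a common way to combine two of the modalities named above, gesture and speech, is late fusion: a speech command is paired with the most recent gesture selection if the two events are close in time. The class and event names below are hypothetical.

```python
# Illustrative sketch only: fusing a pointing gesture with a speech command
# to manipulate an AR object. All class and event names are hypothetical.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class GestureEvent:
    target_id: str      # virtual object the user is pointing at
    timestamp: float

@dataclass
class SpeechEvent:
    command: str        # e.g. "rotate", "scale up", "delete"
    timestamp: float

class MultimodalFusion:
    """Pairs a speech command with the most recent gesture selection."""
    def __init__(self, max_gap_s: float = 1.5):
        self.max_gap_s = max_gap_s
        self.last_gesture: Optional[GestureEvent] = None

    def on_gesture(self, event: GestureEvent) -> None:
        self.last_gesture = event

    def on_speech(self, event: SpeechEvent) -> Optional[Tuple[str, str]]:
        # Fuse only if the gesture and the utterance are close in time.
        if (self.last_gesture and
                abs(event.timestamp - self.last_gesture.timestamp) <= self.max_gap_s):
            return (self.last_gesture.target_id, event.command)
        return None  # fall back to unimodal handling

fusion = MultimodalFusion()
fusion.on_gesture(GestureEvent(target_id="virtual_chair", timestamp=10.2))
print(fusion.on_speech(SpeechEvent(command="rotate", timestamp=10.9)))
# -> ('virtual_chair', 'rotate')
```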

    Integrated Framework Design for Intelligent Human Machine Interaction

    Human-computer interaction, sometimes referred to as man-machine interaction, is a concept that emerged together with computers, or more generally with machines. The methods by which humans interact with computers have come a long way, and new designs and technologies appear every day. However, computer systems and complex machines are often only technically successful; users frequently find them confusing, and so such systems are never used efficiently. Building sophisticated machines and robots is therefore not the only concern; more effort should be put into making these machines simpler for all kinds of users and generic enough to accommodate different types of environments. This is where the design of intelligent human-computer interaction modules comes in. In this work, we aim to implement a generic framework (referred to as the CIMF framework) that allows the user to control the synchronized and coordinated cooperative work that a set of robots can perform. Three robots are involved so far: two manipulators and one mobile robot. The framework should be generic enough to be hardware independent and to allow the easy integration of new entities and modules. We also aim to implement the building blocks of the intelligent manufacturing cell that communicates with the framework via intelligent and advanced human-computer interaction techniques. Three techniques are addressed: interface-, audio-, and visual-based interaction.
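
    To make the idea of a hardware-independent, extensible framework concrete, here is a minimal sketch in the spirit of such a coordination layer. The class names and structure are assumptions for illustration, not the CIMF design itself: robots register behind a common interface, and commands from any interaction channel (GUI, audio, vision) are routed through one coordinator.

```python
# Minimal sketch of a hardware-independent coordination layer; names and
# structure are assumptions, not the paper's actual CIMF implementation.
from abc import ABC, abstractmethod

class RobotEntity(ABC):
    """Any robot (manipulator, mobile base) plugged into the framework."""
    @abstractmethod
    def execute(self, command: str) -> None: ...

class Manipulator(RobotEntity):
    def __init__(self, name: str): self.name = name
    def execute(self, command: str) -> None: print(f"{self.name} executes: {command}")

class MobileRobot(RobotEntity):
    def __init__(self, name: str): self.name = name
    def execute(self, command: str) -> None: print(f"{self.name} drives: {command}")

class Coordinator:
    """Routes commands from any interaction channel to registered robots."""
    def __init__(self): self.robots = {}
    def register(self, key: str, robot: RobotEntity) -> None: self.robots[key] = robot
    def dispatch(self, key: str, command: str) -> None: self.robots[key].execute(command)

cell = Coordinator()
cell.register("arm1", Manipulator("Manipulator-1"))
cell.register("arm2", Manipulator("Manipulator-2"))
cell.register("base", MobileRobot("MobileRobot-1"))
cell.dispatch("base", "move to station B")   # command could originate from a GUI,
cell.dispatch("arm1", "pick part")           # a voice command, or a visual cue
```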

    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we call human computing, should be about anticipatory user interfaces that are human-centered: built for humans and based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, from enabling computers to understand human behavior.

    Case Study on Human-Robot Interaction of the Remote-Controlled Service Robot for Elderly and Disabled Care

    The continuous aging of the population and the increasing number of people with mobility difficulties are driving research in the field of assistive service robotics. These robots can help with daily tasks such as reminding people to take medication, serving food and drinks, controlling home appliances and even monitoring health status. When assisting people in their homes, it should be noted that, most of the time, the users themselves will have to communicate with the robot and be able to operate it in order to get the most out of its services. This research focuses on different methods for remote control of a mobile robot equipped with a robotic manipulator. It investigates in detail methods based on control via gestures, voice commands, and a web-based graphical user interface, and explores the usability of these methods for Human-Robot Interaction (HRI). In this paper, we introduce a new version of the robot Robco 19, a new Leap Motion sensor-based control of the robot and a new multi-channel control system. The paper presents a methodology for performing the HRI experiments from the standpoint of human perception and summarizes the results of applying the investigated remote control methods in real-life scenarios.
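
    A multi-channel control system of the kind described above is often organized around a single command queue that all input channels feed into, so the robot driver treats gesture, voice and web commands uniformly. The sketch below is a hypothetical illustration of that pattern; the channel names and commands are not taken from the Robco 19 interface.

```python
# Hypothetical sketch of a multi-channel control loop: gesture, voice, and a
# web GUI all feed one command queue, so the robot handles them uniformly.
import queue

command_queue: "queue.Queue[tuple]" = queue.Queue()

def on_gesture(direction: str) -> None:          # e.g. from a Leap Motion handler
    command_queue.put(("gesture", f"move {direction}"))

def on_voice(utterance: str) -> None:            # e.g. from a speech recognizer
    command_queue.put(("voice", utterance.lower()))

def on_web_request(action: str) -> None:         # e.g. from a web GUI endpoint
    command_queue.put(("web", action))

def control_loop() -> None:
    """Drain the queue and forward each command to the robot driver."""
    while not command_queue.empty():
        channel, command = command_queue.get()
        print(f"[{channel}] -> robot: {command}")

on_gesture("forward")
on_voice("Grip the cup")
on_web_request("stop")
control_loop()
```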

    Configuration of skilled tasks for execution in multipurpose and collaborative service robots

    Several highly versatile mobile robots have been introduced during the last ten years. Some of these robots work among people in exhibitions and other public places, such as museums and shopping centers. Unlike industrial robots, which are typically found only in manufacturing environments, service robots can be found in a variety of places, ranging from homes to offices and from hospitals to restaurants. Developing mobile robots that work co-operatively with humans raises not only interaction problems but also problems in getting tasks accomplished. In an unstructured and dynamic environment this is not readily achievable because of the high complexity of robot perception and motion. Such tasks require high-level perception and locomotion systems, as well as control systems for all levels of task control, from the lowest levels, which control the robot's motors and sensors, to the highest, which are sophisticated task planners for complex and useful tasks. Human-friendly communication can be seen as an important factor in getting robots into our homes. In this work a new task configuration concept is proposed for multipurpose service robots. The concept gives guidelines for a software architecture and task-managing system. The task configuration process introduces a new method that makes it easier to configure a new task for a robot; the idea is the same as when one person tells another how a task should be performed. A novel method for executing tasks with service robots is also presented: interpretive execution, which keeps the focus on only one micro task at a time and makes it possible to modify plans during their execution. Multimodal interaction is an important feature for enabling collaboration between humans and robots, and it reduces the workload of the user during task configuration and execution. A novel solution for using multimodal human-robot interaction (HRI) as part of the task description is presented. This thesis is a case study reporting the results of developing a task-managing platform (from configuration to execution) for multipurpose service robots and studying its performance and use with several test cases. The platform has been implemented on the WorkPartner multipurpose service robot. The structure and operation of the platform have proved to be useful and several tasks have been carried out successfully.
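
    The core idea of interpretive execution, binding only the current micro task while leaving the rest of the plan open to revision, can be illustrated with a very small sketch. This is an assumption-laden toy example, not the WorkPartner platform's code: the plan is a mutable queue, and new micro tasks can be inserted while the plan is running.

```python
# Illustrative sketch (not the WorkPartner platform) of interpretive execution:
# the plan is a mutable queue of micro tasks, and only the current micro task
# is bound at execution time, so the remainder can still be edited.
from collections import deque

class InterpretiveExecutor:
    def __init__(self, micro_tasks):
        self.plan = deque(micro_tasks)      # remaining micro tasks, editable

    def insert_next(self, micro_task):
        """Modify the plan while it is running, e.g. after new user input."""
        self.plan.appendleft(micro_task)

    def run(self):
        while self.plan:
            task = self.plan.popleft()      # focus on exactly one micro task
            print(f"executing micro task: {task}")
            if task == "detect obstacle":   # example of a runtime revision
                self.insert_next("replan path")

executor = InterpretiveExecutor(["move to table", "detect obstacle", "grasp cup"])
executor.run()
```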

    A Posture Sequence Learning System for an Anthropomorphic Robotic Hand

    The paper presents a cognitive architecture for posture learning in an anthropomorphic robotic hand. Our approach aims to allow the robotic system to perform complex perceptual operations, to interact with a human user and to integrate its perceptions into a cognitive representation of the scene and the observed actions. The anthropomorphic robotic hand imitates the gestures acquired by the vision system in order to learn meaningful movements, to build its knowledge through different conceptual spaces and to perform complex interaction with the human operator.
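
    At its simplest, imitation of observed gestures amounts to recording the posture sequence delivered by the vision system and replaying it on the hand. The sketch below illustrates only that minimal recording-and-replay idea with hypothetical names; it does not model the paper's conceptual-space representation.

```python
# Minimal, hypothetical sketch of posture-sequence imitation: observed hand
# postures (joint-angle vectors from a vision system) are recorded under a
# gesture label and replayed on the robotic hand.
from typing import Dict, List

Posture = List[float]   # one joint-angle vector for the anthropomorphic hand

class PostureSequenceLearner:
    def __init__(self):
        self.sequences: Dict[str, List[Posture]] = {}

    def observe(self, label: str, posture: Posture) -> None:
        """Append a posture acquired from the vision system to a named gesture."""
        self.sequences.setdefault(label, []).append(posture)

    def replay(self, label: str) -> None:
        for posture in self.sequences.get(label, []):
            print(f"hand moves to joint angles: {posture}")

learner = PostureSequenceLearner()
learner.observe("grasp", [0.1, 0.2, 0.3])
learner.observe("grasp", [0.4, 0.5, 0.6])
learner.replay("grasp")
```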

    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in controlling and collaborating on manipulation task behaviors. This remains a significant challenge, however, given that many WIMP-style tools require at least superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has gathered momentum in recent years: robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, as autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities with which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries; (2) task specification with an unordered list of goal predicates; and (3) guiding task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction. Empirical results from four user studies show that our interface was strongly preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.
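
    Specifying a task as an unordered list of goal predicates, rather than as a fixed action sequence, can be illustrated with a small Blocks-World-style sketch. This is a hedged, generic example of the technique; the predicate names and the naive state update are illustrative assumptions, not the paper's interface.

```python
# Hedged sketch of goal-predicate task specification in a Blocks-World-style
# setting: a task is an unordered set of predicates, and execution only needs
# to make the world state satisfy them. Predicate names are illustrative.
world_state = {
    ("on", "red_block", "table"),
    ("on", "blue_block", "table"),
}

goal_predicates = {                 # unordered: no action sequence is implied
    ("on", "red_block", "blue_block"),
    ("at", "gripper", "home"),
}

def unsatisfied(goal, state):
    """Predicates the planner still has to achieve."""
    return goal - state

def apply_pick_and_place(state, obj, dest):
    """Naive state update for a pick-and-place of obj onto dest."""
    state = {p for p in state if not (p[0] == "on" and p[1] == obj)}
    state.add(("on", obj, dest))
    return state

print(unsatisfied(goal_predicates, world_state))
world_state = apply_pick_and_place(world_state, "red_block", "blue_block")
world_state.add(("at", "gripper", "home"))
print(unsatisfied(goal_predicates, world_state))   # -> empty set
```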