
    Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or 'natural') and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts), and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combining different animation paradigms to enhance both naturalness and control.
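    As a rough illustration of the kind of motion combination described above, the sketch below blends two pose streams joint-by-joint with per-body-part weights. It is a minimal Python example, not the authors' system, and assumes joint rotations stored as unit quaternions.

        # Minimal sketch (not the authors' implementation): blending two pose
        # streams per joint, one way to combine motions produced by different
        # animation techniques on different body parts.
        import numpy as np

        def slerp(q0, q1, t):
            """Spherical linear interpolation between two unit quaternions."""
            dot = np.dot(q0, q1)
            if dot < 0.0:               # take the shorter arc
                q1, dot = -q1, -dot
            theta = np.arccos(min(dot, 1.0))
            if theta < 1e-6:            # quaternions nearly identical
                return q0
            w0 = np.sin((1.0 - t) * theta) / np.sin(theta)
            w1 = np.sin(t * theta) / np.sin(theta)
            return w0 * q0 + w1 * q1

        def blend_pose(pose_a, pose_b, part_weights):
            """Blend two poses joint-by-joint; part_weights maps a joint name
            to the weight of pose_b (0 = pure A, 1 = pure B), so different
            body parts can follow different animation sources."""
            return {joint: slerp(pose_a[joint], pose_b[joint],
                                 part_weights.get(joint, 0.0))
                    for joint in pose_a}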

    Gestures for Manually Controlling a Helping Hand Robot

    Helping hand robots have been the focus of a number of studies and have high potential in modern manufacturing processes and in daily living. As helping hand robots interact closely with users, it is important to find natural and intuitive user interfaces for interacting with them in various situations. This study describes a set of gestures for interacting with and controlling helping hand robots in situations in which users need to manually control the robot but one or both hands are unavailable, for example when they are holding tools or objects. The gestures are derived from an experimental study that asked participants to propose gestures suitable for controlling primitive robot motions. The selected gestures can be used to control the translation and orientation of a helping hand robot's end effector while one or both hands are engaged in a task. To validate the proposed gestures, we implemented a helping hand robot system that performs a soldering task.
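    The sketch below shows one plausible way to realize such an interface: a lookup table mapping recognized gesture labels to small end-effector velocity commands. The gesture names and the send_twist() interface are illustrative assumptions, not the gesture set or robot API from the study.

        # Hypothetical sketch of dispatching recognized hands-free gestures to
        # primitive end-effector motions; labels and robot interface are assumed.
        GESTURE_TO_TWIST = {
            # gesture label     (vx, vy, vz, wx, wy, wz) in the end-effector frame
            "head_nod_up":      (0.0, 0.0,  0.02, 0.0, 0.0, 0.0),   # translate up
            "head_nod_down":    (0.0, 0.0, -0.02, 0.0, 0.0, 0.0),   # translate down
            "shoulder_left":    (0.0,  0.02, 0.0, 0.0, 0.0, 0.0),
            "shoulder_right":   (0.0, -0.02, 0.0, 0.0, 0.0, 0.0),
            "foot_tap_cw":      (0.0, 0.0, 0.0, 0.0, 0.0, -0.1),    # rotate tool
            "foot_tap_ccw":     (0.0, 0.0, 0.0, 0.0, 0.0,  0.1),
        }

        def on_gesture(label, robot):
            """Stream a small velocity command for each recognized gesture."""
            twist = GESTURE_TO_TWIST.get(label)
            if twist is not None:
                robot.send_twist(twist)   # assumed velocity-command interface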

    Towards Skill Transfer via Learning-Based Guidance in Human-Robot Interaction

    This thesis presents learning-based guidance (LbG) approaches that aim to transfer skills from humans to robots. The approaches capture the temporal and spatial information of human motions and teach the robot to assist humans in collaborative tasks. In such physical human-robot interaction (pHRI) settings, learning from demonstrations (LfD) enables this skill transfer. Demonstrations can be provided through kinesthetic teaching and/or teleoperation: in kinesthetic teaching, humans directly guide the robot's body to perform a task, while in teleoperation, demonstrations are given through motion/vision-based systems or haptic devices. In this work, LbG approaches are developed through kinesthetic teaching and teleoperation in both virtual and physical environments.

    First, the thesis compares and analyzes the capability of two types of statistical models, generative and discriminative, to generate haptic guidance (HG) forces and to segment and recognize gestures for pHRI, with application to virtual minimally invasive surgery (MIS) training. In this learning-based approach, the knowledge and experience of experts are modeled to improve the unpredictable motions of novice trainees. Two statistical models, the hidden Markov model (HMM) and hidden conditional random fields (HCRF), are used to learn gestures from demonstrations in a virtual MIS-related task. The models automatically recognize and segment gestures as well as generate guidance forces. In the practice phase, the guidance forces are adaptively computed in real time from the similarity between the user's motion and the gesture models. Both statistical models successfully capture the user's gestures and provide adaptive HG; however, the results show the superiority of HCRF, a discriminative method, over HMM, a generative method, in terms of user performance.

    In addition, LbG approaches are developed for kinesthetic HRI simulations that aim to transfer the skills of expert surgeons to resident trainees. The discriminative nature of HCRF is exploited both to produce LbG forces and to discriminate the skill levels of users. To evaluate this kinesthetic approach experimentally, a femur bone drilling simulation is developed in which residents receive haptic feedback based on real computed tomography (CT) data, letting them feel the variable stiffness of bone layers. Orthopaedic surgeons must adjust the drilling force because bone layers differ in stiffness. In the learning phase, an expert HCRF model is trained from expert surgeons' demonstrations to learn the stiffness variations of the bone layers; a novice HCRF model is trained from the demonstrations of novice residents to discriminate the skill level of a new trainee. During the practice phase, the learning-based approach, which encodes the stiffness variations, guides trainees to perform the training task with motions similar to the experts'.

    Finally, in contrast to the other parts of the thesis, an LbG approach is developed through teleoperation in a physical environment. The approach assists operators in navigating a teleoperated robot through a haptic steering wheel and a haptic gas pedal. A set of expert operator demonstrations is used to build a maneuvering skill model; the temporal and spatial variations of the demonstrations are learned with an HMM. A modified Gaussian mixture regression (GMR), combined with the HMM, robustly reproduces the motion: GMR computes the output motion from a joint probability density function of the data rather than directly modeling the regression function. In addition, the distance between the robot and obstacles is incorporated into the impedance control to generate guidance forces that also help operators avoid collisions. Using different forms of variable impedance control, guidance forces are computed in real time with respect to the similarity between the user's maneuvers and the skill model, encouraging users to navigate the robot like the expert operators. The results show that user performance improves in terms of number of collisions, task completion time, and average closeness to obstacles.
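    The abstract describes GMR as computing the output motion from a joint density rather than fitting a regression function directly. A minimal sketch of that conditioning step follows, assuming component priors, means, and covariances from a trained HMM/GMM; the function name and array layout are illustrative.

        # Minimal sketch of Gaussian mixture regression: the output is the
        # conditional expectation E[output | input] under a joint Gaussian
        # mixture, with parameters assumed to come from a trained HMM/GMM.
        import numpy as np
        from scipy.stats import multivariate_normal

        def gmr(x_in, priors, means, covs, i_dim, o_dim):
            """means[k] stacks (input, output); covs[k] is the joint covariance."""
            i, o = slice(0, i_dim), slice(i_dim, i_dim + o_dim)
            # responsibility of each component for the current input
            h = np.array([p * multivariate_normal.pdf(x_in, m[i], c[i, i])
                          for p, m, c in zip(priors, means, covs)])
            h /= h.sum()
            y = np.zeros(o_dim)
            for k, (m, c) in enumerate(zip(means, covs)):
                # per-component conditional mean: mu_o + C_oi C_ii^{-1} (x - mu_i)
                y += h[k] * (m[o] + c[o, i] @ np.linalg.solve(c[i, i], x_in - m[i]))
            return y

    A guidance force can then be produced by an impedance law pulling the user toward the regressed motion, with stiffness varied by skill-model similarity as the abstract describes.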

    A Hierarchical Architecture for Flexible Human-Robot Collaboration

    This thesis is devoted to designing a software architecture for human-robot collaboration (HRC) that enhances robots' ability to work alongside humans. We propose FlexHRC, a hierarchical and flexible human-robot cooperation architecture specifically designed to provide collaborative robots with an extended degree of autonomy when supporting human operators in high-variability tasks. Along with FlexHRC, we introduce novel techniques for three interleaved levels, namely perception, representation, and action, each aimed at addressing specific traits of human-robot cooperation tasks.

    The Industry 4.0 paradigm emphasizes the crucial benefits that collaborative robots can bring to the whole production process. In this context, a yet unreached enabling technology is the design of robots able to deal at all levels with humans' intrinsic variability, which is not only necessary for a comfortable working experience but also a precious capability for efficiently handling unexpected events. Moreover, flexible assembly of semi-finished products is one of the expected features of next-generation shop-floor lines. Currently, such flexibility rests on the shoulders of human operators, who are responsible for product variability and are therefore subject to potentially high stress levels and cognitive load when dealing with complex operations, even though shop-floor operations remain very structured and well defined. Collaborative robots are designed to shift this burden from human operators to robots flexible enough to support them in high-variability tasks as these unfold.

    As mentioned above, the FlexHRC architecture encompasses three levels: perception, representation, and action. The perception level relies on wearable sensors for human action recognition and on point cloud data for perceiving objects in the scene. The action level comprises four components: a robot execution manager that decouples action planning from robot motion planning and maps symbolic actions to the robot controller's command interface, a task priority framework to control the robot, a differential equation solver to simulate and evaluate robot behaviour on the fly, and a randomized method for robot path planning. The representation level relies on AND/OR graphs to represent and reason upon human-robot cooperation models online, a task manager to plan, adapt, and make decisions about robot behaviour, and a knowledge base that stores cooperation and workspace information. We evaluated the FlexHRC functionalities against the desired application objectives in several experiments: a collaborative screwing task, coordinated transportation of objects in a cluttered environment, a collaborative table assembly task, and object positioning tasks.

    The main contributions of this work are: (i) the design and implementation of FlexHRC, which provides the functional requirements necessary for shop-floor assembly applications, such as task- and team-level flexibility, scalability, adaptability, and safety, to name just a few; (ii) a task representation that integrates a hierarchical AND/OR graph whose online behaviour is formally specified using first-order logic; (iii) an in-the-loop, simulation-based decision-making process for the operation of collaborative robots coping with the variability of human operator actions; (iv) on-the-fly robot adaptation to human decisions and actions via human action recognition; and (v) robot behaviour that is predictable to the human user thanks to the task-priority-based control framework, the introduced path planner, and natural and intuitive communication between robot and human.
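    For concreteness, the snippet below sketches a toy AND/OR graph of the kind used at the representation level: each hyper-arc lists child nodes that must all be solved for one alternative decomposition of a cooperation state. Names and structure are illustrative, not FlexHRC's actual API.

        # Toy AND/OR graph: a node is solved when it is marked directly (a leaf
        # action was performed) or when every child of at least one AND
        # hyper-arc (one OR-alternative) is solved.
        class Node:
            def __init__(self, name):
                self.name = name
                self.and_arcs = []   # list of child lists; each list is one alternative
                self.solved = False

            def add_alternative(self, children):
                self.and_arcs.append(children)

        def is_solved(node):
            if node.solved or not node.and_arcs:
                return node.solved
            return any(all(is_solved(c) for c in arc) for arc in node.and_arcs)

        # usage: table assembled if (legs attached AND top placed) OR (kit used)
        table = Node("table_assembled")
        legs, top, kit = Node("legs_attached"), Node("top_placed"), Node("kit_used")
        table.add_alternative([legs, top])
        table.add_alternative([kit])
        legs.solved = top.solved = True
        assert is_solved(table)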

    A Posture Sequence Learning System for an Anthropomorphic Robotic Hand

    The paper presents a cognitive architecture for posture learning of an anthropomorphic robotic hand. Our approach aims to allow the robotic system to perform complex perceptual operations, to interact with a human user, and to integrate its perceptions through a cognitive representation of the scene and the observed actions. The anthropomorphic robotic hand imitates the gestures acquired by the vision system in order to learn meaningful movements, to build its knowledge through different conceptual spaces, and to perform complex interaction with the human operator.

    Advanced Mobile Robotics: Volume 3

    Mobile robotics is a challenging field with great potential. It covers disciplines including electrical engineering, mechanical engineering, computer science, cognitive science, and social science. It is essential to the design of automated robots, in combination with artificial intelligence, vision, and sensor technologies. Mobile robots are widely used for surveillance, guidance, transportation, and entertainment tasks, as well as medical applications. This Special Issue concentrates on recent developments concerning mobile robots and the research surrounding them, to advance the study of the fundamental problems observed in such robots. Multidisciplinary approaches and integrative contributions, including navigation, learning and adaptation, networked systems, biologically inspired robots, and cognitive methods, are welcome, both from a research and an application perspective.

    Real-time, interactive, visually updated simulator system for telepresence

    Time delays and limited sensory feedback of remote telerobotic systems tend to disorient teleoperators and dramatically decrease the operator's performance. To remove the effects of time delays, key components of a prototype forward-simulation subsystem, the Global-Local Environment Telerobotic Simulator (GLETS), were designed and developed to buffer the operator from the remote task. GLETS totally immerses an operator in a real-time, interactive, simulated, visually updated artificial environment of the remote telerobotic site. Using GLETS, the operator will, in effect, enter a telerobotic virtual reality and can easily form a gestalt of the virtual 'local site' that matches the operator's normal interactions with the remote site. In addition to its use in space-based telerobotics, GLETS, due to its extendable architecture, can also be used in other teleoperational environments such as toxic material handling, construction, and undersea exploration.
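    The forward-simulation idea can be sketched in a few lines: the local simulator applies operator commands immediately, and when delayed telemetry arrives it rebases on the measured state and replays the still-unconfirmed commands. This is a conceptual toy model, not the GLETS implementation.

        # Conceptual 1-D stand-in: the displayed state leads delayed telemetry
        # by the commands that have been issued but not yet confirmed remotely.
        import collections

        class ForwardSimulator:
            def __init__(self, dt=0.05):
                self.dt = dt
                self.state = 0.0                    # simulated pose shown to the operator
                self.pending = collections.deque()  # commands sent, not yet confirmed

            def operator_command(self, velocity):
                self.state += velocity * self.dt    # update the local simulation at once
                self.pending.append(velocity)       # remember it until confirmed

            def remote_update(self, measured_state):
                # delayed telemetry: rebase on truth, replay unconfirmed commands
                if self.pending:
                    self.pending.popleft()          # oldest command is now reflected remotely
                self.state = measured_state + sum(v * self.dt for v in self.pending)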

    A learning-based shared control architecture for interactive task execution

    Shared control is a key technology for various robotic applications in which a robotic system and a human operator are meant to collaborate efficiently. To achieve efficient task execution in shared control, it is essential to predict the desired behavior for a given situation or context, thereby simplifying the control task for the human operator. For this prediction we use Learning from Demonstration (LfD), a popular approach for transferring human skills to robots. We encode the demonstrated behavior as trajectory distributions and generalize the learned distributions to new situations. The goal of this paper is to present a shared control framework that uses learned expert distributions to gain more autonomy. Our approach balances the controller's autonomy against the human's preference based on the distributions of the demonstrated trajectories. Moreover, the learned distributions are autonomously refined from collaborative task executions, resulting in a master-slave system whose autonomy increases and which requires less user input as the number of task executions grows. We experimentally validated that our shared control approach enables efficient task executions. The conducted experiments also demonstrated that the developed system improves its performance through interactive task executions with our shared control.
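    One common way to realize such arbitration, sketched below under assumed names, is to weight the human command against the learned mean command by the predictive variance of the trajectory distribution: where demonstrations agree, the learned behavior dominates; where they disagree, the human does. The blending rule is an illustrative assumption, not the paper's exact formulation.

        # Minimal sketch of variance-based arbitration between human input and
        # a command predicted from learned trajectory distributions.
        import numpy as np

        def shared_command(u_human, mu, sigma2, sigma_ref=0.01):
            """Blend the human input with the learned mean command mu.
            sigma2: predictive variance of the learned distribution here;
            sigma_ref: assumed reference variance setting the crossover point."""
            alpha = sigma2 / (sigma2 + sigma_ref)  # high variance -> defer to human
            return alpha * np.asarray(u_human) + (1.0 - alpha) * np.asarray(mu)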
