
    Design and modeling of a stair climber smart mobile robot (MSRox)


    Robot Learning from Demonstration in Robotic Assembly: A Survey

    Learning from demonstration (LfD) enables robots to perform manipulation tasks autonomously, in particular by learning manipulation behaviors from the motions executed by human demonstrators. This paper reviews recent research and development in the field of LfD. The main focus is on how example behaviors are demonstrated to the robot in assembly operations, and how manipulation features are extracted for robot learning and for generating imitative behaviors. Diverse metrics for evaluating the performance of robot imitation learning are analyzed. The application of LfD in robotic assembly is the focal point of this paper.
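    As a minimal illustration of the trajectory-level LfD pipeline the survey discusses (learning a motion model from several demonstrations and reproducing it), the sketch below fits a Gaussian mixture to time-indexed demonstrations and regresses a mean trajectory. The use of scikit-learn and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Illustrative trajectory-level LfD sketch: GMM fit + Gaussian mixture regression.
import numpy as np
from sklearn.mixture import GaussianMixture

def learn_and_reproduce(demos, n_components=5):
    """demos: list of (T, d) trajectories; returns a reproduced (T, d) trajectory."""
    T = demos[0].shape[0]
    t = np.linspace(0, 1, T)
    data = np.vstack([np.column_stack([t, d]) for d in demos])  # rows: [time, pose]
    gmm = GaussianMixture(n_components=n_components).fit(data)
    out = []
    for ti in t:
        num, den = 0.0, 0.0
        for k in range(gmm.n_components):
            mu, S = gmm.means_[k], gmm.covariances_[k]
            # Component responsibility for time ti (normalization constants cancel).
            w = gmm.weights_[k] * np.exp(-0.5 * (ti - mu[0]) ** 2 / S[0, 0]) / np.sqrt(S[0, 0])
            # Conditional mean of the pose dimensions given the time dimension.
            num += w * (mu[1:] + S[1:, 0] / S[0, 0] * (ti - mu[0]))
            den += w
        out.append(num / den)
    return np.array(out)
```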

    Gesture Recognition and Control for Semi-Autonomous Robotic Assistant Surgeons

    The next stage of robotics development is to introduce autonomy and cooperation with human agents in tasks that require high precision and/or considerable physical strain. To guarantee the highest possible safety standards, the best approach is to devise a deterministic automaton that performs identically for each operation. Clearly, such an approach inevitably fails to adapt to changing environments or different human companions. In a surgical scenario, the highest variability occurs in the timing of the different actions performed within the same phases. This thesis explores the solutions adopted in pursuing automation in robotic minimally-invasive surgery (R-MIS) and presents a novel cognitive control architecture. A multi-modal neural network, trained on a cooperative task performed by human surgeons, produces an action segmentation that supplies the required action timing, while a deterministic Supervisory Controller maintains full control over phase execution and a velocity-constrained Model-Predictive Controller guarantees execution safety.
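    To make the velocity-constrained model-predictive control idea concrete, here is a minimal sketch of one receding-horizon tracking step under a single-integrator joint model; the dynamics, horizon, velocity bound, and the use of cvxpy are illustrative assumptions, not the architecture described in the thesis.

```python
# Minimal velocity-constrained MPC tracking step (single-integrator sketch).
import numpy as np
import cvxpy as cp

def mpc_step(q0, reference, dt=0.05, horizon=10, v_max=0.5):
    """Plan joint positions that track `reference` while bounding velocity."""
    n = q0.shape[0]
    q = cp.Variable((horizon + 1, n))
    cost = 0
    constraints = [q[0] == q0]
    for k in range(horizon):
        cost += cp.sum_squares(q[k + 1] - reference[k])  # tracking error
        v = (q[k + 1] - q[k]) / dt                       # finite-difference velocity
        constraints += [cp.abs(v) <= v_max]              # safety velocity bound
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return q.value[1]  # apply only the first planned step (receding horizon)

# Usage sketch: next_q = mpc_step(np.zeros(2), np.tile([1.0, 0.5], (10, 1)))
```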

    When and how to help: An iterative probabilistic model for learning assistance by demonstration


    Scaled Autonomy for Networked Humanoids

    Humanoid robots have been developed with the intention of aiding in environments designed for humans. As such, the control of the humanoid morphology and the effectiveness of human-robot interaction form the two principal research issues for deploying these robots in the real world. In this thesis, the issue of humanoid control is coupled with human-robot interaction under the framework of scaled autonomy, where the human and the robot exchange levels of control depending on the environment and the task at hand. This scaled autonomy is approached with control algorithms for reactive stabilization of human commands and with planned trajectories that encode semantically meaningful motion preferences in a sequential convex optimization framework. The control and planning algorithms have been extensively field-tested for robustness and system verification. The RoboCup competition provides a benchmark for autonomous agents that are trained with a human supervisor: the kid-sized and adult-sized humanoid robots coordinate over a noisy network in a known environment with adversarial opponents, and the software and routines in this work enabled five consecutive championships. Furthermore, the motion planning and user interfaces developed in this work were tested over the noisy network of the DARPA Robotics Challenge (DRC) Trials and Finals in an unknown environment. Overall, the ability to extend simplified locomotion models to aid in semi-autonomous manipulation allows untrained humans to operate complex, high-dimensional robots. This represents another step on the path to deploying humanoids in the real world, based on low-dimensional motion abstractions and proven performance in real-world tasks like RoboCup and the DRC.
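    A minimal sketch of the control-exchange idea behind scaled autonomy, assuming a simple convex blend between the operator's command and an autonomous policy; the blending rule and the latency-based schedule are illustrative, not the algorithms used on the robots above.

```python
import numpy as np

def blend_command(u_human: np.ndarray, u_auto: np.ndarray, alpha: float) -> np.ndarray:
    """Scaled autonomy as a convex blend: alpha=0 is pure teleoperation,
    alpha=1 is full autonomy; intermediate values share control."""
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return (1.0 - alpha) * u_human + alpha * u_auto

def autonomy_level(latency_s: float, latency_limit_s: float = 0.5) -> float:
    """Example schedule: cede control to the robot as network latency grows,
    since delayed human commands destabilize fast feedback loops."""
    return min(1.0, latency_s / latency_limit_s)
```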

    Adaptive Shared Autonomy between Human and Robot to Assist Mobile Robot Teleoperation

    Teleoperation of mobile robots is widely used when it is impractical or infeasible for a human to be present on site, yet human decision-making is still required. On the one hand, controlling the robot without assistance is stressful and error-prone for the human because of time delays and the lack of situational awareness; on the other hand, despite recent achievements, a fully autonomous robot cannot yet carry out tasks independently based on current models of perception and control. Therefore, both the human and the robot must remain in the control loop and contribute intelligence to task execution at the same time, which means the human should share autonomy with the robot during operation. The challenge, however, is to coordinate these two sources of intelligence, the human's and the robot's, in the best way to guarantee safe and efficient task execution in teleoperation. This thesis therefore proposes a novel strategy: it models the user's intention as a contextual task to complete an action primitive, and provides the operator with appropriate motion assistance once the task is recognized. In this way, the robot copes intelligently with the ongoing tasks on the basis of the contextual information, relieves the operator's workload, and improves task performance. To implement this strategy and to account for the uncertainties in sensing and processing environmental information and user input (i.e., the contextual information), a probabilistic shared-autonomy framework is introduced that recognizes, with uncertainty measures, the contextual task the operator is performing with the robot, and offers the operator appropriate task-execution assistance according to these measures. Since the way an operator executes a task is implicit, it is not trivial to model the motion pattern of task execution manually, so a set of data-driven approaches is used to derive the patterns of different task executions from human demonstrations and to adapt to the operator's needs in an intuitive way over the long term. The practicality and scalability of the proposed approaches are demonstrated through extensive experiments both in simulation and on the real robot. With the proposed approaches, the operator can be actively and appropriately supported by increasing the robot's cognitive capability and autonomy flexibility.
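    A minimal sketch of the probabilistic recognition loop described above: a recursive Bayes filter over candidate contextual tasks, with assistance offered only once the belief is confident. The uniform prior, the likelihood values, and the confidence threshold are illustrative assumptions.

```python
import numpy as np

def update_belief(belief: np.ndarray, likelihoods: np.ndarray) -> np.ndarray:
    """One recursive Bayes step: prior over tasks times observation likelihoods."""
    posterior = belief * likelihoods
    return posterior / posterior.sum()

# Start from a uniform prior over, say, three candidate action primitives.
belief = np.ones(3) / 3
for obs_likelihood in [np.array([0.7, 0.2, 0.1]), np.array([0.8, 0.1, 0.1])]:
    belief = update_belief(belief, obs_likelihood)

# Gate assistance on the uncertainty measure: assist only when confident.
assist = belief.max() > 0.8
```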

    Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces

    This paper contributes a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations, and key design strategies and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.

    Generative Models for Learning Robot Manipulation Skills from Humans

    A long-standing goal in artificial intelligence is to make robots interact seamlessly with humans in performing everyday manipulation skills. Learning from demonstrations, or imitation learning, provides a promising route to bridge this gap. In contrast to direct trajectory learning from demonstrations, many interactive robotic applications pose problems that require a higher, contextual level of understanding of the environment. This requires learning invariant mappings in the demonstrations that can generalize across different environmental situations such as size, position, and orientation of objects, viewpoint of the observer, etc. In this thesis, we address this challenge by encapsulating invariant patterns in the demonstrations using probabilistic learning models for acquiring dexterous manipulation skills. We learn the joint probability density function of the demonstrations with a hidden semi-Markov model, and smoothly follow the generated sequence of states with a linear quadratic tracking controller. The model exploits the invariant segments (also termed sub-goals, options, or actions) in the demonstrations and adapts the movement to external environmental situations, such as the size, position, and orientation of the objects in the environment, using a task-parameterized formulation. We incorporate high-dimensional sensory data for skill acquisition by parsimoniously representing the demonstrations using statistical subspace clustering methods, and we exploit the coordination patterns in the latent space. To adapt the models on the fly and/or teach new manipulation skills online from streaming data, we formulate a non-parametric, scalable online sequence clustering algorithm with Bayesian non-parametric mixture models, which avoids the model selection problem while ensuring tractability under small-variance asymptotics. We exploit the developed generative models to perform manipulation skills with remotely operated vehicles over satellite communication in the presence of communication delays and limited bandwidth. A set of task-parameterized generative models is learned from the demonstrations of different manipulation skills provided by the teleoperator. The model captures the intention of the teleoperator on one hand, and on the other provides assistance in performing remote manipulation tasks under varying environmental situations. The assistance is formulated under time-independent shared control, where the model continuously corrects the remote arm movement based on the current state of the teleoperator, and/or time-dependent autonomous control, where the model synthesizes the movement of the remote arm for autonomous skill execution. Using the proposed methodology with the two-armed Baxter robot as a mock-up for semi-autonomous teleoperation, we are able to learn manipulation skills such as opening a valve, pick-and-place of an object with obstacle avoidance, hot-stabbing (a specialized underwater task akin to a peg-in-hole task), screw-driver target snapping, and tracking a carabiner in as few as 4-8 demonstrations. Our study shows that the proposed manipulation assistance formulations improve the performance of the teleoperator by reducing task errors and execution time, while catering for the environmental differences in performing remote manipulation tasks with limited bandwidth and communication delays.
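    To ground the sequence-generation-plus-tracking pipeline, the sketch below samples a duration-explicit state sequence from a toy left-to-right hidden semi-Markov model and tracks the resulting stepwise reference with a simple proportional gain standing in for the linear quadratic tracking controller; all parameters are toy placeholders, not learned values from the thesis.

```python
# Toy HSMM reference generation + tracking sketch.
import numpy as np

rng = np.random.default_rng(0)

means = np.array([[0.0, 0.0], [0.5, 0.2], [1.0, 0.4]])  # per-state target (x, y)
trans = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 1]])     # left-to-right transitions
dur_mean = np.array([20, 30, 25])                        # expected dwell (timesteps)

def sample_reference(n_steps):
    """Sample a state sequence with explicit durations; emit stepwise targets."""
    s, t, ref = 0, 0, []
    while t < n_steps:
        d = max(1, int(rng.poisson(dur_mean[s])))  # explicit state duration
        ref.extend([means[s]] * d)
        t += d
        s = int(np.argmax(trans[s]))  # deterministic next state for the sketch
    return np.array(ref[:n_steps])

# Track the reference; the fixed gain stands in for the LQT feedback gain.
x = np.zeros(2)
for target in sample_reference(80):
    x += 0.2 * (target - x)
```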

    Adaptive physical human-robot interaction (PHRI) with a robotic nursing assistant

    Robots are increasingly being investigated for future applications in healthcare. In nursing assistance, for instance, seamless Human-Robot Interaction (HRI) is essential for sharing workspaces and workloads between medical staff, patients, and robots. In this thesis we introduce a novel robot, the Adaptive Robot Nursing Assistant (ARNA), and its underlying components. ARNA has been designed specifically to assist nurses with day-to-day tasks such as walking patients, pick-and-place item retrieval, and routine patient health monitoring. Adaptive HRI in nursing applications creates a positive user experience and increases nurse productivity and task completion rates, as reported by experimentation with human subjects. ARNA includes interface devices such as tablets, force sensors, pressure-sensitive robot skins, LIDAR, and an RGBD camera. These interfaces are combined with adaptive controllers and estimators within a proposed framework that contains multiple innovations. A research study was conducted on methods of deploying an ideal Human-Machine Interface (HMI), in this case a tablet-based interface. Initial results indicate that a traded-control level of autonomy is ideal for teleoperation of ARNA by a patient. The proposed method of using the HMI devices makes robot performance similar for both skilled and unskilled workers. A neuro-adaptive controller (NAC), which contains several neural networks to estimate and compensate for system nonlinearities, was implemented on the ARNA robot. By linearizing the system, a cross-over usability condition is met through which humans find it more intuitive to learn to use the robot in any location of its workspace. A novel Base-Sensor Assisted Physical Interaction (BAPI) controller is introduced in this thesis; it utilizes a force-torque sensor at the base of the ARNA robot manipulator to detect full-body collisions and make interaction safer. Finally, a human-intent estimator (HIE) is proposed to estimate human intent while the robot and user physically collaborate during tasks such as adaptive walking. The NAC with the HIE module was validated on a PR2 robot through user studies; its implementation on the ARNA robot platform can be accomplished easily, as the controller is model-free and can learn robot dynamics online. A new framework, Directive Observer and Lead Assistant (DOLA), is proposed for ARNA which enables the user to interact with the robot in two modes: physically, by direct push-guiding, and remotely, through a tablet interface. In both cases the human is “observed” by the robot, then guided and/or advised during the interaction. If the user has trouble completing the given tasks, the robot adapts its repertoire to lead the user toward completing the goals. The proposed framework incorporates interface devices as well as adaptive control systems in order to facilitate a higher-performance interaction between the user and the robot than was previously possible. The ARNA robot was deployed and tested in a hospital environment at the School of Nursing of the University of Louisville. User-experience tests were conducted with the help of healthcare professionals, and several metrics, including completion time and the rate and level of user satisfaction, were collected to shed light on the performance of the various components of the proposed framework. The results indicate an overall positive response towards the use of such an assistive robot in the healthcare environment.
The analysis of the gathered data is included in this document. To summarize, this research study makes the following contributions: (1) conducting user-experience studies with the ARNA robot in patient-sitter and walker scenarios to evaluate both physical and non-physical human-machine interfaces; (2) evaluating and validating the Human Intent Estimator (HIE) and Neuro-Adaptive Controller (NAC); (3) proposing the novel Base-Sensor Assisted Physical Interaction (BAPI) controller; (4) building simulation models for packaged tactile sensors and validating them against experimental data; and (5) describing the Directive Observer and Lead Assistant (DOLA) framework for ARNA using adaptive interfaces.
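    As an illustration of the BAPI idea of detecting whole-body collisions from the base force-torque sensor, here is a minimal sketch that thresholds the residual between the measured and model-predicted base wrench and zeroes the command on contact; the residual test, threshold value, and response are assumptions for illustration, not the thesis's controller.

```python
import numpy as np

def collision_detected(measured_wrench: np.ndarray,
                       expected_wrench: np.ndarray,
                       threshold: float = 15.0) -> bool:
    """Flag a whole-body collision when the base wrench residual is large.

    Both wrenches are 6-vectors [Fx, Fy, Fz, Tx, Ty, Tz]; expected_wrench
    would come from the manipulator's dynamic model."""
    residual = np.linalg.norm(measured_wrench - expected_wrench)
    return residual > threshold

def safe_command(u_nominal: np.ndarray, collided: bool) -> np.ndarray:
    """On contact, zero the nominal command so a compliant layer can take over."""
    return np.zeros_like(u_nominal) if collided else u_nominal
```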