
    Cognition-enabled robotic wiping: Representation, planning, execution, and interpretation

    Advanced cognitive capabilities enable humans to solve even complex tasks by representing and processing internal models of manipulation actions and their effects. Consequently, humans are able to plan the effect of their motions before execution and validate the performance afterwards. In this work, we derive an analogous approach for robotic wiping actions, which are fundamental for some of the most frequent household chores, including vacuuming the floor, sweeping dust, and cleaning windows. We describe wiping actions and their effects based on a qualitative particle distribution model. This representation enables a robot to plan goal-oriented wiping motions for the prototypical wiping actions of absorbing, collecting, and skimming. The particle representation is utilized to simulate the task outcome before execution and to infer the real performance afterwards based on haptic perception. This way, the robot is able to estimate the task performance and schedule additional motions if necessary. We evaluate our methods in simulated scenarios, as well as in real experiments with the humanoid service robot Rollin' Justin.
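The qualitative particle distribution idea can be sketched in a few lines: dirt is represented as particle counts on a grid, a wiping stroke transforms the distribution, and the predicted outcome is scored before execution. The grid layout, the "collecting" stroke, and the quality score below are illustrative simplifications, not the paper's actual model.

```python
import numpy as np

def collect_wipe(grid, row):
    """Simulate a 'collecting' wipe along one row: particles are pushed
    ahead of the tool and accumulate at the end of the stroke."""
    g = grid.copy()
    g[row, -1] += g[row, :-1].sum()   # tool sweeps particles to the row's end
    g[row, :-1] = 0
    return g

def predicted_quality(grid, goal_region):
    """Predicted task quality: fraction of particles inside the goal region."""
    total = grid.sum()
    return grid[goal_region].sum() / total if total > 0 else 1.0

grid = np.ones((4, 5))               # uniform dirt: 20 particles on a 4x5 surface
goal = (slice(None), slice(4, 5))    # goal region: the rightmost column
for r in range(4):
    grid = collect_wipe(grid, r)     # plan: one collecting stroke per row
print(predicted_quality(grid, goal)) # -> 1.0, all particles collected
```

Because the same transformation can be replayed on a distribution inferred from haptic perception, the robot can compare predicted and actual quality and schedule recovery strokes when they diverge.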

    Cognitive Reasoning for Compliant Robot Manipulation

    Physically compliant contact is a major element for many tasks in everyday environments. A universal service robot that is utilized to collect leaves in a park, polish a workpiece, or clean solar panels requires the cognition and manipulation capabilities to facilitate such compliant interaction. Evolution equipped humans with advanced mental abilities to envision physical contact situations and their resulting outcome, dexterous motor skills to perform the actions accordingly, as well as a sense of quality to rate the outcome of the task. In order to achieve human-like performance, a robot must provide the necessary methods to represent, plan, execute, and interpret compliant manipulation tasks. This dissertation covers those four steps of reasoning in the concept of intelligent physical compliance. The contributions advance the capabilities of service robots by combining artificial intelligence reasoning methods and control strategies for compliant manipulation. A classification of manipulation tasks is conducted to identify the central research questions of the addressed topic. Novel representations are derived to describe the properties of physical interaction. Special attention is given to wiping tasks which are predominant in everyday environments. It is investigated how symbolic task descriptions can be translated into meaningful robot commands. A particle distribution model is used to plan goal-oriented wiping actions and predict the quality according to the anticipated result. The planned tool motions are converted into the joint space of the humanoid robot Rollin' Justin to perform the tasks in the real world. In order to execute the motions in a physically compliant fashion, a hierarchical whole-body impedance controller is integrated into the framework. The controller is automatically parameterized with respect to the requirements of the particular task. Haptic feedback is utilized to infer contact and interpret the performance semantically. 
Finally, the robot is able to compensate for possible disturbances as it plans additional recovery motions while effectively closing the cognitive control loop. Among other applications, the developed concept is applied in an actual space robotics mission, in which an astronaut aboard the International Space Station (ISS) commands Rollin' Justin to maintain a Martian solar panel farm in a mock-up environment. This application demonstrates the far-reaching impact of the proposed approach and the associated opportunities that emerge with the availability of cognition-enabled service robots.
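The compliant execution rests on impedance control, where the end effector behaves like a virtual spring-damper around a desired pose instead of a stiff position tracker. A minimal single-task Cartesian sketch of that principle is below; the gains and poses are illustrative and are not Rollin' Justin's actual whole-body controller parameters.

```python
import numpy as np

def impedance_force(x, xd, x_des, xd_des, K, D):
    """Cartesian impedance law: F = K (x_des - x) + D (xd_des - xd)."""
    return K @ (x_des - x) + D @ (xd_des - xd)

K = np.diag([500.0, 500.0, 200.0])     # stiffness (N/m), task-dependent
D = np.diag([40.0, 40.0, 25.0])        # damping (Ns/m)

x      = np.array([0.0, 0.0, 0.01])    # current tool position: 1 cm above surface
xd     = np.zeros(3)                   # current velocity
x_des  = np.array([0.0, 0.0, -0.005])  # commanded slightly below the surface
xd_des = np.zeros(3)

F = impedance_force(x, xd, x_des, xd_des, K, D)
print(F)   # [0, 0, -3.0]: a bounded downward contact force from the virtual spring
```

Commanding the desired pose slightly into the surface yields a regulated contact force rather than a collision, which is exactly what makes wiping-style tasks safe to execute; a hierarchical whole-body controller applies the same idea across multiple prioritized tasks.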

    Control strategies for cleaning robots in domestic applications: A comprehensive review

    Service robots are built and developed for various applications to support humans as companions, caretakers, or domestic helpers. As the number of elderly people grows, service robots will be in increasing demand. In particular, one of the main tasks performed by elderly people, and others, is the complex task of cleaning. Therefore, cleaning tasks such as sweeping floors, washing dishes, and wiping windows have been addressed in the domestic environment using service robots or robot manipulators with several control approaches. This article focuses primarily on the control methodology used for cleaning tasks. Specifically, it discusses classical control and learning-based control methods. The classical control approaches, which consist of position control, force control, and impedance control, are commonly used for cleaning purposes in highly controlled environments. However, classical control methods do not generalize well to cluttered environments, so learning-based control methods can be an alternative solution. Learning-based control methods for cleaning tasks encompass three approaches: learning from demonstration (LfD), supervised learning (SL), and reinforcement learning (RL). These control approaches have their own capabilities to generalize cleaning tasks to new environments. For example, LfD, which many research groups have used for cleaning tasks, can generate complex cleaning trajectories based on human demonstration. SL can support the prediction of dirt areas and cleaning motions using large data sets. Finally, RL allows the robot itself to learn cleaning actions and interact with new environments. In this context, this article aims to provide a general overview of robotic cleaning tasks based on different types of control methods using manipulators. It also suggests future directions for cleaning tasks based on an evaluation of the control approaches.

    Adapting Everyday Manipulation Skills to Varied Scenarios

    This work is partially funded by: (1) AGH University of Science and Technology, grant No 15.11.230.318. (2) Deutsche Forschungsgemeinschaft (DFG) through the Collaborative Research Center 1320, EASE. (3) Elphinstone Scholarship from University of Aberdeen. Postprint.

    An optimization-based formalism for shared autonomy in dynamic environments

    Teleoperation is an integral component of various industrial processes, for example concrete spraying, assisted welding, plastering, inspection, and maintenance. Often these systems implement direct control that maps interface signals onto robot motions. Successful completion of tasks typically requires high levels of manual dexterity and imposes a high cognitive load. In addition, the operator often works near dangerous machinery. Consequently, safety is of critical importance, and training is expensive and prolonged -- in some cases taking several months or even years. An autonomous robot replacement would be an ideal solution, since the human could be removed from danger and training costs significantly reduced. However, this is currently not possible due to the complexity and unpredictability of the environments, and the levels of situational and contextual awareness required to successfully complete these tasks. In this thesis, the limitations of direct control are addressed by developing methods for shared autonomy. A shared autonomous approach combines human input with autonomy to generate optimal robot motions. The approach taken in this thesis is to formulate shared autonomy within an optimization framework that finds optimized states and controls by minimizing a cost function, modeling task objectives, given a set of (changing) physical and operational constraints. Online shared autonomy requires the human to be continuously interacting with the system via an interface (akin to direct control). The key challenges addressed in this thesis are: 1) ensuring computational feasibility (such a method should be able to find solutions fast enough to achieve a sampling frequency bounded below by 40 Hz), 2) being reactive to changes in the environment and operator intention, 3) knowing how to appropriately blend operator input and autonomy, and 4) allowing the operator to supply input in an intuitive manner that is conducive to high task performance.
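The optimization view of shared autonomy can be sketched with a toy quadratic cost: find a robot command that trades off following the operator's input against an autonomous objective, subject to (changing) constraints. The weights, targets, and box constraints below are illustrative, not the thesis's actual formulation.

```python
import numpy as np

def shared_autonomy_step(x_human, x_auto, w_h, w_a, lo, hi):
    """Minimize w_h*||x - x_human||^2 + w_a*||x - x_auto||^2, s.t. lo <= x <= hi.
    The unconstrained minimizer is the weighted average of the two targets;
    the box constraint is handled here by projection (a real solver handles
    richer, time-varying constraints)."""
    x = (w_h * x_human + w_a * x_auto) / (w_h + w_a)
    return np.clip(x, lo, hi)

x_h = np.array([0.8, 0.2])   # operator's commanded target in task space
x_a = np.array([0.4, 0.4])   # autonomy's preferred target (e.g. obstacle-aware)
x = shared_autonomy_step(x_h, x_a, w_h=1.0, w_a=1.0,
                         lo=np.array([0.0, 0.0]), hi=np.array([0.5, 1.0]))
print(x)   # [0.5, 0.3]: blended command, projected into the feasible box
```

Re-solving such a problem at every control tick with updated targets and constraints is what makes the approach reactive to both the environment and the operator's intention.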
Various operator interfaces are investigated with regard to the control space, called a mode of teleoperation. Extensive evaluations were carried out to determine which modes are most intuitive and lead to the highest performance in target acquisition tasks (e.g. spraying, welding, etc.). Our performance metrics quantified task difficulty based on Fitts' law, as well as a measure of how well constraints affecting the task performance were met. The experimental evaluations indicate that higher performance is achieved when humans submit commands in low-dimensional task spaces as opposed to joint space manipulations. In addition, our multivariate analysis indicated that those with regular exposure to computer games achieved higher performance. Shared autonomy aims to relieve human operators of the burden of precise motor control, tracking, and localization. An optimization-based representation for shared autonomy in dynamic environments was developed. Real-time tractability is ensured by modulating the human input with information about the changing environment within the same task space, instead of adding it to the optimization cost or constraints. The method is illustrated with two real-world applications: grasping objects in cluttered environments and spraying tasks requiring sprayed linings with greater homogeneity. Maintaining motion patterns -- referred to as skills -- is often an integral part of teleoperation for various industrial processes (e.g. spraying, welding, plastering). We develop a novel model-based shared autonomy framework for incorporating the notion of skill assistance to aid operators in sustaining these motion patterns whilst adhering to environment constraints. In order to achieve computational feasibility, we introduce a novel parameterization for state and control that combines skill and underlying trajectory models, leveraging a special type of curve known as Clothoids.
This new parameterization allows for efficient computation of skill-based short-term horizon plans, enabling the use of a model predictive control loop. Our hardware realization validates the effectiveness of our method in recognizing a change of intended skill and shows improved quality of output motion, even under dynamically changing obstacles. In addition, extensions of the work to supervisory control are described. An exploratory study presents an approach that improves computational feasibility for complex tasks with minimal interactive effort on the part of the human. Adaptations are theorized which might allow such a method to be applicable and beneficial to high-degree-of-freedom systems. Finally, a system developed in our lab is described that implements sliding autonomy and is shown to complete multi-objective tasks in complex environments with minimal interaction from the human.
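What makes Clothoids attractive for such parameterizations is that their curvature varies linearly with arc length, k(s) = k0 + c*s, so a few scalars describe a smooth path segment. A minimal numerical sketch of evaluating such a segment is below; the discretization and parameter values are illustrative, not the thesis's planner.

```python
import numpy as np

def clothoid_points(k0, c, length, n=200, theta0=0.0):
    """Sample a clothoid segment by integrating its heading.
    Curvature is k(s) = k0 + c*s, so the heading is its integral:
    theta(s) = theta0 + k0*s + 0.5*c*s^2. Positions follow by
    accumulating unit steps along the heading (simple Euler quadrature)."""
    s = np.linspace(0.0, length, n)
    theta = theta0 + k0 * s + 0.5 * c * s**2
    ds = length / (n - 1)
    x = np.cumsum(np.cos(theta)) * ds
    y = np.cumsum(np.sin(theta)) * ds
    return np.column_stack([x, y])

# A segment that starts straight and winds up gently: 3 parameters, smooth path.
pts = clothoid_points(k0=0.0, c=0.5, length=3.0)
print(pts[-1])   # endpoint of the spiral segment
```

Because a whole short-horizon plan reduces to a handful of clothoid parameters, the optimization in each model predictive control tick stays small enough for real-time use.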

    Robotic Caregivers -- Simulation and Capacitive Servoing for Physical Human-Robot Interaction

    Physical human-robot interaction and robotic assistance present an opportunity to benefit the lives of many people, including the millions of older adults and people with physical disabilities who have difficulty performing activities of daily living (ADLs) on their own. Robotic caregiving for activities of daily living could increase the independence of people with disabilities, improve quality of life, and help address global societal issues, such as aging populations, high healthcare costs, and shortages of healthcare workers. Yet, robotic assistance presents several challenges, including risks associated with physical human-robot interaction, difficulty sensing the human body, and complexities of modeling deformable materials (e.g. clothes). We address these challenges through techniques that span the intersection of machine learning, physics simulation, sensing, and physical human-robot interaction. Haptic Perspective-taking: We first demonstrate that by enabling a robot to predict how its future actions will physically affect a person (haptic perspective-taking), robots can provide safer assistance, especially within the context of robot-assisted dressing and manipulating deformable clothes. We train a recurrent model consisting of both a temporal estimator and predictor that allows a robot to predict the forces a garment is applying onto a person using haptic measurements from the robot's end effector. By combining this predictor with model predictive control (MPC), we observe emergent behaviors that result in the robot navigating a garment up a person's entire arm. Capacitive Sensing for Tracking Human Pose: Towards the goal of robots performing robust and intelligent physical interactions with people, it is crucial that robots are able to accurately sense the human body, follow trajectories around the body, and track human motion.
We have introduced a capacitive servoing control scheme that allows a robot to sense and navigate around human limbs during close physical interactions. Capacitive servoing leverages temporal measurements from a capacitive sensor mounted on a robot's end effector to estimate the relative pose of a nearby human limb. Capacitive servoing then uses these human pose estimates within a feedback control loop in order to maneuver the robot's end effector around the surface of a human limb. Through studies with human participants, we have demonstrated that these sensors can enable a robot to track human motion in real time while providing assistance with dressing and bathing. We have also shown how these sensors can benefit a robot providing dressing assistance to real people with physical disabilities. Physics Simulation for Assistive Robotics: While robotic caregivers may present an opportunity to improve the quality of life for people who require daily assistance, conducting this type of research presents several challenges, including high costs, slow data collection, and risks of physical interaction between people and robots. We have recently introduced Assistive Gym, the first open source physics-based simulation framework for modeling physical human-robot interaction and robotic assistance. We demonstrate how physics simulation can open up entirely new research directions and opportunities within physical human-robot interaction. This includes training versatile assistive robots, developing control algorithms towards common sense reasoning, constructing baselines and benchmarks for robotic caregiving, and investigating generalization of physical human-robot interaction from a number of angles, including human motion, preferences, and variation in human body shape and impairments. 
Finally, we show how virtual reality (VR) can help bridge the reality gap by bringing real people into physics simulation to interact with and receive assistance from virtual robotic caregivers.
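The capacitive servoing loop described above (measure capacitance, estimate the relative pose of the limb, feed the estimate back into control) can be sketched with a deliberately simplified one-dimensional model. The inverse-distance sensor model C = k/d and all gains below are illustrative assumptions, not the dissertation's actual sensor or controller.

```python
# A 1-D capacitive servoing sketch: regulate the end effector's distance
# to a limb using only a capacitance reading.
K_SENSE  = 2.0     # sensor constant in the toy model C = K_SENSE / d
D_TARGET = 0.05    # desired hover distance above the limb (m)
KP       = 0.5     # proportional gain
DT       = 0.01    # control period (s)

def estimate_distance(capacitance):
    """Invert the toy sensor model to recover distance from a reading."""
    return K_SENSE / capacitance

def servo_step(d):
    """One control tick: simulate a reading, estimate distance,
    command a velocity toward the setpoint."""
    c = K_SENSE / d                  # simulated capacitance at true distance d
    d_est = estimate_distance(c)
    v = KP * (d_est - D_TARGET)      # proportional feedback
    return d - v * DT                # integrate the commanded motion

d = 0.20                             # start 20 cm from the limb
for _ in range(2000):
    d = servo_step(d)
print(round(d, 3))                   # converges to the 5 cm setpoint: 0.05
```

The real system extends this idea to multi-electrode sensors and full relative limb pose, which is what lets the robot track a moving arm during dressing and bathing assistance.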

    On neuromechanical approaches for the study of biological and robotic grasp and manipulation

    Biological and robotic grasp and manipulation are undeniably similar at the level of mechanical task performance. However, their underlying fundamental biological vs. engineering mechanisms are, by definition, dramatically different and can even be antithetical. Even our approach to each is diametrically opposite: inductive science for the study of biological systems vs. engineering synthesis for the design and construction of robotic systems. The past 20 years have seen several conceptual advances in both fields and the quest to unify them. Chief among them is the reluctant recognition that their underlying fundamental mechanisms may actually share limited common ground, while exhibiting many fundamental differences. This recognition is particularly liberating because it allows us to resolve and move beyond multiple paradoxes and contradictions that arose from the initial reasonable assumption of a large common ground. Here, we begin by introducing the perspective of neuromechanics, which emphasizes that real-world behavior emerges from the intimate interactions among the physical structure of the system, the mechanical requirements of a task, the feasible neural control actions to produce it, and the ability of the neuromuscular system to adapt through interactions with the environment. This allows us to articulate a succinct overview of a few salient conceptual paradoxes and contradictions regarding under-determined vs. over-determined mechanics, under- vs. over-actuated control, prescribed vs. emergent function, learning vs. implementation vs. adaptation, prescriptive vs. descriptive synergies, and optimal vs. habitual performance. We conclude by presenting open questions and suggesting directions for future research.
We hope this frank and open-minded assessment of the state of the art will encourage and guide these communities to continue to interact and make progress in these important areas at the interface of neuromechanics, neuroscience, rehabilitation, and robotics. The electronic version of this article is the complete one and can be found online at: https://jneuroengrehab.biomedcentral.com/articles/10.1186/s12984-017-0305-