
    Manipulation monitoring and robot intervention in complex manipulation sequences

    Compared to machines, humans are intelligent and dexterous; they are indispensable for many complex tasks in areas such as flexible manufacturing or scientific experimentation. However, they are also subject to fatigue and inattention, which may cause errors. This motivates automated monitoring systems that verify the correct execution of manipulation sequences. To be practical, such a monitoring system should not require laborious programming.
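    As a rough illustration of what such a monitor could check, the sketch below compares an observed action stream against a reference sequence and reports the first deviation; the action names and the function itself are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: verify an observed manipulation sequence against a
# reference plan and flag the first deviation so a robot or human can intervene.
from typing import List, Optional

def first_deviation(reference: List[str], observed: List[str]) -> Optional[int]:
    """Return the index of the first step where the observed sequence departs
    from the reference plan, or None if execution matches so far."""
    for i, expected in enumerate(reference):
        if i >= len(observed):
            return None  # execution still in progress, no error yet
        if observed[i] != expected:
            return i     # deviation detected at step i
    return None

# Example: the operator skipped the 'insert_peg' step.
plan = ["pick_peg", "align_peg", "insert_peg", "release"]
seen = ["pick_peg", "align_peg", "release"]
step = first_deviation(plan, seen)
if step is not None:
    print(f"Deviation at step {step}: expected '{plan[step]}', got '{seen[step]}'")
```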

    A survey of robot manipulation in contact

    In this survey, we present the current status of robots performing manipulation tasks that require varying contact with the environment, such that the robot must either implicitly or explicitly control the contact force with the environment to complete the task. Robots can perform a growing share of manipulation tasks that are still done by humans, and there is a growing number of publications on (1) performing tasks that always require contact and (2) mitigating uncertainty by leveraging the environment in tasks that, under perfect information, could be performed without contact. Recent trends have seen robots take on tasks previously left to humans, such as massage, while in classical tasks such as peg-in-hole there is more efficient generalization to similar tasks, better error tolerance, and faster planning or learning. Thus, in this survey we cover the current state of robots performing such tasks: we first survey the different in-contact tasks robots can perform, then examine how these tasks are controlled and represented, and finally present the learning and planning of the skills required to complete them.
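    To make the idea of explicitly controlling contact force concrete, the sketch below implements a one-dimensional admittance-style law that converts a force error into a velocity command; the gains, saturation limit and sign convention are illustrative assumptions, not drawn from the survey.

```python
# Illustrative one-DOF admittance-style contact controller (not from the survey):
# the commanded velocity is proportional to the error between the desired and
# measured contact force, so the tool presses harder or backs off as needed.

def admittance_velocity(f_desired: float, f_measured: float,
                        gain: float = 0.002, v_max: float = 0.05) -> float:
    """Map a contact-force error [N] to a tool velocity command [m/s]."""
    v = gain * (f_desired - f_measured)
    return max(-v_max, min(v_max, v))  # saturate for safety

# Example: we want 10 N of contact but currently measure 4 N, so the
# controller commands a small motion further into contact.
print(admittance_velocity(10.0, 4.0))  # 0.012 m/s toward the surface
```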

    Peg-on-Hole: Fundamental Principles of Motion of a Peg Leaning on a Horizontal Hole Edge


    Learning relational models with human interaction for planning in robotics

    Automated planning has proven to be useful to solve problems where an agent has to maximize a reward function by executing actions. As planners have been improved to solve more expressive and difficult problems, there is an increasing interest in using planning to improve efficiency in robotic tasks. However, planners rely on a domain model, which has to be either handcrafted or learned. Although learning domain models can be very costly, recent approaches provide generalization capabilities and integrate human feedback to reduce the amount of experience required to learn. In this thesis we propose new methods that allow an agent with no previous knowledge to solve certain problems more efficiently by using task planning. First, we show how to apply probabilistic planning to improve robot performance in manipulation tasks (such as cleaning dirt or clearing tableware from a table). Planners obtain sequences of actions that achieve the best result in the long term, beating reactive strategies. Second, we introduce new reinforcement learning algorithms where the agent can actively request demonstrations from a teacher to learn new actions and speed up the learning process. In particular, we propose an algorithm that allows the user to set the minimum quality to be achieved, where a higher quality also implies that a larger number of demonstrations will be requested. Moreover, the learned model is analyzed to extract the unlearned or problematic parts of the model. This information allows the agent to provide guidance to the teacher when a demonstration is requested, and to avoid irrecoverable errors. Finally, a new domain model learner is introduced that, in addition to relational probabilistic action models, can also learn exogenous effects. This learner can be integrated with existing planners and reinforcement learning algorithms to solve a wide range of problems. In summary, we improve the use of learning and task planning to solve unknown tasks. The improvements allow an agent to obtain a larger benefit from planners, learn faster, balance the number of action executions and teacher demonstrations, avoid irrecoverable errors, interact with a teacher to solve difficult problems, and adapt to the behavior of other agents by learning their dynamics. All the proposed methods were compared with state-of-the-art approaches and demonstrated in different scenarios, including challenging robotic tasks.
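    The thesis builds on relational probabilistic action models; as a loose illustration of that kind of representation (not the author's formalism), the sketch below encodes a rule with relational preconditions and probabilistic outcomes and samples one effect set.

```python
# Hypothetical sketch of a relational probabilistic action rule of the kind a
# task planner could use: preconditions over object predicates plus a
# distribution over effect sets. Names and structure are illustrative only.
import random
from dataclasses import dataclass
from typing import FrozenSet, List, Tuple

State = FrozenSet[str]  # e.g. frozenset({"on(cup, table)", "dirty(table)"})

@dataclass
class Rule:
    action: str
    preconditions: FrozenSet[str]
    outcomes: List[Tuple[float, FrozenSet[str], FrozenSet[str]]]  # (prob, add, delete)

    def applicable(self, state: State) -> bool:
        return self.preconditions <= state

    def sample(self, state: State) -> State:
        r, acc = random.random(), 0.0
        for prob, add, delete in self.outcomes:
            acc += prob
            if r <= acc:
                return frozenset((state - delete) | add)
        return state  # residual probability: nothing changes

wipe = Rule(
    action="wipe(table)",
    preconditions=frozenset({"holding(sponge)", "dirty(table)"}),
    outcomes=[(0.8, frozenset({"clean(table)"}), frozenset({"dirty(table)"})),
              (0.2, frozenset(), frozenset())],  # wiping sometimes fails
)

s = frozenset({"holding(sponge)", "dirty(table)"})
if wipe.applicable(s):
    print(wipe.sample(s))
```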

    Relational reinforcement learning with guided demonstrations

    Model-based reinforcement learning is a powerful paradigm for learning tasks in robotics. However, in-depth exploration is usually required and the actions have to be known in advance. Thus, we propose a novel algorithm that integrates the option of requesting teacher demonstrations to learn new domains with fewer action executions and no previous knowledge. Demonstrations allow new actions to be learned and they greatly reduce the amount of exploration required, but they are only requested when they are expected to yield a significant improvement, because the teacher's time is considered to be more valuable than the robot's time. Moreover, selecting the appropriate action to demonstrate is not an easy task, and thus some guidance is provided to the teacher. The rule-based model is analyzed to determine the parts of the state that may be incomplete, and to provide the teacher with a set of possible problems for which a demonstration is needed. Rule analysis is also used to find better alternative models and to complete subgoals before requesting help, thereby minimizing the number of requested demonstrations. These improvements were demonstrated in a set of experiments, which included domains from the international planning competition and a robotic task. Adding teacher demonstrations and rule analysis reduced the amount of exploration required by up to 60% in some domains, and improved the success ratio by 35% in other domains.
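    The core idea of only asking the teacher when a demonstration is expected to pay off can be illustrated with a simple cost trade-off; the value estimates and teacher-time cost below are assumptions for illustration, not the authors' exact criterion.

```python
# Illustrative decision rule (not the paper's algorithm): request a teacher
# demonstration only when the estimated improvement in expected value
# outweighs the cost of the teacher's time relative to exploring alone.

def should_request_demo(v_with_demo: float, v_explore: float,
                        teacher_cost: float = 5.0) -> bool:
    """Ask for a demonstration only if its expected benefit exceeds its cost."""
    return (v_with_demo - v_explore) > teacher_cost

# Example: a demonstration is expected to raise the value estimate by 12,
# which outweighs a teacher-time cost of 5, so the robot asks for help.
print(should_request_demo(v_with_demo=20.0, v_explore=8.0))  # True
```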

    Cognitive Reasoning for Compliant Robot Manipulation

    Physically compliant contact is a major element for many tasks in everyday environments. A universal service robot that is utilized to collect leaves in a park, polish a workpiece, or clean solar panels requires the cognition and manipulation capabilities to facilitate such compliant interaction. Evolution equipped humans with advanced mental abilities to envision physical contact situations and their resulting outcome, dexterous motor skills to perform the actions accordingly, as well as a sense of quality to rate the outcome of the task. In order to achieve human-like performance, a robot must provide the necessary methods to represent, plan, execute, and interpret compliant manipulation tasks. This dissertation covers those four steps of reasoning in the concept of intelligent physical compliance. The contributions advance the capabilities of service robots by combining artificial intelligence reasoning methods and control strategies for compliant manipulation. A classification of manipulation tasks is conducted to identify the central research questions of the addressed topic. Novel representations are derived to describe the properties of physical interaction. Special attention is given to wiping tasks, which are predominant in everyday environments. It is investigated how symbolic task descriptions can be translated into meaningful robot commands. A particle distribution model is used to plan goal-oriented wiping actions and predict the quality according to the anticipated result. The planned tool motions are converted into the joint space of the humanoid robot Rollin' Justin to perform the tasks in the real world. In order to execute the motions in a physically compliant fashion, a hierarchical whole-body impedance controller is integrated into the framework. The controller is automatically parameterized with respect to the requirements of the particular task. Haptic feedback is utilized to infer contact and interpret the performance semantically. Finally, the robot is able to compensate for possible disturbances as it plans additional recovery motions while effectively closing the cognitive control loop. Among others, the developed concept is applied in an actual space robotics mission, in which an astronaut aboard the International Space Station (ISS) commands Rollin' Justin to maintain a Martian solar panel farm in a mock-up environment. This application demonstrates the far-reaching impact of the proposed approach and the associated opportunities that emerge with the availability of cognition-enabled service robots.
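    The dissertation uses a particle distribution model to plan and rate wiping actions; as a rough sketch under simplifying assumptions (a flat unit-square surface, a circular tool footprint, a fixed lawn-mower path), wiping quality could be estimated as the fraction of dirt particles swept by the planned tool path. Everything in the sketch is an illustrative assumption, not the dissertation's implementation.

```python
# Rough sketch of a particle-based wiping-quality estimate: dirt particles are
# scattered on a flat surface and counted as removed when the circular tool
# footprint passes over them along the planned path.
import random

def wiping_quality(path, particles, tool_radius=0.05):
    """Return the fraction of particles swept by a tool following `path`."""
    removed = set()
    for px, py in path:
        for i, (x, y) in enumerate(particles):
            if (x - px) ** 2 + (y - py) ** 2 <= tool_radius ** 2:
                removed.add(i)
    return len(removed) / len(particles)

random.seed(0)
dirt = [(random.random(), random.random()) for _ in range(200)]       # 1 m x 1 m surface
sweep = [(x / 20.0, y / 10.0) for y in range(11) for x in range(21)]  # lawn-mower path
print(f"estimated coverage: {wiping_quality(sweep, dirt):.0%}")
```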

    Human-robot interaction using a behavioural control strategy

    PhD Thesis. A topical and important aspect of robotics research is in the area of human-robot interaction (HRI), which addresses the issue of cooperation between a human and a robot to allow tasks to be shared in a safe and reliable manner. This thesis focuses on the design and development of an appropriate set of behaviour strategies for human-robot interactive control by first understanding how an equivalent human-human interaction (HHI) can be used to establish a framework for a robotic behaviour-based approach. To achieve the above goal, two preliminary HHI experimental investigations were initiated in this study. The first was designed to evaluate the human dynamic response using a one degree-of-freedom (DOF) HHI rectilinear test in which the handler passes a compliant object to the receiver along a constrained horizontal path. The human dynamic response while executing the HHI rectilinear task was investigated using a Box-Behnken design of experiments [Box and Hunter, 1957] and was based on the McRuer crossover model [McRuer et al. 1995]. To mimic a real-world human-human object handover task in which the handler passes an object to the receiver in a 3D workspace, a second, more substantive one-DOF HHI baton handover task was developed. The HHI object handover tests were designed to understand the dynamic behavioural characteristics of the human participants, in which the handler was required to dexterously pass an object to the receiver in a timely and natural manner. The profiles of interactive forces between the handler and receiver were measured as a function of time, and how they are modulated while performing the tasks was evaluated. Three key parameters were used to identify the physical characteristics of the human participants: peak interactive force (fmax), transfer time (Ttrf), and work done (W). These variables were subsequently used to design and develop an appropriate set of force and velocity control strategies for a six-DOF Stäubli robot manipulator arm (TX60) working in a human-robot interactive environment. The optimal design of the software and hardware controller implementation for the robot system was successfully established in keeping with a behaviour-based approach. External force control based on proportional plus integral (PI) and fuzzy logic control (FLC) algorithms was adopted to control the robot end-effector velocity and interactive force in real time. The results of interactive experiments with human-to-robot and robot-to-human handover tasks allowed a comparison of the PI and FLC control strategies. The quantitative performance of the robot velocity and force control can be considered acceptable for human-robot interaction; both schemes provided effective performance during the robot-human object handover tasks, where the robot was able to successfully pass the object from or to the human in a safe, reliable and timely manner. However, after careful analysis of the human-robot handover test results, the FLC scheme was shown to be superior to PI control by actively compensating for the dynamics of the non-linear system, demonstrating better overall performance and stability. The FLC also showed superior sensitivity to small error changes compared to PI control, which is an advantage in establishing effective robot force control.
    The results of survey responses from the participants were in agreement with the parallel test outcomes, demonstrating significant satisfaction with the overall performance of the human-robot interactive system, as measured by an average rating of 4.06 on a five-point scale. In brief, this research has contributed the foundations for long-term research, particularly in the development of an interactive real-time robot force-control system, which enables the robot manipulator arm to cooperate with a human to facilitate the dexterous transfer of objects in a safe and speedy manner.
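    To make the external force-control idea concrete, here is a minimal discrete-time PI law that maps the error between a desired and a measured handover force to an end-effector velocity command; the gains, sample time and saturation limit are illustrative assumptions, not the thesis parameters.

```python
# Minimal discrete-time PI force controller sketch (illustrative gains, not the
# thesis values): the force error drives the end-effector velocity so the robot
# yields when the human pulls on the object during a handover.

class PIForceController:
    def __init__(self, kp=0.004, ki=0.01, dt=0.01, v_max=0.1):
        self.kp, self.ki, self.dt, self.v_max = kp, ki, dt, v_max
        self.integral = 0.0

    def update(self, f_desired: float, f_measured: float) -> float:
        error = f_desired - f_measured
        self.integral += error * self.dt
        v = self.kp * error + self.ki * self.integral
        return max(-self.v_max, min(self.v_max, v))  # saturate for safety

ctrl = PIForceController()
# Example: the human pulls with 15 N while the robot tries to hold 5 N, so the
# controller commands the arm to move with the pull rather than resist it.
print(ctrl.update(f_desired=5.0, f_measured=15.0))
```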

    Human-Robot Collaborations in Industrial Automation

    Technology is changing the manufacturing world. For example, sensors are being used to track inventories from the manufacturing floor up to a retail shelf or a customer’s door. These types of interconnected systems have been called the fourth industrial revolution, also known as Industry 4.0, and are projected to lower manufacturing costs. As industry moves toward these integrated technologies and lower costs, engineers will need to connect these systems via the Internet of Things (IoT). These engineers will also need to design how these connected systems interact with humans. The focus of this Special Issue is the smart sensors used in these human–robot collaborations.

    Proceedings of the 9th international conference on disability, virtual reality and associated technologies (ICDVRAT 2012)

    The proceedings of the conference.