248 research outputs found

    Towards a self-collision aware teleoperation framework for compound robots

    This work lays the foundations of a self-collision-aware teleoperation framework for compound robots. The need for a haptic-enabled system that guarantees self-collision and joint-limit avoidance for complex robots is the main motivation behind this paper. The objective of the proposed system is to constrain the user to teleoperate a slave robot inside its safe workspace region through the application of force cues on the master side of the bilateral teleoperation system. A series of simulated experiments has been performed on the KUKA KMR iiwa mobile robot; however, due to its generality, the framework can be easily extended to other robots. The experiments have shown the applicability of the proposed approach to ordinary teleoperation systems without altering their stability properties. The benefits introduced by this framework enable the user to safely teleoperate any complex robotic system without worrying about self-collisions and joint limitations.
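    The idea of force cues that keep the operator inside a safe region can be illustrated with a minimal per-joint sketch (this is an assumption-laden toy, not the paper's controller; the margin and gain values are invented for illustration): when a joint approaches a limit, a repulsive force proportional to the penetration into a safety margin is rendered on the master device.

```python
def limit_avoidance_force(q, q_min, q_max, margin=0.2, k=5.0):
    """Per-joint repulsive force cue (illustrative sketch).

    Zero inside the safe region; grows linearly once a joint enters
    the `margin` band next to a limit, pushing the operator back.
    `margin` (rad) and `k` (N/rad) are hypothetical tuning values.
    """
    forces = []
    for qi, lo, hi in zip(q, q_min, q_max):
        f = 0.0
        if qi < lo + margin:        # too close to the lower limit: push up
            f = k * (lo + margin - qi)
        elif qi > hi - margin:      # too close to the upper limit: push down
            f = -k * (qi - (hi - margin))
        forces.append(f)
    return forces
```

    A self-collision cue could be built the same way, replacing the joint-limit distance with a distance between link pairs.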

    Haptic-Based Shared-Control Methods for a Dual-Arm System

    We propose novel haptic guidance methods for a dual-arm telerobotic manipulation system, which are able to deal with several different constraints, such as collisions, joint limits, and singularities. We combine the haptic guidance with shared-control algorithms for autonomous orientation control and collision avoidance, meant to further simplify the execution of grasping tasks. The stability of the overall system in various control modalities is presented and analyzed via passivity arguments. In addition, a human-subject study is carried out to assess the effectiveness and applicability of the proposed control approaches in both simulated and real scenarios. Results show that the proposed haptic-enabled shared-control methods significantly improve the performance of grasping tasks with respect to classic teleoperation with neither haptic guidance nor shared control.

    Contact aware robust semi-autonomous teleoperation of mobile manipulators

    In the context of human-robot collaboration, cooperation, and teaming, mobile manipulators are widely used in applications involving environments that are unpredictable or hazardous for human operators, such as space operations, waste management, and search and rescue in disaster scenarios. In these applications the manipulator's motion is controlled remotely by specialized operators. Teleoperation of manipulators is not a straightforward task and in many practical cases is a common source of failures. Common issues during the remote control of manipulators are: control complexity that grows with the number of mechanical degrees of freedom; inadequate or incomplete feedback to the user (i.e., limited visualization or knowledge of the environment); and predefined motion directives that may be incompatible with constraints or obstacles imposed by the environment. In the latter case, part of the manipulator may get trapped or blocked by some obstacle in the environment, a failure that cannot be easily detected, isolated, or counteracted remotely. While control complexity can be reduced by the introduction of motion directives or by abstraction of the robot motion, the real-time constraint of the teleoperation task requires transferring the least possible amount of data over the system's network, thus limiting the number of physical sensors that can be used to model the environment. It is therefore fundamental to define alternative perceptive strategies that accurately characterize different interactions with the environment without relying on specific sensory technologies. In this work, we present a novel approach for safe teleoperation that takes advantage of model-based proprioceptive measurement of the robot dynamics to robustly identify unexpected collisions or contact events with the environment.
Each identified collision is translated on the fly into a set of local motion constraints, allowing the system redundancies to be exploited in the computation of intelligent control laws for automatic reaction, without requiring human intervention and minimizing the disturbance to the task execution (or, equivalently, the operator's effort). More precisely, the described system consists of two building blocks: the first detects unexpected interactions with the environment (perceptive block); the second provides intelligent and autonomous reaction after the stimulus (control block). The perceptive block is responsible for contact-event identification. In short, the approach is based on the claim that a sensorless collision-detection method for robot manipulators can be extended to the field of mobile manipulators by embedding it within a statistical learning framework. The control block deals with the intelligent and autonomous reaction after the contact or impact with the environment occurs, and consists of a motion-abstraction controller with a prioritized set of constraints, where the highest priority corresponds to the robot reconfiguration after a collision is detected; once all related dynamical effects have been compensated, the controller switches back to the basic control mode.
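    Sensorless collision detection from proprioceptive data is commonly built on a generalized-momentum residual: the observer integrates the momentum expected from commanded and modeled torques, and any persistent mismatch with the measured momentum estimates an unmodeled external torque. The single-joint, unit-inertia sketch below is a generic illustration of that idea (the gain, time step, and simplifications are assumptions, not the abstract's actual implementation):

```python
class MomentumObserver:
    """Discrete-time generalized-momentum residual for one joint
    (illustrative sketch, unit inertia assumed).

    A residual persistently far from zero indicates an unmodeled
    external torque, e.g. a collision or contact event.
    """

    def __init__(self, gain=50.0, dt=0.001):
        self.K, self.dt = gain, dt   # hypothetical observer gain and sample time
        self.p_hat = 0.0             # estimated generalized momentum
        self.r = 0.0                 # residual = estimated external torque

    def update(self, p_meas, tau_cmd, tau_model):
        # Integrate the expected momentum from the commanded torque minus
        # the modeled terms (gravity/Coriolis), feeding the previous
        # residual back so the estimate tracks the true momentum.
        self.p_hat += (tau_cmd - tau_model + self.r) * self.dt
        self.r = self.K * (p_meas - self.p_hat)
        return self.r
```

    A threshold on |r| (or, as the abstract suggests, a statistical classifier over the residual) then flags contact events without any dedicated force sensor.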

    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in controlling and collaborating on manipulation task behaviors. However, this remains a significant challenge, given that many WIMP-style tools require superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has nevertheless garnered momentum in recent years; robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, as autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques allowing non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six-degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries; (2) task specification with an unordered list of goal predicates; and (3) guiding task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction.
Empirical results from four user studies show our interface was strongly preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.

    Development of the Robot Manager Subsystem for the TEMOTO Software Framework

    Robots provide an opportunity to spare humans from tasks that are repetitive, require high precision, or involve hazardous environments. Robots are often composed of multiple robotic units, such as mobile manipulators that integrate object-manipulation and traversal capabilities. Additionally, a group of robots, i.e., a multi-robot system, can be utilized to solve a common goal. However, the more elements are added to the system, the more complicated it is to control. TeMoto is a ROS package intended for developing human-robot collaboration and multi-robot applications, where TeMoto Robot Manager (TRM), a subsystem of TeMoto, is designed to unify the control of the main robotic components: manipulators, mobile bases, and grippers. However, the implementation of TRM was incomplete prior to this work, having no functionality for controlling mobile bases and grippers. This thesis extends the functionality of TeMoto Robot Manager by implementing the aforementioned missing features, thus facilitating the integration of compound robots and multi-robot systems. The outcome of this work is demonstrated in an object-transportation scenario incorporating a heterogeneous multi-robot system that consists of two manipulators, two grippers, and a mobile base.

    Towards a framework for socially interactive robots

    In recent decades, research in the field of social robotics has grown considerably. The development of different types of robots and their roles within society are gradually expanding. Robots endowed with social skills are intended for a variety of applications; for example, as interactive teachers and educational assistants, to support diabetes management in children, to help elderly people with special needs, as interactive actors in the theater, or even as assistants in hotels and shopping centers. The RSAIT research team has been working in several areas of robotics, in particular control architectures, robot exploration and navigation, machine learning, and computer vision. The work presented here aims to add a new layer to the previous development: the human-robot interaction layer, which focuses on the social capabilities a robot should display when interacting with people, such as expressing and perceiving emotions, sustaining high-level dialogue, learning models of other agents, establishing and maintaining social relationships, using natural means of communication (gaze, gestures, etc.), displaying distinctive personality and character, and learning social competencies. In this doctoral thesis, we try to contribute our grain of sand to the basic questions that arise when we think about social robots: (1) How do we humans communicate with (or operate) social robots?; and (2) How do social robots act with us? Along those lines, the work has been developed in two phases: in the first, we focused on exploring, from a practical point of view, several ways humans use to communicate with robots in a natural manner.
In the second, we additionally investigated how social robots should act with the user. Regarding the first phase, we developed three natural user interfaces intended to make interaction with social robots more natural. To test these interfaces, two applications of different use were developed: guide robots and a humanoid-robot control system for entertainment purposes. Working on these applications allowed us to endow our robots with some basic skills, such as navigation, robot-to-robot communication, and speech recognition and understanding capabilities. In the second phase, on the other hand, we focused on identifying and developing the basic behavior modules this kind of robot needs in order to be socially believable and trustworthy while acting as a social agent. A framework for socially interactive robots was developed that allows robots to express different types of emotions and display natural, human-like body language according to the task at hand and the environmental conditions. The different development stages of our social robots were validated through public performances. Exposing our robots to the public in those performances has become an essential tool for qualitatively measuring the social acceptance of the prototypes we are developing. In the same way that robots need a physical body to interact with the environment and become intelligent, social robots need to participate socially in the real tasks for which they have been developed, in order to improve their sociability.

    Advanced teleoperation and control system for industrial robots based on augmented virtuality and haptic feedback

    Some industrial tasks are still performed mainly by hand due to their complexity, as is the case of the surface treatment operations (such as sanding, deburring, finishing, grinding, polishing, etc.) used to repair defects. This work develops an advanced teleoperation and control system for industrial robots in order to assist the human operator in performing such tasks. On the one hand, the controlled robotic system provides strength and accuracy, holding the tool, keeping the right tool orientation, and guaranteeing a smooth approach to the workpiece. On the other hand, the advanced teleoperation provides security and comfort to the user when performing the task. In particular, the proposed teleoperation uses augmented virtuality (i.e., a virtual world that includes non-modeled real-world data) and haptic feedback to provide the user with an immersive virtual experience when remotely teleoperating the tool of the robot system to treat arbitrary regions of the workpiece surface. The method is illustrated with a car-body surface treatment operation, although it can be easily extended to other surface treatment applications or even to other industrial tasks where the human operator may benefit from robotic assistance. The effectiveness of the proposed approach is shown with several experiments using a 6R robotic arm. Moreover, a comparison between the performance obtained manually by an expert and that obtained with the proposed method has also been conducted in order to show the suitability of the proposed approach.

    Combining a hierarchical task network planner with a constraint satisfaction solver for assembly operations involving routing problems in a multi-robot context

    This work addresses the combination of a symbolic hierarchical task network planner and a constraint satisfaction solver for the vehicle routing problem in a multi-robot context for structure assembly operations. Each planner has its own problem domain and search space, and the article describes how both planners interact in a loop, sharing information in order to improve the cost of the solutions. The vehicle routing problem solver gives an initial assignment of parts to robots, making the distribution based on the distances among parts and robots, and also trying to maximize the parallelism of the future assembly operations by evaluating the dependencies among the parts assigned to each robot during the process. Then, the hierarchical task network planner computes a schedule for the given assignment and estimates its cost in terms of time spent on the structure assembly. This cost value is then given back to the vehicle routing problem solver as feedback to compute a better assignment, closing the loop and repeating the whole process. This interaction scheme has been tested with different constraint satisfaction solvers for the vehicle routing problem. The article presents simulation results in a scenario with a team of aerial robots assembling a structure, comparing the results obtained with different configurations of the vehicle routing problem solver and showing the suitability of this approach.
    Funding: European Union ARCAS FP7-ICT-287617; European Union H2020-ICT-644271; European Union H2020-ICT-73166
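    The planner interaction described above is an alternating optimization loop. The sketch below shows its bare shape with pluggable solver callbacks (the function names, the loop bound, and the toy cost feedback protocol are assumptions for illustration, not the article's actual interface):

```python
def plan_assembly(parts, robots, vrp_solve, htn_schedule, iters=10):
    """Alternating VRP/HTN loop (illustrative sketch).

    `vrp_solve(parts, robots, cost_feedback)` proposes an assignment of
    parts to robots; `htn_schedule(assignment)` schedules it and returns
    an estimated assembly time, fed back to bias the next proposal.
    Returns the best assignment seen and its cost.
    """
    best_assign, best_cost = None, float("inf")
    cost_feedback = None
    for _ in range(iters):
        assign = vrp_solve(parts, robots, cost_feedback)  # routing step
        cost = htn_schedule(assign)                       # scheduling step
        if cost < best_cost:
            best_assign, best_cost = assign, cost
        cost_feedback = cost                              # close the loop
    return best_assign, best_cost
```

    Any VRP backend can be dropped in as `vrp_solve`, which mirrors how the article compares different constraint-satisfaction solvers within the same loop.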

    Nonterrestrial utilization of materials: Automated space manufacturing facility

    Four areas related to the nonterrestrial use of materials are included: (1) material resources needed for feedstock in an orbital manufacturing facility, (2) required initial components of a nonterrestrial manufacturing facility, (3) growth and productive capability of such a facility, and (4) automation and robotics requirements of the facility

    Distributed Dynamic Hierarchical Task Assignment for Human-Robot Teams

    This work implements a joint task architecture for human-robot collaborative task execution using a hierarchical task planner. The architecture allows humans and robots to work together as teammates in the same environment while following several task constraints: 1) sequential-order, 2) non-sequential, and 3) alternative execution constraints. Both the robot and the human are aware of each other's current state and allocate their next task based on the task tree. On-table tasks, such as setting up a tea table or playing a color-sequence matching game, validate the task architecture. The robot maintains an updated representation of its human teammate's task. Using this knowledge, it is also able to continuously detect the human teammate's intention towards each sub-task and coordinate with the teammate. While performing a joint task, tasks may or may not overlap; we designed a dialogue-based conversation between humans and robots to resolve conflicts in the case of overlapping tasks. Evaluating the human-robot task architecture is the next concern after validating it, and trust and trustworthiness are among the most critical metrics to explore. A study was conducted between humans and robots to create a homophily situation. Homophily means that a person feels biased towards another person because of social similarities. We conducted this study to determine whether humans can form a homophilic relationship with robots and whether there is a connection between homophily and trust. We found a correlation between homophily and trust in human-robot interactions. Furthermore, we designed a pipeline by which the robot learns a task by observing the human teammate's hand movement while conversing. The robot then constructs the task tree by itself using a genetic-algorithm (GA) learning framework,
thus removing the need for manual specification by a programmer each time the task tree is revised or updated, which makes the architecture more flexible, realistic, efficient, and dynamic. Additionally, our architecture allows the robot to comprehend the context of a situation by conversing with a human teammate and observing the surroundings. The robot can find a link between the context of the situation and the surrounding objects using an ontology approach and can perform the desired task accordingly. In summary, we propose a human-robot distributed joint task management architecture that addresses design, improvement, and evaluation under multiple constraints.
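    A task tree with the three constraint kinds named above (sequential, non-sequential, alternative) can be sketched as a small recursive structure; this toy is an illustration of the concept, not the thesis implementation, and all names are invented:

```python
class TaskNode:
    """Toy task-tree node (illustrative, not the thesis implementation).

    kind: 'seq'  - children must finish in order (sequential constraint)
          'par'  - children may finish in any order (non-sequential)
          'alt'  - completing any one child suffices (alternative)
          'leaf' - a primitive action a teammate can execute
    """

    def __init__(self, name, kind="leaf", children=None):
        self.name, self.kind = name, kind
        self.children = children or []
        self.done = False  # set on leaves when executed

    def is_done(self):
        if self.kind == "leaf":
            return self.done
        if self.kind == "alt":
            return any(c.is_done() for c in self.children)
        return all(c.is_done() for c in self.children)

    def next_tasks(self):
        """Leaf tasks currently allocatable to a human or robot teammate."""
        if self.is_done():
            return []
        if self.kind == "leaf":
            return [self]
        if self.kind == "seq":  # only the first unfinished child is eligible
            for c in self.children:
                if not c.is_done():
                    return c.next_tasks()
            return []
        # 'par' and 'alt': every unfinished child offers candidates
        return [t for c in self.children for t in c.next_tasks()]
```

    Both teammates querying `next_tasks()` against a shared tree is one simple way to realize the mutual awareness the abstract describes: each agent sees the same set of currently allocatable sub-tasks.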