
    Collaborative Bimanual Manipulation Using Optimal Motion Adaptation and Interaction Control: Retargeting Human Commands to Feasible Robot Control References

    This article presents a robust and reliable human–robot collaboration (HRC) framework for bimanual manipulation. We propose an optimal motion adaptation method to retarget arbitrary human commands to feasible robot pose references while maintaining payload stability. The framework comprises three modules: 1) a task-space sequential equilibrium and inverse kinematics optimization (task-space SEIKO) for retargeting human commands and enforcing feasibility constraints; 2) an admittance controller to facilitate compliant human–robot physical interactions; and 3) a low-level controller improving stability during physical interactions. Experimental results show that the proposed framework successfully adapted infeasible and dangerous human commands into continuous motions within safe boundaries, and achieved stable grasping and maneuvering of large and heavy objects on a real dual-arm robot via teleoperation and physical interaction. Furthermore, the framework demonstrated its capability in a building-block assembly task and an industrial power-connector insertion task.
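    The admittance-control module in such a framework can be sketched in a few lines: the measured interaction force drives a virtual mass-spring-damper whose displacement offsets the robot's pose reference. This is a minimal single-axis sketch with illustrative gains and simple Euler integration, not the paper's implementation:

```python
import numpy as np

def admittance_step(x, v, f_ext, M, D, K, dt):
    """One Euler step of the admittance law  M a + D v + K x = f_ext.
    x is the compliant offset added to the nominal pose reference."""
    a = np.linalg.solve(M, f_ext - D @ v - K @ x)
    v = v + a * dt
    x = x + v * dt
    return x, v

# 1-DoF example: a constant 5 N push deflects the reference until the
# virtual spring balances it (steady state x = f / k = 5 / 100 = 0.05 m).
M = np.diag([2.0])     # virtual mass [kg]
D = np.diag([40.0])    # virtual damping [N*s/m]
K = np.diag([100.0])   # virtual stiffness [N/m]
x, v = np.zeros(1), np.zeros(1)
for _ in range(5000):  # 10 s at a 500 Hz control rate
    x, v = admittance_step(x, v, np.array([5.0]), M, D, K, dt=0.002)
```

With these gains the virtual system is overdamped, so the offset converges smoothly to the spring equilibrium without oscillation.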

    Learning Task Priorities from Demonstrations

    Bimanual operations in humanoids offer the possibility to carry out more than one manipulation task at the same time, which in turn introduces the problem of task prioritization. We address this problem from a learning-from-demonstration perspective, by extending the Task-Parameterized Gaussian Mixture Model (TP-GMM) to Jacobian and null-space structures. The proposed approach is tested on bimanual skills but can be applied in any scenario where the prioritization between potentially conflicting tasks needs to be learned. We evaluate the proposed framework in two different humanoid tasks requiring the learning of priorities and in a loco-manipulation scenario, showing that the approach can be exploited to learn the prioritization of multiple tasks in parallel. Comment: Accepted for publication in the IEEE Transactions on Robotics.
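    The Jacobian and null-space structures underlying such task prioritization follow the classical strict-priority scheme: the secondary task velocity is projected into the null space of the primary task's Jacobian. The following is a minimal two-task sketch of that scheme (not the TP-GMM extension itself, in which the priorities are learned from demonstrations):

```python
import numpy as np

def prioritized_velocities(J1, dx1, J2, dx2):
    """Strict-priority resolution of two velocity tasks: the secondary
    task acts only in the null space of the primary Jacobian J1, so it
    can never disturb the primary task."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1        # null-space projector
    dq1 = J1_pinv @ dx1                             # primary task solution
    return dq1 + np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq1)

# 4-DoF toy arm: the two 1-D tasks use disjoint joints, so both are met.
J1 = np.array([[1.0, 0.0, 1.0, 0.0]])
J2 = np.array([[0.0, 1.0, 0.0, 1.0]])
dq = prioritized_velocities(J1, np.array([1.0]), J2, np.array([2.0]))
```

When the tasks conflict, the same code still satisfies the primary task exactly and realizes the secondary one only as far as the remaining redundancy allows.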

    Cable-driven parallel mechanisms for minimally invasive robotic surgery

    Minimally invasive surgery (MIS) has revolutionised surgery by providing faster recovery times, fewer post-operative complications, improved cosmesis and reduced pain for the patient. Surgical robots are used to further decrease the invasiveness of procedures, by using yet smaller and fewer incisions or by using natural orifices as entry points. However, many robotic systems still suffer from technical challenges such as achieving sufficient instrument dexterity and payload capacity, leading to limited adoption in clinical practice. Cable-driven parallel mechanisms (CDPMs) have unique properties which can be used to overcome existing challenges in surgical robotics. These beneficial properties include high end-effector payloads, efficient force transmission and a large configurable instrument workspace. However, the use of CDPMs in MIS is largely unexplored. This research presents the first structured exploration of CDPMs for MIS and demonstrates the potential of this type of mechanism through the development of multiple prototypes: the ESD CYCLOPS, CDAQS, SIMPLE, neuroCYCLOPS and microCYCLOPS. One key challenge for MIS is the access method used to introduce CDPMs into the body; three different access methods are presented by the prototypes. By focusing on the minimally invasive access methods through which CDPMs are introduced into the body, the thesis provides a framework that can be used by researchers, engineers and clinicians to identify future opportunities for CDPMs in MIS. Additionally, through user studies and pre-clinical studies, these prototypes demonstrate that this type of mechanism has several key advantages for surgical applications in which haptic feedback, safe automation or a high payload is required. These advantages, combined with the different access methods, demonstrate that CDPMs can play a key role in the advancement of MIS technology.
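    The high-payload and force-transmission properties of CDPMs stem from their statics: the end-effector wrench is a nonnegative combination of unit cable directions, since cables can only pull. A minimal planar feasibility check (the geometry and numbers are illustrative, not taken from any of the prototypes):

```python
import numpy as np
from scipy.optimize import nnls

# Planar point end-effector driven by three cables.  Columns of A are
# unit vectors from the end-effector toward the cable anchors; cables
# only pull, so the statics  A @ t = w  require tensions t >= 0.
anchors = np.array([[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]])
A = (anchors / np.linalg.norm(anchors, axis=1, keepdims=True)).T  # 2 x 3

w = np.array([0.6, 0.2])        # desired planar force on the effector
t, residual = nnls(A, w)        # best nonnegative tension distribution
feasible = residual < 1e-9      # w lies inside the cone the cables span
```

Because the three cable directions positively span the plane, any planar force is feasible here; with fewer cables or a poor anchor layout the residual becomes nonzero and the wrench is unreachable.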

    Muscleless motor synergies and actions without movements: from motor neuroscience to cognitive robotics

    Emerging trends in neuroscience are providing converging evidence that cortical networks in predominantly motor areas are activated in several contexts related to ‘action’ that do not cause any overt movement. Indeed, for any complex body, human or embodied robot, inhabiting unstructured environments, the dual processes of shaping motor output during action execution and providing the self with information related to the feasibility, consequences and understanding of potential actions (of oneself or others) must seamlessly alternate during goal-oriented behaviors and social interactions. While prominent approaches like Optimal Control and Active Inference converge on the role of forward models, they diverge on the underlying computational basis. In this context, revisiting older ideas from motor control like the Equilibrium Point Hypothesis and synergy formation, this article offers an alternative perspective emphasizing the functional role of a ‘plastic, configurable’ internal representation of the body (body schema) as a critical link enabling the seamless continuum between motor control and motor imagery. With the central proposition that both “real and imagined” actions are consequences of an internal simulation process achieved through passive goal-oriented animation of the body schema, the computational/neural basis of muscleless motor synergies (and the ensuing simulated actions without movements) is explored.
The rationale behind this perspective is articulated in the context of several interdisciplinary studies in motor neuroscience (for example, intracranial depth recordings from the parietal cortex, and fMRI studies highlighting a shared cortical basis for action execution, imagination and understanding), animal cognition (in particular, tool-use and neuro-rehabilitation experiments revealing how coordinated tools are incorporated as an extension to the body schema) and pertinent challenges towards building cognitive robots that can seamlessly “act, interact, anticipate and understand” in unstructured natural living spaces.

    Representation and control of coordinated-motion tasks for human-robot systems

    It is challenging for robots to perform various tasks in a human environment, because many human-centered tasks require coordination between both hands and may often involve cooperation with another human. Although human-centered tasks require different types of coordinated movements, most existing methodologies have focused only on specific types of coordination. This thesis aims at the description and control of coordinated-motion tasks for human-robot systems, i.e., humanoid robots as well as multi-robot and human-robot systems. First, for bimanually coordinated-motion tasks in dual-manipulator systems, we propose the Extended-Cooperative-Task-Space (ECTS) representation, which extends the existing Cooperative-Task-Space (CTS) representation based on kinematic models of human bimanual movements from biomechanics. The proposed ECTS representation can represent the whole spectrum of dual-arm motion/force coordination using two sets of ECTS motion/force variables in a unified manner. The type of coordination can be easily chosen by two meaningful coefficients, and during coordinated-motion tasks each set of variables directly describes two different aspects of coordinated motion and force behaviors. Thus, the operator can specify coordinated-motion/force tasks more intuitively in high-level descriptions, and the specified tasks can be easily reused in other situations with greater flexibility. Moreover, we present consistent procedures for using the ECTS representation for task specification in the upper-body and lower-body subsystems of humanoid robots in order to perform manipulation and locomotion tasks, respectively. In addition, we propose and discuss performance indices derived from the ECTS representation, which can be used to evaluate and optimize the performance of any type of dual-arm manipulation task.
We show that using the ECTS representation for specifying both dual-arm manipulation and biped locomotion tasks can greatly simplify the motion planning process, allowing the operator to focus on high-level descriptions of those tasks. Both upper-body and lower-body task specifications are demonstrated by specifying whole-body task examples on a Hubo II+ robot carrying out dual-arm manipulation as well as biped locomotion tasks in a simulation environment. We also present results from experiments on a dual-arm robot (Baxter) in which various types of coordinated-motion tasks were teleoperated using a single 6D mouse interface. The specified upper- and lower-body tasks can be considered as coordinated motions with constraints. In order to express various constraints imposed across the whole body, we discuss the modeling of the whole-body structure and the computations for robotic systems having multiple kinematic chains. We then present a whole-body controller formulated as a quadratic program, which can take different types of constraints into account in a prioritized manner. We validate the whole-body controller with simulation results on a Hubo II+ robot performing specified whole-body task examples with a number of motion and force constraints as well as actuation limits. Lastly, we discuss an extension of the ECTS representation, called the Hierarchical Extended-Cooperative-Task-Space (H-ECTS) framework, which uses tree-structured graphical representations for coordinated-motion tasks of multi-robot and human-robot systems. The H-ECTS framework is validated by experimental results on two Baxter robots cooperating with each other as well as with an additional human partner.
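    The flavor of cooperative task variables with meaningful coefficients can be conveyed with a position-only sketch. This is a simplification for illustration: the actual ECTS representation operates on full poses and force variables, and the definitions below are assumptions, not the thesis's exact formulas:

```python
import numpy as np

def coop_variables(p1, p2, alpha):
    """Map two arm positions to cooperative variables.  alpha = 0.5
    gives a symmetric absolute/relative split in the style of the
    classical CTS; alpha = 0 or 1 makes one arm the leader."""
    p_abs = (1.0 - alpha) * p1 + alpha * p2   # shared ("absolute") motion
    p_rel = p2 - p1                           # between-arm ("relative") motion
    return p_abs, p_rel

def arm_targets(p_abs, p_rel, alpha):
    """Inverse map: recover the per-arm targets from the variables."""
    return p_abs - alpha * p_rel, p_abs + (1.0 - alpha) * p_rel

p1 = np.array([0.2, 0.0, 0.5])
p2 = np.array([0.4, 0.1, 0.5])
p_abs, p_rel = coop_variables(p1, p2, alpha=0.3)
q1, q2 = arm_targets(p_abs, p_rel, alpha=0.3)   # round-trips to p1, p2
```

Commanding only p_abs moves both arms together (e.g. carrying an object), while commanding only p_rel changes their relative placement (e.g. reorienting a grasped object), which is what makes high-level task specification intuitive.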

    Manipulation Planning for Forceful Human-Robot Collaboration

    This thesis addresses the problem of manipulation planning for forceful human-robot collaboration. Particularly, the focus is on the scenario where a human applies a sequence of changing external forces through forceful operations (e.g. cutting a circular piece off a board) on an object that is grasped by a cooperative robot. We present a range of planners that 1) enable the robot to stabilize and position the object under the human-applied forces by exploiting supports from both object-robot and object-environment contacts; 2) improve task efficiency by minimizing the need for configuration and grasp changes required by the changing external forces; and 3) improve human comfort during the forceful interaction by optimizing defined comfort criteria. We first focus on the case of using only robotic grasps, where the robot must grasp/regrasp the object multiple times to keep it stable under the changing external forces. We introduce a planner that can generate an efficient manipulation plan by intelligently deciding when the robot should change its grasp on the object as the human applies the forces, and by choosing subsequent grasps so as to minimize the number of regrasps required in the long term. The planner searches for such an efficient plan by first finding a minimal sequence of grasp configurations that are able to keep the object stable under the changing forces, and then generating connecting trajectories to switch between the planned configurations, i.e. planning regrasps. We perform the search for such a grasp (configuration) sequence by sampling stable configurations for the external forces, building an operation graph using these stable configurations, and then searching the operation graph to minimize the number of regrasps.
We solve the problem of bimanual regrasp planning under the assumption of no support surface, enabling the robot to regrasp an object in the air by finding intermediate configurations at which both bimanual and unimanual grasps can hold the object stable under gravity. We present a variety of experiments to show the performance of our planner, particularly in minimizing the number of regrasps for forceful manipulation tasks and planning stable regrasps. We then explore the problem of using both object-environment contacts and object-robot contacts, which enlarges the set of stable configurations and thus boosts the robot's capability of stabilizing the object under external forces. We present a planner that can intelligently exploit the environment's and the robot's stabilization capabilities within a unified planning framework to search for a minimal number of stable contact configurations. A major computational bottleneck in this planner is the static stability analysis of a large number of candidate configurations. We introduce a containment relation between different contact configurations to efficiently prune the stability-checking process. We present a set of real-robot and simulated experiments illustrating the effectiveness of the proposed framework, together with a detailed analysis of the proposed containment relation, particularly its impact on planning efficiency. We then present a planning algorithm to further improve the cooperative robot's behaviour with respect to human comfort during the forceful human-robot interaction. In particular, we are interested in empowering the robot with the capability of grasping and positioning the object not only to ensure the object's stability against the human-applied forces, but also to improve the human's experience and comfort during the interaction. We address human comfort as the muscular activation level required to apply a desired external force, together with the human's spatial perception, i.e. the so-called peripersonal-space comfort during the interaction. We propose to maximize both comfort metrics to optimize the robot and object configuration such that the human can apply a forceful operation comfortably. We present a set of human-robot drilling and cutting experiments which verify the effectiveness of the proposed metrics in improving the overall comfort and HRI experience, without compromising force stability. In addition to the above planning work, we present a conic formulation that approximates the distribution of a forceful operation in the wrench space with a polyhedral cone, enabling the planner to efficiently assess the stability of a system configuration even in the presence of the force uncertainties inherent in human-applied forceful operations. We also develop a graphical user interface that human users can easily use to specify various forceful tasks, i.e. sequences of forceful operations on selected objects, in an interactive manner. The user interface ties together human task specification, on-demand manipulation planning and robot-assisted fabrication. We present a set of human-robot experiments using the interface, demonstrating the feasibility of our system. In short, this thesis presents a series of planners for object manipulation under changing external forces. We show that object contacts with the robot and the environment enable the robot to manipulate an object under external forces, and that making the most of these contacts can eliminate redundant changes during manipulation, e.g. regrasps, thus improving task efficiency and smoothness. We also show the necessity of optimizing human comfort when planning forceful human-robot manipulation tasks. We believe the work presented here can be a key component of a human-robot collaboration framework.
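    The search over the operation graph for a minimal grasp sequence can be illustrated with a small dynamic program: holding a grasp through the next forceful operation is free when it remains stable, and switching grasps costs one regrasp. This sketch abstracts away the configuration sampling and regrasp trajectory planning described above:

```python
def min_regrasps(stable):
    """stable[i] is the set of grasps that keep the object stable during
    forceful operation i.  cost[g] tracks the fewest regrasps needed to
    reach the current operation while holding grasp g."""
    cost = {g: 0 for g in stable[0]}
    for nxt in stable[1:]:
        switch = min(cost.values()) + 1   # regrasp from the cheapest grasp
        cost = {g: min(switch, cost[g]) if g in cost else switch
                for g in nxt}
    return min(cost.values())

# Hold grasp B through the first two operations, then regrasp once to C.
plan_cost = min_regrasps([{"A", "B"}, {"B", "C"}, {"C"}])
```

Running on the three-operation example yields a cost of one regrasp; the same recurrence extends to arbitrary operation sequences in time linear in the number of (operation, grasp) pairs.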

    Motion planning using synergies: application to anthropomorphic dual-arm robots

    Motion planning is a traditional field in robotics, but new problems nevertheless appear incessantly, due to continuous advances in robot development. In order to solve these new problems, as well as to improve existing solutions to classical problems, new approaches are being proposed. A paradigmatic case is humanoid robotics, since advances in this field require motion planners not only to look efficiently for an optimal solution in the classic sense, i.e. optimizing the energy consumed or the time taken in executing the plan, but also to look for human-like solutions, i.e. requiring the robot movements to be similar to those of human beings. This anthropomorphism in robot motion is desired not only for aesthetic reasons; it is also needed to allow better and safer human-robot collaboration: humans can more easily predict anthropomorphic robot motions, thus avoiding collisions and enhancing collaboration with the robot. Nevertheless, obtaining satisfactory performance from these anthropomorphic robotic systems requires automatic planning of the movements, which remains an arduous task, since the complexity of the planning problem increases exponentially with the number of degrees of freedom of the robotic system. This doctoral thesis tackles the problem of planning the motions of dual-arm anthropomorphic robots (optionally with a mobile base). The main objective is twofold: obtaining robot motions that are both efficient and human-like at the same time. Trying to mimic human movements while reducing the complexity of the search space for planning purposes leads to the concept of synergies, which can be conceptually defined as correlations (in the joint configuration space as well as in the joint velocity space) between the degrees of freedom of the system.
This work proposes new sampling-based motion-planning procedures that exploit the concept of synergies, both in the configuration space and in the velocity space, coordinating the movements of the arms, the hands and the mobile base of mobile anthropomorphic dual-arm robots.
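    A common concrete reading of "synergies as correlations between degrees of freedom" is a principal-component analysis of recorded joint configurations, with planning then performed in the span of the leading components. The sketch below illustrates that reading; it is an assumption for illustration, not the thesis's exact synergy computation:

```python
import numpy as np

def extract_synergies(Q, n_syn):
    """Rows of Q are recorded joint configurations.  The leading
    principal directions of the data are taken as synergies; planning
    can then search their low-dimensional span instead of the full
    configuration space."""
    mean = Q.mean(axis=0)
    _, _, Vt = np.linalg.svd(Q - mean, full_matrices=False)
    S = Vt[:n_syn]                              # n_syn x n_dof basis
    project = lambda q: S @ (q - mean)          # config -> synergy coords
    reconstruct = lambda z: mean + S.T @ z      # synergy coords -> config
    return S, project, reconstruct

# Synthetic data lying on a 2-D "synergy" subspace of a 5-DoF arm.
rng = np.random.default_rng(0)
Q = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5)) + 0.1
S, project, reconstruct = extract_synergies(Q, n_syn=2)
q_hat = reconstruct(project(Q[0]))   # near-exact for on-subspace data
```

A sampling-based planner would then draw samples in the n_syn-dimensional synergy coordinates and map them back through `reconstruct`, which is what shrinks the search space while keeping the motions human-like.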