
    Representation and control of coordinated-motion tasks for human-robot systems

    It is challenging for robots to perform various tasks in a human environment. This is because many human-centered tasks require coordination of both hands and often involve cooperation with another human. Although human-centered tasks require different types of coordinated movements, most existing methodologies have focused only on specific types of coordination. This thesis aims at the description and control of coordinated-motion tasks for human-robot systems; i.e., humanoid robots as well as multi-robot and human-robot systems. First, for bimanually coordinated-motion tasks in dual-manipulator systems, we propose the Extended-Cooperative-Task-Space (ECTS) representation, which extends the existing Cooperative-Task-Space (CTS) representation based on kinematic models of human bimanual movements from biomechanics. The proposed ECTS representation can represent the whole spectrum of dual-arm motion/force coordination using two sets of ECTS motion/force variables in a unified manner. The type of coordination can be selected via two meaningful coefficients, and during coordinated-motion tasks, each set of variables directly describes two different aspects of coordinated motion and force behaviors. Thus, the operator can specify coordinated-motion/force tasks more intuitively in high-level descriptions, and the specified tasks can be easily reused in other situations with greater flexibility. Moreover, we present consistent procedures for using the ECTS representation for task specifications in the upper-body and lower-body subsystems of humanoid robots in order to perform manipulation and locomotion tasks, respectively. In addition, we propose and discuss performance indices derived from the ECTS representation, which can be used to evaluate and optimize the performance of any type of dual-arm manipulation task.
We show that using the ECTS representation for specifying both dual-arm manipulation and biped locomotion tasks can greatly simplify the motion planning process, allowing the operator to focus on high-level descriptions of those tasks. Both upper-body and lower-body task specifications are demonstrated by specifying whole-body task examples on a Hubo II+ robot carrying out dual-arm manipulation as well as biped locomotion tasks in a simulation environment. We also present results from experiments on a dual-arm robot (Baxter) teleoperated through a single 6D mouse interface for various types of coordinated-motion tasks. The specified upper- and lower-body tasks can be considered coordinated motions with constraints. In order to express various constraints imposed across the whole body, we discuss the modeling of whole-body structure and the computations for robotic systems having multiple kinematic chains. We then present a whole-body controller formulated as a quadratic program, which can take different types of constraints into account in a prioritized manner. We validate the whole-body controller through simulation results on a Hubo II+ robot performing specified whole-body task examples with a number of motion and force constraints as well as actuation limits. Lastly, we discuss an extension of the ECTS representation, called the Hierarchical Extended-Cooperative-Task Space (H-ECTS) framework, which uses tree-structured graphical representations for coordinated-motion tasks of multi-robot and human-robot systems. The H-ECTS framework is validated by experimental results on two Baxter robots cooperating with each other as well as with an additional human partner.
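As a rough illustration of the coefficient idea, a coordination coefficient can blend two end-effector velocities into a task-level pair. The combination rule and the function name below are illustrative assumptions, not the thesis's exact ECTS definition:

```python
import numpy as np

def ects_motion_variables(v1, v2, alpha):
    """Hypothetical sketch: combine the velocity vectors of two
    end-effectors into a blended 'task' motion and a 'relative' motion.
    alpha in [0, 1] selects the coordination type: 0 or 1 gives
    leader-follower behavior, 0.5 a symmetric bimanual motion.
    (The affine blend here is an assumption for illustration only.)"""
    v_task = (1.0 - alpha) * v1 + alpha * v2  # blended coordinated motion
    v_rel = v2 - v1                           # relative motion between arms
    return v_task, v_rel
```

With `alpha = 0.5` the task velocity is the midpoint of the two arm velocities, which is one way a single operator command could drive a symmetric bimanual motion.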

    Clustering-Based Robot Navigation and Control

    In robotics, it is essential to model and understand the topologies of configuration spaces in order to design provably correct motion planners. Common practice in motion planning for modeling configuration spaces requires either a global, explicit representation of a configuration space in terms of standard geometric and topological models, or an asymptotically dense collection of sample configurations connected by simple paths, capturing the connectivity of the underlying space. This dissertation introduces the use of clustering for closing the gap between these two complementary approaches. Traditionally an unsupervised learning method, clustering offers automated tools to discover hidden intrinsic structures in the generally complex-shaped, high-dimensional configuration spaces of robotic systems. We demonstrate some potential applications of such clustering tools to the problem of feedback motion planning and control. The first part of the dissertation presents the use of hierarchical clustering for relaxed, deterministic coordination and control of multiple robots. We reinterpret this classical method for unsupervised learning as an abstract formalism for identifying and representing spatially cohesive and segregated robot groups at different resolutions, by relating the continuous space of configurations to the combinatorial space of trees. Based on this new abstraction and a careful topological characterization of the associated hierarchical structure, a provably correct, computationally efficient hierarchical navigation framework is proposed for collision-free coordinated motion design towards a designated multirobot configuration via a sequence of hierarchy-preserving local controllers.
The second part of the dissertation introduces a new, robot-centric application of Voronoi diagrams to identify a collision-free neighborhood of a robot configuration that captures the local geometric structure of a configuration space around the robot’s instantaneous position. Based on robot-centric Voronoi diagrams, a provably correct, collision-free coverage and congestion control algorithm is proposed for distributed mobile sensing applications of heterogeneous disk-shaped robots; and a sensor-based reactive navigation algorithm is proposed for exact navigation of a disk-shaped robot in forest-like cluttered environments. These results strongly suggest that clustering is, indeed, an effective approach for automatically extracting intrinsic structures in configuration spaces and that it might play a key role in the design of computationally efficient, provably correct motion planners in complex, high-dimensional configuration spaces.
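A minimal sketch of the hierarchical-clustering ingredient: naive single-linkage agglomeration over robot positions, producing the merge tree that groups spatially cohesive robots at increasing resolutions. The dissertation's actual abstraction and its topological machinery are far richer than this toy version:

```python
import numpy as np

def single_linkage_tree(points):
    """Naive O(n^3) single-linkage agglomerative clustering of robot
    positions. Returns the sequence of merges as
    (members_of_cluster_a, members_of_cluster_b, merge_distance),
    from the tightest pair of groups to the final merge."""
    clusters = {i: [i] for i in range(len(points))}
    merges = []
    while len(clusters) > 1:
        best = None
        for a in clusters:
            for b in clusters:
                if a >= b:
                    continue
                # Single linkage: distance between closest members.
                d = min(np.linalg.norm(points[i] - points[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[2]:
                    best = (a, b, d)
        a, b, d = best
        merges.append((sorted(clusters[a]), sorted(clusters[b]), d))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges
```

For three robots at (0,0), (0,1), and (10,0), the first merge joins the two nearby robots at distance 1, and only then does the distant robot join, exposing the two-resolution group structure.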

    Coordination of several robots based on temporal synchronization

    © 2016. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
    This paper proposes an approach to the problem of coordinating multi-robot systems in which each robot executes individually planned tasks in a shared workspace. The approach is a decoupled method that can coordinate the participating robots on-line. Coordination is achieved by adjusting the time evolution of each robot along its original planned geometric path according to the movements of the other robots, assuring a collision-free execution of their respective tasks. To assess the proposed approach, different tests were performed in graphical simulations and real experiments.
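The time-adjustment idea can be caricatured in a few lines, assuming discretized waypoint paths and a hold-at-waypoint policy for one robot. The paper's scheme operates on continuous time evolution on-line; this toy version only conveys the decoupled, path-preserving nature of the coordination:

```python
import numpy as np

def coordinate_timing(path_a, path_b, safety_dist):
    """Toy decoupled coordination: both robots follow fixed geometric
    waypoint paths; robot B pauses at its current waypoint whenever
    advancing would bring it within safety_dist of robot A's next
    position. Neither geometric path is ever modified -- only timing.
    Assumes B can always wait out A (no deadlock handling)."""
    ia, ib = 0, 0
    schedule = []  # (index_a, index_b) at each synchronized time step
    while ia < len(path_a) - 1 or ib < len(path_b) - 1:
        ia = min(ia + 1, len(path_a) - 1)      # A always advances
        next_b = min(ib + 1, len(path_b) - 1)
        if np.linalg.norm(path_a[ia] - path_b[next_b]) >= safety_dist:
            ib = next_b                        # safe: B advances too
        schedule.append((ia, ib))              # otherwise B waits
    return schedule
```

For two straight paths crossing at the origin, robot B simply idles near the intersection until robot A has passed, after which both complete their original paths unchanged.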

    Finding a needle in an exponential haystack: Discrete RRT for exploration of implicit roadmaps in multi-robot motion planning

    We present a sampling-based framework for multi-robot motion planning which combines an implicit representation of a roadmap with a novel approach for pathfinding in geometrically embedded graphs tailored for our setting. Our pathfinding algorithm, discrete-RRT (dRRT), is an adaptation of the celebrated RRT algorithm for the discrete case of a graph, and it enables a rapid exploration of the high-dimensional configuration space by carefully walking through an implicit representation of a tensor product of roadmaps for the individual robots. We demonstrate our approach experimentally on scenarios of up to 60 degrees of freedom, where our algorithm is faster by a factor of at least ten compared to existing algorithms that we are aware of. Comment: Kiril Solovey and Oren Salzman contributed equally to this paper.
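A sketch of one dRRT expansion step over the implicit tensor-product roadmap, under assumed minimal interfaces: each per-robot roadmap is an adjacency dict, `distance` is supplied by the caller, and collision checking along composite edges is omitted. The key point is that composite neighbors are enumerated on the fly, so the exponentially large product graph is never built:

```python
import random
import itertools

def drrt_step(tree, roadmaps, positions, distance):
    """One expansion step of discrete-RRT on the implicit tensor
    product of per-robot roadmaps. `tree` maps composite vertices
    (one roadmap vertex per robot) to their parent; `roadmaps[i]`
    maps a vertex to its neighbor list; `positions[i]` gives vertex
    coordinates for the distance function."""
    # Sample a random composite configuration (one vertex per robot).
    q_rand = tuple(random.choice(list(rm)) for rm in roadmaps)
    # Nearest tree node to the sample.
    q_near = min(tree, key=lambda q: distance(q, q_rand, positions))
    # Direction oracle: among the implicit composite neighbors of
    # q_near, pick the one closest to q_rand -- enumerated lazily,
    # never materializing the full product graph.
    neighbors = itertools.product(
        *[roadmaps[i][q_near[i]] for i in range(len(roadmaps))])
    q_new = min(neighbors, key=lambda q: distance(q, q_rand, positions))
    if q_new not in tree:
        tree[q_new] = q_near  # record parent edge
    return q_new
```

Repeatedly calling this step grows a tree whose every edge connects composite configurations that are roadmap-adjacent for each robot simultaneously.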

    Quantifying the Evolutionary Self Structuring of Embodied Cognitive Networks

    We outline a possible theoretical framework for the quantitative modeling of networked embodied cognitive systems. We note that: 1) information self-structuring through sensory-motor coordination does not occur deterministically in R^n, a generic multivariable vector space, but in SE(3), the group structure of the possible motions of a body in space; 2) it happens in a stochastic, open-ended environment. These observations may simplify, at the price of some abstraction, the modeling and design of self-organization processes based on the maximization of informational measures such as mutual information. Furthermore, by providing closed-form or computationally lighter algorithms, they may significantly reduce the computational burden of implementation. We propose a modeling framework that aims to provide new tools for the design of networks of artificial self-organizing, embodied, intelligent agents and for the reverse engineering of natural ones. At this point the framework remains largely a theoretical conjecture, and it has yet to be verified experimentally whether the model will be useful in practice.
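As a concrete instance of the informational measures mentioned, here is a plug-in estimate of mutual information between paired discrete sensor and motor readings. This is a generic sketch of the quantity such self-organization processes would maximize, not the paper's own formulation:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete
    samples, using empirical marginal and joint frequencies.
    Tightly coupled sensory-motor streams yield high values;
    independent streams yield values near zero."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint * log2(p_joint / (p(x) * p(y)))
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi
```

A perfectly copied one-bit stream gives 1 bit of mutual information, while independent streams give 0, which is the gradient a self-structuring process would climb.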