
    Optimal task and motion planning and execution for human-robot multi-agent systems in dynamic environments

    Combining symbolic and geometric reasoning in multi-agent systems is a challenging task that involves planning, scheduling, and synchronization problems. Existing works have overlooked the variability of task duration and geometric feasibility that is intrinsic to these systems because of the interaction between agents and the environment. We propose a combined task and motion planning approach to optimize the sequencing, assignment, and execution of tasks under temporal and spatial variability. The framework relies on decoupling tasks and actions, where an action is one possible geometric realization of a symbolic task. At the task level, timeline-based planning deals with temporal constraints, duration variability, and synergistic assignment of tasks. At the action level, online motion planning computes the actual movements, dealing with environmental changes. We demonstrate the effectiveness of the approach in a collaborative manufacturing scenario in which a robotic arm and a human worker must assemble a mosaic in the shortest time possible. Compared with existing works, our approach applies to a broader range of applications and reduces the execution time of the process. Comment: 12 pages, 6 figures, accepted for publication in IEEE Transactions on Cybernetics in March 202
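
    The task/action decoupling described above lends itself to a simple data structure. The sketch below is a minimal, hypothetical illustration (the Task, Action, and select_action names are assumptions, not the paper's API): each symbolic task keeps a set of candidate geometric realizations, and the executor picks the fastest one that is currently feasible.

# Minimal sketch of the task/action decoupling idea described above.
# All class and function names are hypothetical illustrations, not the
# authors' actual API: a symbolic Task has several candidate geometric
# Actions, and the executor picks a feasible one at run time.
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Action:
    """One possible geometric realization of a symbolic task."""
    name: str
    expected_duration: float                 # seconds, learned or estimated
    is_feasible: Callable[[], bool]          # online geometric feasibility check


@dataclass
class Task:
    """Symbolic task handled by the timeline-based task planner."""
    name: str
    agent: str                               # "robot" or "human"
    actions: List[Action] = field(default_factory=list)

    def select_action(self) -> Optional[Action]:
        # Among the currently feasible realizations, prefer the fastest one;
        # return None when the environment blocks all of them (wait or replan).
        feasible = [a for a in self.actions if a.is_feasible()]
        return min(feasible, key=lambda a: a.expected_duration) if feasible else None


if __name__ == "__main__":
    place_tile = Task(
        name="place_tile_3",
        agent="robot",
        actions=[
            Action("pick_from_left_bin", 4.2, is_feasible=lambda: True),
            Action("pick_from_right_bin", 3.1, is_feasible=lambda: False),  # blocked
        ],
    )
    chosen = place_tile.select_action()
    print(f"{place_tile.name} -> {chosen.name if chosen else 'wait / replan'}")

    In this toy example the faster right-bin grasp is currently infeasible, so the task falls back to the slower but feasible realization instead of stalling the whole plan.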

    Learning Action Duration and Synergy in Task Planning for Human-Robot Collaboration

    A good estimate of actions' costs is key in task planning for human-robot collaboration. The duration of an action depends on the agents' capabilities and on the correlation between actions performed simultaneously by the human and the robot. This paper proposes an approach to learning actions' costs and the coupling between actions executed concurrently by humans and robots. We leverage information from past executions to learn the average duration of each action and a synergy coefficient representing the effect of an action performed by the human on the duration of the action performed by the robot (and vice versa). We implement the proposed method in a simulated scenario where both agents can access the same area simultaneously. Safety measures require the robot to slow down when the human is close, indicating poor synergy between tasks operating in the same area. We show that our approach can learn such bad couplings, so that a task planner can leverage this information to find better plans. Comment: Accepted at IEEE Int. Conf. on Emerging Technology and Factory Automation, 202
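
    A minimal sketch of the duration and synergy estimation idea, under assumed update rules (a plain mean for solo durations and a ratio-based synergy coefficient); the class and method names are illustrative, not taken from the paper:

# Hedged sketch of the duration/synergy learning idea above. The update
# rules (mean solo duration, ratio-based synergy coefficient) are plausible
# illustrations, not the formulas used in the paper.
from collections import defaultdict


class DurationModel:
    def __init__(self):
        self.solo_durations = defaultdict(list)      # action -> observed solo durations
        self.paired_durations = defaultdict(list)    # (robot_action, human_action) -> durations

    def record(self, action, duration, concurrent_human_action=None):
        if concurrent_human_action is None:
            self.solo_durations[action].append(duration)
        else:
            self.paired_durations[(action, concurrent_human_action)].append(duration)

    def average_duration(self, action):
        obs = self.solo_durations[action]
        return sum(obs) / len(obs) if obs else None

    def synergy(self, robot_action, human_action):
        """> 1.0 means the human's concurrent action slows the robot down."""
        base = self.average_duration(robot_action)
        obs = self.paired_durations[(robot_action, human_action)]
        if base is None or not obs:
            return 1.0                                # no evidence: assume neutral coupling
        return (sum(obs) / len(obs)) / base


if __name__ == "__main__":
    model = DurationModel()
    model.record("screw_part_A", 5.0)
    model.record("screw_part_A", 5.4)
    model.record("screw_part_A", 8.9, concurrent_human_action="work_in_shared_area")
    print(round(model.synergy("screw_part_A", "work_in_shared_area"), 2))  # ~1.71: bad coupling

    A coefficient above 1.0 flags a bad coupling (the concurrent human action slows the robot down), which a task planner can penalize when assigning and sequencing tasks.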

    Human Robot Collaborative Assembly Planning: An Answer Set Programming Approach

    For planning the assembly of a product from a given set of parts, robots require certain cognitive skills: high-level planning is needed to decide the order of actuation actions, while geometric reasoning is needed to check the feasibility of these actions. For collaborative assembly tasks with humans, robots require further cognitive capabilities, such as commonsense reasoning, sensing, and communication skills, not only to cope with the uncertainty caused by incomplete knowledge about the humans' behaviors but also to ensure safer collaborations. We propose a novel method for collaborative assembly planning under uncertainty that utilizes hybrid conditional planning extended with commonsense reasoning and a rich set of communication actions for collaborative tasks. Our method is based on answer set programming. We show the applicability of our approach in a real-world assembly domain, where a bi-manual Baxter robot collaborates with a human teammate to assemble furniture. This manuscript is under consideration for acceptance in TPLP. Comment: 36th International Conference on Logic Programming (ICLP 2020), University of Calabria, Rende (CS), Italy, September 2020, 15 pages
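
    Although the paper's method is based on answer set programming, the general shape of a hybrid conditional plan with sensing and communication actions can be sketched as a plain data structure. The Python sketch below is only an illustration of the concept; all names are hypothetical and it does not reflect the authors' ASP encoding:

# Hedged sketch of what a hybrid conditional plan with actuation, sensing,
# and communication actions could look like as a data structure. This is an
# illustration of the concept only, not the authors' ASP-based method.
from dataclasses import dataclass, field
from typing import Dict, List, Union


@dataclass
class ActuationAction:
    name: str                                  # e.g. "attach(leg1, tabletop)"


@dataclass
class CommunicationAction:
    utterance: str                             # e.g. "please hold the tabletop"


@dataclass
class SensingAction:
    query: str                                 # e.g. "human_passed_leg2?"
    branches: Dict[str, List["PlanStep"]] = field(default_factory=dict)


PlanStep = Union[ActuationAction, CommunicationAction, SensingAction]


def execute(plan: List[PlanStep], sense) -> None:
    """Walk the conditional plan; `sense` maps a query to an observed outcome."""
    for step in plan:
        if isinstance(step, SensingAction):
            outcome = sense(step.query)
            execute(step.branches.get(outcome, []), sense)
        else:
            print("execute:", step)


if __name__ == "__main__":
    plan = [
        CommunicationAction("I will attach leg1; please pass leg2 when ready"),
        ActuationAction("attach(leg1, tabletop)"),
        SensingAction(
            query="human_passed_leg2?",
            branches={
                "yes": [ActuationAction("attach(leg2, tabletop)")],
                "no": [CommunicationAction("waiting for leg2")],
            },
        ),
    ]
    execute(plan, sense=lambda query: "yes")

    Each sensing action branches on the observed outcome, which is how a conditional plan copes with incomplete knowledge about the human's behavior.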

    On the manipulation of articulated objects in human-robot cooperation scenarios

    Articulated and flexible objects constitute a challenge for robot manipulation tasks, but are present in different real-world settings, including home and industrial environments. Approaches to the manipulation of such objects employ ad hoc strategies to sequence and perform actions on them depending on their physical or geometrical features and on a priori target object configurations, whereas principled strategies to sequence basic manipulation actions for these objects have not been fully explored in the literature. In this paper, we propose a novel action planning and execution framework for the manipulation of articulated objects, which (i) employs action planning to sequence a set of actions leading to a target articulated object configuration, and (ii) allows humans to collaboratively carry out the plan with the robot, also interrupting its execution if needed. The framework adopts a formally defined representation of articulated objects. A link exists between the way articulated objects are perceived by the robot, how they are formally represented in the action planning and execution framework, and the complexity of the planning process. Results related to planning performance and examples with a Baxter dual-arm manipulator operating on articulated objects with humans are shown.
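
    As an illustration of what a formal representation of an articulated object can look like, the sketch below models the object as an ordered chain of links with discretized joint angles and derives a naive sequence of joint moves toward a target configuration. The representation and the planner are assumptions made for the example, not the paper's formalization:

# Minimal illustration of a formal articulated-object representation of the
# kind the abstract refers to: an ordered chain of links whose pairwise joint
# angles are discretized. Names and the discretization are assumptions for
# this sketch, not the paper's exact formalization.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass(frozen=True)
class ArticulatedObject:
    links: Tuple[str, ...]        # e.g. ("link0", "link1", "link2")
    angles: Tuple[int, ...]       # discretized joint angles in degrees, one per adjacent pair


def plan_joint_moves(start: ArticulatedObject,
                     target: ArticulatedObject) -> List[Tuple[int, int]]:
    """Naive plan: one (joint_index, target_angle) action for every joint
    whose current and target angles differ."""
    assert start.links == target.links
    return [(i, goal) for i, (cur, goal) in
            enumerate(zip(start.angles, target.angles)) if cur != goal]


if __name__ == "__main__":
    start = ArticulatedObject(("l0", "l1", "l2"), (0, 0))
    target = ArticulatedObject(("l0", "l1", "l2"), (90, 0))
    print(plan_joint_moves(start, target))   # [(0, 90)]: rotate joint 0 to 90 degrees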

    A Hierarchical Architecture for Flexible Human-Robot Collaboration

    This thesis is devoted to the design of a software architecture for Human-Robot Collaboration (HRC), to enhance robots' abilities for working alongside humans. We propose FlexHRC, a hierarchical and flexible human-robot cooperation architecture specifically designed to provide collaborative robots with an extended degree of autonomy when supporting human operators in tasks with high variability. Along with FlexHRC, we have introduced novel techniques appropriate for three interleaved levels, namely perception, representation, and action, each one aimed at addressing specific traits of human-robot cooperation tasks. The Industry 4.0 paradigm emphasizes the crucial benefits that collaborative robots could bring to the whole production process. In this context, a yet unreached enabling technology is the design of robots able to deal at all levels with humans' intrinsic variability, which is not only a necessary element of a comfortable working experience for humans but also a valuable capability for efficiently dealing with unexpected events. Moreover, flexible assembly of semi-finished products is one of the expected features of next-generation shop-floor lines. Currently, such flexibility is placed on the shoulders of human operators, who are responsible for product variability and are therefore subject to potentially high stress levels and cognitive load when dealing with complex operations. At the same time, operations on the shop floor are still very structured and well defined. Collaborative robots have been designed to allow this burden to shift from human operators to robots that are flexible enough to support them in high-variability tasks while they unfold. As mentioned before, the FlexHRC architecture encompasses three levels: perception, action, and representation. The perception level relies on wearable sensors for human action recognition and on point cloud data for perceiving the objects in the scene. The action level comprises four components: a robot execution manager that decouples action planning from robot motion planning and maps symbolic actions to the robot controller's command interface, a task-priority framework to control the robot, a differential equation solver to simulate and evaluate the robot's behaviour on the fly, and a randomized method for robot path planning. The representation level relies on AND/OR graphs for representing and reasoning upon human-robot cooperation models online, a task manager to plan, adapt, and make decisions about the robot's behaviors, and a knowledge base to store cooperation and workspace information. We evaluated the FlexHRC functionalities against the application's desired objectives. This evaluation is accompanied by several experiments, namely a collaborative screwing task, coordinated transportation of objects in a cluttered environment, a collaborative table assembly task, and object positioning tasks.
    The main contributions of this work are: (i) the design and implementation of FlexHRC, which provides the functional requirements necessary for the shop-floor assembly application, such as task- and team-level flexibility, scalability, adaptability, and safety, to name just a few; (ii) the development of the task representation, which integrates a hierarchical AND/OR graph whose online behaviour is formally specified using First Order Logic; (iii) an in-the-loop, simulation-based decision-making process for the operations of collaborative robots coping with the variability of human operator actions; (iv) the robot's adaptation to the human's on-the-fly decisions and actions via human action recognition; and (v) robot behavior that is predictable to the human user thanks to the task-priority-based control framework, the introduced path planner, and the natural and intuitive communication between the robot and the human.
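
    A hedged sketch of the kind of AND/OR graph FlexHRC uses to represent cooperation models is shown below; the node names and the achievability check are illustrative assumptions, not the thesis implementation:

# Hedged sketch of an AND/OR graph for representing a cooperation model,
# in the spirit of the FlexHRC representation level described above.
# Node names and the `achieved` check are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class Node:
    name: str
    kind: str = "LEAF"                 # "AND", "OR", or "LEAF"
    children: List["Node"] = field(default_factory=list)

    def achieved(self, done_leaves: Set[str]) -> bool:
        if self.kind == "LEAF":
            return self.name in done_leaves
        if self.kind == "AND":
            return all(c.achieved(done_leaves) for c in self.children)
        return any(c.achieved(done_leaves) for c in self.children)   # OR node


if __name__ == "__main__":
    # Table assembly: the tabletop is mounted after all four legs are screwed,
    # and either agent (human or robot) may screw a given leg.
    def leaf(i: int, agent: str) -> Node:
        return Node(f"screw_leg{i}_{agent}")

    legs = Node("legs", "AND",
                [Node(f"leg{i}", "OR", [leaf(i, "human"), leaf(i, "robot")])
                 for i in range(4)])
    table = Node("table", "AND", [legs, Node("mount_tabletop_robot")])
    done = {"screw_leg0_human", "screw_leg1_robot", "screw_leg2_robot",
            "screw_leg3_human", "mount_tabletop_robot"}
    print(table.achieved(done))        # True: one realization of the cooperation is complete

    The OR nodes capture the team-level flexibility mentioned above: either agent may complete a leg, and the graph is satisfied as long as one branch of each OR node is achieved.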

    Multi-Agent Systems

    A multi-agent system (MAS) is a system composed of multiple interacting intelligent agents. Multi-agent systems can be used to solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Agent systems are open and extensible systems that allow for the deployment of autonomous and proactive software components. Multi-agent systems have been developed and used in several application domains.

    3D multi-robot patrolling with a two-level coordination strategy

    Teams of unmanned ground vehicles (UGVs) patrolling harsh and complex 3D environments can experience interference and spatial conflicts with one another. Neglecting the occurrence of these events severely hinders both the soundness and the reliability of a patrolling process. This work presents a distributed multi-robot patrolling technique, which uses a two-level coordination strategy to minimize and explicitly manage the occurrence of conflicts and interference. The first level guides the agents to single out exclusive target nodes on a topological map. This target selection relies on a shared idleness representation and on a coordination mechanism preventing topological conflicts. The second level hosts coordination strategies based on a metric representation of space and is supported by a 3D SLAM system. Here, each robot's path planner negotiates spatial conflicts by applying a multi-robot traversability function. Continuous interactions between these two levels ensure coordination and conflict resolution. Both simulations and real-world experiments are presented to validate the performance of the proposed patrolling strategy in 3D environments. Results show this is a promising solution for managing spatial conflicts and preventing deadlocks.
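
    The first coordination level can be illustrated with a short sketch: each robot picks the most idle topological node that no teammate has already claimed. The data structures and selection rule below are assumptions made for the example, not the paper's exact mechanism:

# Illustrative sketch of idleness-based target selection with exclusive
# claims, in the spirit of the first coordination level described above.
import time
from typing import Dict, Optional, Set


def select_target(last_visit: Dict[str, float],
                  claimed: Set[str],
                  now: Optional[float] = None) -> Optional[str]:
    """Return the unclaimed node whose idleness (time since last visit) is largest."""
    now = time.time() if now is None else now
    idleness = {node: now - t for node, t in last_visit.items() if node not in claimed}
    if not idleness:
        return None                      # every node is already claimed by a teammate
    return max(idleness, key=idleness.get)


if __name__ == "__main__":
    last_visit = {"A": 100.0, "B": 40.0, "C": 90.0}     # shared idleness table
    claimed_by_teammates = {"B"}                        # B is a teammate's exclusive target
    print(select_target(last_visit, claimed_by_teammates, now=120.0))   # prints "C"

    In the example, node B has the largest idleness but is already a teammate's exclusive target, so the robot selects C instead; the metric level would then negotiate any remaining spatial conflicts along the path via the traversability function.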

    Fourth Conference on Artificial Intelligence for Space Applications

    Proceedings of a conference held in Huntsville, Alabama, on November 15-16, 1988. The Fourth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: space applications of expert systems in fault diagnostics, in telemetry monitoring and data collection, in design and systems integration, and in planning and scheduling; knowledge representation, capture, verification, and management; robotics and vision; adaptive learning; and automatic programming.