17,494 research outputs found

    A distributed optimization framework for localization and formation control: applications to vision-based measurements

    Multiagent systems have been a major area of research for the last 15 years. This interest has been motivated by tasks that can be executed more rapidly in a collaborative manner or that are nearly impossible to carry out otherwise. To be effective, the agents need a notion of a common goal shared by the entire network (for instance, a desired formation) and individual control laws to realize that goal. The common goal is typically centralized, in the sense that it involves the state of all the agents at the same time. On the other hand, it is often desirable to have individual control laws that are distributed, in the sense that the desired action of an agent depends only on the measurements and states available at the node and at a small number of neighbors. This is an attractive quality because it implies an overall system that is modular and intrinsically more robust to communication delays and node failures.
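
    The distributed control laws described here can be illustrated with a generic consensus-style formation law, where each agent's update uses only its own state and the relative positions of its graph neighbors. The sketch below is a minimal toy example under assumed quantities (four planar agents, a ring communication graph, hand-picked desired offsets); it is not the paper's vision-based framework.

```python
import numpy as np

# Toy distributed formation control: each agent i updates using only the
# states of its neighbors, driving x[i] - x[j] toward desired[i] - desired[j].
# All quantities below are illustrative assumptions, not taken from the paper.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}              # ring graph
desired = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # square formation

def formation_step(x, gain=0.5, dt=0.1):
    """One synchronous update; agent i reads only x[j] for j in neighbors[i]."""
    u = np.zeros_like(x)
    for i, nbrs in neighbors.items():
        for j in nbrs:
            u[i] -= gain * ((x[i] - x[j]) - (desired[i] - desired[j]))
    return x + dt * u

x = np.random.randn(4, 2)     # arbitrary initial positions
for _ in range(300):
    x = formation_step(x)
# After convergence, x[i] - x[j] approximates desired[i] - desired[j];
# the agents reach the desired formation up to a common translation.
```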

    Cooperative Decentralized Multi-agent Control under Local LTL Tasks and Connectivity Constraints

    We propose a framework for the decentralized control of a team of agents that are assigned local tasks expressed as Linear Temporal Logic (LTL) formulas. Each local LTL task specification captures both the requirements on the respective agent's behavior and the requests for the other agents' collaboration needed to accomplish the task. Furthermore, the agents are subject to communication constraints. The presented solution follows the automata-theoretic approach to LTL model checking; however, it avoids the computationally demanding construction of a synchronized product system between the agents. We suggest a decentralized coordination among the agents through a dynamic leader-follower scheme, which guarantees low-level connectivity maintenance at all times and progress towards the satisfaction of the leader's task. By systematic leader switching, we ensure that each agent's task will be accomplished. Comment: full version of CDC 2014 submission.
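
    The dynamic leader-follower scheme can be pictured with a toy round-robin sketch: only the current leader advances its local task plan, and leadership rotates once that plan is finished, so every agent's task is eventually served. The snippet below abstracts each LTL task as a hypothetical finite list of steps; the automata construction and the connectivity controller of the paper are deliberately omitted.

```python
from collections import deque

# Hypothetical task plans standing in for the agents' local LTL specifications.
plans = {
    "agent_1": deque(["visit_region_A", "request_help_from_agent_2", "return_to_base"]),
    "agent_2": deque(["visit_region_B"]),
    "agent_3": deque(["survey_corridor", "visit_region_C"]),
}

leader_order = deque(plans)           # rotation order for leadership
while any(plans.values()):            # until every plan is exhausted
    leader = leader_order[0]
    if plans[leader]:
        step = plans[leader].popleft()
        # Followers would track the leader to keep the network connected;
        # that low-level control layer is not modeled in this sketch.
        print(f"{leader} (leader) executes: {step}")
    else:
        leader_order.rotate(-1)       # systematic leader switch
```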

    Gaussian-Process-based Robot Learning from Demonstration

    Endowed with higher levels of autonomy, robots are required to perform increasingly complex manipulation tasks. Learning from demonstration is emerging as a promising paradigm for transferring skills to robots. It allows task constraints to be learned implicitly from observing the motion executed by a human teacher, which can enable adaptive behavior. We present a novel Gaussian-Process-based learning from demonstration approach. This probabilistic representation makes it possible to generalize over multiple demonstrations and to encode variability along the different phases of the task. In this paper, we address how Gaussian Processes can be used to effectively learn a policy from trajectories in task space. We also present a method to efficiently adapt the policy to fulfill new requirements and to modulate the robot behavior as a function of task variability. This approach is illustrated through a real-world application using the TIAGo robot. Comment: 8 pages, 10 figures.
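
    The core regression step can be sketched with an off-the-shelf Gaussian Process: pool several demonstrations, fit a GP mapping task phase to a task-space coordinate, and read off a mean policy plus a variance profile that reflects demonstration variability. The example below uses scikit-learn with synthetic demonstrations and is a generic GP-regression sketch, not necessarily the authors' exact formulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-in demonstrations: phase s in [0, 1] -> one task-space coordinate.
rng = np.random.default_rng(0)
demos = []
for _ in range(5):
    s = np.linspace(0.0, 1.0, 50)
    y = np.sin(np.pi * s) + 0.02 * rng.standard_normal(s.shape)
    demos.append((s, y))

S = np.concatenate([s for s, _ in demos]).reshape(-1, 1)   # pooled phase inputs
Y = np.concatenate([y for _, y in demos])                  # pooled outputs

# Fit a GP over all demonstrations; the white-noise term absorbs demo-to-demo spread.
kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(S, Y)

# Query the learned policy: the mean reproduces the motion, and the standard
# deviation encodes phase-dependent variability that could modulate the robot
# (e.g., lower tracking stiffness where the demonstrations disagree).
s_query = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
mean, std = gp.predict(s_query, return_std=True)
```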