
    Bearing-Based Formation Maneuvering

    This paper studies the problem of multi-agent formation maneuver control in which both the centroid and the scale of a formation are required to track given velocity references while maintaining the formation shape. Unlike conventional approaches, where the target formation is defined by inter-neighbor relative positions or distances, we propose a bearing-based approach in which the target formation is defined by inter-neighbor bearings. Since bearings are invariant to the translation and scale of a formation, the bearing-based approach provides a natural solution to formation scale control. We model the dynamics of each agent as a single integrator and propose a globally stable proportional-integral formation maneuver control law. It is shown that at least two leaders must collaborate to control the centroid and scale of the formation, whereas the followers do not require access to any global information, such as the velocities of the leaders. (Comment: to appear in the 2015 IEEE Multi-Conference on Systems and Control (MSC2015); this is the final version.)
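
    The invariance the abstract leans on is easy to check numerically. The short sketch below (an illustrative example using a square formation, not code from the paper) verifies that inter-neighbor bearings are unchanged by translating and uniformly scaling the formation, which is exactly the freedom the leaders exploit to maneuver the centroid and scale.

```python
import numpy as np

def bearings(points, edges):
    """Unit bearing vectors g_ij = (p_j - p_i) / ||p_j - p_i|| for each edge."""
    return {(i, j): (points[j] - points[i]) / np.linalg.norm(points[j] - points[i])
            for i, j in edges}

# A square formation and a translated + uniformly scaled copy of it.
p = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
q = 3.0 * p + np.array([5.0, -2.0])      # scale by 3, translate by (5, -2)

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
for e in edges:
    # Bearings coincide: a bearing-defined target formation leaves translation
    # and scale free, so the leaders can steer both without breaking the shape.
    assert np.allclose(bearings(p, edges)[e], bearings(q, edges)[e])
```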

    Bearing-based formation control with second-order agent dynamics

    We consider the distributed formation control problem for a network of agents using visual measurements. We propose solutions based on bearing (and, optionally, distance) measurements for agents with double-integrator dynamics. We assume that a subset of the agents can track, in addition to their neighbors, a set of static features in the environment. These features are not part of the formation, but they are used to asymptotically control the velocities of the agents. We analyze the convergence properties of the proposed protocols analytically and through simulations.

    Translational and Scaling Formation Maneuver Control via a Bearing-Based Approach

    This paper studies distributed maneuver control of multi-agent formations in arbitrary dimensions. The objective is to control the translation and scale of the formation while maintaining the desired formation pattern. Unlike conventional approaches, where the target formation is defined by relative positions or distances, we propose a novel bearing-based approach in which the target formation is defined by inter-neighbor bearings. Since bearings are invariant to the translation and scale of the formation, the bearing-based approach provides a simple solution to the problem of translational and scaling formation maneuver control. Linear formation control laws for double-integrator dynamics are proposed, and the global formation stability is analyzed. The paper also studies bearing-based formation control in the presence of practical issues, including input disturbances, acceleration saturation, and collision avoidance. The theoretical results are illustrated with numerical simulations.
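
    As a hedged sketch of what a linear, projection-based bearing law for double-integrator agents can look like (the specific graph, gains, and leader reference below are illustrative assumptions, not the paper's exact design), each follower's acceleration combines relative-position and relative-velocity feedback projected orthogonally to its desired bearings, while two leaders steer the formation's translation:

```python
import numpy as np

def proj(g):
    """Orthogonal projector onto the complement of the unit bearing g."""
    g = g / np.linalg.norm(g)
    return np.eye(len(g)) - np.outer(g, g)

# Two leaders (0, 1) and two followers (2, 3) with double-integrator dynamics.
p = np.array([[0.0, 0.0], [2.0, 0.0], [2.1, 2.2], [-0.3, 1.8]])
v = np.zeros_like(p)

# Desired inter-neighbor bearings of a square (g*_ij points from i toward j).
g_star = {(2, 1): np.array([0.0, -1.0]), (2, 3): np.array([-1.0, 0.0]),
          (3, 0): np.array([0.0, -1.0]), (3, 2): np.array([1.0, 0.0])}

a_ref = np.array([0.05, 0.0])  # leaders' common acceleration reference
dt, kp, kv = 0.01, 1.0, 2.0    # assumed gains for position and velocity terms

for _ in range(4000):
    a = np.zeros_like(p)
    a[0] = a[1] = a_ref                    # leaders: pure translation here
    for (i, j), g in g_star.items():       # followers: linear bearing-based law
        a[i] -= proj(g) @ (kp * (p[i] - p[j]) + kv * (v[i] - v[j]))
    v += dt * a                            # double-integrator update
    p += dt * v
```

    Driving the two leaders with different references would stretch or shrink their baseline and, through the fixed bearings, rescale the whole formation.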

    Bearing rigidity theory and its applications for control and estimation of network systems: Life beyond distance rigidity

    Distributed control and location estimation of multiagent systems have received tremendous research attention in recent years because of their potential across many application domains [1], [2]. The term agent can represent a sensor, an autonomous vehicle, or any general dynamical system. Multiagent systems are attractive because of their robustness against system failure, their ability to adapt to dynamic and uncertain environments, and their economic advantages compared with more expensive monolithic systems.

    Resilient visual perception for multiagent systems

    There has been increasing interest in visual sensors and vision-based solutions for single- and multi-robot systems. Vision-based sensors, e.g., traditional RGB cameras, provide rich semantic information and accurate directional measurements at a relatively low cost; however, such sensors have two major drawbacks: they do not generally provide reliable depth estimates, and they typically have a limited field of view. These limitations considerably increase the complexity of controlling multiagent systems. This thesis studies some of the underlying problems in vision-based multiagent control and mapping. The first contribution of this thesis is a method for restoring bearing rigidity in non-rigid networks of robots. We introduce means to determine which bearing measurements can improve bearing rigidity in non-rigid graphs and provide a greedy algorithm that restores rigidity in 2D with a minimum number of added edges. The focus of the second part is the formation control problem using only bearing measurements. We address consensus and formation control through non-smooth Lyapunov functions and differential inclusions. We provide a stability analysis for undirected graphs and investigate the derived controllers for directed graphs. We also introduce a new notion of bearing persistence for pure bearing-based control in directed graphs. The third part is concerned with the bearing-only visual homing problem with a limited field-of-view sensor. In essence, this problem is a special case of the formation control problem in which there is a single moving agent with fixed neighbors. We introduce a navigational vector field, composed of two orthogonal vector fields, that converges to the goal position and does not violate the field-of-view constraints. Our method does not require the landmarks' locations and is robust to loss of landmark tracking. The last part of this dissertation considers outlier detection in pose graphs for Structure from Motion (SfM) and Simultaneous Localization and Mapping (SLAM) problems. We propose a method for detecting incorrect orientation measurements before pose graph optimization by checking their geometric consistency in cycles. We use Expectation-Maximization to fine-tune the noise distribution parameters and propose a new approximate graph inference procedure specifically designed to take advantage of evidence on cycles, with better performance than standard approaches. These works will help enable multi-robot systems to overcome visual sensors' limitations in collaborative tasks such as navigation and mapping.
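
    For the pose-graph part, the cycle-based consistency test mentioned above can be sketched in a few lines: composing the measured relative rotations around a graph cycle should give (approximately) the identity, so a large residual angle flags an outlier somewhere on that cycle. The snippet below is a stand-alone illustration of that check; it is not the thesis' full EM-based inference procedure.

```python
import numpy as np

def rotation_angle(R):
    """Angle (radians) of a 3x3 rotation matrix, via the trace formula."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def cycle_residual(relative_rotations):
    """Compose relative rotations around a cycle and return the residual angle.

    An outlier-free cycle composes to roughly the identity, so the residual
    stays near the measurement-noise level; a gross outlier on any edge of the
    cycle shows up as a large residual."""
    R = np.eye(3)
    for R_ij in relative_rotations:
        R = R @ R_ij
    return rotation_angle(R)

def rot_z(theta):
    """Rotation by theta about the z-axis (used to build a toy example)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Consistent 3-cycle: three 120-degree rotations compose to the identity.
good = [rot_z(2 * np.pi / 3)] * 3
# Corrupt one edge to emulate a wrong relative-orientation measurement.
bad = [rot_z(2 * np.pi / 3), rot_z(2 * np.pi / 3), rot_z(np.pi / 2)]

print(cycle_residual(good))  # ~0: geometrically consistent cycle
print(cycle_residual(bad))   # large residual flags an outlier on the cycle
```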