
    A distributed optimization framework for localization and formation control: applications to vision-based measurements

    Multiagent systems have been a major area of research for the last 15 years. This interest has been motivated by tasks that can be executed more rapidly in a collaborative manner or that are nearly impossible to carry out otherwise. To be effective, the agents need a notion of a common goal shared by the entire network (for instance, a desired formation) and individual control laws to realize that goal. The common goal is typically centralized, in the sense that it involves the state of all the agents at the same time. On the other hand, it is often desirable to have individual control laws that are distributed, in the sense that the desired action of an agent depends only on the measurements and states available at the node and at a small number of neighbors. This is an attractive quality because it implies an overall system that is modular and intrinsically more robust to communication delays and node failures.
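    For a sense of what such a distributed control law can look like, below is a minimal sketch (not taken from the paper) in which each agent updates its state using only relative measurements to its graph neighbors; the names `neighbors` and `desired_offset`, and the simple gradient-style gain, are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not from the paper): a distributed formation control law in
# which each agent updates its position using only relative measurements to
# its graph neighbors, driving inter-agent offsets toward a desired formation.
# `neighbors` and `desired_offset` are illustrative assumptions.

def formation_step(positions, neighbors, desired_offset, gain=0.1):
    """One synchronous update; positions is an (N, 2) array of agent states."""
    new_positions = positions.copy()
    for i in range(len(positions)):
        correction = np.zeros(2)
        for j in neighbors[i]:
            # Relative measurement (x_j - x_i) compared to the desired offset.
            correction += (positions[j] - positions[i]) - desired_offset[(i, j)]
        new_positions[i] += gain * correction
    return new_positions
```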

    Technical Report: Cooperative Multi-Target Localization With Noisy Sensors

    This technical report is an extended version of the paper 'Cooperative Multi-Target Localization With Noisy Sensors', accepted to the 2013 IEEE International Conference on Robotics and Automation (ICRA). The paper addresses the task of searching for an unknown number of static targets within a known obstacle map using a team of mobile robots equipped with noisy, limited field-of-view sensors. Such sensors may fail to detect a subset of the visible targets or return false positive detections. These measurement sets are used to localize the targets using the Probability Hypothesis Density (PHD) filter. Robots communicate with each other on a local peer-to-peer basis and with a server or the cloud via access points, exchanging measurements and poses to update their belief about the targets and to plan future actions. The server provides a mechanism to collect and synthesize information from all robots and to share the global, albeit time-delayed, belief state with robots near access points. We design a decentralized control scheme that exploits this communication architecture and the PHD representation of the belief state. Specifically, robots move to maximize mutual information between the target set and measurements, both self-collected and those available by accessing the server, balancing local exploration with sharing knowledge across the team. Furthermore, robots coordinate their actions with other robots exploring the same local region of the environment.

    Comment: Extended version of paper accepted to the 2013 IEEE International Conference on Robotics and Automation (ICRA).
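    As a rough illustration of the information-driven action selection the abstract describes, here is a hedged sketch that scores candidate sensing actions against a discretized PHD intensity, using expected detections as a simple surrogate for the mutual-information objective; `phd_intensity`, `candidate_views`, and `p_detect` are assumptions, not the authors' implementation.

```python
import numpy as np

# Hedged sketch, not the authors' code: pick the motion primitive whose field
# of view covers the largest expected number of targets under a discretized
# PHD intensity (a simple surrogate for the mutual-information objective).
# `candidate_views` maps each action to the grid cells its sensor footprint
# would observe; all names are illustrative assumptions.

def select_action(phd_intensity, candidate_views, p_detect=0.9):
    """phd_intensity: dict mapping grid cell -> expected target count."""
    best_action, best_gain = None, -np.inf
    for action, cells in candidate_views.items():
        # Expected number of detections if this view is taken.
        gain = sum(p_detect * phd_intensity.get(c, 0.0) for c in cells)
        if gain > best_gain:
            best_action, best_gain = action, gain
    return best_action
```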

    Information Acquisition with Sensing Robots: Algorithms and Error Bounds

    Utilizing the capabilities of configurable sensing systems requires addressing difficult information-gathering problems. Near-optimal approaches exist for sensing systems without internal states. However, when it comes to optimizing the trajectories of mobile sensors, the solutions are often greedy and rarely provide performance guarantees. Notably, under linear Gaussian assumptions, the problem becomes deterministic and can be solved off-line. Approaches based on submodularity have been applied by ignoring the sensor dynamics and greedily selecting informative locations in the environment. This paper presents a non-greedy algorithm with suboptimality guarantees, which does not rely on submodularity and takes the sensor dynamics into account. Our method performs provably better than the widely used greedy one. Coupled with linearization and model predictive control, it can be used to generate adaptive policies for mobile sensors with non-linear sensing models. Applications in gas concentration mapping and target tracking are presented.

    Comment: 9 pages (two-column); 2 figures; manuscript submitted to the 2014 IEEE International Conference on Robotics and Automation.
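    The linear Gaussian observation in the abstract can be made concrete with a short sketch: because the posterior covariance does not depend on the measurement values, an open-loop control sequence can be scored off-line by propagating the Kalman covariance. The exhaustive search below merely stands in for the paper's non-greedy algorithm, and the models `A`, `Q`, `C_of`, and `R` are illustrative placeholders.

```python
import numpy as np
from itertools import product

# Minimal sketch under a linear-Gaussian assumption: score a candidate control
# sequence off-line by propagating the Kalman covariance and taking its
# negative log-determinant as an information measure. The dynamics and
# measurement models (A, Q, C_of, R) are illustrative placeholders, and the
# exhaustive search is a stand-in for the paper's non-greedy algorithm.

def score_sequence(controls, Sigma0, A, Q, C_of, R):
    Sigma = Sigma0.copy()
    for u in controls:
        Sigma = A @ Sigma @ A.T + Q                    # predict
        C = C_of(u)                                    # sensor model set by control u
        S = C @ Sigma @ C.T + R
        K = Sigma @ C.T @ np.linalg.inv(S)
        Sigma = (np.eye(len(Sigma)) - K @ C) @ Sigma   # update
    sign, logdet = np.linalg.slogdet(Sigma)
    return -logdet                                     # higher = more informative

def best_open_loop_plan(action_set, horizon, Sigma0, A, Q, C_of, R):
    return max(product(action_set, repeat=horizon),
               key=lambda seq: score_sequence(seq, Sigma0, A, Q, C_of, R))
```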

    Increasing the Efficiency of 6-DoF Visual Localization Using Multi-Modal Sensory Data

    Localization is a key requirement for mobile robot autonomy and human-robot interaction. Vision-based localization is accurate and flexible; however, it incurs a high computational burden which limits its application on many resource-constrained platforms. In this paper, we address the problem of performing real-time localization in large-scale 3D point cloud maps of ever-growing size. While most systems using multi-modal information reduce localization time by employing side-channel information in a coarse manner (e.g. WiFi for a rough prior position estimate), we propose to inter-weave the map with rich sensory data. This multi-modal approach achieves two key goals simultaneously. First, it enables us to harness additional sensory data to localise against a map covering a vast area in real time; second, it allows us to roughly localise devices which are not equipped with a camera. The key to our approach is a localization policy based on a sequential Monte Carlo estimator. The localiser uses this policy to attempt point-matching only in nodes where it is likely to succeed, significantly increasing the efficiency of the localization process. The proposed multi-modal localization system is evaluated extensively in a large museum building. The results show that our multi-modal approach not only increases the localization accuracy but also significantly reduces computational time.

    Comment: Presented at the IEEE-RAS International Conference on Humanoid Robots (Humanoids) 201
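    To illustrate the flavour of such a policy, the sketch below shows a sequential Monte Carlo update that applies cheap side-channel likelihoods to every particle but attempts expensive point-matching only in map nodes that have accumulated sufficient probability mass; `wifi_likelihood`, `point_match_likelihood`, and `node_of` are hypothetical placeholders, not the paper's code.

```python
import numpy as np

# Hedged sketch (assumed structure, not the paper's implementation): a
# sequential Monte Carlo localizer that weights all particles with a cheap
# side-channel likelihood (e.g. WiFi), then triggers expensive visual
# point-matching only in map nodes holding enough particle mass.

def smc_update(particles, weights, node_of, wifi_likelihood,
               point_match_likelihood, match_threshold=0.3):
    """particles: list of pose hypotheses; weights: 1-D numpy array."""
    # Cheap update from side-channel sensors for every particle.
    weights = weights * np.array([wifi_likelihood(p) for p in particles])
    weights /= weights.sum()

    # Accumulate probability mass per map node.
    node_mass = {}
    for p, w in zip(particles, weights):
        node_mass[node_of(p)] = node_mass.get(node_of(p), 0.0) + w

    # Attempt point-matching only where it is likely to succeed.
    for i, p in enumerate(particles):
        if node_mass[node_of(p)] >= match_threshold:
            weights[i] *= point_match_likelihood(p)
    weights /= weights.sum()
    return weights
```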