
    Distributed Robotic Vision for Calibration, Localisation, and Mapping

    This dissertation explores distributed algorithms for calibration, localisation, and mapping in a multi-robot network equipped with cameras and onboard processing, comparing them against centralised alternatives in which all data is transmitted to a single external node for processing. With the rise of large-scale camera networks, and as low-cost onboard processing becomes increasingly feasible in robotic networks, distributed algorithms are becoming important for robustness and scalability. Standard solutions to multi-camera computer vision require the data from all nodes to be processed at a central node, which represents a significant single point of failure and incurs infeasible communication costs. Distributed solutions avoid these issues by spreading the work over the entire network, relying only on local computation and direct communication with nearby neighbours. This research considers a framework for a distributed robotic vision platform for calibration, localisation, and mapping tasks in which three main stages are identified: an initialisation stage, where calibration and localisation are performed in a distributed manner; a local tracking stage, where visual odometry is performed without inter-robot communication; and a global mapping stage, where global alignment and optimisation strategies are applied. Within this framework, the research investigates how algorithms can be developed to produce fundamentally distributed solutions, designed to minimise computational complexity while maintaining excellent performance and to operate effectively in the long term. Three primary objectives are therefore pursued, one aligned with each of these three stages.
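
    A minimal sketch of the kind of primitive such distributed stages build on: average consensus, where each robot repeatedly averages its estimate with its direct neighbours so the network agrees on a global quantity without any central node. The ring topology, step size, and scalar state below are illustrative assumptions, not details taken from the dissertation.

```python
import numpy as np

def consensus_step(x, neighbours, eps=0.2):
    """One synchronous round: each node nudges its value towards its neighbours'."""
    new = x.copy()
    for i in range(len(x)):
        for j in neighbours[i]:
            new[i] += eps * (x[j] - x[i])
    return new

# Four robots in a ring, each starting from a noisy local measurement
# of the same global quantity.
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
x = np.array([1.0, 4.0, 2.0, 7.0])
for _ in range(60):
    x = consensus_step(x, neighbours)
print(x)  # all entries converge to the global mean, 3.5
```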

    Decentralized Sensor Fusion for Ubiquitous Networking Robotics in Urban Areas

    In this article we describe the environment and sensor architecture built for the European project URUS (Ubiquitous Networking Robotics in Urban Sites), whose objective is to develop an adaptable network-robot architecture for cooperation between networked robots and human beings and/or the environment in urban areas. The project goal is to deploy a team of robots in an urban area to provide a set of services to a user community. This paper addresses the sensor architecture devised for URUS and the types of robots and sensors used, including environment sensors and sensors onboard the robots. Furthermore, we explain how sensor fusion takes place to achieve outdoor urban execution of robotic services. Finally, some results of the project related to the sensor network are highlighted.
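
    As a concrete illustration of decentralised fusion, the sketch below shows covariance intersection, a standard rule for combining two Gaussian estimates whose cross-correlation is unknown (as when a robot fuses its own pose estimate with one from an environment camera). It is a generic example; the article does not state that URUS uses this particular rule.

```python
import numpy as np

def covariance_intersection(mu_a, P_a, mu_b, P_b, omega=0.5):
    """Fuse two (mean, covariance) estimates with unknown cross-correlation."""
    Pa_inv, Pb_inv = np.linalg.inv(P_a), np.linalg.inv(P_b)
    P = np.linalg.inv(omega * Pa_inv + (1 - omega) * Pb_inv)
    mu = P @ (omega * Pa_inv @ mu_a + (1 - omega) * Pb_inv @ mu_b)
    return mu, P

# A robot's own 2-D position estimate fused with one from a fixed camera.
mu_robot, P_robot = np.array([2.0, 1.0]), np.diag([0.5, 0.5])
mu_cam, P_cam = np.array([2.3, 0.8]), np.diag([0.2, 0.8])
mu, P = covariance_intersection(mu_robot, P_robot, mu_cam, P_cam)
```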

    Dynamic Occupancy Grid Prediction for Urban Autonomous Driving: A Deep Learning Approach with Fully Automatic Labeling

    Long-term situation prediction plays a crucial role in the development of intelligent vehicles. A major challenge still to overcome is the prediction of complex downtown scenarios with multiple road users, e.g., pedestrians, bikes, and motor vehicles, interacting with each other. This contribution tackles the challenge by combining a Bayesian filtering technique for environment representation with machine learning as a long-term predictor. More specifically, a dynamic occupancy grid map is used as the input to a deep convolutional neural network. This yields the advantage of using spatially distributed velocity estimates from a single time step for prediction, rather than a raw data sequence, alleviating common problems in dealing with input time series from multiple sensors. Furthermore, convolutional neural networks have the inherent characteristic of using context information, enabling the implicit modeling of road-user interaction. Pixel-wise balancing is applied in the loss function to counteract the extreme imbalance between static and dynamic cells. One of the major advantages is the unsupervised learning character due to fully automatic label generation. The presented algorithm is trained and evaluated on multiple hours of recorded sensor data and compared to Monte Carlo simulation.
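
    The pixel-wise balancing mentioned above can be illustrated with a weighted binary cross-entropy in which the rare dynamic cells receive a larger weight than the static majority. The plain NumPy sketch below follows that assumption; the weight value and the binary static/dynamic labelling are illustrative, not the paper's exact loss.

```python
import numpy as np

def balanced_pixel_loss(pred, target, w_dynamic=10.0, eps=1e-7):
    """Weighted binary cross-entropy over an occupancy grid.

    pred:   (H, W) predicted probability that a cell is dynamic
    target: (H, W) labels, 1 for dynamic cells, 0 for static cells
    Dynamic cells are up-weighted so they are not drowned out by the
    overwhelming number of static cells.
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    weights = np.where(target == 1.0, w_dynamic, 1.0)
    bce = -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    return float(np.mean(weights * bce))
```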

    A distributed optimization framework for localization and formation control: applications to vision-based measurements

    Multiagent systems have been a major area of research for the last 15 years. This interest has been motivated by tasks that can be executed more rapidly in a collaborative manner or that are nearly impossible to carry out otherwise. To be effective, the agents need a notion of a common goal shared by the entire network (for instance, a desired formation) and individual control laws to realize the goal. The common goal is typically centralized, in the sense that it involves the state of all the agents at the same time. On the other hand, it is often desirable to have individual control laws that are distributed, in the sense that the desired action of an agent depends only on the measurements and states available at that node and at a small number of neighbors. This is an attractive quality because it implies an overall system that is modular and intrinsically more robust to communication delays and node failures.
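
    A toy version of such a distributed control law is sketched below: each agent steers to cancel the error between the measured offset to each neighbor and a desired formation offset, using only locally available quantities. The displacement-based rule, topology, and gain are illustrative assumptions, not the article's specific design.

```python
import numpy as np

def formation_step(pos, neighbours, desired, gain=0.1):
    """One control update: each agent reduces the mismatch between the
    measured offset to each neighbour and the desired formation offset."""
    new = {i: p.copy() for i, p in pos.items()}
    for i in pos:
        for j in neighbours[i]:
            error = (pos[j] - pos[i]) - desired[(i, j)]
            new[i] += gain * error
    return new

# Two agents asked to sit one unit apart along x (offsets are antisymmetric).
pos = {0: np.array([0.0, 0.0]), 1: np.array([3.0, 1.0])}
neighbours = {0: [1], 1: [0]}
desired = {(0, 1): np.array([1.0, 0.0]), (1, 0): np.array([-1.0, 0.0])}
for _ in range(200):
    pos = formation_step(pos, neighbours, desired)
# pos[1] - pos[0] approaches the desired offset [1, 0]
```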

    Localization and Optimization Problems for Camera Networks

    In the framework of networked control systems, we focus on networks of autonomous PTZ cameras. A large set of cameras communicating with each other through a network is a widely used architecture in application areas such as video surveillance, tracking, and motion analysis. First, we consider relative localization in sensor networks and tackle the issue of error propagation, expressed as the mean error on each component of the optimal estimator of the position vector. The relative error is computed as a function of the eigenvalues of the network: using this formula and focusing on an exemplary class of networks (the Abelian Cayley networks), we study the role of the network topology and the dimension of the network in the error characterization. Second, one of the most crucial problems in a network of cameras is calibration: for each camera, this consists in determining its position and orientation with respect to a global common reference frame. Well-known methods in computer vision make it possible to obtain the relative positions and orientations of pairs of cameras whose sensing regions overlap. The aim is to propose an algorithm that, from these noisy input data, lets the cameras complete the calibration task autonomously, in a distributed fashion. We focus on the planar case, formulating an optimization problem over the manifold SO(2), and propose synchronous, deterministic, distributed algorithms that calibrate planar networks by exploiting the cycle structure of the underlying communication graph. A performance analysis and numerical experiments are presented. Third, we propose a gossip-like randomized calibration algorithm, together with its probabilistic convergence analysis and numerical studies. Fourth and finally, we design surveillance trajectories for a network of calibrated autonomous cameras to detect intruders in an environment, posed as a continuous graph-partitioning problem.
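
    The flavour of the planar problem can be conveyed with a small sketch: each camera holds an orientation estimate in SO(2), represented here by an angle, pairs with overlapping views hold noisy relative-orientation measurements, and repeated local corrections drive the estimates into agreement (up to a common global rotation, fixed by anchoring one camera). The gradient-style update below is a generic illustration, not the thesis's cycle-based algorithm.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def calibration_step(theta, rel_meas, gain=0.3):
    """Nudge orientation estimates so predicted relative orientations
    match the noisy pairwise measurements; camera 0 stays anchored."""
    new = theta.copy()
    for (i, j), meas in rel_meas.items():  # meas ~ theta[j] - theta[i] + noise
        err = wrap(meas - (theta[j] - theta[i]))
        if i != 0:
            new[i] -= 0.5 * gain * err
        if j != 0:
            new[j] += 0.5 * gain * err
    return new

# Three cameras in a triangle; measurements are consistent up to noise.
theta = np.zeros(3)
rel_meas = {(0, 1): 0.52, (1, 2): 0.31, (0, 2): 0.80}
for _ in range(100):
    theta = calibration_step(theta, rel_meas)
```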

    Latent parameter estimation in fusion networks using separable likelihoods

    Multi-sensor state-space models underpin fusion applications in networks of sensors. Estimation of latent parameters in these models has the potential to provide highly desirable capabilities such as network self-calibration. Conventional solutions to the problem scale poorly with the number of sensors because of the joint multi-sensor filtering involved in evaluating the parameter likelihood. In this article, we propose a separable pseudo-likelihood which is a more accurate approximation than a previously proposed alternative under typical operating conditions. In addition, we consider using separable likelihoods in the presence of many objects and of ambiguity in associating measurements with the objects that originated them. To this end, we use a state-space model with a hypothesis-based parameterisation and develop an empirical Bayesian perspective in order to evaluate separable likelihoods on this model using local filtering. Bayesian inference with this likelihood is carried out using belief propagation on the associated pairwise Markov random field. We specify a particle algorithm for latent parameter estimation in a linear Gaussian state-space model and demonstrate its efficacy for network self-calibration using measurements from non-cooperative targets, in comparison with alternatives. (Accepted with minor revisions to IEEE Transactions on Signal and Information Processing over Networks.)
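
    The core idea, evaluating a parameter through independent single-sensor filters rather than one joint multi-sensor filter, can be sketched in a few lines. Below, a per-sensor bias is scored by a scalar Kalman filter on a random-walk state and the local log-likelihoods are simply summed; the scalar model and bias parameterisation are illustrative stand-ins for the paper's construction.

```python
import numpy as np
from scipy.stats import norm

def local_log_likelihood(z, bias, q=0.1, r=0.5):
    """Log-likelihood of one sensor's measurements z under a bias
    parameter, from a scalar Kalman filter on x_t = x_{t-1} + w_t,
    with measurements z_t = x_t + bias + v_t."""
    m, p, ll = 0.0, 1.0, 0.0
    for zt in z:
        p += q                          # predict
        s = p + r                       # innovation variance
        ll += norm.logpdf(zt, loc=m + bias, scale=np.sqrt(s))
        k = p / s                       # Kalman gain
        m += k * (zt - m - bias)        # update
        p *= 1.0 - k
    return ll

def separable_log_likelihood(measurements, biases):
    """Separable surrogate: a sum of local terms, no joint filtering."""
    return sum(local_log_likelihood(z, b) for z, b in zip(measurements, biases))
```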

    Online Mapping and Perception Algorithms for Multi-robot Teams Operating in Urban Environments.

    This thesis investigates some of the sensing and perception challenges faced by multi-robot teams equipped with LIDAR and camera sensors. Multi-robot teams are ideal for deployment in large, real-world environments due to their ability to parallelize exploration, reconnaissance, or mapping tasks. However, such domains also impose additional requirements, including the need for a) online algorithms (to eliminate stopping and waiting for processing to finish before proceeding) and b) scalability (to handle data from many robots distributed over a large area). These general requirements give rise to specific algorithmic challenges, including 1) online maintenance of large, coherent maps covering the explored area, 2) online estimation of communication properties in the presence of buildings and other interfering structure, and 3) online fusion and segmentation of multiple sensors to aid in object detection. The contribution of this thesis is the introduction of novel approaches that leverage grid maps and sparse multivariate Gaussian inference to augment the capability of multi-robot teams operating in urban, indoor-outdoor environments by improving the state of the art in map rasterization, signal-strength prediction, colored point-cloud segmentation, and reliable camera calibration. In particular, we introduce a map rasterization technique for large LIDAR-based occupancy grids that makes online updates possible when data is arriving from many robots at once. We also introduce new online techniques by which robots can predict the signal strength to their teammates by combining LIDAR measurements with signal-strength measurements from their radios. Processing fused LIDAR+camera point clouds is also important for many object-detection pipelines, and we demonstrate a near-linear-time online segmentation algorithm in this domain. However, maintaining the calibration of a fleet of 14 robots made this approach difficult to employ in practice, so we introduce a robust and repeatable camera calibration process that grounds the camera-model uncertainty in pixel error, allowing the system to guide novices and experts alike to reliably produce accurate calibrations.

    PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113516/1/jhstrom_1.pd
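
    The occupancy-grid machinery underlying the map rasterization work can be illustrated with the standard log-odds update, which lets cell observations from any robot be folded in incrementally as they arrive. The particular weights and clamp below are illustrative defaults, not the thesis's values.

```python
import numpy as np

def update_grid(log_odds, hit_mask, miss_mask, l_hit=0.85, l_miss=-0.4):
    """Incremental log-odds update of an occupancy grid; boolean masks mark
    cells observed occupied (hits) or observed free (misses) this scan."""
    log_odds[hit_mask] += l_hit
    log_odds[miss_mask] += l_miss
    np.clip(log_odds, -10.0, 10.0, out=log_odds)  # keep cells recoverable
    return log_odds

def occupancy_prob(log_odds):
    """Convert log-odds to occupancy probability for rasterization."""
    return 1.0 / (1.0 + np.exp(-log_odds))
```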