
    Distributed Robotic Vision for Calibration, Localisation, and Mapping

    This dissertation explores distributed algorithms for calibration, localisation, and mapping in a multi-robot network equipped with cameras and onboard processing, comparing them against centralised alternatives in which all data is transmitted to a single external node for processing. With the rise of large-scale camera networks, and as low-cost onboard processing becomes increasingly feasible in robotic networks, distributed algorithms are becoming important for robustness and scalability. Standard solutions to multi-camera computer vision require the data from all nodes to be processed at a central node, which represents a significant single point of failure and incurs infeasible communication costs. Distributed solutions avoid these issues by spreading the work over the entire network, relying only on local computation and direct communication with nearby neighbours. This research considers a framework for a distributed robotic vision platform for calibration, localisation, and mapping tasks in which three main stages are identified: an initialisation stage, where calibration and localisation are performed in a distributed manner; a local tracking stage, where visual odometry is performed without inter-robot communication; and a global mapping stage, where global alignment and optimisation strategies are applied. Within this framework, the research investigates how algorithms can be developed to produce fundamentally distributed solutions, designed to minimise computational complexity whilst maintaining excellent performance, and to operate effectively in the long term. Accordingly, three primary objectives are pursued, one aligned with each of these three stages.
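
    The core distributed idea above, each node improving its estimate using only local data and messages from direct neighbours, can be illustrated with a minimal consensus-averaging sketch in Python. This is a generic illustration of the principle, not the dissertation's algorithms; the graph, weights, and node count are invented for the example.

        import numpy as np

        def consensus_step(estimates, neighbours, weight=0.2):
            # One synchronous update: each node moves towards the average of
            # its neighbours' estimates. No central node is involved.
            new = {}
            for node, x in estimates.items():
                nbr_mean = np.mean([estimates[n] for n in neighbours[node]], axis=0)
                new[node] = x + weight * (nbr_mean - x)
            return new

        # Four robots on a ring topology, each starting from a noisy local
        # estimate of a shared quantity (e.g. a common reference position).
        neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
        estimates = {i: np.random.randn(2) for i in range(4)}
        for _ in range(50):
            estimates = consensus_step(estimates, neighbours)
        # All local estimates converge towards the network-wide average.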

    Distributed hybrid unit quaternion localisation of camera networks

    Several dynamical systems evolve on angular variables, such as the pose of rigid bodies or optimisation techniques applied to variables of unit norm. Perhaps the most suitable mathematical tool for describing such dynamics is the n-dimensional sphere, that is, the manifold of dimension n embedded in (n+1)-dimensional Euclidean space and consisting of all vectors of unit norm. A relevant example is the 3-sphere and the ensuing quaternion-based coordinate system, which is widely used for describing the pose of rigid bodies. One of the challenges in describing dynamics evolving on the n-dimensional sphere is that global robust stabilisation of a point cannot be accomplished with continuous feedback laws. It is then necessary to resort to alternative solutions when robustness of the closed-loop stability properties is desired. Hybrid dynamical systems are a possible answer to this: existing works on the distributed calibration of camera networks are first overviewed, and hybrid solutions are then proposed and tested.
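
    The obstruction mentioned above, that no continuous feedback law can robustly and globally stabilise a point on the sphere, is classically overcome with hysteresis-based hybrid feedback: a discrete logic variable selects between the two unit quaternions q and -q that represent the same attitude, and switches only when the scalar part crosses a hysteresis threshold. The sketch below shows this standard mechanism; it is a textbook illustration, not the specific controller proposed in this work, and the gain and threshold values are invented.

        import numpy as np

        DELTA = 0.3  # hysteresis half-width, 0 < DELTA < 1 (illustrative value)

        def hybrid_update(h, q):
            # Flip the logic variable h in {-1, +1} only when h*q0 drops
            # below -DELTA; the hysteresis prevents chattering under noise.
            q0 = q[0]  # scalar part of the unit quaternion q = (q0, q1, q2, q3)
            return -h if h * q0 < -DELTA else h

        def feedback(h, q, k=2.0):
            # Proportional attitude feedback on the vector part, signed by h,
            # so the closer of q and -q is driven towards the identity.
            return -k * h * q[1:]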

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper serves simultaneously as a position paper and as a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
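
    The de-facto standard formulation referred to above is maximum a posteriori estimation over a factor graph. In the notation commonly used in this literature (symbols assumed here, not quoted from the paper), with X the robot trajectory and map, Z = {z_k} the measurements, h_k the measurement models, and Gaussian noise:

        X^\star = \arg\max_X \, p(X \mid Z)
                = \arg\min_X \sum_k \lVert h_k(X_k) - z_k \rVert_{\Sigma_k}^2

    where the Mahalanobis norm \lVert e \rVert_{\Sigma}^2 = e^{\top} \Sigma^{-1} e weights each residual by its measurement covariance.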

    Dual-sensor fusion for indoor user localisation

    In this paper we address the automatic identification of indoor locations using a combination of WLAN and image sensing. Our motivation is the increasing prevalence of wearable cameras, some of which can also capture WLAN data. We propose to use image-based and WLAN-based localisation individually and then fuse the results to obtain better overall performance. We demonstrate the effectiveness of our fusion algorithm for localisation to within an 8.9 m² room on data that is very challenging for both WLAN and image-based algorithms. We envisage the potential usefulness of our approach in a range of ambient assisted living applications.
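
    A minimal sketch of the fusion step described above: run the WLAN and image localisers independently over a set of candidate rooms, then combine their per-room scores. The weighted-sum rule, the weight value, and the score vectors are illustrative assumptions, not the paper's exact fusion algorithm.

        import numpy as np

        def fuse_scores(p_wlan, p_image, w=0.5):
            # Weighted-sum fusion of two per-room posterior vectors,
            # renormalised to sum to one.
            p = w * np.asarray(p_wlan) + (1.0 - w) * np.asarray(p_image)
            return p / p.sum()

        # Hypothetical posteriors over four candidate rooms:
        p_wlan = [0.10, 0.60, 0.20, 0.10]   # from WLAN fingerprinting
        p_image = [0.05, 0.35, 0.55, 0.05]  # from image retrieval
        room = int(np.argmax(fuse_scores(p_wlan, p_image)))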

    Learning to Find Eye Region Landmarks for Remote Gaze Estimation in Unconstrained Settings

    Conventional feature-based and model-based gaze estimation methods have proven to perform well in settings with controlled illumination and specialized cameras. In unconstrained real-world settings, however, such methods are surpassed by recent appearance-based methods due to difficulties in modeling factors such as illumination changes and other visual artifacts. We present a novel learning-based method for eye region landmark localization that enables conventional methods to be competitive with the latest appearance-based methods. Despite having been trained exclusively on synthetic data, our method exceeds the state of the art for iris localization and eye shape registration on real-world imagery. We then use the detected landmarks as input to iterative model-fitting and lightweight learning-based gaze estimation methods. Our approach outperforms existing model-fitting and appearance-based methods in the context of person-independent and personalized gaze estimation.
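
    To make the landmark-to-gaze step concrete: in a simple spherical eyeball model, once a 3D eyeball centre and iris centre have been recovered from the detected landmarks, the gaze direction is the unit vector from the former through the latter. This is a deliberately simplified sketch; the paper's iterative model fitting is more elaborate, and the names here are illustrative.

        import numpy as np

        def gaze_direction(eyeball_centre, iris_centre):
            # Unit gaze vector of a spherical eyeball model: from the
            # eyeball centre through the iris centre.
            g = np.asarray(iris_centre, float) - np.asarray(eyeball_centre, float)
            return g / np.linalg.norm(g)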

    Audio Fingerprinting for Multi-Device Self-Localization

    This work was supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC) under Grant EP/K007491/1.

    Mapping and Merging Using Sound and Vision : Automatic Calibration and Map Fusion with Statistical Deformations

    Over the last couple of years, cameras, audio sensors, and radio sensors have become cheaper and more common in our everyday lives. Such sensors can be used to create maps of where the sensors are positioned and of the appearance of the surroundings. For sound and radio, the process of estimating the sender and receiver positions from time of arrival (TOA) or time-difference of arrival (TDOA) measurements is referred to as automatic calibration. The corresponding process for images is to estimate the camera positions as well as the positions of the objects captured in the images; this is called structure from motion (SfM) or visual simultaneous localisation and mapping (SLAM). This thesis presents studies on how to create such maps, divided into three parts: finding accurate measurements, robust mapping, and merging of maps.

    The first part is treated in Paper I and involves finding precise TDOA measurements, on a subsample level. Such subsample refinements give high precision but are sensitive to noise. We present an explicit expression for the variance of the TDOA estimate and study the impact that noise in the signals has. Accurate measurements are an important foundation for creating accurate maps.

    The second part of this thesis includes Papers II–V and covers robust self-calibration using one-dimensional signals, such as sound or radio. We estimate both sender and receiver positions using TOA and TDOA measurements. The estimation process is divided into two parts: the first is specific to TOA or TDOA and involves solving a relaxed version of the problem; the second is common to the different problem types and involves an upgrade from the relaxed solution to the sought parameters. We present numerically stable minimal solvers for both steps for several setups of senders and receivers. We also suggest frameworks for using these solvers together with RANSAC to achieve systems that are robust to outliers, noise, and missing data. Additionally, in the last paper we focus on extending self-calibration results, especially for the sound source path, which often cannot be fully reconstructed immediately.

    The third part of the thesis, Papers VI–VIII, is concerned with the merging of already estimated maps. We mainly focus on maps created from image data, but the methods are applicable to sparse 3D maps coming from different sensor modalities. Merging maps is advantageous when there are several map representations of the same environment, or when new information needs to be added to an existing map. We suggest a compact map representation with a small memory footprint, which we then use to fuse maps efficiently. We suggest one method for fusing maps that are pre-aligned, and one in which we additionally estimate the coordinate system. The merging utilises a compact approximation of the residuals and allows for deformations in the original maps. Furthermore, we present minimal solvers for 3D point matching with statistical deformations, which increases the number of inliers when the original maps contain errors.
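
    The subsample TDOA refinement studied in Paper I can be sketched as follows: cross-correlate the two signals, take the integer-lag peak, and refine it by fitting a parabola to the correlation values around the peak. This is a generic version of the technique for illustration; the thesis's exact estimator and its variance expression may differ.

        import numpy as np

        def tdoa_subsample(x, y, fs):
            # Time-difference of arrival of y relative to x, in seconds,
            # refined to subsample precision by parabolic interpolation.
            corr = np.correlate(y, x, mode="full")   # lags start at -(len(x)-1)
            k = int(np.argmax(corr))                 # integer-lag peak
            if 0 < k < len(corr) - 1:
                c0, c1, c2 = corr[k - 1], corr[k], corr[k + 1]
                k = k + 0.5 * (c0 - c2) / (c0 - 2.0 * c1 + c2)
            return (k - (len(x) - 1)) / fs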
