1,456 research outputs found

    Multi-modal calibration of sensors on board the ATLASCAR2

    Get PDF
    Complex robot systems have several sensors of different modalities. To estimate the poses of these multi-modal sensors, some works propose sequential pairwise calibrations, which have inherent problems. ATLASCAR2 is an intelligent vehicle with several sensors of different modalities, and the main goal of this work is to calibrate all of the sensors on board the ATLASCAR2. A ROS-based, interactive, semi-automatic approach was developed that works for any robot system, even the most complex ones. After identifying which geometric transformations in the robot description should be estimated and collecting the detection data from each sensor, a least-squares optimization refines the position and orientation of each of the robot's sensors. Results show that the simultaneous calibration of the four sensors is as good as the pairwise procedures performed with standard calibration tools, such as those of OpenCV. The proposed solution thus offers a novel and advantageous methodology, since it fits any complex robot system and calibrates all of its sensors at the same time. (Master's in Mechanical Engineering.)
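
    A minimal sketch of the simultaneous least-squares refinement described above, assuming a toy setup in Python with NumPy/SciPy: four sensors observe the same 3-D calibration target, all per-sensor pose parameters are stacked into one vector, and all residuals are minimized in a single solve. The residual model, names, and synthetic data are illustrative assumptions, not the thesis code.

```python
# Toy simultaneous multi-sensor pose calibration via least squares.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def unpack(params, n_sensors):
    """Split the flat parameter vector into per-sensor (R, t) poses."""
    poses = []
    for i in range(n_sensors):
        p = params[6 * i: 6 * (i + 1)]
        R = Rotation.from_euler("xyz", p[:3]).as_matrix()
        poses.append((R, p[3:]))
    return poses

def residuals(params, detections, target_pts):
    """Stack, over all sensors, the mismatch between the calibration
    target points mapped into each sensor frame and that sensor's
    detections of the same points."""
    res = []
    for (R, t), det in zip(unpack(params, len(detections)), detections):
        predicted = (R @ target_pts.T).T + t   # target -> sensor frame
        res.append((predicted - det).ravel())
    return np.concatenate(res)

# Synthetic data: 4 sensors observing one 3-D calibration target.
rng = np.random.default_rng(0)
target_pts = rng.uniform(-1, 1, (20, 3))             # points on the target
true_poses = rng.uniform(-0.1, 0.1, (4, 6))          # small true pose offsets
detections = []
for p in true_poses:
    R = Rotation.from_euler("xyz", p[:3]).as_matrix()
    detections.append((R @ target_pts.T).T + p[3:] + rng.normal(0, 1e-3, (20, 3)))

x0 = np.zeros(4 * 6)                                 # initial guess: identity poses
sol = least_squares(residuals, x0, args=(detections, target_pts))
print("refined sensor poses:\n", sol.x.reshape(4, 6).round(3))
```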

    Continuous-Time Fixed-Lag Smoothing for LiDAR-Inertial-Camera SLAM

    Full text link
    Localization and mapping with heterogeneous multi-sensor fusion have become prevalent in recent years. To adequately fuse multi-modal sensor measurements received at different time instants and different frequencies, we estimate the continuous-time trajectory by fixed-lag smoothing within a factor-graph optimization framework. With the continuous-time formulation, we can query poses at any time instant corresponding to a sensor measurement. To bound the computational complexity of the continuous-time fixed-lag smoother, we maintain temporal and keyframe sliding windows of constant size and probabilistically marginalize out control points of the trajectory and other states, which preserves prior information for future sliding-window optimization. Based on continuous-time fixed-lag smoothing, we design tightly-coupled multi-modal SLAM algorithms for a variety of sensor combinations, such as LiDAR-inertial and LiDAR-inertial-camera SLAM systems, in which online time-offset calibration is also naturally supported. More importantly, benefiting from the marginalization and our derived analytical Jacobians for optimization, the proposed continuous-time SLAM systems achieve real-time performance despite the high complexity of the continuous-time formulation. The proposed multi-modal SLAM systems have been extensively evaluated on three public datasets and on self-collected datasets. The results demonstrate that the proposed continuous-time SLAM systems achieve high-accuracy pose estimation and outperform existing state-of-the-art methods. To benefit the research community, we will open-source our code at https://github.com/APRIL-ZJU/clic
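
    The benefit claimed above, querying poses at the exact timestamps of asynchronous measurements, can be illustrated with a toy continuous-time trajectory. This position-only Python/SciPy sketch fits a cubic B-spline through control points and samples it at arbitrary sensor times; the actual systems estimate full 6-DoF trajectories with inertial factors, so everything here is an illustrative assumption.

```python
# Toy continuous-time trajectory: query positions at any timestamp.
import numpy as np
from scipy.interpolate import make_interp_spline

t_ctrl = np.linspace(0.0, 10.0, 11)                 # control-point times (s)
p_ctrl = np.stack([np.sin(t_ctrl), np.cos(t_ctrl), 0.1 * t_ctrl], axis=1)

traj = make_interp_spline(t_ctrl, p_ctrl, k=3)      # cubic B-spline in R^3

# A LiDAR point at 3.217 s and a camera frame at 3.250 s can both be
# associated with a pose sampled from the same continuous trajectory.
for t_meas in (3.217, 3.250):
    print(f"position at t={t_meas:.3f}s:", traj(t_meas).round(4))

# Velocity comes for free via the spline derivative (useful for
# motion compensation of LiDAR scans).
print("velocity at t=3.217s:", traj.derivative()(3.217).round(4))
```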

    Sensor-based real-time control of robots

    Get PDF

    Application of augmented reality and robotic technology in broadcasting: A survey

    Get PDF
    As an innovative technique, Augmented Reality (AR) has been gradually deployed in the broadcast, videography, and cinematography industries. Virtual graphics generated by AR are dynamic and are overlaid on surfaces in the environment, so that the original appearance can be greatly enhanced in comparison with traditional broadcasting. In addition, AR enables broadcasters to interact with augmented virtual 3D models in a broadcasting scene in order to enhance the performance of broadcasting. Recently, advanced robotic technologies have been deployed in camera shooting systems to create robotic cameramen, so that the performance of AR broadcasting can be further improved; this development is highlighted in the paper.
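
    The basic overlay operation behind such AR graphics can be sketched with OpenCV: warp a virtual graphic onto a planar region of the camera frame via a homography, then blend. The surface corners below are hard-coded stand-ins for what a tracking system would provide; the whole example is an illustrative assumption, not any surveyed system's pipeline.

```python
# Toy AR overlay: homography warp of a graphic onto a camera frame.
import numpy as np
import cv2

frame = np.full((480, 640, 3), 60, np.uint8)        # stand-in camera frame
graphic = np.zeros((200, 300, 3), np.uint8)
cv2.putText(graphic, "ON AIR", (40, 120), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 255), 4)

src = np.float32([[0, 0], [300, 0], [300, 200], [0, 200]])
dst = np.float32([[200, 100], [450, 130], [430, 300], [180, 280]])  # tracked corners
H = cv2.getPerspectiveTransform(src, dst)

warped = cv2.warpPerspective(graphic, H, (640, 480))
mask = warped.any(axis=2)                            # where the graphic lands
frame[mask] = cv2.addWeighted(frame, 0.3, warped, 0.7, 0)[mask]
cv2.imwrite("ar_overlay.png", frame)
```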

    The Coverage Problem in Video-Based Wireless Sensor Networks: A Survey

    Get PDF
    Wireless sensor networks typically consist of a great number of tiny, low-cost electronic devices with limited sensing and computing capabilities that communicate cooperatively to collect some kind of information from an area of interest. When the wireless nodes of such networks are equipped with a low-power camera, visual data can be retrieved, enabling a new set of novel applications. The nature of video-based wireless sensor networks demands new algorithms and solutions, since traditional wireless sensor network approaches are not feasible, or even efficient, for that specialized communication scenario. The coverage problem is a crucial issue for wireless sensor networks, requiring specific solutions when video-based sensors are employed. This paper surveys the state of the art on this particular issue, covering strategies, algorithms, and general computational solutions. Open research areas are also discussed, pointing to promising directions for investigating coverage in video-based wireless sensor networks.
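
    What makes coverage different for video-based nodes is the camera's directional field of view: the classic omnidirectional disc model is replaced by a sensing sector. A minimal Python sketch of the resulting coverage test, with illustrative names and parameters:

```python
# A point is covered only if it lies within the camera's sensing
# range AND inside its angular field of view (a sector, not a disc).
import math

def covers(cam_x, cam_y, heading, fov, max_range, px, py):
    """True if point (px, py) lies in the camera's sensing sector."""
    dx, dy = px - cam_x, py - cam_y
    if math.hypot(dx, dy) > max_range:               # out of sensing range
        return False
    off = (math.atan2(dy, dx) - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(off) <= fov / 2                       # inside the angular FoV

# Camera at the origin, facing +x, 60-degree FoV, 10 m range.
print(covers(0, 0, 0.0, math.radians(60), 10, 5, 1))   # True: in the sector
print(covers(0, 0, 0.0, math.radians(60), 10, 1, 5))   # False: outside the FoV
```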

    Map building fusing acoustic and visual information using autonomous underwater vehicles

    Get PDF
    Author Posting. © The Author(s), 2012. This is the author's version of the work. It is posted here by permission of John Wiley & Sons for personal use, not for redistribution. The definitive version was published in Journal of Field Robotics 30 (2013): 763–783, doi:10.1002/rob.21473.
    We present a system for automatically building 3-D maps of underwater terrain by fusing visual data from a single camera with range data from multibeam sonar. The six-degree-of-freedom location of the camera relative to the navigation frame is derived as part of the mapping process, as are the attitude offsets of the multibeam head and the on-board velocity sensor. The system uses pose graph optimization and the square root information smoothing and mapping framework to simultaneously solve for the robot's trajectory, the map, and the camera location in the robot's frame. Matched visual features are treated within the pose graph as images of 3-D landmarks, while multibeam bathymetry submap matches are used to impose relative pose constraints linking robot poses from distinct tracklines of the dive trajectory. The navigation and mapping system presented works under a variety of deployment scenarios, on robots with diverse sensor suites. Results of using the system to map the structure and appearance of a section of coral reef are presented using data acquired by the Seabed autonomous underwater vehicle.
    The work described herein was funded by the National Science Foundation CenSSIS ERC under grant number EEC-9986821 and by the National Oceanic and Atmospheric Administration under grant number NA09OAR4320129.
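
    The pose-graph formulation mentioned above reduces, in a toy setting, to variables for the robot poses, odometry constraints between consecutive poses, and a relative constraint linking distant poses (standing in here for a bathymetry submap match). This 2-D, translation-only Python/SciPy sketch is illustrative only, not the authors' square-root-information solver.

```python
# Toy pose graph: odometry edges plus one submap-match style edge.
import numpy as np
from scipy.optimize import least_squares

odom = [(0, 1, [1.0, 0.0]), (1, 2, [1.0, 0.1]), (2, 3, [0.9, -0.1])]
loop = [(0, 3, [2.95, 0.0])]                        # relative constraint 0 -> 3
n = 4                                                # number of robot poses

def residuals(x):
    poses = x.reshape(n, 2)
    res = [poses[0]]                                 # anchor first pose at origin
    for i, j, z in odom + loop:
        res.append(poses[j] - poses[i] - np.asarray(z))
    return np.concatenate(res)

sol = least_squares(residuals, np.zeros(n * 2))
print(sol.x.reshape(n, 2).round(3))                  # smoothed trajectory
```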

    Calibration and assessment of electrochemical air quality sensors by co-location with regulatory-grade instruments

    Get PDF
    The use of low-cost air quality sensors for air pollution research has outpaced our understanding of their capabilities and limitations under real-world conditions, and there is thus a critical need for understanding and optimizing the performance of such sensors in the field. Here we describe the deployment, calibration, and evaluation of electrochemical sensors on the island of Hawai'i, which is an ideal test bed for characterizing such sensors due to its large and variable sulfur dioxide (SO2) levels and lack of other co-pollutants. Nine custom-built SO2 sensors were co-located with two Hawaii Department of Health Air Quality stations over the course of 5 months, enabling comparison of sensor output with regulatory-grade instruments under a range of realistic environmental conditions. Calibration using a nonparametric algorithm (k nearest neighbors) was found to have excellent performance (r² > 0.997) across a wide dynamic range in SO2 (up to more than 2 ppm). However, since nonparametric algorithms generally cannot extrapolate to conditions beyond those in the training set, we introduce a new hybrid linear-nonparametric algorithm, enabling accurate measurements even when pollutant levels are higher than those encountered during calibration. We find no significant change in instrument sensitivity toward SO2 after 18 weeks and demonstrate that calibration accuracy remains high when a sensor is calibrated at one location and then moved to another. The performance of electrochemical SO2 sensors is also strong at lower SO2 mixing ratios (< 25 ppb), for which the sensors exhibit an error of less than 2.5 ppb. While some specific results of this study (calibration accuracy, performance of the various algorithms, etc.) may differ for measurements of other pollutant species in other areas (e.g., polluted urban regions), the calibration and validation approaches described here should be widely applicable to a range of pollutants, sensors, and environments.
    United States Environmental Protection Agency (Grant RD-83618301).
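
    The hybrid linear-nonparametric idea can be sketched as follows: train a k-nearest-neighbors regressor on co-location data and fall back to a linear fit whenever the raw signal leaves the range seen during training, where kNN cannot extrapolate. The synthetic data, features, and threshold below are illustrative assumptions, not the study's actual pipeline.

```python
# Hybrid kNN + linear calibration sketch on synthetic co-location data.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
raw = rng.uniform(0, 500, (2000, 1))                 # raw sensor output (mV)
temp = rng.uniform(15, 30, (2000, 1))                # co-measured temperature
X = np.hstack([raw, temp])
y = (0.9 * raw + 2.0 * (temp - 20) + rng.normal(0, 3, (2000, 1))).ravel()

knn = KNeighborsRegressor(n_neighbors=5).fit(X, y)   # nonparametric calibration
lin = LinearRegression().fit(X, y)                   # extrapolation fallback
raw_max = raw.max()

def hybrid_predict(X_new):
    """kNN inside the training envelope, linear model outside it."""
    out = knn.predict(X_new)
    mask = X_new[:, 0] > raw_max                     # beyond the training range
    if mask.any():
        out[mask] = lin.predict(X_new[mask])
    return out

X_test = np.array([[250.0, 22.0], [800.0, 22.0]])    # 2nd point needs extrapolation
print(hybrid_predict(X_test).round(1))
```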

    Distributed Robotic Vision for Calibration, Localisation, and Mapping

    Get PDF
    This dissertation explores distributed algorithms for calibration, localisation, and mapping in the context of a multi-robot network equipped with cameras and on-board processing, comparing against centralised alternatives in which all data are transmitted to a single external node where processing occurs. With the rise of large-scale camera networks, and as low-cost on-board processing becomes increasingly feasible in robotic networks, distributed algorithms are becoming important for robustness and scalability. Standard solutions to multi-camera computer vision require the data from all nodes to be processed at a central node, which represents a significant single point of failure and incurs infeasible communication costs. Distributed solutions avoid these issues by spreading the work over the entire network, operating only on local computations and direct communication with nearby neighbours. This research considers a framework for a distributed robotic vision platform for calibration, localisation, and mapping tasks in which three main stages are identified: an initialisation stage, where calibration and localisation are performed in a distributed manner; a local tracking stage, where visual odometry is performed without inter-robot communication; and a global mapping stage, where global alignment and optimisation strategies are applied. Within this framework, the research investigates how algorithms can be developed to produce fundamentally distributed solutions that minimise computational complexity while maintaining excellent performance and that operate effectively in the long term. Three primary objectives are therefore pursued, aligning with these three stages.
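
    The "local computation plus neighbour communication" pattern such distributed algorithms rely on can be illustrated with average consensus: each robot repeatedly mixes its estimate with those of its neighbours, and the network converges to the global mean with no central node. The graph, weights, and values in this Python sketch are illustrative assumptions.

```python
# Distributed average consensus with Metropolis weights (guarantees
# convergence to the true average on a connected graph).
import numpy as np

neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a chain of 4 robots
deg = {i: len(n) for i, n in neighbours.items()}
x = np.array([1.0, 5.0, 3.0, 7.0])                   # each robot's local estimate

def step(x):
    new = x.copy()
    for i, nbrs in neighbours.items():
        for j in nbrs:
            w = 1.0 / (1 + max(deg[i], deg[j]))      # Metropolis weight
            new[i] += w * (x[j] - x[i])              # mix with neighbour j
    return new

for _ in range(200):                                 # iterate until consensus
    x = step(x)
print(x.round(3))                                    # all entries near the mean, 4.0
```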