
    Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities

    Visual-Inertial Odometry (VIO) algorithms typically rely on a point cloud representation of the scene that does not model the topology of the environment. A 3D mesh instead offers a richer, yet lightweight, model. Nevertheless, building a 3D mesh out of the sparse and noisy 3D landmarks triangulated by a VIO algorithm often results in a mesh that does not fit the real scene. In order to regularize the mesh, previous approaches decouple state estimation from the 3D mesh regularization step, and either limit the 3D mesh to the current frame or let the mesh grow indefinitely. We propose instead to tightly couple mesh regularization and state estimation by detecting and enforcing structural regularities in a novel factor-graph formulation. We also propose to incrementally build the mesh by restricting its extent to the time-horizon of the VIO optimization; the resulting 3D mesh covers a larger portion of the scene than a per-frame approach while its memory usage and computational complexity remain bounded. We show that our approach successfully regularizes the mesh, while improving localization accuracy, when structural regularities are present, and remains operational in scenes without regularities. Comment: 7 pages, 5 figures, ICRA accepted.
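    The bounded time-horizon idea above can be illustrated with a toy structure (a minimal sketch, not the paper's implementation): mesh vertices tied to frames outside a fixed horizon are dropped, so memory stays bounded as the trajectory grows.

```python
from collections import deque

class SlidingWindowMesh:
    """Toy sketch: keep mesh vertices only for frames inside a fixed
    time horizon, so memory stays bounded (hypothetical structure,
    not the paper's actual factor-graph formulation)."""

    def __init__(self, horizon=3):
        self.horizon = horizon
        self.frames = deque()  # (frame_id, [landmark ids seen in frame])

    def add_frame(self, frame_id, landmark_ids):
        self.frames.append((frame_id, landmark_ids))
        while len(self.frames) > self.horizon:
            self.frames.popleft()  # marginalize out-of-horizon vertices

    def active_landmarks(self):
        return {l for _, ids in self.frames for l in ids}

m = SlidingWindowMesh(horizon=2)
m.add_frame(0, [1, 2])
m.add_frame(1, [2, 3])
m.add_frame(2, [3, 4])
print(sorted(m.active_landmarks()))  # frame 0 dropped -> [2, 3, 4]
```

    The real system would also re-triangulate the surviving landmarks and enforce the detected structural regularities inside the optimization; this sketch only captures the bounded-extent bookkeeping.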

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, or high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
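    The event-generation principle described above can be sketched for a single pixel (an illustrative model with an assumed contrast threshold, not any specific sensor's circuit): an event fires each time the log-intensity moves by the threshold from the last reference level.

```python
def events_from_log_intensity(ts, log_I, x, y, C=0.2):
    """Toy event-generation model: emit (t, x, y, polarity) each time
    the log-intensity at pixel (x, y) changes by the contrast
    threshold C since the last event. C and the sampling are
    assumptions for illustration."""
    events = []
    ref = log_I[0]  # reference level at the last event
    for t, v in zip(ts[1:], log_I[1:]):
        while v - ref >= C:      # brightness increased -> ON event
            ref += C
            events.append((t, x, y, +1))
        while ref - v >= C:      # brightness decreased -> OFF event
            ref -= C
            events.append((t, x, y, -1))
    return events

# A pixel whose brightness ramps up, then back down
evs = events_from_log_intensity([0.0, 1.0, 2.0], [0.0, 0.45, 0.0], 5, 7)
print(evs)  # two ON events, then two OFF events
```

    Note how the output is sparse and asynchronous: a static pixel produces no data at all, which is the source of the low-power, low-latency properties the survey discusses.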

    A Low Cost UWB Based Solution for Direct Georeferencing UAV Photogrammetry

    Thanks to their flexibility and availability at reduced costs, Unmanned Aerial Vehicles (UAVs) have recently been used in a wide range of applications and conditions. Among these, they can play an important role in monitoring critical events (e.g., disaster monitoring), when the presence of humans close to the scene must be avoided for safety reasons, as well as in precision farming and surveying. Despite the very large number of possible applications, their usage is mainly limited by the availability of the Global Navigation Satellite System (GNSS) in the considered environment: indeed, GNSS is of fundamental importance in order to reduce the positioning error caused by the drift of (low-cost) Micro-Electro-Mechanical Systems (MEMS) internal sensors. In order to make the usage of UAVs possible even in critical environments (when GNSS is not available or not reliable, e.g., close to mountains, high buildings, or in city centers), this paper considers the use of a low-cost Ultra Wide-Band (UWB) system as the positioning method. Furthermore, assuming the use of a calibrated camera, UWB positioning is exploited to achieve metric reconstruction in a local coordinate system. Once the georeferenced positions of at least three points (e.g., the positions of three UWB devices) are known, the reconstruction can be georeferenced as well. The proposed approach is validated on a specific case study, the reconstruction of the façade of a university building. The average error on 90 check points distributed over the building façade, obtained by georeferencing by means of the georeferenced positions of four UWB devices at fixed positions, is 0.29 m. For comparison, the average error obtained by using four ground control points is 0.18 m.
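    The georeferencing step above amounts to fitting a similarity transform between the local reconstruction frame and the global frame from a few known control points. A minimal 2D analogue can be written with complex numbers (the paper works in 3D with at least three points; this two-point planar version is only an illustration):

```python
# Hedged sketch: 2D similarity (Helmert-style) transform fitted from two
# control points, e.g. two georeferenced UWB anchors, mapping a local
# metric reconstruction into a global frame. Pure Python via complex
# arithmetic: one complex factor encodes scale + rotation.
def fit_similarity_2d(local, world):
    a, b = complex(*local[0]), complex(*local[1])
    A, B = complex(*world[0]), complex(*world[1])
    s = (B - A) / (b - a)          # combined scale and rotation
    t = A - s * a                  # translation
    return lambda p: s * complex(*p) + t

# Two anchors known in both frames; transform a third local point
georef = fit_similarity_2d([(0, 0), (1, 0)], [(10, 10), (10, 12)])
q = georef((0, 1))
print(round(q.real, 6), round(q.imag, 6))  # -> 8.0 10.0
```

    With more than the minimum number of control points (as with the four UWB devices in the case study), the transform would instead be estimated by least squares over all correspondences.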

    Visual 3-D SLAM from UAVs

    The aim of this paper is to present, test, and discuss the implementation of Visual SLAM techniques on images taken from Unmanned Aerial Vehicles (UAVs) flying outdoors in partially structured environments. Every stage of the process is discussed in order to obtain more accurate localization and mapping from UAV flights. First, the issues related to the visual features of objects in the scene, their distance to the UAV, and the image acquisition system and its calibration are evaluated for improving the whole process. Other important issues concern the image processing techniques, such as interest point detection, the matching procedure, and the scale factor. The whole system has been tested using the COLIBRI mini UAV in partially structured environments. The localization results, tested against the GPS information of the flights, show that Visual SLAM delivers reliable localization and mapping, making it suitable for some outdoor applications when flying UAVs.
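    The matching procedure mentioned above is typically a brute-force nearest-neighbour search over feature descriptors with an ambiguity (ratio) test. A toy sketch with made-up 2D descriptors (not the paper's actual pipeline or descriptor type):

```python
# Illustrative sketch of descriptor matching with a ratio test: accept a
# match only if the best candidate is clearly closer than the runner-up.
def match(desc_a, desc_b, ratio=0.8):
    matches = []
    for i, da in enumerate(desc_a):
        # Euclidean distance from da to every descriptor in desc_b
        dists = sorted(
            (sum((x - y) ** 2 for x, y in zip(da, db)) ** 0.5, j)
            for j, db in enumerate(desc_b)
        )
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:   # keep unambiguous matches only
            matches.append((i, best[1]))
    return matches

a = [(0.0, 1.0), (5.0, 5.0)]              # toy descriptors, image 1
b = [(0.1, 1.0), (9.0, 9.0), (5.0, 5.1)]  # toy descriptors, image 2
print(match(a, b))  # -> [(0, 0), (1, 2)]
```

    Real systems use high-dimensional descriptors and approximate nearest-neighbour search, but the ratio-test logic is the same.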

    3D LiDAR Aided GNSS NLOS Mitigation for Reliable GNSS-RTK Positioning in Urban Canyons

    GNSS and LiDAR odometry are complementary, as they provide absolute and relative positioning, respectively. Their integration in a loosely-coupled manner is straightforward but is challenged in urban canyons by GNSS signal reflections. Recently proposed 3D LiDAR-aided (3DLA) GNSS methods employ the point cloud map to identify non-line-of-sight (NLOS) reception of GNSS signals. This helps the GNSS receiver obtain improved urban positioning, but not at the sub-meter level. GNSS real-time kinematics (RTK) uses carrier phase measurements to obtain decimeter-level positioning. In urban areas, GNSS RTK is not only challenged by multipath and NLOS-affected measurements but also suffers from signal blockage by buildings. The latter imposes a challenge in resolving the ambiguity within the carrier phase measurements; in other words, the model observability of the ambiguity resolution (AR) is greatly decreased. This paper proposes to generate virtual satellite (VS) measurements using selected LiDAR landmarks from accumulated 3D point cloud maps (PCM). These LiDAR-PCM-made VS measurements are tightly coupled with the GNSS pseudorange and carrier phase measurements. Thus, the VS measurements can provide complementary constraints, namely low-elevation-angle measurements in the across-street direction. The implementation uses factor graph optimization to solve an accurate float solution of the ambiguity before it is fed into LAMBDA. The effectiveness of the proposed method has been validated on our recently open-sourced challenging dataset, UrbanNav. The results show that the fix rate of the proposed 3DLA GNSS RTK is about 30%, while conventional GNSS-RTK only achieves about 14%. In addition, the proposed method achieves sub-meter positioning accuracy in most of the data collected in challenging urban areas.
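    At the core of any such estimator is a nonlinear least-squares solve over range-type measurements; virtual satellites simply contribute extra rows with better geometry. A minimal 2D Gauss-Newton pseudorange solver (a toy sketch with noise-free ranges and no clock bias, not the paper's factor graph):

```python
import math

# Toy 2D range-based positioning by Gauss-Newton. Each measurement
# contributes one row to the normal equations; a low-elevation "virtual
# satellite" would just be one more (sx, sy, rho) entry.
def solve_position(sats, ranges, x0=(0.0, 0.0), iters=10):
    x, y = x0
    for _ in range(iters):
        h11 = h12 = h22 = g1 = g2 = 0.0  # 2x2 normal equations by hand
        for (sx, sy), rho in zip(sats, ranges):
            d = math.hypot(x - sx, y - sy)
            ux, uy = (x - sx) / d, (y - sy) / d  # unit line-of-sight
            r = rho - d                          # range residual
            h11 += ux * ux; h12 += ux * uy; h22 += uy * uy
            g1 += ux * r;   g2 += uy * r
        det = h11 * h22 - h12 * h12
        dx = (h22 * g1 - h12 * g2) / det
        dy = (h11 * g2 - h12 * g1) / det
        x, y = x + dx, y + dy
    return x, y

truth = (3.0, 4.0)
sats = [(0.0, 100.0), (50.0, 90.0), (100.0, 20.0)]
ranges = [math.hypot(truth[0] - sx, truth[1] - sy) for sx, sy in sats]
x, y = solve_position(sats, ranges)
print(round(x, 3), round(y, 3))  # recovers (3.0, 4.0)
```

    A real GNSS-RTK solver additionally estimates receiver clock bias and the integer carrier-phase ambiguities; the across-street VS rows improve the conditioning of exactly this kind of normal-equation system.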

    Microdrone-Based Indoor Mapping with Graph SLAM

    Unmanned aerial vehicles offer a safe and fast approach to the production of three-dimensional spatial data on the surrounding space. In this article, we present a low-cost SLAM-based drone for creating exploration maps of building interiors. The focus is on emergency response mapping in inaccessible or potentially dangerous places. For this purpose, we used a quadcopter microdrone equipped with six laser rangefinders (1D scanners) and an optical sensor for mapping and positioning. The employed SLAM is designed to map indoor spaces with planar structures through graph optimization. It performs loop-closure detection and correction to recognize previously visited places and to correct the accumulated drift over time. The proposed methodology was validated in several indoor environments. We investigated the performance of our drone against a multilayer LiDAR-carrying macrodrone, a vision-aided navigation helmet, and ground truth obtained with a terrestrial laser scanner. The experimental results indicate that our SLAM system is capable of creating quality exploration maps of small indoor spaces and handling the loop-closure problem. The accumulated drift without loop closure was on average 1.1% (0.35 m) over a 31-m-long acquisition trajectory. Moreover, the comparison results demonstrated that our flying microdrone provided a performance comparable to the multilayer LiDAR-based macrodrone, given the low deviation between the point clouds built by both drones. Approximately 85% of the cloud-to-cloud distances were less than 10 cm.
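    The loop-closure correction described above can be shown in its simplest form on a 1D pose chain (a deliberately reduced sketch, not the article's graph optimizer): odometry accumulates drift, and the loop-closure constraint that the end pose equals the start pose lets us distribute that drift back along the trajectory.

```python
# Minimal sketch of loop-closure drift correction on a 1D pose chain.
# Real graph SLAM solves a least-squares problem over all constraints;
# for a single loop with uniform uncertainty, the solution reduces to
# spreading the accumulated drift evenly over the poses.
def correct_drift(odom):
    poses = [0.0]
    for d in odom:                       # integrate odometry increments
        poses.append(poses[-1] + d)
    drift = poses[-1] - poses[0]         # loop closure: should be zero
    n = len(poses) - 1
    return [p - drift * i / n for i, p in enumerate(poses)]

odom = [1.02, 0.98, 1.05, -3.01]         # noisy loop back to the start
corrected = correct_drift(odom)
print(round(corrected[-1], 6))           # end pose snapped back to 0.0
```

    In 2D/3D graph SLAM the same idea is expressed as a nonlinear least-squares problem over pose-graph edges, but the effect of a loop closure, pulling accumulated drift back toward zero, is identical.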

    Design of a new approach to register biomechanical gait data, when combining lower limb powered exoskeletons controlled by neural machine interfaces and transcutaneous spinal current stimulation

    To analyze the effect of robot-aided gait rehabilitation controlled with brain-machine interfaces, it is necessary to ensure a strategy for assessing gait biomechanics that records data undisturbed by the rehabilitation technologies. To this end, a protocol to measure the kinematics of the lower extremities in the three planes, based on Inertial Measurement Units (IMUs), is developed. To evaluate the IMU system's accuracy and reliability, it is validated against a high-precision reference device, an optoelectronic system. The validation of the protocol is performed on one healthy subject in two steps: 1) testing four different configurations of the IMUs to identify the optimal gait data registration model, including the number and location of sensors, since these affect the system's output, and 2) validating the IMUs against Vicon through synchronized walking recordings (Condition 1) and exoskeleton-assisted walking (Condition 2). The within-day multiple correlation coefficient (CMCw) from Kadaba and its reformulation, the inter-protocol CMC (CMCp), are used for Part 1 and Part 2, respectively, to assess the waveform similarity of each lower limb joint angle while removing the between-gait-cycle variability. In addition, other parameters are studied to assess the technological error and the differences between the biomechanical models, such as Pearson's correlation, range of motion, offset, and the Root Mean Square Error. For Part 1, it is concluded that the optimal configuration for the rest of the project is Model 2, showing good CMCw values for every joint angle (CMCw ≥ 0.8). During the walking test (Part 2, Condition 1), the CMCp shows that the gait kinematics measured by both systems for the right limb are equivalent, demonstrating the IMUs' accuracy, for hip and knee flexion/extension (CMCp = 1) and for knee adduction/abduction (CMCp = 0.91).
    For exoskeleton-assisted walking (Part 2, Condition 2), after adjusting the position of the IMUs located at the ankles, the gait kinematics for the right limb are equivalent for every joint in the sagittal plane (CMCp ≥ 0.9), for the knee and the ankle in the frontal plane (CMCp ≥ 0.95), and for the hip in the transversal plane (CMCp = 0.99).
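    The secondary agreement metrics named in the abstract (Pearson's correlation, range of motion, offset, RMSE) between two joint-angle waveforms can be computed as follows; the sinusoidal "Vicon" and "IMU" curves are invented for illustration, not study data, and the CMC itself is omitted.

```python
import math

# Hedged sketch of waveform-agreement metrics between two joint-angle
# curves sampled over one gait cycle (toy data, not the study's).
def agreement(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return {
        "pearson_r": cov / (sa * sb),       # waveform shape similarity
        "rom_a": max(a) - min(a),           # range of motion, system A
        "rom_b": max(b) - min(b),           # range of motion, system B
        "offset": ma - mb,                  # mean difference (bias)
        "rmse": math.sqrt(sum((x - y) ** 2
                              for x, y in zip(a, b)) / n),
    }

t = [i / 50 for i in range(51)]                         # one gait cycle
vicon = [30 * math.sin(2 * math.pi * x) for x in t]     # "reference"
imu = [30 * math.sin(2 * math.pi * x) + 2 for x in t]   # "IMU" + 2° bias
m = agreement(vicon, imu)
print(round(m["pearson_r"], 3), round(m["offset"], 1), round(m["rmse"], 1))
```

    Note how a pure offset leaves Pearson's r at 1 while showing up fully in the offset and RMSE, which is why the study reports these parameters alongside the CMC rather than relying on correlation alone.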