5 research outputs found

    Range Information Characterization of the Hokuyo UST-20LX LIDAR Sensor

    This paper presents a study of the measurements produced by the Hokuyo UST-20LX laser rangefinder, compiled into an overall characterization of the LIDAR sensor in indoor environments. The range measurements, beam divergence, angular resolution, error introduced by common painted and wooden surfaces, and error due to target surface orientation are analyzed. It is shown that taking a statistical average of repeated sensor measurements yields a more accurate range estimate, and that the major source of error for the Hokuyo UST-20LX is an effect referred to here as "mixed pixels". Additional error sources are the target surface material and the range to the target. The purpose of this paper is twofold: (1) to describe a series of tests that can be performed to characterize various aspects of a LIDAR system from a user perspective, and (2) to present a detailed characterization of the commonly used Hokuyo UST-20LX LIDAR sensor.
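    As a minimal illustration of the averaging approach the abstract mentions (a generic sketch, not the authors' code), the following Python snippet combines repeated range readings into a single estimate and reports their spread; the sample values, and the assumption that dropouts have already been filtered out, are ours.

import statistics

def averaged_range(samples):
    """Combine repeated range readings (in metres) of a stationary target
    into a single estimate plus a dispersion measure.

    `samples` is assumed to already exclude obvious dropouts
    (e.g. zero or max-range returns).
    """
    mean_range = statistics.mean(samples)
    std_dev = statistics.pstdev(samples)
    return mean_range, std_dev

# Hypothetical readings of a wall at roughly 5 m.
raw = [5.012, 4.998, 5.021, 5.005, 4.993, 5.017]
estimate, spread = averaged_range(raw)
print(f"range = {estimate:.3f} m (std dev = {spread:.3f} m)")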

    Sensors and Actuators Communication and Synchronization for a Mobile Manipulator

    The hardware architecture of a modern mobile manipulator combines mechanical, electrical, software, and control units. Integrating a large number of mechanical and electrical components into one system increases the number of parameters to control, so controlling such systems to achieve the best performance is a critical issue. A key aspect for this purpose is the integration of sensors and actuators to provide low-level control of the mobile manipulator, and this thesis provides such low-level control for the iMoro mobile manipulator. Operating such a mobile manipulator requires several intelligent components to be installed and to cooperate at the same time. Because of the application iMoro was built for, the platform is equipped with 8 actuators and a number of sensors; control of such a system is therefore complicated and demands accurate synchronization and communication among the four legs. Since the iMoro robot is equipped with an IMU, the IMU is calibrated and the process of obtaining meaningful data from the raw readings is explained by modeling the IMU. In addition, an equation is developed for robust calibration of the IMU and camera.
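    As a rough sketch of what turning raw IMU readings into meaningful data can look like (a simple per-axis bias-and-scale model assumed by us, not the calibration developed in the thesis), consider:

import numpy as np

# Hypothetical calibration parameters, e.g. from a static multi-position
# calibration routine: per-axis bias and scale (illustrative values only).
ACC_BIAS = np.array([0.12, -0.05, 0.30])     # m/s^2
ACC_SCALE = np.array([1.002, 0.998, 1.005])  # unitless

def calibrate_accel(raw_sample):
    """Apply a simple bias/scale model: calibrated = (raw - bias) * scale."""
    return (np.asarray(raw_sample) - ACC_BIAS) * ACC_SCALE

# One raw accelerometer sample with gravity roughly along the z axis.
print(calibrate_accel([0.10, -0.02, 10.05]))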

    Laser-Based Detection and Tracking of Moving Obstacles to Improve Perception of Unmanned Ground Vehicles

    The objective of this thesis is to develop a system that improves the perception stage of heterogeneous unmanned ground vehicles (UGVs), thereby achieving navigation that is robust in terms of safety and energy consumption in different real environments, both indoor and outdoor. Perception must handle static and dynamic obstacles using heterogeneous sensors, such as odometry, a laser rangefinder (LIDAR), an inertial measurement unit (IMU), and a global positioning system (GPS), to obtain environment information with the highest possible accuracy and so improve the planning and obstacle-avoidance stages. To this end, a dynamic obstacle mapping stage (DOMap) is proposed that holds information about both static and dynamic obstacles. The proposal is based on an extension of the Bayesian Occupancy Filter (BOF) that includes non-discretized velocities. Velocities are obtained by applying optical flow over a grid of discretized LIDAR measurements. In addition, occlusions between obstacles are managed and a multi-hypothesis tracking stage is added, improving the robustness of the proposal (iDOMap). The proposal has been tested in simulated and real environments with different robotic platforms, including commercial platforms and the platform (PROPINA) developed in this thesis to improve collaboration between teams of humans and robots within the ABSYNTHE project. Finally, methods are proposed to calibrate the LIDAR position and to improve odometry with an IMU.
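    The proposal extends the Bayesian Occupancy Filter; as a loose, single-cell illustration of the underlying occupancy-filtering idea (not the DOMap/iDOMap implementation, and with illustrative inverse-sensor-model probabilities chosen by us), a standard log-odds occupancy update looks like this:

import math

# Assumed inverse sensor model probabilities (illustrative values).
P_HIT, P_MISS = 0.7, 0.4

def logit(p):
    return math.log(p / (1.0 - p))

def update_cell(log_odds, hit):
    """Add the measurement's log-odds evidence to the cell's belief."""
    return log_odds + logit(P_HIT if hit else P_MISS)

def probability(log_odds):
    """Convert a log-odds belief back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

belief = 0.0  # log-odds of 0 corresponds to a 0.5 occupancy prior
for observation in [True, True, False, True]:  # hypothetical hit/miss results
    belief = update_cell(belief, observation)
print(f"P(occupied) = {probability(belief):.2f}")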

    Design and Integration of a 3D LIDAR Sensor for Autonomous Navigation of Mobile Robots in Unknown Terrain

    The study of Mars is of growing interest among the scientific community. Given the large distance between this planet and the Earth, as well as the hostile environment prevailing there, its exploration takes advantage of rovers. The rare communication windows, along with the long delays occurring during communications, justify deploying artificial intelligence on these mobile platforms in order to maximize their autonomy. One of the crucial issues is therefore the rover's ability to navigate autonomously and thus to properly perceive its environment with the help of advanced vision systems. The research project discussed in this thesis focuses on this theme and was carried out in collaboration with the Canadian Space Agency (CSA). The main objective is to design a three-dimensional vision system enabling a mobile robot to navigate autonomously. It relates more particularly to the design, integration, and study of CORIAS (COntinuous Range and Intensity Acquisition System), a vision system using lidar (LIght Detection And Ranging) technology. The system uses an LMS111, manufactured by SICK, as its main sensor. The device developed in this research project not only meets the main objective but also has the following characteristics:
    • The three-dimensional reproduction of the environment is performed within a 20 m radius.
    • The maximum data acquisition rate is 27,050 points per second.
    • The time required for a full-coverage scan with typical operating parameters (0.25° elevation and 0.50° azimuth resolutions) is around 29 seconds.
    • The system can transmit the intensity measurements associated with the acquired points.
    • The vertical (elevation) and horizontal (azimuth) resolutions are configurable and fine enough to detect obstacles.
    • The system has a reasonable level of protection against harsh environmental conditions (dust, rain, snow, etc.).
    CORIAS only requires a 24 V DC power supply and an Ethernet link to operate, and it can be installed easily on a large variety of platforms. The rover's on-board computer is responsible for communicating with the vision system and provides the commands it needs to execute. The microcontroller, which is the central part of the system, runs Linux and acts as a TCP/IP server.
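    As a minimal sketch of how returns from such an azimuth/elevation scanner are typically lifted into 3D points (generic geometry under an angle convention we assume here, not CORIAS code):

import math

def to_cartesian(range_m, azimuth_deg, elevation_deg):
    """Convert one lidar return given in spherical coordinates
    (range, azimuth, elevation) into x, y, z in the sensor frame.
    Assumed convention: azimuth about the vertical axis, elevation
    measured up from the horizontal plane."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# A hypothetical return: 12.5 m at 30° azimuth, 10° elevation.
print(to_cartesian(12.5, 30.0, 10.0))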

    Fully Automated Laser Range Calibration

    We present a novel method for fully automated exterior calibration of a 2D scanning laser range sensor that attains accurate pose with respect to a fixed 3D reference frame. This task is crucial for applications that attempt to recover self-consistent 3D environment maps and produce accurately registered or fused sensor data. A key contribution of our approach lies in the design of a class of calibration target objects whose pose can be reliably recognized from a single observation (i.e. from one 2D range data stripe). Unlike other techniques, we do not require simultaneous camera views or motion of the sensor, making our approach simple, flexible and environment-independent. In this paper we illustrate the target geometry and derive the relationship between a single 2D range scan and the 3D sensor pose. We describe an algorithm for closed-form solution of the 6 DOF pose that minimizes an algebraic error metric, and an iterative refinement scheme that subsequently minimizes geometric error. Finally, we report performance and stability of our technique on synthetic and real data sets, and demonstrate accuracy within 1 degree of orientation and 3 cm of position in a realistic configuration.
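    Once such an exterior calibration is available, each 2D scan point can be mapped into the fixed 3D frame; the sketch below shows that step under a roll-pitch-yaw parameterization of the pose that we assume for illustration (generic usage, not the paper's estimation algorithm).

import numpy as np

def rpy_to_matrix(roll, pitch, yaw):
    """Build a rotation matrix from roll-pitch-yaw angles (radians),
    composed in Z (yaw) * Y (pitch) * X (roll) order."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def scan_point_to_world(r, theta, pose):
    """Lift a 2D range reading (range r, bearing theta in the scan plane)
    into the sensor frame as (x, y, 0), then apply the calibrated pose
    (rotation R, translation t) to express it in the fixed 3D frame."""
    R, t = pose
    p_sensor = np.array([r * np.cos(theta), r * np.sin(theta), 0.0])
    return R @ p_sensor + t

# Hypothetical calibration result: a small tilt and an offset from the origin.
pose = (rpy_to_matrix(0.02, -0.15, 1.57), np.array([1.2, 0.0, 0.8]))
print(scan_point_to_world(4.0, np.radians(25.0), pose))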