
    Precision laser range finder system design for Advanced Technology Laboratory applications

    Preliminary system design of a pulsed precision ruby laser rangefinder system is presented which has a potential range resolution of 0.4 cm when atmospheric effects are negligible. The system being proposed for flight testing on the Advanced Technology Laboratory (ATL) consists of a mode-locked ruby laser transmitter, coarse and vernier rangefinder receivers, an optical beacon retroreflector tracking system, and a network of ATL tracking retroreflectors. Performance calculations indicate that spacecraft-to-ground ranging accuracies of 1 to 2 cm are possible.
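    The resolution figures above follow from pulsed time-of-flight ranging, where range is half the round-trip distance of a light pulse. A minimal sketch of that relationship (the function names are illustrative, not from the paper):

```python
# Pulsed time-of-flight ranging: R = c * t / 2 for round-trip time t.

C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Range in metres for a measured round-trip time."""
    return C * t_seconds / 2.0

def timing_resolution_for(range_resolution_m: float) -> float:
    """Round-trip timing resolution (s) needed for a given range resolution."""
    return 2.0 * range_resolution_m / C

# A 0.4 cm range resolution requires resolving roughly 27 picoseconds
# of round-trip time, which motivates the mode-locked transmitter.
dt = timing_resolution_for(0.004)
```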

    Target tracking using laser range finder with occlusion

    Master's thesis in Mechanical Engineering. This work presents a technique for the detection and tracking of multiple moving targets in situations of strong occlusion using a laser range finder. The process starts with the application of temporal filters to the raw data in order to remove sensor noise, followed by a multi-phase segmentation designed to overcome occlusions. The resulting segments represent objects in the environment. For each segment a representative point is computed; this point is defined to represent the object's position in the world while remaining relatively invariant to rotation and changes in the object's shape. To perform the tracking, a list of objects to follow is maintained; all visible objects are associated with objects from this list using search techniques based on the predicted motion of the objects. An elliptical search zone is defined for each object, and it is within this zone that the association is performed. The motion prediction is based on two motion models, one with constant velocity and one with constant acceleration, together with the application of Kalman filters. The algorithm was tested in diverse real-world conditions and proved robust and effective in tracking people even under long occlusions.
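    The constant-velocity model and elliptical search zone described above can be sketched with a minimal Kalman filter. The state, noise levels, and gate threshold below are illustrative assumptions, not the thesis' actual tuning:

```python
import numpy as np

# Constant-velocity Kalman filter for one tracked object.
# State is [x, y, vx, vy]; only position is observed.
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)   # constant-velocity transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)   # measurement: position only
Q = 0.01 * np.eye(4)                  # process noise (assumed)
R = 0.05 * np.eye(2)                  # measurement noise (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

def in_gate(x, P, z, gamma=9.21):
    """Elliptical validation gate: Mahalanobis distance of measurement z
    from the predicted measurement, thresholded on a chi-square value."""
    y = z - H @ x
    S = H @ P @ H.T + R
    return float(y @ np.linalg.inv(S) @ y) < gamma
```

Associating only measurements that fall inside the gate is what lets the tracker ignore clutter and survive short occlusions.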

    Small image laser range finder for planetary rover

    A variety of technical problems must be solved before planetary rover navigation can become part of future missions. The sensors that perceive the terrain around the rover will require critical development efforts. The image laser range finder (ILRF) discussed here is a candidate sensor because of its advantage in providing the range data required for navigation. The authors developed a new compact ILRF that is a quarter of the size of conventional ones. Instead of the current two-directional scanning system comprising nodding and polygon mirrors, the new ILRF is equipped with a direct polygon mirror driving system, a new concept that made the unit compact enough to meet the design requirements. The paper reports on the design concept and the preliminary technical specifications established in the current development phase.

    Data Fusion of Laser Range Finder and Video Camera

    In this project, a technique for fusing data from two sensors is developed in order to detect, track, and classify objects against a static background. The proposed method uses a single video camera and a laser range finder to determine the range of specified targets or objects and to classify those targets. The module aims to detect objects or obstacles and provide the distance from the module to the target in real time using live video. The data collected from the laser range finder and the video camera are fused in MATLAB. Background subtraction is used to perform object detection.
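    The background subtraction step above can be sketched in a few lines: with a static background, a pixel belongs to a moving object when it differs from the stored background by more than a threshold. A minimal version (the threshold and helper names are illustrative assumptions; the project itself uses MATLAB):

```python
import numpy as np

def detect_foreground(frame: np.ndarray, background: np.ndarray,
                      threshold: float = 25.0) -> np.ndarray:
    """Boolean mask of pixels that differ from the static background."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    return diff > threshold

def bounding_box(mask: np.ndarray):
    """Row/column bounds of the detected region, or None if nothing moved."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    if not rows.any():
        return None
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return r0, r1, c0, c1
```

The detected region's image coordinates can then be matched against the laser range finder's bearing to attach a distance to the object.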

    Multi sensor fusion of camera and 3D laser range finder for object recognition

    Proceedings of: 2010 IEEE Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), September 5-7, 2010, Salt Lake City, USA.
    This paper proposes multi-sensor fusion based on an effective calibration method for a perception system designed for mobile robots and intended for later object recognition. The perception system consists of a camera and a three-dimensional laser range finder, the latter built from a two-dimensional laser scanner mounted on a pan-tilt unit as a moving platform. The calibration permits the coalescence of the two most important sensors for three-dimensional environment perception, the laser scanner and the camera, enabling fusion of color and depth information. The calibration process, based on a specific calibration pattern, is used to determine the extrinsic parameters and calculate the transformation between the laser range finder and the camera. The resulting transformation assigns an exact position and color information to each point of the surroundings, so the advantages of both sensors can be combined. The output is a colored, unorganized point cloud, which can be visualized with OpenGL and used for surface reconstruction. In this way, typical robotic tasks such as object recognition, grasp calculation, or handling of objects can be realized. The results of our experiments are presented in this paper. Funded by the European Community's Seventh Framework Programme.
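    Once the extrinsic transform between laser and camera frames is known, each 3D laser point can be projected into the image to pick up its color, producing the colored point cloud described above. A sketch assuming a pinhole camera; the intrinsics `K` and the transform `(R, t)` below are made-up illustrative values, not the paper's calibration result:

```python
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])            # assumed camera intrinsics

def color_points(points_laser, R, t, image):
    """Return (point, color) pairs for laser points visible in the image."""
    h, w = image.shape[:2]
    colored = []
    for p in points_laser:
        pc = R @ p + t                      # laser frame -> camera frame
        if pc[2] <= 0:
            continue                        # behind the camera
        uvw = K @ pc                        # pinhole projection
        u, v = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
        if 0 <= u < w and 0 <= v < h:
            colored.append((p, image[v, u]))
    return colored
```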

    Extrinsic Calibration of a Camera and Laser Range Finder

    We describe theoretical and experimental results for the extrinsic calibration of a sensor platform consisting of a camera and a laser range finder. The proposed technique requires the system to observe a planar pattern in several poses; the constraints are based on data captured simultaneously from the camera and the laser range finder. The planar pattern surface and the laser scanline on the pattern are geometrically related, so these data constrain the relative position and orientation of the camera and the laser range finder. The calibration procedure starts with a closed-form solution, which provides initial conditions for a subsequent nonlinear refinement. We present results from both computer-simulated data and an implementation on a B21r mobile robot from iRobot Corporation, using a Sony FireWire digital camera and a SICK PLS laser scanner.
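    The geometric constraint above can be written directly: laser points lying on the calibration plane, once transformed by the extrinsic `(R, t)`, must satisfy the plane equation the camera observes. A sketch of the residual a nonlinear refinement stage would minimize (the plane and points here are made-up illustrative values):

```python
import numpy as np

def plane_residuals(R, t, laser_points, n, d):
    """Signed distances n . (R p + t) - d for laser points on the
    calibration plane; all zero for a perfect extrinsic calibration."""
    return np.array([float(n @ (R @ p + t)) - d for p in laser_points])
```

Stacking these residuals over several pattern poses gives the cost function that the closed-form solution initializes.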

    Parse geometry from a line: Monocular depth estimation with partial laser observation

    © 2017 IEEE. Many standard robotic platforms are equipped with at least a fixed 2D laser range finder and a monocular camera. Although these platforms lack sensors for 3D depth sensing, knowledge of depth is essential in many robotics activities. There has therefore been increasing interest in depth estimation from monocular images. As this task is inherently ambiguous, data-driven depth estimates might be unreliable in robotics applications. In this paper, we attempt to improve the precision of monocular depth estimation by introducing 2D planar observations from the existing laser range finder at no extra cost. Specifically, we construct a dense reference map from the sparse laser range data, redefining the depth estimation task as estimating the distance between the real depth and the reference depth. To solve the problem, we construct a novel residual-of-residual neural network and tightly combine classification and regression losses for continuous depth estimation. Experimental results suggest that our method achieves considerable improvement over state-of-the-art methods on both NYUD2 and KITTI, validating its effectiveness in leveraging the additional sensory information. We further demonstrate its potential use in obstacle avoidance, where our method provides more comprehensive depth information than solutions using a monocular camera or a 2D laser range finder alone.
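    The reformulation above, predicting a residual rather than absolute depth, can be sketched in toy form. One simple way to densify a single 2D scanline is to repeat it down every image column; the paper's actual reference-map construction may differ, so the helpers below are illustrative assumptions only:

```python
import numpy as np

def reference_map(laser_row: np.ndarray, height: int) -> np.ndarray:
    """Tile the single laser scanline into a dense per-pixel reference."""
    return np.tile(laser_row, (height, 1))

def final_depth(reference: np.ndarray,
                predicted_residual: np.ndarray) -> np.ndarray:
    """Estimated depth = dense reference + network-predicted residual."""
    return reference + predicted_residual
```

The network then only has to learn the (smaller, better-conditioned) deviation from the reference instead of the full depth range.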

    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping

    Get PDF
    This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS, and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map where the semantic information has been extended beyond the range of the robot's sensors and predicts where the mobile robot can find buildings and potentially drivable ground.

    Torso detection and tracking using a 2D laser range finder

    Detecting and tracking people in populated environments has various applications, including robotics, healthcare, automotive, security, and defence. In this paper, we present an algorithm for people detection and tracking based on a two-dimensional laser range finder (LRF). The LRF was mounted on a mobile robotic platform to scan the torso section of a person. The tracker is designed to discard spurious targets based on the log likelihood ratio and can effectively handle short-term occlusions; long-term occlusions are treated as new tracks. The performance of the algorithm is analysed in experiments, which show appealing results.
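    The log-likelihood-ratio test used above to discard spurious targets can be sketched as a sequential test: each scan either confirms or weakens the hypothesis that a track is a real person rather than clutter. The detection probabilities and thresholds below are illustrative assumptions, not the paper's values:

```python
import math

P_DETECT_TARGET = 0.9    # detection probability for a real target (assumed)
P_DETECT_CLUTTER = 0.2   # chance clutter re-appears in the gate (assumed)

def llr_step(llr: float, detected: bool) -> float:
    """Update the running log likelihood ratio after one laser scan."""
    if detected:
        return llr + math.log(P_DETECT_TARGET / P_DETECT_CLUTTER)
    return llr + math.log((1 - P_DETECT_TARGET) / (1 - P_DETECT_CLUTTER))

def decide(llr: float, confirm: float = 2.0, drop: float = -2.0) -> str:
    """Confirm, drop, or keep a tentative track based on its LLR."""
    if llr >= confirm:
        return "confirm"
    if llr <= drop:
        return "drop"
    return "pending"
```

A few consecutive detections push the ratio past the confirm threshold, while missed detections drive it toward deletion, which is how spurious returns are filtered out.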