
    LiDAR-Based Object Tracking and Shape Estimation

    Environment perception is a fundamental prerequisite for the safe and comfortable operation of automated vehicles. Moving road users in the immediate vicinity of the vehicle, in particular, have a strong influence on the choice of an appropriate driving strategy. This calls for an object perception system that provides robust and precise state estimates of other vehicles' motion and geometry. In the context of automated driving, the box geometry model has become the de facto standard over time. However, given the ever-increasing demands on perception systems, the box is now often an undesirably coarse approximation of the actual geometry of other road users, which motivates a transition to more accurate shape representations. This thesis therefore presents a probabilistic method for the simultaneous estimation of rigid object shape and motion from the measurements of a LiDAR sensor. A comparison of three free-form geometry models with different levels of detail (polyline, triangle mesh, and surfel map) against the simple box model shows that reducing modeling errors in the object geometry enables more robust and precise estimation of object states. In addition, automated driving functions such as parking or evasion assistance can benefit from more accurate knowledge of other objects' shapes. Two factors should largely govern the choice of an appropriate shape representation: observability (what level of detail does the sensor specification theoretically allow?) and model adequacy (how well does the given model reproduce the actual observations?). Based on these factors, this thesis presents a model selection strategy that adaptively determines the most suitable shape model at runtime. While most LiDAR-based object tracking algorithms rely exclusively on point measurements, this thesis proposes two additional types of measurements: information about the measured free space is used to reason about regions that cannot be occupied by object geometry, and LiDAR intensities are incorporated to detect and track salient features such as license plates and retroreflectors over time. An extensive evaluation on more than 1.5 hours of recorded vehicle trajectories in urban and highway scenarios shows that precise modeling of the object surface can improve motion estimation by up to 30-40%. Furthermore, it is shown that the proposed methods can generate consistent and highly precise reconstructions of object geometries that avoid the often significant over-approximation of the simple box model.
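
    The runtime model-selection strategy summarized above weighs observability against model adequacy. As a rough illustration of such a rule, the following sketch picks the most detailed candidate shape model that both has enough LiDAR returns to be constrained and explains those returns well; the function names, the point-count heuristic, and the thresholds are assumptions made for illustration, not the thesis' actual criteria.

```python
import numpy as np

def select_shape_model(candidates, points, min_points_per_level=50, adequacy_threshold=0.8):
    """candidates: list of (name, detail_level, residual_fn).
    residual_fn(points) -> per-point distances to the candidate shape surface."""
    best = None
    for name, detail_level, residual_fn in sorted(candidates, key=lambda c: c[1]):
        # Observability: require enough LiDAR returns to constrain this level of detail.
        if len(points) < detail_level * min_points_per_level:
            continue
        # Model adequacy: fraction of returns the shape explains within a noise gate.
        residuals = np.abs(residual_fn(points))
        adequacy = float(np.mean(residuals < 0.1))  # 10 cm gate, assumed
        if adequacy >= adequacy_threshold:
            best = name  # keep overwriting: the most detailed passing model wins
    return best
```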

    TScan: Stationary LiDAR for Traffic and Safety Studies—Object Detection and Tracking

    The ability to accurately measure and cost-effectively collect traffic data at road intersections is needed to improve their safety and operations. This study investigates the feasibility of using laser ranging technology (LiDAR) for this purpose. The proposed technology avoids some of the problems of current video-based technology, but less expensive low-end sensors collect measurements at a limited point density, which may bring new challenges. A novel LiDAR-based portable traffic scanner (TScan) is introduced in this report to detect and track various types of road users (e.g., trucks, cars, pedestrians, and bicycles). The scope of this study included the development of a signal processing algorithm and a user interface, their implementation on a TScan research unit, and evaluation of the unit's performance to confirm its practicality for safety and traffic engineering applications. The TScan research unit was developed by integrating a Velodyne HDL-64E laser scanner within the existing Purdue University Mobile Traffic Laboratory, which has a telescoping mast, video cameras, a computer, and an internal communications network. The low-end LiDAR sensor's already limited density of data points was further reduced by distance, absorption of the light beam on dark objects, and reflection away from the sensor on oblique surfaces. The motion of the LiDAR sensor at the top of the mast, caused by wind and passing vehicles, was accounted for with readings from an inertial sensor mounted atop the LiDAR. These challenges increased the need for an effective signal processing method to extract the maximum useful information. The developed TScan method identifies and extracts the background with a method applied in both spherical and orthogonal coordinates. Moving objects are detected by clustering the remaining data points; the objects are then tracked, first as clusters and later as rectangles fitted to these clusters. After tracking, the individual moving objects are classified into categories such as heavy vehicles, non-heavy vehicles, bicycles, and pedestrians. The resulting trajectories of the moving objects are stored for future processing with engineering applications. The developed signal-processing algorithm is supplemented with a convenient user interface for setting up and running data collection and for inspecting the results during and after collection. In addition, one engineering application was developed in this study for counting moving objects at intersections. Another existing application, the Surrogate Safety Analysis Model (SSAM), was interfaced with the TScan method to allow extracting traffic conflicts and collisions from the TScan results. A user manual was also developed to explain the operation of the system and the use of the two engineering applications. Experimentation with the computational load and execution speed of the algorithm, implemented on the MATLAB platform, indicated that the use of a standard GPU for processing would permit real-time running of the algorithms during data collection, making the post-processing phase of this method less time consuming and more practical. TScan performance was evaluated by comparison with the best available method: frame-by-frame video analysis by human observers. The comparison included counting moving objects; estimating the positions, speeds, and directions of travel of the objects; and counting interactions between moving objects. The evaluation indicated that the benchmark method measured vehicle positions and speeds with accuracy comparable to TScan's. It was concluded that TScan's performance is sufficient for measuring traffic volumes, speeds, classifications, and traffic conflicts. The traffic interactions extracted by SSAM required automatic post-processing to eliminate interactions at very low speeds and interactions between pedestrians, events that SSAM cannot recognize. It should be stressed that this post-processing does not require human involvement. Nighttime conditions, light rain, and fog did not reduce the quality of the results. Several improvements of this new method are recommended and discussed in this report. The recommendations include deploying two TScan units at large intersections and adding the ability to collect traffic signal indications during data collection.
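
    The cluster-then-rectangle tracking step described above can be illustrated with a short sketch that groups foreground LiDAR returns and fits an oriented rectangle to each group. The clustering parameters and the PCA-based rectangle fit are choices made for illustration and are not TScan's actual implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_and_fit_rectangles(xy_points, eps=0.8, min_samples=5):
    """xy_points: (N, 2) ground-plane coordinates of foreground returns.
    Returns a list of (center, extents, heading) oriented rectangles."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xy_points)
    rectangles = []
    for label in set(labels) - {-1}:          # -1 marks noise points
        cluster = xy_points[labels == label]
        center = cluster.mean(axis=0)
        # Principal axis of the cluster gives an approximate heading.
        cov = np.cov((cluster - center).T)
        eigvals, eigvecs = np.linalg.eigh(cov)
        axis = eigvecs[:, np.argmax(eigvals)]
        heading = np.arctan2(axis[1], axis[0])
        # Rotate points into the object frame and take the bounding extents.
        rot = np.array([[np.cos(-heading), -np.sin(-heading)],
                        [np.sin(-heading),  np.cos(-heading)]])
        local = (cluster - center) @ rot.T
        extents = local.max(axis=0) - local.min(axis=0)
        rectangles.append((center, extents, heading))
    return rectangles
```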

    Identifying and Tracking Pedestrians Based on Sensor Fusion and Motion Stability Predictions

    The lack of trustworthy sensors makes the development of Advanced Driver Assistance System (ADAS) applications a challenging task. It is necessary to develop intelligent systems that combine reliable sensors and real-time algorithms to send proper, accurate messages to drivers. In this article, an application that detects and predicts the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application first measures the position of obstacles accurately using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and on dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track pedestrians within the detection zones of the sensors and predict their positions in upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in a zigzag fashion in front of a vehicle.
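
    The statistical validation gates mentioned above can be illustrated by a Mahalanobis-distance test: a detection is associated with a tracked pedestrian only if it falls inside a chi-square confidence region around the predicted position. The function below is a minimal sketch; the 95% gate and the variable names are assumptions, not the article's parameters.

```python
import numpy as np
from scipy.stats import chi2

def gate_detection(z, z_pred, S, confidence=0.95):
    """z: measured position, z_pred: predicted position, S: innovation covariance."""
    innovation = np.asarray(z) - np.asarray(z_pred)
    d2 = innovation @ np.linalg.solve(S, innovation)   # squared Mahalanobis distance
    return d2 <= chi2.ppf(confidence, df=len(innovation))  # inside the gate?
```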

    Multi-Robot FastSLAM for Large Domains

    For a robot to build a map of its surrounding area, it must have accurate position information within the area, and to obtain accurate position information within the area, the robot needs an accurate map of the area. This circular problem is the Simultaneous Localization and Mapping (SLAM) problem. An efficient algorithm for solving it is FastSLAM, which is based on the Rao-Blackwellized particle filter. FastSLAM solves the SLAM problem for single-robot mapping using particles to represent the posterior of the robot pose and the map. Each particle of the filter possesses its own global map, typically a grid map. The memory required for these maps poses a serious limitation on the algorithm's capability when the problem space is large, and the problem only gets worse if the algorithm is adapted to multi-robot mapping. This thesis presents an alternative mapping algorithm that extends the single-robot FastSLAM algorithm to a multi-robot mapping algorithm that uses Absolute Space Representations (ASR) to represent the world. Each particle still maintains a local grid map of its vicinity, which is periodically converted into an ASR. An ASR expresses the world as polygons and therefore requires only a minimal amount of memory, so this altered mapping strategy alleviates the problem FastSLAM faces when mapping a large domain. In this algorithm, each robot maps separately, and when two robots encounter each other they exchange the range and odometry readings recorded between their last encounter and the current one. Each robot then sets up another filter for the other robot's data and incrementally updates its own map, incorporating the passed data and its own data at the same time. The passed data are processed in reverse by the receiving robot, as if a virtual robot were back-tracking the path of the other robot. The algorithm is demonstrated using three data sets collected with a single robot equipped with odometry and laser range finder sensors.
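
    For context, the Rao-Blackwellized particle filter underlying FastSLAM can be summarized in a few lines: every particle carries its own pose hypothesis and its own map (a grid or, as proposed here, an ASR), and the particle set is reweighted and resampled against each new scan. The outline below is an assumed skeleton with the motion, likelihood, and map-update models passed in as callables; it is not the thesis' implementation.

```python
import random

def fastslam_step(particles, odometry, scan, sample_motion, scan_likelihood, update_map):
    """particles: list of dicts with keys 'pose', 'map', 'weight'."""
    for p in particles:
        p['pose'] = sample_motion(p['pose'], odometry)          # propose a new pose
        p['weight'] = scan_likelihood(scan, p['pose'], p['map'])
        update_map(p['map'], scan, p['pose'])                    # per-particle map update
    total = sum(p['weight'] for p in particles)
    if total == 0.0:
        return particles                                         # degenerate case: keep current set
    weights = [p['weight'] / total for p in particles]
    # Importance resampling keeps particles whose maps explain the scan well
    # (a full implementation would deep-copy the resampled particles).
    return random.choices(particles, weights=weights, k=len(particles))
```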

    Environment perception based on LIDAR sensors for real road applications

    The recent developments in applications designed to increase road safety require reliable and trustworthy sensors. Keeping this in mind, the most up-to-date research in the field of automotive technologies has shown that LIDARs are a very reliable sensor family. In this paper, a new approach to road obstacle classification is proposed and tested. Two different LIDAR sensors are compared with a focus on their main characteristics with respect to road applications. The viability of these sensors has been tested in real applications, and the results of this analysis are presented. The work reported in this paper has been partly funded by the Spanish Ministry of Science and Innovation (TRA2007-67786-C02-01, TRA2007-67786-C02-02, and TRA2009-07505) and the CAM project SEGVAUTO-II.

    Monocular-Based Pose Determination of Uncooperative Space Objects

    Vision-based methods to determine the relative pose of an uncooperative orbiting object are investigated for applications in spacecraft proximity operations, such as on-orbit servicing, spacecraft formation flying, and small-body exploration. Depending on whether the object is known or unknown, a shape model of the orbiting target object may have to be constructed autonomously in real time using only optical measurements. The Simultaneous Estimation of Pose and Shape (SEPS) algorithm, which does not require a priori knowledge of the pose and shape of the target, is presented. It makes use of a novel measurement equation and filter that can efficiently use optical flow information, along with a star tracker, to estimate the target's relative rotational and translational velocities as well as its center of gravity. Depending on the mission constraints, SEPS can be augmented by a more accurate offline, on-board 3D reconstruction of the target shape, which allows the pose to be estimated as for a known target. The use of Structure from Motion (SfM) for this purpose is discussed. A model-based approach for pose estimation of known targets is also presented. The architecture and implementation of both proposed approaches are elucidated, and their performance is evaluated through numerical simulations using a dataset of images synthetically generated according to a chaser/target relative motion in Geosynchronous Orbit (GEO).
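
    The model-based branch for a known target can be illustrated with a standard perspective-n-point (PnP) solve: given feature points on the target's shape model and their detected image locations, recover the relative pose. The sketch below uses OpenCV's generic solvePnP as a stand-in; the paper's own measurement equation and filter are more involved, so this should be read purely as an illustrative baseline.

```python
import numpy as np
import cv2

def estimate_relative_pose(model_points, image_points, camera_matrix):
    """model_points: (N, 3) points on the known target model (target frame).
    image_points: (N, 2) detected pixel coordinates of the same features."""
    ok, rvec, tvec = cv2.solvePnP(
        model_points.astype(np.float64),
        image_points.astype(np.float64),
        camera_matrix, distCoeffs=None)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)   # target-to-camera rotation matrix
    return rotation, tvec               # relative attitude and position
```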

    Quantitative Performance Assessment of LiDAR-based Vehicle Contour Estimation Algorithms for Integrated Vehicle Safety Applications

    Many nations and organizations are committing to achieving the goal of 'Vision Zero' and eliminating road-traffic-related deaths around the world. Industry continues to develop integrated safety systems that make vehicles safer, smarter, and more capable in safety-critical scenarios. Passive safety systems are now focusing on pre-crash deployment of restraint systems to better protect vehicle passengers. Commonly used bounding-box methods for shape estimation of crash partners lack the fidelity required for edge-case collision detection and advanced crash modeling. This research presents a novel algorithm for robust and accurate contour estimation of opposing vehicles. The presented method is evaluated with a framework developed for key performance metrics and compared to alternative algorithms found in the literature.
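
    To make the over-approximation argument concrete, the sketch below compares the footprint of an axis-aligned bounding box against a convex-hull contour for the same LiDAR cluster. It is a simple stand-in for the contour-estimation idea, not the algorithm proposed in this research.

```python
import numpy as np
from scipy.spatial import ConvexHull

def box_vs_contour_area(xy_points):
    """xy_points: (N, 2) ground-plane points belonging to one vehicle."""
    hull = ConvexHull(xy_points)
    contour_area = hull.volume              # in 2D, .volume is the enclosed area
    extents = xy_points.max(axis=0) - xy_points.min(axis=0)
    box_area = extents[0] * extents[1]      # axis-aligned bounding box
    return contour_area, box_area, box_area / contour_area
```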

    CONCEPTS FOR DEVELOPMENT OF SHUTTLE CAR AUTONOMOUS DOCKING WITH CONTINUOUS MINER USING 3-D DEPTH CAMERA

    In recent years, a great deal of work has been conducted on automating mining equipment, with the goals of improving worker health and safety and increasing mine productivity. Automating vehicles such as load-haul-dumps has been successful even in underground environments where global positioning systems are unavailable. This thesis addresses automating the operation of a shuttle car, specifically focusing on positioning the shuttle car under the continuous miner's coal-discharge conveyor during cutting and loading operations. This task requires recognition of the target and precise control of the tramming operation, because a specific orientation and distance from the coal-discharge conveyor are needed to avoid coal spillage. The proposed approach uses a stereo depth camera mounted on a small-scale mockup of a shuttle car. Machine learning algorithms are applied to the camera output to identify the continuous miner's coal-discharge conveyor and segment the scene into regions such as roof, ribs, and personnel. This information is used to plan the shuttle car's path to the continuous miner's coal-discharge conveyor. These methods are currently applied to a 1/6th-scale continuous miner and shuttle car in an appropriately scaled mock mine.
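
    The docking requirement of a specific orientation and distance from the coal-discharge conveyor can be expressed as a simple geometric goal for the path planner, sketched below. The stand-off distance, frame conventions, and function name are assumptions made for illustration rather than details taken from the thesis.

```python
import numpy as np

def docking_goal(conveyor_xy, conveyor_heading, standoff=1.5):
    """conveyor_xy: detected conveyor position; conveyor_heading: its facing
    direction in radians; standoff: required docking distance in metres."""
    direction = np.array([np.cos(conveyor_heading), np.sin(conveyor_heading)])
    goal_xy = np.asarray(conveyor_xy) + standoff * direction   # stand-off point
    goal_heading = conveyor_heading + np.pi                     # face the conveyor
    return goal_xy, goal_heading
```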