
    3D LiDAR Point Cloud Processing Algorithms

    In the race toward autonomous vehicles and advanced driver assistance systems (ADAS), the automotive industry has energetically pursued research into sensor suites capable of such technological feats. Commonly used autonomous and ADAS sensor suites include multiple cameras, radio detection and ranging (RADAR), light detection and ranging (LiDAR), and ultrasonic sensors. Great interest has been generated in LiDAR sensors and the value they add in automotive applications. LiDAR sensors can be used to detect and track vehicles, pedestrians, cyclists, and surrounding objects. A LiDAR sensor operates by emitting light amplification by stimulated emission of radiation (LASER) beams and receiving the reflected beams to acquire distance information. LiDAR reflections are organized into a three-dimensional representation of the environment known as a point cloud. A major challenge in modern autonomous automotive research is processing this three-dimensional environmental data in real time. The LiDAR sensor used in this research is the Velodyne HDL-32E, which provides nearly 700,000 data points per second. The large amount of data produced by a LiDAR sensor must be processed highly efficiently to be useful. This thesis provides an algorithm that processes the LiDAR data from the sensor's user datagram protocol (UDP) packets into geometric shapes that can be further analyzed in a sensor suite or used for Bayesian tracking of objects. The algorithm can be divided into three stages: Stage One, UDP packet extraction; Stage Two, data clustering; and Stage Three, shape extraction. Stage One organizes the LiDAR data from a negative to a positive vertical angle during packet extraction so that subsequent steps can fully exploit programming efficiencies. Stage Two uses an adaptive breakpoint detector (ABD) to cluster objects in the point cloud based on a Euclidean distance threshold. Stage Three classifies each cluster as a point, line, L-shape, or polygon using principal component analysis and shape-fitting algorithms modified to take advantage of the pre-organized data from Stage One. The proposed algorithm was written in C, and its runtime was tested on two Windows-equipped machines, where it completed processing with, on average, 30% of the time between successive UDP packets from the HDL-32E to spare. Compared with related research, this algorithm performed over 737 times faster.
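    To make Stage Two concrete, below is a minimal Python sketch of adaptive breakpoint detection over an angularly ordered scan, using the common range-dependent threshold formulation (often attributed to Borges and Aldon). The parameter values, function name, and use of Python rather than the thesis's C implementation are illustrative assumptions.

```python
import numpy as np

def abd_cluster(points, d_phi=np.radians(0.16), lam=np.radians(10.0),
                sigma_r=0.05):
    """Split an angularly ordered scan (an (N, 3) array sorted by azimuth)
    into clusters: a new cluster starts whenever the Euclidean gap between
    consecutive returns exceeds an adaptive, range-dependent threshold.
    Parameter values are hypothetical; lam must exceed d_phi."""
    clusters, current = [], [points[0]]
    for prev, curr in zip(points[:-1], points[1:]):
        r = np.linalg.norm(prev)  # range of the previous return
        # Threshold grows with range so distant, sparsely sampled surfaces
        # are not over-segmented; sigma_r absorbs sensor range noise.
        d_max = r * np.sin(d_phi) / np.sin(lam - d_phi) + 3.0 * sigma_r
        if np.linalg.norm(curr - prev) > d_max:
            clusters.append(np.array(current))
            current = [curr]
        else:
            current.append(curr)
    clusters.append(np.array(current))
    return clusters
```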

    Road Surface Feature Extraction and Reconstruction of Laser Point Clouds for Urban Environment

    Automakers are developing end-to-end three-dimensional (3D) mapping systems for Advanced Driver Assistance Systems (ADAS) and autonomous vehicles (AVs), using geomatics, artificial intelligence, and SLAM (Simultaneous Localization and Mapping) to handle all stages of map creation, sensor calibration, and alignment. Such a system must be highly accurate and efficient, as it is an essential part of vehicle control. This mapping requires significant resources to acquire geographic information (GIS and GPS), optical laser and radar spectroscopy, LiDAR, and 3D modeling applications in order to extract roadway features (e.g., lane markings, traffic signs, road edges) detailed enough to construct a “base map”. To keep this map current, it must be updated as events occur, such as construction changes, shifts in traffic patterns, or growth of vegetation. Road information plays a very important role in road traffic safety and is essential for guiding autonomous vehicles (AVs) and for predicting upcoming road situations within AVs. The data size of the map is extensive due to the level of information provided by different sensor modalities; for that reason, this thesis presents a method for data optimization and extraction from three-dimensional (3D) mobile laser scanning (MLS) point clouds. The research shows that the proposed hybrid filter configuration, together with the dynamically developed mechanism, provides a significant reduction of the point cloud data with reduced computational and size requirements. The results obtained in this work are validated on a real-world system.
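    As one concrete example of the kind of point-cloud reduction discussed here, the sketch below applies voxel-grid downsampling, a standard MLS data-reduction filter. It is a generic stand-in, not the thesis's hybrid filter, and the 0.10 m voxel size is an arbitrary illustrative choice.

```python
import numpy as np

def voxel_downsample(points, voxel=0.10):
    """Reduce an (N, 3+) point cloud by keeping one centroid per voxel.
    A generic reduction filter for illustration only; the thesis's
    hybrid filter configuration is not specified here."""
    keys = np.floor(points[:, :3] / voxel).astype(np.int64)
    # Group points that fall into the same voxel and average them.
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True,
                                   return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points[:, :3])
    return sums / counts[:, None]
```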

    A Study on Recent Developments and Issues with Obstacle Detection Systems for Automated Vehicles

    This paper reviews current developments and discusses some critical issues with obstacle detection systems for automated vehicles. The concept of autonomous driving is the driving force behind future mobility. Obstacle detection systems play a crucial role in implementing and deploying autonomous driving on our roads and city streets. The current review looks at technology and existing systems for obstacle detection. Specifically, we look at the performance of LIDAR, RADAR, vision cameras, ultrasonic sensors, and infrared (IR) sensors, and review their capabilities and behaviour in a number of different situations: during daytime, at night, in extreme weather conditions, in urban areas, in the presence of smooth surfaces, in situations where emergency service vehicles need to be detected and recognised, and in situations where potholes need to be observed and measured. It is suggested that combining different technologies for obstacle detection gives a more accurate representation of the driving environment. In particular, for obstacle detection in extreme weather conditions (rain, snow, fog) and in some specific urban situations (shadows, reflections, potholes, insufficient illumination), the current developments, although already quite advanced, appear not sophisticated enough to guarantee 100% precision and accuracy; hence, further substantial effort is needed.

    Towards Developing Computer Vision Algorithms and Architectures for Real-world Applications

    Computer vision technology automatically extracts high-level, meaningful information from visual data such as images or videos, and object recognition and detection algorithms are essential in most computer vision applications. In this dissertation, we focus on developing algorithms for real-life computer vision applications, presenting innovative algorithms for object segmentation and feature extraction for object and action recognition in video data, sparse feature selection algorithms for medical image analysis, and automated feature extraction using convolutional neural networks for blood cancer grading. To detect and classify objects in video, the objects have to be separated from the background, and discriminant features are then extracted from the region of interest before being fed to a classifier. Effective object segmentation and feature extraction are often application specific and pose major challenges for object detection and classification tasks. In this dissertation, we present an effective optical-flow-based ROI generation algorithm for segmenting moving objects in video data, applicable in surveillance and self-driving vehicle domains. Optical flow can also serve as a feature for human action recognition, and we show that feeding optical flow features into a pre-trained convolutional neural network improves the performance of human action recognition algorithms. Both algorithms outperformed the state of the art at the time. Medical images and videos pose unique challenges for image understanding, mainly because tissues and cells are often irregularly shaped, colored, and textured, and hand-selecting the most discriminant features is difficult, so an automated feature selection method is desired. Sparse learning is a technique for extracting the most discriminant and representative features from raw visual data. However, sparse learning with L1 regularization only takes sparsity in the feature dimension into consideration; we improve the algorithm so that it also selects the type of features, entirely removing less important or noisy feature types from the feature set. We demonstrate this algorithm on endoscopy images to detect unhealthy abnormalities in the esophagus and stomach, such as ulcers and cancer. Besides the sparsity constraint, other application-specific constraints and prior knowledge may also need to be incorporated into the loss function of sparse learning to obtain the desired results. We demonstrate how to incorporate a similar-inhibition constraint and gaze and attention priors in sparse dictionary selection for gastroscopic video summarization, enabling intelligent key-frame extraction from gastroscopic video data. With recent advances in multi-layer neural networks, automatic end-to-end feature learning has become feasible. A convolutional neural network mimics the mammalian visual cortex and can extract the most discriminant features automatically from training samples. We present a convolutional neural network with a hierarchical classifier to grade the severity of follicular lymphoma, a type of blood cancer; it reaches 91% accuracy, on par with analysis by expert pathologists.
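    The feature-type selection described above is in the spirit of group sparsity: penalizing whole groups of weights so that unimportant feature types drop out entirely. Below is a minimal sketch of the proximal step for an L2,1 (group-lasso) penalty, assuming each row of W gathers the weights of one feature type; it illustrates the mechanism only and is not the dissertation's exact formulation.

```python
import numpy as np

def prox_group_l21(W, tau):
    """Group soft-thresholding: rows of W (one row per feature type)
    whose L2 norm falls below tau are zeroed out, removing that feature
    type entirely; surviving rows are shrunk toward zero."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    shrink = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return W * shrink
```

    Within a proximal-gradient loop, this step would alternate with a gradient step on the data-fitting loss.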
    Developing real-world computer vision applications is more than developing core vision algorithms to extract and understand information from visual data; it is also subject to many practical requirements and constraints, such as hardware and computing infrastructure, cost, robustness to lighting changes and deformation, and ease of use and deployment. The general processing pipeline and system architecture of computer-vision-based applications share many design principles. We developed common processing components and a generic framework for computer vision applications, and a versatile scale-adaptive template matching algorithm for object detection. We demonstrate the design principles and best practices by developing and deploying a complete computer vision application in real life, a multi-channel water level monitoring system, whose techniques and design methodology can be generalized to other real-life applications. General software engineering principles, such as modularity, abstraction, robustness to requirement changes, and generality, are all demonstrated in this research.
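    As a sense of what a scale-adaptive template matcher involves, here is a simple multi-scale baseline using OpenCV's normalized cross-correlation; the scale sweep and parameters are illustrative assumptions, and the dissertation's versatile algorithm is not reproduced here.

```python
import cv2
import numpy as np

def match_multiscale(image, template, scales=np.linspace(0.5, 2.0, 16)):
    """Sweep the template over a range of scales (grayscale uint8 inputs)
    and keep the best normalized cross-correlation response.
    Returns (score, top-left corner, scale)."""
    best = (-1.0, None, None)
    for s in scales:
        t = cv2.resize(template, None, fx=s, fy=s)
        if t.shape[0] > image.shape[0] or t.shape[1] > image.shape[1]:
            continue  # scaled template no longer fits in the image
        res = cv2.matchTemplate(image, t, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > best[0]:
            best = (score, loc, s)
    return best
```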

    Multi-Object Tracking System based on LiDAR and RADAR for Intelligent Vehicles applications

    This Final Degree Project aims to develop a 3D Multi-Object Tracking and Detection System based on the sensor fusion of LiDAR and RADAR for autonomous driving applications, built on traditional Machine Learning algorithms. The implementation is based on Python and ROS and complies with real-time requirements. In the object detection stage, the RANSAC plane segmentation algorithm is used, followed by extraction of bounding boxes using DBSCAN. A late sensor fusion using 3D Intersection over Union and a BEV-SORT tracking system complete the proposed architecture.
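    The detection stage described above (RANSAC ground-plane segmentation, then DBSCAN clustering with bounding-box extraction) can be sketched compactly in Python. Iteration counts, thresholds, and the use of scikit-learn's DBSCAN are illustrative assumptions, not the project's actual configuration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_objects(points, n_iters=100, plane_tol=0.2, eps=0.6, min_pts=10):
    """RANSAC plane fit to remove the ground, then Euclidean clustering
    and axis-aligned bounding boxes; points is an (N, 3) array."""
    rng = np.random.default_rng(0)
    best_mask, best_count = None, -1
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue  # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        mask = np.abs((points - p0) @ n) < plane_tol
        if mask.sum() > best_count:
            best_mask, best_count = mask, mask.sum()
    obstacles = points[~best_mask]  # drop ground-plane inliers
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(obstacles)
    return [(obstacles[labels == k].min(0), obstacles[labels == k].max(0))
            for k in set(labels) if k != -1]  # (min corner, max corner)
```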

    Motion tracking on embedded systems: vision-based vehicle tracking using image alignment with symmetrical function.

    Cheung, Lap Chi. Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 91-95). Abstracts in English and Chinese.

    Contents:
    1. Introduction
       1.1 Background
           1.1.1 Introduction to Intelligent Vehicle
           1.1.2 Typical Vehicle Tracking Systems for Rear-end Collision Avoidance
           1.1.3 Passive vs. Active Vehicle Tracking
           1.1.4 Vision-based Vehicle Tracking Systems
           1.1.5 Characteristics of Computing Devices on Vehicles
       1.2 Motivation and Objectives
       1.3 Major Contributions
           1.3.1 A 3-phase Vision-based Vehicle Tracking Framework
           1.3.2 Camera-to-vehicle Distance Measurement by Single Camera
           1.3.3 Real Time Vehicle Detection
           1.3.4 Real Time Vehicle Tracking using Simplified Image Alignment
       1.4 Evaluation Platform
       1.5 Thesis Organization
    2. Related Work
       2.1 Stereo-based Vehicle Tracking
       2.2 Motion-based Vehicle Tracking
       2.3 Knowledge-based Vehicle Tracking
       2.4 Commercial Systems
    3. 3-Phase Vision-based Vehicle Tracking Framework
       3.1 Introduction to the 3-phase Framework
       3.2 Vehicle Detection
           3.2.1 Overview of Vehicle Detection
           3.2.2 Locating the Vehicle Center - Symmetrical Measurement
           3.2.3 Locating the Vehicle Roof and Bottom
           3.2.4 Locating the Vehicle Sides - Over-complete Haar Transform
       3.3 Vehicle Template Tracking by Image Alignment
           3.3.5 Overview of Vehicle Template Tracking
           3.3.6 Goal of Image Alignment
           3.3.7 Alternative Image Alignment - Compositional Image Alignment
           3.3.8 Efficient Image Alignment - Inverse Compositional Algorithm
       3.4 Vehicle Template Update
           3.4.1 Situation of Vehicle Lost
           3.4.2 Template Filling by Updating the Positions of Vehicle Features
       3.5 Experiments and Discussions
           3.5.1 Experiment Setup
           3.5.2 Successful Tracking Percentage
       3.6 Comparison with Other Tracking Methodologies
           3.6.1 1-phase Vision-based Vehicle Tracking
           3.6.2 Image Correlation
           3.6.3 Continuously Adaptive Mean Shift
    4. Camera-to-Vehicle Distance Measurement by Single Camera
       4.1 The Principle of Law of Perspective
       4.2 Distance Measurement by Single Camera
    5. Real Time Vehicle Detection
       5.1 Introduction
       5.2 Timing Analysis of Vehicle Detection
       5.3 Symmetrical Measurement Optimization
           5.3.1 Diminished Gradient Image for Symmetrical Measurement
           5.3.2 Replacing Division by Multiplication Operations
       5.4 Over-complete Haar Transform Optimization
           5.4.1 Characteristics of Over-complete Haar Transform
           5.4.2 Pre-computation of Haar Block
       5.5 Summary
    6. Real Time Vehicle Tracking Using Simplified Image Alignment
       6.1 Introduction
       6.2 Timing Analysis of Original Image Alignment
       6.3 Simplified Image Alignment
           6.3.1 Reducing the Number of Parameters in Affine Transformation
           6.3.2 Size Reduction of Image Alignment Matrices
       6.4 Experiments and Discussions
           6.4.1 Successful Tracking Percentage
           6.4.2 Timing Improvement
    7. Conclusions
    8. Bibliography
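    Since this entry centres on the inverse compositional algorithm (Chapter 3.3.8 above), a minimal Python sketch may help: the algorithm's efficiency comes from computing the steepest-descent images and Hessian once on the template. This version assumes a pure-translation warp, whereas the thesis uses a (simplified) affine warp, so it illustrates the principle rather than the thesis's implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def ic_align_translation(image, template, p=(0.0, 0.0), n_iters=50):
    """Inverse compositional Lucas-Kanade for a translation-only warp.
    Steepest-descent images and the Hessian are precomputed on the
    template, so each iteration costs only one warp and two mat-vecs."""
    image, template = image.astype(float), template.astype(float)
    gy, gx = np.gradient(template)                   # template gradients
    sd = np.stack([gx.ravel(), gy.ravel()], axis=1)  # steepest-descent images
    H_inv = np.linalg.inv(sd.T @ sd)                 # 2x2 Hessian, inverted once
    ys, xs = np.mgrid[0:template.shape[0], 0:template.shape[1]].astype(float)
    p = np.asarray(p, dtype=float)                   # current estimate (dx, dy)
    for _ in range(n_iters):
        warped = map_coordinates(image, [ys + p[1], xs + p[0]], order=1)
        dp = H_inv @ (sd.T @ (warped - template).ravel())
        p -= dp               # compose with the inverse incremental warp
        if np.hypot(*dp) < 1e-3:
            break
    return p
```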

    Feature-based object tracking in maritime scenes.

    Monitoring the presence, location, and activity of various objects at sea is essential for maritime navigation and collision avoidance. Mariners normally rely on two complementary methods of monitoring: radar- and satellite-based aids, and human observation. Though radar aids are relatively accurate at long distances, their capability of detecting small, unmanned, or non-metallic craft, which generally do not reflect radar waves sufficiently, is limited. Mariners therefore rely on visual observation in such cases. The visual observation is often facilitated by cameras overlooking the sea that can also provide intensified infra-red images. These systems nevertheless merely enhance the image, and the burden of the tedious and error-prone monitoring task still rests with the operator. This thesis addresses the drawbacks of both methods by presenting a framework consisting of a set of machine vision algorithms that facilitate monitoring tasks in the maritime environment. The framework detects and tracks objects in a sequence of images captured by a camera mounted either on board a vessel or on a static platform overlooking the sea. The detection of objects is independent of their appearance and of conditions such as weather and time of day. The output of the framework consists of the locations and motions of all detected objects with respect to a fixed point in the scene. All values are estimated in real-world units, i.e., location is expressed in metres and velocity in knots. The consistency of the estimates is maintained by compensating for spurious effects such as vibration of the camera. In addition, the framework continuously checks for predefined events such as collision threats or area intrusions, raising an alarm when any such event occurs. The development and evaluation of the framework is based on sequences captured under conditions corresponding to a designated application. The independence of the detection and tracking from the appearance of the scene and objects is confirmed by a final cross-validation of the framework on previously unused sequences. Potential applications of the framework in various areas of the maritime environment, including navigation, security, surveillance, and others, are outlined. Limitations of the presented framework are identified and possible solutions suggested. The thesis concludes with suggestions for further directions of the research presented.
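    The collision-threat events mentioned above are often implemented as a closest-point-of-approach (CPA) test on metric tracks. The following is a hedged sketch of such a test on the framework's kind of output (positions in metres, velocities in m/s); the functions and thresholds are illustrative assumptions, not the thesis's actual event logic.

```python
import numpy as np

MS_TO_KNOTS = 1.9438  # the framework reports velocity in knots

def cpa(p_own, v_own, p_tgt, v_tgt):
    """Time (s) and distance (m) of closest approach between own ship
    and a tracked object, given scene-coordinate positions/velocities."""
    dp, dv = p_tgt - p_own, v_tgt - v_own
    t = -float(dp @ dv) / max(float(dv @ dv), 1e-12)
    t = max(t, 0.0)  # only future encounters matter
    return t, float(np.linalg.norm(dp + dv * t))

def is_collision_threat(p_own, v_own, p_tgt, v_tgt,
                        horizon_s=120.0, min_sep_m=50.0):
    """Illustrative rule: flag a threat if the tracks come within
    min_sep_m inside the look-ahead horizon (thresholds hypothetical)."""
    t, d = cpa(p_own, v_own, p_tgt, v_tgt)
    return t <= horizon_s and d <= min_sep_m
```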

    The North Atlantic Waveguide and Downstream Impact Experiment

    The North Atlantic Waveguide and Downstream Impact Experiment (NAWDEX) explored the impact of diabatic processes on disturbances of the jet stream and their influence on downstream high-impact weather through the deployment of four research aircraft, each with a sophisticated set of remote sensing and in situ instruments, and coordinated with a suite of ground-based measurements. A total of 49 research flights were performed, including, for the first time, coordinated flights of the four aircraft: the German High Altitude and Long Range Research Aircraft (HALO), the Deutsches Zentrum für Luft- und Raumfahrt (DLR) Dassault Falcon 20, the French Service des Avions Français Instrumentés pour la Recherche en Environnement (SAFIRE) Falcon 20, and the British Facility for Airborne Atmospheric Measurements (FAAM) BAe 146. The observation period from 17 September to 22 October 2016, with frequently occurring extratropical and tropical cyclones, was ideal for investigating midlatitude weather over the North Atlantic. NAWDEX featured three sequences of upstream triggers of waveguide disturbances, as well as their dynamic interaction with the jet stream, subsequent development, and eventual downstream weather impact on Europe. Examples are presented to highlight the wealth of phenomena that were sampled, the comprehensive coverage, and the multifaceted nature of the measurements. This unique dataset forms the basis for future case studies and detailed evaluations of weather and climate predictions to improve our understanding of diabatic influences on Rossby waves and the downstream impacts of weather systems affecting Europe.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.