
    Truck Trailer Classification Using Side-Fire Light Detection And Ranging (LiDAR) Data

    Classification of vehicles into distinct groups is critical for many applications, including freight and commodity flow modeling, pavement management and design, tolling, air quality monitoring, and intelligent transportation systems. The Federal Highway Administration (FHWA) developed a standardized 13-category vehicle classification ruleset, which meets the needs of many traffic data user applications. However, some applications need higher-resolution data for modeling and analysis. For example, the type of commodity being carried must be known in the freight modeling framework. Unfortunately, this information is not available at the state or metropolitan level, or it is expensive to obtain from current resources. Nevertheless, using emerging technologies such as Light Detection and Ranging (LiDAR), it may be possible to predict commodity type from truck body types or trailers. For example, refrigerated trailers are commonly used to transport perishable produce and meat products, tank trailers carry fuel and other liquid products, and specialized trailers carry livestock. The main goal of this research is to develop methods using side-fire LiDAR data to distinguish between specific types of truck trailers beyond what is generally possible with traditional vehicle classification sensors (e.g., piezoelectric sensors and inductive loop detectors). A multi-array LiDAR sensor enables the construction of 3D profiles of vehicles, since it measures the distance to the object reflecting its emitted light. In this research, 16-beam LiDAR sensor data are processed to estimate vehicle speed and extract features to classify semi-trailer trucks hauling ten different types of trailers: reefer and non-reefer dry vans, 20 ft and 40 ft intermodal containers, 40 ft reefer intermodal containers, platforms, tanks, car transporters, open-top vans/dumps, and an aggregated category of other types (e.g., livestock, logging).
In addition to truck-trailer classification, methods are developed to detect empty and loaded platform semi-trailers. K-Nearest Neighbors (KNN), Multilayer Perceptron (MLP), Adaptive Boosting (AdaBoost), and Support Vector Machine (SVM) supervised machine learning algorithms are applied to field data collected on a freeway segment, covering over seven thousand trucks. The results show that the different trailer body types can be classified with accuracy ranging from 85% to 98%, and empty versus loaded platform semi-trailers with 99% accuracy. To improve the accuracy with which multiple LiDAR frames belonging to the same truck are merged, a new algorithm is developed to estimate speed while the truck is within the sensor's field of view. This algorithm is based on tracking tires and utilizes line detection concepts from image processing. The proposed algorithm improves the results and allows the creation of more accurate 2D and 3D truck profiles, as documented in this thesis.
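The classification step can be illustrated with a minimal K-Nearest Neighbors vote. This is only a sketch: the (height, length) features and their values are invented for illustration, whereas the thesis derives its features from the 16-beam LiDAR profiles.

```python
import math

def knn_classify(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training examples (Euclidean distance)."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical (height_m, length_m) features for two trailer types.
train = [
    ((4.1, 16.2), "dry_van"), ((4.0, 16.0), "dry_van"),
    ((3.9, 16.1), "dry_van"), ((1.5, 16.0), "platform"),
    ((1.6, 15.8), "platform"), ((1.4, 16.3), "platform"),
]
print(knn_classify(train, (4.05, 16.1)))  # dry_van
```

In practice the thesis compares KNN against MLP, AdaBoost, and SVM on the same feature vectors; the voting logic above is the simplest of the four.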

    Development and evaluation of low cost 2-d lidar based traffic data collection methods

    Traffic data collection is one of the essential components of a transportation planning exercise. Granular traffic data, such as volume counts, vehicle classification, speed measurements, and occupancy, allow transportation systems to be managed more effectively. For effective traffic operation and management, authorities must deploy many sensors across the network. Moreover, the growing push toward smart transportation puts immense pressure on planning authorities to deploy even more sensors to cover extensive networks. This research focuses on the development and evaluation of an inexpensive data collection methodology based on two-dimensional (2-D) Light Detection and Ranging (LiDAR) technology. LiDAR is adopted because it is an economical and easily accessible technology; moreover, its 360-degree visibility and accurate distance measurements make it reliable. To collect traffic count data, the proposed method integrates a Continuous Wavelet Transform (CWT) and a Support Vector Machine (SVM) into a single framework. A Proof-of-Concept (POC) test is conducted at three different locations in Newark, New Jersey, to examine the performance of the proposed method. The POC test results demonstrate that the proposed method achieves acceptable performance, with 83%–94% accuracy. It is discovered that the proposed method's accuracy is affected by the color of a vehicle's exterior surface, since some colored surfaces do not return enough reflected rays: blue and black surfaces are less reflective, while white surfaces produce strong returns. A methodology comprising K-means clustering, an inverse sensor model, and a Kalman filter is proposed to obtain vehicle trajectories at intersections. The primary purpose of vehicle detection and tracking is to obtain turning movement counts at an intersection.
K-means clustering is an unsupervised machine learning technique that partitions data into groups by assigning each data point to the cluster with the nearest centroid. The objective of applying K-means clustering here is to distinguish pedestrians from vehicles. An inverse sensor model is a state model of occupancy grid mapping that localizes the detected vehicles on the grid map. A constant-velocity-model-based Kalman filter is defined to track the trajectories of the vehicles. Data are collected from two intersections located in Newark, New Jersey, to study the accuracy of the proposed method. The results show that the proposed method has an average accuracy of 83.75%. Furthermore, the R-squared value obtained for localization of the vehicles on the grid map ranges between 0.87 and 0.89. Finally, a preliminary cost comparison is made to study the cost efficiency of the developed methodology. It shows that the proposed 2-D LiDAR-based methodology can achieve acceptable accuracy at a low price and can support large-scale data collection within a smart city context.
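A constant-velocity Kalman filter of the kind described above can be sketched in a few lines. This is the generic textbook formulation, not the thesis code; the process and measurement noise values are illustrative assumptions.

```python
def kalman_cv_1d(zs, dt=0.1, q=1e-3, r=0.25):
    """Minimal 1-D constant-velocity Kalman filter.
    State is [position, velocity]; zs are noisy position measurements.
    Returns the filtered position estimates."""
    x = [zs[0], 0.0]                      # initial state
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    out = []
    for z in zs:
        # Predict: x = F x, P = F P F^T + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with position measurement z (H = [1, 0])
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = z - x[0]
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(x[0])
    return out

# Noiseless track moving at 0.5 units per step; the filter locks on quickly.
zs = [0.5 * t for t in range(20)]
est = kalman_cv_1d(zs, dt=1.0)
```

In the intersection setting, the same recursion is run per axis (x and y) on the centroids of the K-means clusters to produce vehicle trajectories.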

    TScan: Stationary LiDAR for Traffic and Safety Studies—Object Detection and Tracking

    The ability to accurately measure and cost-effectively collect traffic data at road intersections is needed to improve their safety and operations. This study investigates the feasibility of using laser ranging technology (LiDAR) for this purpose. The proposed technology does not experience some of the problems of current video-based technology, but less expensive low-end sensors collect measurement points at a limited density, which may bring new challenges. A novel LiDAR-based portable traffic scanner (TScan) is introduced in this report to detect and track various types of road users (e.g., trucks, cars, pedestrians, and bicycles). The scope of this study included the development of a signal processing algorithm and a user interface, their implementation on a TScan research unit, and evaluation of the unit's performance to confirm its practicality for safety and traffic engineering applications. The TScan research unit was developed by integrating a Velodyne HDL-64E laser scanner with the existing Purdue University Mobile Traffic Laboratory, which has a telescoping mast, video cameras, a computer, and an internal communications network. The low-end LiDAR sensor's limited density of data points was further reduced by distance, by light absorption on dark objects, and by reflection away from the sensor on oblique surfaces. The motion of the LiDAR sensor at the top of the mast, caused by wind and passing vehicles, was accounted for with readings from an inertial sensor mounted atop the LiDAR. These challenges increased the need for an effective signal processing method to extract the maximum useful information. The developed TScan method identifies and extracts the background with a method applied in both spherical and orthogonal coordinates. Moving objects are detected by clustering the remaining points; the data points are then tracked, first as clusters and then as rectangles fit to those clusters.
After tracking, the individual moving objects are classified into categories such as heavy vehicles, non-heavy vehicles, bicycles, and pedestrians. The resulting trajectories of the moving objects are stored for further processing with engineering applications. The developed signal processing algorithm is supplemented with a convenient user interface for setup, execution, and inspection of the results during and after data collection. In addition, one engineering application was developed in this study for counting moving objects at intersections. Another existing application, the Surrogate Safety Analysis Model (SSAM), was interfaced with the TScan method to allow traffic conflicts and collisions to be extracted from the TScan results. A user manual was also developed to explain the operation of the system and the use of the two engineering applications. Experimentation with the computational load and execution speed of the algorithm, implemented on the MATLAB platform, indicated that using a standard GPU for processing would permit running the algorithms in real time during data collection, making the post-processing phase of this method less time consuming and more practical. TScan performance was evaluated by comparison with the best available method: frame-by-frame video analysis by human observers. The comparison included counting moving objects; estimating the positions, speeds, and directions of travel of the objects; and counting interactions between moving objects. The evaluation indicated that the benchmark method measured vehicle positions and speeds at an accuracy comparable to TScan's. It was concluded that TScan's performance is sufficient for measuring traffic volumes, speeds, classifications, and traffic conflicts.
The traffic interactions extracted by SSAM required automatic post-processing to eliminate vehicle interactions at very low speeds and interactions between pedestrians, events that SSAM could not recognize. It should be stressed that this post-processing does not require human involvement. Nighttime conditions, light rain, and fog did not reduce the quality of the results. Several improvements to this new method are recommended and discussed in this report, including deploying two TScan units at large intersections and adding the ability to collect traffic signal indications during data collection.
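The background-extraction idea can be illustrated in miniature (the TScan implementation itself works in spherical and orthogonal coordinates and is more elaborate): cells of an occupancy grid that are filled in most frames are static background, and the remaining points are candidate moving objects. The cell size and persistence threshold below are assumed values.

```python
from collections import Counter

def split_background(frames, cell=0.5, min_frac=0.8):
    """Label grid cells occupied in at least min_frac of frames as static
    background; return (background cells, per-frame foreground points)."""
    counts = Counter()
    for pts in frames:
        for c in {(int(x // cell), int(y // cell)) for x, y in pts}:
            counts[c] += 1
    n = len(frames)
    bg = {c for c, k in counts.items() if k / n >= min_frac}
    fg_frames = [
        [(x, y) for x, y in pts
         if (int(x // cell), int(y // cell)) not in bg]
        for pts in frames
    ]
    return bg, fg_frames

# A point fixed at (1, 1) appears in every frame (background); a point at
# (5, 5) appears once (a moving object).
frames = [[(1.0, 1.0)], [(1.0, 1.0), (5.0, 5.0)],
          [(1.0, 1.0)], [(1.0, 1.0)], [(1.0, 1.0)]]
bg, fg = split_background(frames)
```

The foreground points would then be clustered per frame and the clusters tracked as rectangles, as described above.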

    Multimodal perception for autonomous driving

    Autonomous driving is set to play an important role among intelligent transportation systems in the coming decades. The advantages of its large-scale implementation (reduced accidents, shorter commuting times, higher fuel efficiency) have made its development a priority for academia and industry. However, there is still a long way to go to achieve fully self-driving vehicles capable of dealing with any scenario without human intervention. To this end, advances in control, navigation and, especially, environment perception technologies are still required. In particular, the detection of other road users that may interfere with the vehicle's trajectory is a key element, since it allows the current traffic situation to be modeled and decisions to be made accordingly. The objective of this thesis is to provide solutions to some of the main challenges of on-board perception systems, such as extrinsic calibration of sensors, object detection, and deployment on real platforms. First, a calibration method for obtaining the relative transformation between pairs of sensors is introduced, eliminating the complex manual adjustment of these parameters. The algorithm makes use of an original calibration pattern and supports LiDARs as well as monocular and stereo cameras. Second, different deep learning models for 3D object detection using LiDAR data in its bird's-eye-view projection are presented. Through a novel encoding, architectures tailored to image detection are used to process the 3D information of point clouds in real time. Furthermore, the effectiveness of using this projection together with image features is analyzed. Finally, a method to mitigate the accuracy drop of LiDAR-based detection networks when deployed in ad-hoc configurations is introduced.
For this purpose, the simulation of virtual signals mimicking the specifications of the desired real device is used to generate new annotated datasets that can be used to train the models. The performance of the proposed methods is evaluated against other existing alternatives using reference benchmarks in the field of computer vision (KITTI and nuScenes) and through experiments in open traffic with an automated vehicle. The results obtained demonstrate the relevance of the presented work and its suitability for commercial use.
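The bird's-eye-view encoding mentioned above can be illustrated with a minimal height-map projection; real detectors typically stack several such channels (height, intensity, point density) into a pseudo-image fed to a 2D detection network. The ranges and resolution here are assumptions, not the thesis configuration.

```python
def bev_height_map(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), res=0.5):
    """Project a 3-D point cloud onto a bird's-eye-view grid, keeping the
    maximum height (z) per cell: one channel of a BEV pseudo-image."""
    nx = int((x_range[1] - x_range[0]) / res)
    ny = int((y_range[1] - y_range[0]) / res)
    grid = [[0.0] * ny for _ in range(nx)]
    for x, y, z in points:
        if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
            i = int((x - x_range[0]) / res)   # forward axis -> rows
            j = int((y - y_range[0]) / res)   # lateral axis -> columns
            grid[i][j] = max(grid[i][j], z)   # keep the tallest return
    return grid

# Two returns from the same object land in the same 0.5 m cell.
grid = bev_height_map([(10.2, 0.3, 1.5), (10.3, 0.4, 2.0)])
```

The appeal of the projection is that the resulting grid is an ordinary dense 2D array, so image detection architectures apply without modification.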

    Vehicle localization with enhanced robustness for urban automated driving


    Sensor fusion in driving assistance systems

    Life in developed and developing countries is highly dependent on road and urban motor transport. This activity involves a high cost for its active and passive users in terms of pollution and accidents, which are largely attributable to the human factor. New developments in safety and driving assistance, called Advanced Driving Assistance Systems (ADAS), are intended to improve security in transportation and, in the mid-term, lead to autonomous driving. ADAS, like human driving, are based on sensors that provide information about the environment, and sensor reliability is crucial for ADAS applications in the same way that sensing abilities are crucial for human driving. One way to improve sensor reliability is the use of sensor fusion: developing novel strategies for environment modeling with the help of several sensors and obtaining enhanced information from the combination of the available data. The present thesis offers a novel solution for obstacle detection and classification in automotive applications using sensor fusion with two sensors that are widely available in the market: a visible-spectrum camera and a laser scanner. Cameras and lasers are commonly used sensors in the scientific literature, increasingly affordable and ready to be deployed in real-world applications. The proposed solution provides detection and classification of some obstacles commonly present on the road, such as pedestrians and bicycles. Novel approaches for detection and classification have been explored in this thesis, from the classification of point-cloud clusters obtained from the laser scanner, to domain adaptation techniques for creating synthetic image datasets, including intelligent cluster extraction and ground detection and removal in point clouds.
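The ground detection and removal step can be sketched with a deliberately simple height-threshold heuristic for a roughly flat scene; a RANSAC plane fit is the usual robust choice for sloped or uneven ground, and the margin value below is an assumption rather than a value from the thesis.

```python
def remove_ground(points, margin=0.2):
    """Crude ground removal: estimate the ground height as the lowest
    decile of z values and drop points within a margin above it.
    points: list of (x, y, z) tuples; returns the non-ground points."""
    zs = sorted(p[2] for p in points)
    ground_z = zs[len(zs) // 10]          # robust low estimate of ground height
    return [p for p in points if p[2] > ground_z + margin]

# Nine returns from the road surface (z = 0) and one obstacle at z = 1.5 m.
pts = [(float(i), 0.0, 0.0) for i in range(9)] + [(1.0, 2.0, 1.5)]
obstacles = remove_ground(pts)
```

Removing the ground plane first keeps the subsequent clustering from merging separate obstacles through the ground points that connect them.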

    Proof-of-concept of a single-point Time-of-Flight LiDAR system and guidelines towards integrated high-accuracy timing, advanced polarization sensing and scanning with a MEMS micromirror

    The core focus of the work reported herein is the realization of a functional Light Detection and Ranging (LiDAR) sensor to validate the direct Time-of-Flight (ToF) ranging concept, and the acquisition of critical knowledge regarding the pivotal aspects limiting the sensor's performance, in view of forthcoming improvements toward a realistic sensor targeted at automotive applications. The ToF LiDAR system is implemented through an architecture encompassing both optical and electronic functions, and is subsequently characterized through a sequence of test procedures usually applied in benchmarking of LiDAR sensors. The design employs a hybrid edge-emitting laser diode (pulsed at 6 kHz, 46 ns temporal FWHM, 7 ns rise time; 919 nm wavelength with 5 nm FWHM), a PIN photodiode to detect the back-reflected radiation, a transamplification stage, and two Time-to-Digital Converters (TDCs) with leading-edge discrimination electronics to mark the transit time between emission and detection events. Furthermore, a flexible modular design is adopted using two separate Printed Circuit Boards (PCBs) comprising the transmitter (TX) and the receiver (RX), i.e., detection and signal processing. The overall output beam divergence is 0.4º×1º, and an optical peak power of 60 W (87% overall throughput) is achieved. The sensor is tested indoors from 0.56 to 4.42 meters, and the distance is directly estimated from the pulse transit time. The precision within these working distances ranges from 4 cm to 7 cm, reflected in a Signal-to-Noise Ratio (SNR) between 12 dB and 18 dB.
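The direct ToF ranging principle used above reduces to distance = c·Δt/2, since the pulse travels out to the target and back. A minimal sketch (the TDC timestamps and the offset are hypothetical values, not measurements from the thesis):

```python
C = 299_792_458.0  # speed of light, m/s (air ~ vacuum for this purpose)

def tof_range_m(t_emit_ns, t_detect_ns, offset_ns=0.0):
    """Range from a single pulse transit time measured by the TDCs.
    offset_ns models the fixed timing bias between the TX and RX
    optoelectronic paths that calibration removes."""
    transit_s = (t_detect_ns - t_emit_ns - offset_ns) * 1e-9
    return C * transit_s / 2.0   # divide by 2: out-and-back path

# A 20 ns round trip corresponds to roughly 3 m of range.
r = tof_range_m(0.0, 20.0)
```

At the reported 4–7 cm precision, timing must be resolved to a few hundred picoseconds, which is why the TDC and discriminator electronics dominate the design.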
The design requires a calibration procedure to correct systematic errors in the range measurements, induced by two sources: a timing offset due to architecture-inherent differences in the optoelectronic paths, and a supplementary intensity-dependent bias, known as time-walk, which results from the leading-edge discrimination. The calibrated system achieves a mean accuracy of 1 cm. Two distinct target materials are used for characterization and performance evaluation: a metallic automotive paint and a diffuse material. This selection is representative of two extremes of actual LiDAR applications. The optical and electronic characterization is thoroughly detailed, including the observation of good agreement between empirical measurements and simulations in ZEMAX, for the optical design, and in a SPICE software, for the electrical subsystem. The most significant limitation of the implemented design is identified as a consequence of the leading-edge discrimination. A Constant Fraction Discriminator targeting sub-millimetric accuracy is proposed to replace the previous signal processing element. This modification is necessary to virtually eliminate the aforementioned systematic bias in range sensing due to the intensity dependence. A further crucial addition is a scanning mechanism to supply the Field-of-View (FOV) required for automotive usage. The opto-electromechanical guidelines to interface a MEMS micromirror scanner, achieving a 46º×17º FOV, with the LiDAR sensor are furnished. Finally, a proof of principle for the use of polarization in material classification for advanced processing is carried out, aiming to complement the ToF measurements. The original design is modified to include a variable wave retarder, allowing the detection of orthogonal linear polarization states using a single detector.
The material classification with polarization sensing is tested with the previously referred materials culminating in an 87% and 11% degree of linear polarization retention from the metallic paint and the diffuse material, respectively, computed by Stokes parameters calculus. The procedure was independently validated under the same conditions with a micro-polarizer camera (92% and 13% polarization retention).
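The degree of linear polarization reported above follows from the Stokes parameters. Below is a sketch of the standard four-angle computation; the thesis instead measures orthogonal states sequentially through a variable wave retarder with a single detector, and the intensities here are hypothetical.

```python
import math

def dolp(i0, i45, i90, i135):
    """Degree of linear polarization from intensities measured behind
    linear polarizers at 0°, 45°, 90°, and 135°.
    Stokes parameters: S0 = total intensity, S1 = I0 - I90, S2 = I45 - I135;
    DoLP = sqrt(S1^2 + S2^2) / S0."""
    s0 = (i0 + i45 + i90 + i135) / 2.0
    s1 = i0 - i90
    s2 = i45 - i135
    return math.hypot(s1, s2) / s0

# Fully linearly polarized light at 0° vs. fully unpolarized light.
full = dolp(1.0, 0.5, 0.0, 0.5)   # -> 1.0
none = dolp(0.5, 0.5, 0.5, 0.5)   # -> 0.0
```

A specular metallic surface largely preserves the transmitted polarization (high DoLP of the return), while a diffuse surface scrambles it, which is the contrast the 87% vs. 11% retention figures exploit.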

    Real-time performance-focused on localisation techniques for autonomous vehicle: a review


    DEVELOPMENT OF A NOVEL VEHICLE GUIDANCE SYSTEM: VEHICLE RISK MITIGATION AND CONTROL

    Over half of fatal vehicular crashes occur when vehicles leave their designated travel lane and enter other lanes or leave the roadway. Lane departure accidents also cost society billions of dollars. Recent vehicle technology research into driver assistance and vehicle autonomy has produced systems that assume various driving tasks. However, these systems do not work for all roads and travel conditions. The purpose of this research study was to begin the development of a novel vehicle guidance approach, specifically studying how the vehicle interacts with the system to detect departures and control the vehicle. A literature review was conducted, covering topics such as vehicle sensors, control methods, environment recognition, driver assistance methods, vehicle autonomy methods, communication, positioning, and regulations. Researchers identified environment independence, recognition accuracy, computational load, and industry collaboration as areas of need in intelligent transportation. A novel method of vehicle guidance was conceptualized, known as the MwRSF Smart Barrier. The vision of this method is to send verified road path data, based on AASHTO design and vehicle dynamics aspects, to guide the vehicle. To further development, research was done to determine which aspects of vehicle dynamics and trajectory trends can be used to predict departures and control the vehicle. Tire-to-road friction capacity and roll stability were identified as risks that can be mitigated with future road path knowledge. Road departure characteristics were mathematically developed. It was shown that lateral departure, orientation error, and curvature error are parametrically linked, and these metrics were discussed as the basis for departure prediction. A controller with three parallel PID loops for modulating the steering inputs of a virtual vehicle to keep it on the path was developed.
The controller was informed by a matrix of XY road coordinates, current road curvature, and future road curvature, and was able to keep the simulated vehicle within 1 in of the centerline target path. Recommendations were made for the creation of warning modules and threshold levels, improvements to the vehicle controller, and ultimately full-scale testing. Advisor: Cody S. Stoll
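The three-parallel-PID steering concept can be sketched with a textbook discrete PID loop. The gains, time step, and error values below are illustrative assumptions, not the tuned values from the study.

```python
class PID:
    """Discrete PID loop. The steering command is the sum of three such
    loops acting on lateral offset, orientation error, and curvature error."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def step(self, err):
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Hypothetical gains: one loop per departure metric, summed into a
# single steering input for the simulated vehicle.
lat = PID(0.8, 0.0, 0.1, 0.02)    # lateral departure (m)
head = PID(1.2, 0.0, 0.0, 0.02)   # orientation error (rad)
curv = PID(0.5, 0.0, 0.0, 0.02)   # curvature error (1/m)
steer = lat.step(0.3) + head.step(0.05) + curv.step(0.01)
```

Summing parallel loops keeps each departure metric's contribution tunable independently, which matches the parametric link between the three metrics noted above.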