2,711 research outputs found

    Connected and Autonomous Vehicles Applications Development and Evaluation for Transportation Cyber-Physical Systems

    Cyber-Physical Systems (CPS) seamlessly integrate computation, networking and physical devices. A Connected and Autonomous Vehicle (CAV) system, in which each vehicle can wirelessly communicate and share data with other vehicles or infrastructure (e.g., traffic signals, roadside units), requires a Transportation Cyber-Physical System (TCPS) to improve safety and mobility and to reduce greenhouse gas emissions. Unfortunately, a typical TCPS with a centralized computing service cannot support real-time CAV applications because of often unpredictable network latency, high data loss rates and expensive communication bandwidth, especially in a mobile network such as a CAV environment. Edge computing, a new concept for CPS, distributes the resources for communication, computation, control, and storage at different edges of the system. A TCPS with an edge computing strategy forms an edge-centric TCPS. Such an edge-centric TCPS can reduce data loss and data delivery delay, and fulfill high-bandwidth requirements. Within the edge-centric TCPS, Vehicle-to-X (V2X) communication, along with in-vehicle sensors, provides a 360-degree view for CAVs that enables autonomous vehicle operation beyond the sensor range. The addition of wireless connectivity improves the operational efficiency of CAVs by providing real-time roadway information, such as traffic signal phasing and timing, downstream traffic incident alerts, and predictions of future traffic queues. In addition, the temporal variation of roadway traffic can be captured by sharing Basic Safety Messages (BSMs) from each vehicle through communication between vehicles as well as with roadside infrastructure (e.g., traffic signals, roadside units) and traffic management centers. In the early days of CAV deployment, data will be collected only from the limited number of CAVs on the road, not from non-connected vehicles, because of the low CAV penetration rate.
This low penetration rate introduces noise into the traffic data. The lack of data, combined with the data loss rate of the wireless CAV environment, makes it challenging to predict traffic behavior, which is dynamic over time. To address this challenge, it is important to develop and evaluate a machine learning technique that captures stochastic variation in traffic patterns over time. This dissertation focuses on the development and evaluation of several connected and autonomous vehicle applications in an edge-centric TCPS: adaptive queue prediction, traffic data prediction, dynamic routing and Cooperative Adaptive Cruise Control (CACC). An adaptive queue prediction algorithm is described in Chapter 2 for predicting real-time traffic queue status in an edge-centric TCPS. Chapter 3 presents noise reduction models that reduce the noise in traffic data generated from BSMs at different CAV penetration rates and evaluates the performance of a Long Short-Term Memory (LSTM) model for predicting traffic data from the resulting filtered data set. The development and evaluation of a dynamic routing application in a CV environment is detailed in Chapter 4, with the goals of reducing incident recovery time and increasing safety on a freeway. Chapter 5 details an evaluation framework for car-following models used in CACC controller design, assessing vehicle dynamics and string stability to ensure user acceptance. The methods presented in this dissertation were shown to provide positive improvements in transportation mobility. This research can lead to real-world deployment of these applications, as the dissertation focuses on an edge-centric TCPS deployment strategy.
In addition, as the multiple CAV applications presented in this dissertation can be supported simultaneously by the same TCPS, public investment is limited to infrastructure, such as roadside infrastructure and back-end computing infrastructure. These connected and autonomous vehicle applications can therefore potentially provide significant economic benefits compared to their cost.
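The noise-reduction idea behind Chapter 3 can be illustrated with a toy smoothing filter. This is only a minimal sketch under assumed data (the dissertation's actual models and the speed values below are not from the source): a moving average over per-interval mean speeds, the kind of baseline filter used when only a few CAVs report BSMs in each interval.

```python
def moving_average(speeds, window=3):
    """Smooth a list of per-interval mean speeds with a centred moving
    average; at the edges, only the available neighbours are used."""
    smoothed = []
    half = window // 2
    for i in range(len(speeds)):
        lo = max(0, i - half)
        hi = min(len(speeds), i + half + 1)
        smoothed.append(sum(speeds[lo:hi]) / (hi - lo))
    return smoothed

# Noisy per-minute link speeds (mph) as might be estimated from sparse
# BSM samples at a low CAV penetration rate (illustrative values).
noisy = [55.0, 30.0, 52.0, 54.0, 20.0, 50.0]
print(moving_average(noisy))
```

A filter like this trades responsiveness for stability; the dissertation's models target the same trade-off with learned rather than fixed weights.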

    Audio-Video Event Recognition System For Public Transport Security

    This paper presents an audio-video surveillance system for automatic surveillance in public transport vehicles. The system comprises six modules, including three novel ones: (i) Face Detection and Tracking, (ii) Audio Event Detection and (iii) Audio-Video Scenario Recognition. The Face Detection and Tracking module is responsible for detecting and tracking the faces of people in front of the cameras. The Audio Event Detection module detects abnormal audio events, which are precursors for detecting scenarios predefined by end-users. The Audio-Video Scenario Recognition module performs high-level interpretation of the observed objects by combining audio and video events based on spatio-temporal reasoning. The performance of the system is evaluated on a series of predefined audio, video and audio-video events specified using an audio-video event ontology.
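As a rough illustration of the temporal side of the audio-video combination the Scenario Recognition module performs, the sketch below pairs abnormal audio events with video events that overlap them in time. The event names, tuple layout and overlap rule are assumptions for illustration, not the paper's implementation (which also reasons spatially and against an ontology).

```python
def events_overlap(a_start, a_end, b_start, b_end):
    """True if the two closed time intervals intersect."""
    return a_start <= b_end and b_start <= a_end

def recognise_scenarios(audio_events, video_events):
    """Pair each abnormal audio event (label, start, end) with every
    video event that overlaps it in time; each pair is a candidate
    combined scenario for higher-level interpretation."""
    scenarios = []
    for a_label, a_s, a_e in audio_events:
        for v_label, v_s, v_e in video_events:
            if events_overlap(a_s, a_e, v_s, v_e):
                scenarios.append((a_label, v_label))
    return scenarios

# Illustrative events: a shout detected at t=4..6 s, two video events.
audio = [("shout", 4.0, 6.0)]
video = [("crowd_gathering", 5.0, 9.0), ("empty_aisle", 0.0, 3.0)]
print(recognise_scenarios(audio, video))
```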

    Effects of Individual Differences on Measurements’ Drowsiness-Detection Performance

    Individual differences (IDs) may reduce the accuracy of drowsy-driving detection by influencing measurements' drowsiness-detection performance (MDDP). The purpose of this paper is to propose a model that quantifies the effects of IDs on MDDP and to find measurements less affected by IDs for building drowsiness-detection models. Through field experiments, drivers' naturalistic driving data and subjective drowsiness levels were collected, and drowsiness-related measurements were calculated using a double-layer sliding time window. In the model, MDDP is represented by the |Z-statistic| of the Wilcoxon test. First, each individual driver's measurements were analysed with the Wilcoxon test. Next, drivers were combined in pairs, the measurements of the paired-driver combinations were analysed with the Wilcoxon test, and the measurements' IDs for the paired-driver combinations were calculated. Finally, linear regression was used to fit the measurements' IDs against the changes in MDDP, defined as the individual driver's |Z-statistic| minus the paired-driver combination's |Z-statistic|; the absolute value of the slope (|k|) indicates the effect of IDs on MDDP. As a result, the |k| of the mean percentage of eyelid closure (MPECL) is the lowest (4.95), which shows that MPECL is the least affected by IDs. The results contribute to the selection of measurements for drowsiness-detection models that account for IDs.
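The MPECL measurement is closely related to the classic PERCLOS statistic: the fraction of time within a window that the eyelid is closed beyond a threshold. The sketch below is a simplified, hypothetical version of such a sliding-window calculation; the window sizes, step, closure threshold and the single-layer simplification are assumptions, not the paper's double-layer parameters.

```python
def perclos(eyelid_openness, closed_threshold=0.2):
    """Fraction of samples in a window whose eyelid openness
    (0 = fully closed, 1 = fully open) falls below the threshold."""
    closed = sum(1 for o in eyelid_openness if o < closed_threshold)
    return closed / len(eyelid_openness)

def mean_perclos(signal, window=4, step=2):
    """Mean PERCLOS over sliding windows -- a single-layer stand-in for
    the paper's double-layer sliding-time-window calculation."""
    values = [perclos(signal[i:i + window])
              for i in range(0, len(signal) - window + 1, step)]
    return sum(values) / len(values)
```

A measurement like this is then compared across drivers (e.g., via the Wilcoxon |Z-statistic|) to judge how strongly individual differences affect it.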

    Sensor fusion in driving assistance systems

    Life in developed and developing countries is highly dependent on road and urban motor transport. This activity involves a high cost for its active and passive users in terms of pollution and accidents, which are largely attributable to the human factor. New developments in safety and driving assistance, called Advanced Driving Assistance Systems (ADAS), are intended to improve safety in transportation and, in the mid-term, lead to autonomous driving. ADAS, like human driving, are based on sensors that provide information about the environment, and sensor reliability is crucial for ADAS applications in the same way that sensing abilities are crucial for human driving. One way to improve sensor reliability is Sensor Fusion: developing novel strategies for modeling the driving environment with the help of several sensors and obtaining enhanced information from the combination of the available data. The present thesis offers a novel solution for obstacle detection and classification in automotive applications using sensor fusion with two sensors highly available in the market: the visible-spectrum camera and the laser scanner. Cameras and lasers are commonly used sensors in the scientific literature, increasingly affordable and ready to be deployed in real-world applications. The proposed solution provides detection and classification of some of the obstacles commonly present in the road, such as pedestrians and bicycles. Novel approaches for detection and classification have been explored in this thesis, from point-cloud clustering classification for the laser scanner, to domain adaptation techniques for the creation of synthetic image datasets, including intelligent cluster extraction and ground detection and removal from point clouds. (International Mention in the doctoral degree. Official Doctoral Programme in Electrical, Electronic and Automation Engineering. Committee president: Cristina Olaverri Monreal; secretary: Arturo de la Escalera Hueso; member: José Eugenio Naranjo Hernánde)
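One of the point-cloud steps mentioned above, ground detection and removal, can be illustrated with a deliberately simple flat-ground height threshold. The thesis uses a more elaborate method; the function, ground height and tolerance below are assumptions for illustration only.

```python
def remove_ground(points, ground_z=0.0, tolerance=0.15):
    """Keep laser-scanner points (x, y, z) whose height is above
    ground_z + tolerance; points within the tolerance band are
    treated as road-surface returns and discarded."""
    return [p for p in points if p[2] > ground_z + tolerance]

cloud = [(1.0, 2.0, 0.05), (1.1, 2.1, 0.10),   # road-surface returns
         (3.0, 0.5, 1.20), (3.1, 0.6, 1.55)]   # obstacle (e.g., pedestrian)
print(remove_ground(cloud))
```

Removing ground points before clustering keeps obstacle clusters from being merged through the road surface, which is why such a step precedes cluster extraction.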

    Automatic vehicle detection and tracking in aerial video

    This thesis is concerned with the challenging tasks of automatic, real-time vehicle detection and tracking from aerial video. The aim of this thesis is to build an automatic system that can accurately localise any vehicles that appear in aerial video frames and track the target vehicles with trackers. Vehicle detection and tracking have many applications, and this has been an active area of research in recent years; however, certain realistic environments remain a challenge. This thesis develops vehicle detection and tracking algorithms which enhance robustness beyond the existing approaches. The vehicle detection system proposed in this thesis is based on different object categorisation approaches, using colour and texture features in both point and area-template forms. The thesis also proposes a novel Self-Learning Tracking and Detection approach, which is an extension of the existing Tracking Learning Detection (TLD) algorithm. There are a number of challenges in vehicle detection and tracking. The most difficult challenge in detection is distinguishing and clustering the target vehicle from background objects and noise. Under certain conditions, the images captured from Unmanned Aerial Vehicles (UAVs) are also blurred; for example, turbulence may shake the UAV during flight. This thesis tackles these challenges by applying integrated multiple feature descriptors for real-time processing. Three vehicle detection approaches are proposed: the HSV-GLCM feature approach, the ISM-SIFT feature approach and the FAST-HoG approach. The general vehicle detection approaches used have highly flexible implicit shape representations; they are based on training samples in both positive and negative sets and use updated classifiers to distinguish the targets. It was found that the detection results attained using HSV-GLCM texture features can be affected by blurring; the proposed detection algorithms can further segment the edges of the vehicles from the background. Using point descriptor features can solve the blurring problem; however, the large amount of information contained in point descriptors can lead to processing times that are too long for real-time applications. The FAST-HoG approach, which combines the point feature and the shape feature, is therefore proposed; this new approach speeds up processing to attain real-time performance. The HoG descriptor is widely used in object recognition, as it has a strong ability to represent the shape vector of an object; however, the original HoG feature is sensitive to the orientation of the target, and this method improves the algorithm by incorporating the direction vectors of the targets. For the tracking process, a novel tracking approach is proposed as an extension of the TLD algorithm in order to track multiple targets. The extended approach upgrades the original system, which can only track a single target that must be selected before the detection and tracking process. The greatest challenge in vehicle tracking is long-term tracking: the target object can change its appearance during the process, and illumination and scale changes can also occur. The original TLD assumed that the tracker can make errors during the tracking process and that the accumulation of these errors could cause tracking failure, so it introduced a learning approach between tracking and detection, adding a pair of inspectors (positive and negative) to constantly estimate errors. This thesis extends the TLD approach with a new detection method in order to achieve multiple-target tracking. A Forward and Backward Tracking approach is proposed to eliminate tracking errors and other problems such as occlusion. The main purpose of the proposed tracking system is to learn the features of the targets during tracking and re-train the detection classifier for further processing. This thesis puts particular emphasis on vehicle detection and tracking in extreme scenarios such as crowded-highway vehicle detection, blurred images and changes in the appearance of the targets. Compared with existing detection and tracking approaches, the proposed approaches demonstrate a robust increase in accuracy in each scenario.
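The Forward and Backward Tracking idea can be sketched as a consistency check: track a point forward in time, track the result backward, and reject the track if it does not return near its start. The toy functions and threshold below are illustrative assumptions, not the thesis' implementation, which operates on full tracker trajectories.

```python
def forward_backward_error(start_point, backward_point):
    """Euclidean distance between a point's original position and its
    position after being tracked forward and then backward; large
    values indicate a tracking failure (e.g., drift or occlusion)."""
    dx = start_point[0] - backward_point[0]
    dy = start_point[1] - backward_point[1]
    return (dx * dx + dy * dy) ** 0.5

def reliable(start_point, backward_point, max_error=2.0):
    """Accept the track only if the forward-backward error is small;
    the 2.0-pixel threshold here is an illustrative assumption."""
    return forward_backward_error(start_point, backward_point) <= max_error
```

Tracks rejected by such a check can be dropped instead of being fed back into classifier re-training, which is how a forward-backward test limits error accumulation.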

    Transportation mode recognition fusing wearable motion, sound and vision sensors

    We present the first work that investigates the potential of improving the performance of transportation mode recognition by fusing multimodal data from wearable sensors: motion, sound and vision. We first train three independent deep neural network (DNN) classifiers, which work with the three types of sensors, respectively. We then propose two schemes that fuse the classification results from the three mono-modal classifiers. The first scheme makes an ensemble decision with fixed rules, including Sum, Product, Majority Voting, and Borda Count. The second scheme is an adaptive fuser built as another classifier (including Naive Bayes, Decision Tree, Random Forest and Neural Network) that learns enhanced predictions by combining the outputs from the three mono-modal classifiers. We verify the advantage of the proposed method on the state-of-the-art Sussex-Huawei Locomotion and Transportation (SHL) dataset, recognizing the eight transportation activities: Still, Walk, Run, Bike, Bus, Car, Train and Subway. We achieve F1 scores of 79.4%, 82.1% and 72.8% with the mono-modal motion, sound and vision classifiers, respectively. The F1 score is remarkably improved to 94.5% and 95.5% by the two data fusion schemes, respectively. The recognition performance can be further improved with a post-processing scheme that exploits the temporal continuity of transportation modes. When assessing generalization of the model to unseen data, we show that while performance is reduced, as expected, for each individual classifier, the benefits of fusion are retained, with performance improved by 15 percentage points. Beyond the performance increase itself, this work, most importantly, opens up the possibility of dynamically fusing modalities to achieve distinct power-performance trade-offs at run time.
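The fixed-rule fusion scheme can be sketched with plain probability vectors. The class scores below are made up for illustration; only the Sum, Product and Majority Voting rules themselves follow their standard definitions as named in the abstract (Borda Count is omitted for brevity).

```python
def sum_rule(prob_lists):
    """Sum per-class probabilities across classifiers, pick the argmax."""
    n_classes = len(prob_lists[0])
    totals = [sum(p[c] for p in prob_lists) for c in range(n_classes)]
    return totals.index(max(totals))

def product_rule(prob_lists):
    """Multiply per-class probabilities across classifiers, pick the argmax."""
    n_classes = len(prob_lists[0])
    prods = []
    for c in range(n_classes):
        prod = 1.0
        for p in prob_lists:
            prod *= p[c]
        prods.append(prod)
    return prods.index(max(prods))

def majority_vote(prob_lists):
    """Each classifier votes for its argmax class; ties resolve to the
    lowest class index (an illustrative tie-break choice)."""
    votes = [p.index(max(p)) for p in prob_lists]
    return max(set(votes), key=lambda c: (votes.count(c), -c))

# Hypothetical outputs of the motion, sound and vision classifiers
# over three classes (e.g., Still, Walk, Bus).
probs = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.5, 0.4, 0.1]]
print(sum_rule(probs), product_rule(probs), majority_vote(probs))
```

The learned (adaptive) fuser differs only in replacing these fixed rules with a classifier trained on the three output vectors.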

    Detection of Motorcycles in Urban Traffic Using Video Analysis: A Review

    Motorcyclists are Vulnerable Road Users (VRUs) and, together with cyclists and pedestrians, are the traffic actors most affected by accidents in urban areas. Automatic video processing for urban surveillance cameras has the potential to detect and track these road users effectively. The present review focuses on algorithms for the detection and tracking of motorcycles using the surveillance infrastructure provided by CCTV cameras. Given the importance of the results achieved by Deep Learning in the field of computer vision, the use of such techniques for the detection and tracking of motorcycles is also reviewed. The paper ends by describing the performance measures generally used, presenting publicly available datasets (introducing the Urban Motorbike Dataset (UMD) with quantitative evaluation results for different detectors), discussing the challenges ahead and presenting a set of conclusions with proposed future work in this evolving area.
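Detection performance measures of the kind such reviews report are typically built on Intersection-over-Union (IoU) between predicted and ground-truth bounding boxes. A minimal sketch, with the box format assumed to be corner coordinates (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Area of overlap divided by area of union for two axis-aligned
    bounding boxes (x1, y1, x2, y2); 0.0 when they do not intersect."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    iw = max(0.0, ix2 - ix1)
    ih = max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is usually counted as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), from which precision and recall follow.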

    Computer vision for advanced driver assistance systems
