31,313 research outputs found
Fitting cornering speed models with one-class support vector machines
© 2019 IEEE. This paper investigates the modelling of cornering speed using road curvature as a predictive variable, which is of interest for advanced driver assistance system (ADAS) applications including eco-driving assistance and curve warning. Such models are common in the driver modelling and human factors literature, yet lack reliable parameter estimation methods, instead requiring an ad-hoc evaluation of the upper envelope of the data followed by linear regression to that envelope. Considering the space of possible combinations of lateral acceleration and cornering speed, we cast the modelling of cornering speed as an 'outlier detection' problem which may be solved using one-class Support Vector Machine (SVM) methods from machine learning. For an existing cornering model, we suggest a fitting method using a specific choice of kernel function in a one-class SVM. As the parameters of the cornering speed model may be recovered from the SVM solution, this provides a more robust and reproducible fitting method for this model of cornering speed than the existing envelope-based approaches. In addition, it gives comparable outlier detection performance to generic SVM methods based on Radial Basis Function (RBF) kernels while reducing training times by a factor of 10, indicating potential for use in adaptive eco-driving assistance systems that require retraining either online or between drives.
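The envelope-plus-regression baseline that the abstract describes as ad hoc can be sketched as follows. Everything here is illustrative: the (curvature, speed) samples, the 5 m/s speed binning, and the linear envelope model a_lat = c0 + c1 * v are assumptions for the sketch, not the paper's actual data or model.

```python
# Hypothetical (curvature [1/m], speed [m/s]) samples; real data would
# come from driving logs.
samples = [(0.01, 20.0), (0.02, 15.0), (0.05, 10.0), (0.01, 25.0),
           (0.02, 18.0), (0.05, 12.0), (0.10, 8.0), (0.10, 6.0)]

# Lateral acceleration implied by each sample: a_lat = v^2 * kappa.
points = [(v, v * v * k) for k, v in samples]

# Envelope-based fit (the ad-hoc baseline the paper improves on):
# keep, per speed bin, the sample with maximum lateral acceleration,
# then fit a line a_lat = c0 + c1 * v to those envelope points.
bins = {}
for v, a in points:
    b = round(v / 5.0)          # 5 m/s bins (an arbitrary choice here)
    if b not in bins or a > bins[b][1]:
        bins[b] = (v, a)

env = list(bins.values())
n = len(env)
sx = sum(v for v, _ in env); sy = sum(a for _, a in env)
sxx = sum(v * v for v, _ in env); sxy = sum(v * a for v, a in env)
c1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # least-squares slope
c0 = (sy - c1 * sx) / n                          # least-squares intercept
print(f"envelope fit: a_lat = {c0:.3f} + {c1:.3f} * v")
```

The one-class SVM approach in the paper replaces the hand-tuned binning and envelope extraction with a single optimisation whose solution encodes the model parameters.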
A Radio-fingerprinting-based Vehicle Classification System for Intelligent Traffic Control in Smart Cities
The measurement and provision of precise and up-to-date traffic-related key
performance indicators is a crucial factor for intelligent
traffic control systems in upcoming smart cities. The street network is
considered as a highly-dynamic Cyber Physical System (CPS) where measured
information forms the foundation for dynamic control methods aiming to optimize
the overall system state. Apart from global system parameters like traffic flow
and density, specific data such as velocity of individual vehicles as well as
vehicle type information can be leveraged for highly sophisticated traffic
control methods like dynamic type-specific lane assignments. Consequently,
solutions for acquiring these kinds of information are required and have to
comply with strict requirements ranging from accuracy over cost-efficiency to
privacy preservation. In this paper, we present a system for classifying
vehicles based on their radio-fingerprint. In contrast to other approaches, the
proposed system is able to provide real-time capable and precise vehicle
classification as well as cost-efficient installation and maintenance, privacy
preservation and weather independence. The system performance in terms of
accuracy and resource-efficiency is evaluated in the field using comprehensive
measurements. Using a machine learning based approach, the resulting success
ratio for classifying cars and trucks is above 99%.
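The abstract does not specify which learning method is used, so the following is a minimal stand-in: a nearest-centroid classifier on two hypothetical radio-fingerprint features (mean link attenuation and fade duration as a vehicle passes a transmitter-receiver pair). The feature names and values are invented for illustration.

```python
# Hypothetical training fingerprints: (mean attenuation [dB], fade duration [ms]).
train = {
    "car":   [(3.1, 120.0), (2.8, 110.0), (3.4, 130.0)],
    "truck": [(9.5, 340.0), (10.2, 360.0), (8.9, 310.0)],
}

# Per-class centroid: the mean of each feature over the training samples.
centroids = {
    label: tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))
    for label, feats in train.items()
}

def classify(sample):
    # Assign the label whose centroid is closest in squared Euclidean distance.
    return min(centroids,
               key=lambda c: sum((sample[i] - centroids[c][i]) ** 2
                                 for i in range(2)))

print(classify((3.0, 115.0)))   # close to the car centroid
print(classify((9.8, 350.0)))   # close to the truck centroid
```

Trucks perturb the radio link far more strongly than cars, which is why even this simple geometric separation works on the toy numbers; the paper's reported 99% accuracy will rest on richer features and a stronger classifier.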
LIDAR-Camera Fusion for Road Detection Using Fully Convolutional Neural Networks
In this work, a deep learning approach has been developed to carry out road
detection by fusing LIDAR point clouds and camera images. An unstructured and
sparse point cloud is first projected onto the camera image plane and then
upsampled to obtain a set of dense 2D images encoding spatial information.
Several fully convolutional neural networks (FCNs) are then trained to carry
out road detection, either by using data from a single sensor, or by using
three fusion strategies: early, late, and the newly proposed cross fusion.
Whereas in the former two fusion approaches, the integration of multimodal
information is carried out at a predefined depth level, the cross fusion FCN is
designed to directly learn from data where to integrate information; this is
accomplished by using trainable cross connections between the LIDAR and the
camera processing branches.
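The trainable cross connections can be illustrated with a scalar toy model, assuming each branch is reduced to a single value per depth level. Real FCN branches carry feature maps, and the cross-connection weights are learned by backpropagation rather than set by hand.

```python
# Minimal scalar sketch of the cross-fusion idea: two processing branches
# exchange information through cross-connection weights at every depth level.
def cross_fusion(cam, lidar, layers):
    for w_cam, w_lidar, cross_cl, cross_lc in layers:
        cam_new = w_cam * cam + cross_lc * lidar      # camera branch + lidar inflow
        lidar_new = w_lidar * lidar + cross_cl * cam  # lidar branch + camera inflow
        cam, lidar = cam_new, lidar_new
    return cam + lidar  # fused output

# Two layers with hypothetical weights. Setting all cross_* weights to zero
# except at one depth recovers fusion at a fixed level, which is why early
# and late fusion can be seen as special cases of this scheme.
layers = [(1.0, 1.0, 0.5, 0.25), (0.8, 0.9, 0.1, 0.2)]
print(cross_fusion(1.0, 2.0, layers))
```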
To further highlight the benefits of using a multimodal system for road
detection, a data set consisting of visually challenging scenes was extracted
from driving sequences of the KITTI raw data set. It was then demonstrated
that, as expected, a purely camera-based FCN severely underperforms on this
data set. A multimodal system, on the other hand, is still able to provide high
accuracy. Finally, the proposed cross fusion FCN was evaluated on the KITTI
road benchmark where it achieved excellent performance, with a MaxF score of
96.03%, ranking it among the top-performing approaches.
Learning sound representations using trainable COPE feature extractors
Sound analysis research has mainly been focused on speech and music
processing. The deployed methodologies are not suitable for analysis of sounds
with varying background noise, in many cases with very low signal-to-noise
ratio (SNR). In this paper, we present a method for the detection of patterns
of interest in audio signals. We propose novel trainable feature extractors,
which we call COPE (Combination of Peaks of Energy). The structure of a COPE
feature extractor is determined using a single prototype sound pattern in an
automatic configuration process, which is a type of representation learning. We
construct a set of COPE feature extractors, configured on a number of training
patterns. Then we take their responses to build feature vectors that we use in
combination with a classifier to detect and classify patterns of interest in
audio signals. We carried out experiments on four public data sets: MIVIA audio
events, MIVIA road events, ESC-10 and TU Dortmund data sets. The results that
we achieved (recognition rate equal to 91.71% on the MIVIA audio events, 94% on
the MIVIA road events, 81.25% on the ESC-10 and 94.27% on the TU Dortmund)
demonstrate the effectiveness of the proposed method and are higher than the
ones obtained by other existing approaches. The COPE feature extractors have
high robustness to variations of SNR. Real-time performance is achieved even
when the value of a large number of features is computed.

Comment: Accepted for publication in Pattern Recognition
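A toy sketch of the configuration idea, assuming a 1-D energy envelope and a simple local-maxima peak detector. The paper operates on time-frequency representations, and the real configuration and response computation are more involved; the threshold and prototype values here are invented.

```python
# A COPE extractor is configured by recording the relative positions of
# energy peaks in a single prototype pattern; its response measures how
# well a new signal reproduces that peak constellation.
def find_peaks(energy, threshold):
    return [i for i in range(1, len(energy) - 1)
            if energy[i] > threshold
            and energy[i] >= energy[i - 1] and energy[i] >= energy[i + 1]]

def configure(prototype, threshold=0.5):
    peaks = find_peaks(prototype, threshold)
    anchor = peaks[0]
    return [p - anchor for p in peaks]   # peak offsets define the extractor

def response(extractor, energy, pos):
    # Response at `pos`: fraction of configured peaks that find support.
    hits = sum(1 for off in extractor
               if 0 <= pos + off < len(energy) and energy[pos + off] > 0.5)
    return hits / len(extractor)

proto = [0.1, 0.9, 0.2, 0.1, 0.8, 0.1]
ext = configure(proto)           # offsets of the two prototype peaks
print(response(ext, proto, 1))   # perfect match on the prototype itself
```

A bank of such extractors, each configured on a different training pattern, yields the feature vectors that the paper feeds to a classifier.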
A machine learning approach to pedestrian detection for autonomous vehicles using High-Definition 3D Range Data
This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on processing the information generated by a Velodyne HDL-64E LIDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in each cube. The work presents an exhaustive analysis of the performance of three different machine learning algorithms: k-Nearest Neighbours (kNN), Naïve Bayes classifier (NBC), and Support Vector Machine (SVM). These algorithms have been trained with 1931 samples. The final performance of the method, measured in a real traffic scenario containing 16 pedestrians and 469 samples of non-pedestrians, shows sensitivity (81.2%), accuracy (96.2%) and specificity (96.8%).

This work was partially supported by ViSelTR (ref. TIN2012-39279) and cDrone (ref. TIN2013-45920-R) projects of the Spanish Government, and the “Research Programme for Groups of Scientific Excellence at Region of Murcia” of the Seneca Foundation (Agency for Science and Technology of the Region of Murcia—19895/GERM/15). The 3D LIDAR has been funded by the UPCA13-3E-1929 infrastructure projects of the Spanish Government. Diego Alonso wishes to thank the Spanish Ministerio de Educación, Cultura y Deporte, Subprograma Estatal de Movilidad, Plan Estatal de Investigación Científica y Técnica y de Innovación 2013–2016 for grant CAS14/00238.
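The classification step can be illustrated with a 1-nearest-neighbour toy, the simplest member of the kNN family the article evaluates. The per-cube features here (height and horizontal extent of the enclosed points) and all training values are hypothetical; the article's actual features are derived from the XY, XZ, and YZ projections.

```python
import math

# Hypothetical per-cube features: (height [m], horizontal extent [m]).
train = [((1.7, 0.5), "pedestrian"), ((1.8, 0.6), "pedestrian"),
         ((1.5, 2.0), "non-pedestrian"), ((0.8, 1.5), "non-pedestrian")]

def nn_classify(feat):
    # 1-NN: return the label of the closest training sample.
    return min(train, key=lambda t: math.dist(feat, t[0]))[1]

print(nn_classify((1.75, 0.55)))   # tall, narrow cube: pedestrian-like
print(nn_classify((1.0, 1.8)))     # low, wide cube: non-pedestrian-like
```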