Practical classification of different moving targets using automotive radar and deep neural networks
In this work, the authors present results for the classification of different classes of targets (car, single and multiple people, bicycle) using automotive radar data and different neural networks. A fast implementation of radar algorithms for detection, tracking, and micro-Doppler extraction is proposed in conjunction with the automotive radar transceiver TEF810X and microcontroller unit SR32R274 manufactured by NXP Semiconductors. Three different types of neural networks are considered, namely a classic convolutional network, a residual network, and a combination of convolutional and recurrent networks, for different classification problems across the four classes of targets recorded. Considerable accuracy (close to 100% in some cases) and low latency of the radar pre-processing prior to classification (∼0.55 s to produce a 0.5 s long spectrogram) are demonstrated in this study, and possible shortcomings and outstanding issues are discussed.
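The micro-Doppler spectrogram described above can be sketched with a short-time Fourier transform. This is a minimal illustration, not the paper's implementation: the sampling rate, window length, hop size, and the sinusoidal micro-motion model are all assumptions chosen for the example.

```python
import numpy as np

# Assumed parameters (not from the paper): 2 kHz sampling, 0.5 s window
fs = 2000
t = np.arange(0, 0.5, 1 / fs)
# Simulated radar return: a carrier Doppler tone plus a sinusoidal
# micro-Doppler modulation (e.g. a swinging limb)
signal = np.exp(1j * 2 * np.pi * (100 * t + 20 * np.sin(2 * np.pi * 4 * t)))

win, hop = 128, 32  # STFT window and hop lengths (assumed)
frames = [signal[i:i + win] * np.hanning(win)
          for i in range(0, len(signal) - win + 1, hop)]
# Each column of the spectrogram is the magnitude FFT of one windowed frame,
# shifted so zero Doppler sits in the middle
spectrogram = np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)).T
print(spectrogram.shape)  # (Doppler bins, time frames)
```

In practice the windowed FFT runs over the slow-time samples of a tracked detection, which is what allows the ∼0.55 s latency quoted for producing a 0.5 s spectrogram.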
An Empirical Evaluation of Deep Learning on Highway Driving
Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we present a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at the frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.
Comment: Added a video for lane detection
SAR object classification using the DAE with a modified triplet restriction
Micro-Doppler Based Human-Robot Classification Using Ensemble and Deep Learning Approaches
Radar sensors can be used for analyzing the frequency shifts induced by micro-motions in both the range and velocity dimensions, identified as micro-Doppler (μ-D) and micro-Range (μ-R), respectively. Different moving targets have unique μ-D and μ-R signatures that can be used for target classification. Such classification can be used in numerous fields, such as gait recognition, safety, and surveillance. In this paper, a 25 GHz FMCW Single-Input Single-Output (SISO) radar is used in industrial safety for real-time human-robot identification. Due to the real-time constraint, joint Range-Doppler (R-D) maps are analyzed directly for our classification problem. Furthermore, a comparison between conventional classical learning approaches with handcrafted extracted features, ensemble classifiers, and deep learning approaches is presented. For the ensemble classifiers, restructured range and velocity profiles are passed directly to ensemble trees, such as gradient boosting and random forests, without feature extraction. Finally, a Deep Convolutional Neural Network (DCNN) is used, and raw R-D images are fed directly into the constructed network. The DCNN shows a superior performance of 99% accuracy in identifying humans from robots on a single R-D map.
Comment: 6 pages, accepted in IEEE Radar Conference 201
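The joint R-D maps the abstract feeds to its classifiers come from a 2D FFT over an FMCW data cube (fast time × slow time). The sketch below simulates one target and forms the map; the array sizes, target bins, and noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np

n_chirps, n_samples = 64, 128  # slow-time x fast-time grid (assumed)
rng = np.random.default_rng(0)

# Simulated beat signal for one target at range bin 30, Doppler bin 10
fast = np.arange(n_samples)
slow = np.arange(n_chirps)[:, None]
cube = np.exp(1j * 2 * np.pi * (30 * fast / n_samples + 10 * slow / n_chirps))
cube += 0.1 * (rng.standard_normal(cube.shape)
               + 1j * rng.standard_normal(cube.shape))

# Range FFT along fast time, then Doppler FFT along slow time
rd_map = np.abs(np.fft.fft(np.fft.fft(cube, axis=1), axis=0))
peak = np.unravel_index(np.argmax(rd_map), rd_map.shape)
print(peak)  # (Doppler bin, range bin) of the strongest return
```

A classifier (ensemble trees on restructured profiles, or a DCNN on the raw map) would then consume `rd_map` directly, which is what lets the pipeline skip handcrafted feature extraction.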
PERSIANN-CNN: Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks-Convolutional Neural Networks
Abstract
Accurate and timely precipitation estimates are critical for monitoring and forecasting natural disasters such as floods. Despite having high-resolution satellite information, precipitation estimation from remotely sensed data still suffers from methodological limitations. State-of-the-art deep learning algorithms, renowned for their skill in learning accurate patterns within large and complex datasets, appear well suited to the task of precipitation estimation, given the ample amount of high-resolution satellite data. In this study, the effectiveness of applying convolutional neural networks (CNNs) together with the infrared (IR) and water vapor (WV) channels from geostationary satellites for estimating precipitation rate is explored. The performance of the proposed model is evaluated during summer 2012 and 2013 over the central CONUS at a spatial resolution of 0.08° and at an hourly time scale. Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks (PERSIANN)–Cloud Classification System (CCS), which is an operational satellite-based product, and PERSIANN–Stacked Denoising Autoencoder (PERSIANN-SDAE) are employed as baseline models. Results demonstrate that the proposed model (PERSIANN-CNN) provides more accurate rainfall estimates compared to the baseline models at various temporal and spatial scales. Specifically, PERSIANN-CNN outperforms PERSIANN-CCS (and PERSIANN-SDAE) by 54% (and 23%) in the critical success index (CSI), demonstrating the detection skill of the model. Furthermore, the root-mean-square error (RMSE) of the rainfall estimates with respect to the National Centers for Environmental Prediction (NCEP) Stage IV gauge–radar data was lower for PERSIANN-CNN than for PERSIANN-CCS (PERSIANN-SDAE) by 37% (14%), showing the estimation accuracy of the proposed model.
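The critical success index (CSI) used to compare the models above is a standard contingency-table skill score: hits divided by hits plus misses plus false alarms over a rain/no-rain threshold. This is a minimal sketch; the 0.1 mm/h threshold and the toy arrays are assumptions for illustration.

```python
import numpy as np

def csi(estimated, observed, threshold=0.1):
    """CSI = hits / (hits + misses + false alarms) over a rain threshold."""
    est = estimated >= threshold
    obs = observed >= threshold
    hits = np.sum(est & obs)           # rain predicted and observed
    misses = np.sum(~est & obs)        # rain observed but not predicted
    false_alarms = np.sum(est & ~obs)  # rain predicted but not observed
    return hits / (hits + misses + false_alarms)

# Toy hourly rainfall values (mm/h) for five grid cells
obs = np.array([0.0, 0.5, 1.2, 0.0, 0.3])
est = np.array([0.2, 0.6, 0.0, 0.0, 0.4])
score = csi(est, obs)
print(score)  # 2 hits, 1 miss, 1 false alarm -> 0.5
```

A CSI of 1 means perfect detection; the reported 54% improvement over PERSIANN-CCS refers to this ratio computed against the Stage IV gauge-radar reference.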