Implementation and Evaluation of a Cooperative Vehicle-to-Pedestrian Safety Application
While Vehicle-to-Vehicle (V2V) safety applications based on Dedicated
Short-Range Communications (DSRC) have been undergoing standardization for more
than a decade, comparable applications for Vulnerable Road Users (VRUs) are
largely absent. The nonexistence of collaborative systems between
VRUs and vehicles was the main reason for this lack of attention. Recent
developments in Wi-Fi Direct and DSRC-enabled smartphones are changing this
perspective. Leveraging the existing V2V platforms, we propose a new framework
using a DSRC-enabled smartphone to extend safety benefits to VRUs. The
interoperability of applications between vehicles and portable DSRC-enabled
devices is achieved through the SAE J2735 Personal Safety Message (PSM).
However, considering the fact that VRU movement dynamics, response times, and
crash scenarios are fundamentally different from vehicles, a specific framework
should be designed for VRU safety applications to study their performance. In
this article, we first propose an end-to-end Vehicle-to-Pedestrian (V2P)
framework to provide situational awareness and hazard detection based on the
most common and injury-prone crash scenarios. The details of our VRU safety
module, including target classification and collision detection algorithms, are
explained next. Furthermore, we propose and evaluate a mitigating solution for
congestion and power consumption issues in such systems. Finally, the whole
system is implemented and analyzed for realistic crash scenarios.
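As an illustration of the kind of hazard detection such a V2P framework must perform, the sketch below flags a vehicle-pedestrian conflict from PSM-style position and velocity data using a constant-velocity closest-approach test. The function, the 2 m conflict radius, and the algorithm itself are illustrative assumptions, not the paper's actual collision-detection method.

```python
import math

def time_to_collision(veh_pos, veh_vel, ped_pos, ped_vel, radius=2.0):
    """Closest-approach time and distance between a vehicle and a pedestrian,
    assuming constant velocities in a 2-D ground plane (metres, m/s).
    Returns (time, distance) if the trajectories come within `radius`,
    else None. A simplified stand-in, not the paper's algorithm."""
    # Relative position and velocity of the pedestrian w.r.t. the vehicle.
    rx, ry = ped_pos[0] - veh_pos[0], ped_pos[1] - veh_pos[1]
    vx, vy = ped_vel[0] - veh_vel[0], ped_vel[1] - veh_vel[1]
    v2 = vx * vx + vy * vy
    if v2 == 0.0:
        return None  # no relative motion, separation never changes
    t = -(rx * vx + ry * vy) / v2  # time of closest approach
    if t < 0:
        return None  # paths are already diverging
    dx, dy = rx + vx * t, ry + vy * t
    dist = math.hypot(dx, dy)
    return (t, dist) if dist <= radius else None

# Vehicle heading east at 10 m/s; pedestrian crossing its path northward.
print(time_to_collision((0, 0), (10, 0), (20, -3), (0, 1.5)))
```

In a real deployment the inputs would come from decoded PSMs and the host vehicle's own state, and the threshold would depend on speed-dependent braking distance rather than a fixed radius.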
Enhanced Machine Learning Techniques for Early HARQ Feedback Prediction in 5G
We investigate Early Hybrid Automatic Repeat reQuest (E-HARQ) feedback
schemes enhanced by machine learning techniques as a path towards
ultra-reliable and low-latency communication (URLLC). To this end, we propose
machine learning methods to predict the outcome of the decoding process ahead
of the end of the transmission. We discuss different input features and
classification algorithms ranging from traditional methods to newly developed
supervised autoencoders. These methods are evaluated based on their prospects
of complying with the URLLC requirements of effective block error rates below a
given target at small latency overheads. We provide realistic performance
estimates in a system model incorporating scheduling effects to demonstrate the
feasibility of E-HARQ across different signal-to-noise ratios, subcode lengths,
channel conditions and system loads, and show the benefit over regular HARQ and
existing E-HARQ schemes without machine learning.
Comment: 14 pages, 15 figures; accepted version
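The core idea above, predicting the decoding outcome before decoding completes so that feedback can be sent early, can be sketched with a toy classifier. The single feature (mean absolute LLR), the synthetic data, and the plain logistic-regression model below are illustrative assumptions; the paper's real features come from the actual decoder, and its classifiers include supervised autoencoders.

```python
import math
import random

random.seed(0)

# Synthetic stand-in for partial decoder soft information: transmissions
# with higher mean |LLR| are more likely to decode successfully.
def synth_sample():
    ok = random.random() < 0.5            # ground truth: decoding success?
    mean_abs_llr = random.gauss(4.0 if ok else 1.5, 1.0)
    return mean_abs_llr, 1 if ok else 0

train = [synth_sample() for _ in range(2000)]
mu = sum(x for x, _ in train) / len(train)   # centre the feature

# Logistic regression p(success) = sigmoid(w * (x - mu) + b),
# fitted with plain batch gradient descent.
w, b = 0.0, 0.0
for _ in range(300):
    gw = gb = 0.0
    for x, y in train:
        p = 1.0 / (1.0 + math.exp(-(w * (x - mu) + b)))
        gw += (p - y) * (x - mu)
        gb += (p - y)
    w -= 0.1 * gw / len(train)
    b -= 0.1 * gb / len(train)

def predict_ack(mean_abs_llr):
    """Early feedback decision: ACK if predicted success probability > 0.5."""
    return 1.0 / (1.0 + math.exp(-(w * (mean_abs_llr - mu) + b))) > 0.5

acc = sum(predict_ack(x) == (y == 1) for x, y in train) / len(train)
print(f"training accuracy of the early ACK/NACK predictor: {acc:.2f}")
```

The E-HARQ trade-off is that a false ACK costs a higher-layer retransmission while a false NACK wastes resources, so in practice the decision threshold would be tuned asymmetrically rather than fixed at 0.5.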
Taking a Deeper Look at Pedestrians
In this paper we study the use of convolutional neural networks (convnets)
for the task of pedestrian detection. Despite their recent diverse successes,
convnets have historically underperformed other pedestrian detectors. We
deliberately omit explicitly modelling the problem into the network (e.g. parts
or occlusion modelling) and show that we can reach competitive performance
without bells and whistles. In a wide range of experiments we analyse small and
big convnets, their architectural choices, parameters, and the influence of
different training data, including pre-training on surrogate tasks.
We present the best convnet detectors on the Caltech and KITTI datasets. On
Caltech our convnets reach top performance both for the Caltech1x and
Caltech10x training setup. Using additional data at training time our strongest
convnet model is competitive even to detectors that use additional data
(optical flow) at test time.
Unsupervised Domain Adaptation for Multispectral Pedestrian Detection
Multimodal information (e.g., visible and thermal) can generate robust
pedestrian detections to facilitate around-the-clock computer vision
applications, such as autonomous driving and video surveillance. However, it
still remains a crucial challenge to train a reliable detector that works well
across different multispectral pedestrian datasets without manual annotations.
In this
paper, we propose a novel unsupervised domain adaptation framework for
multispectral pedestrian detection, by iteratively generating pseudo
annotations and updating the parameters of our designed multispectral
pedestrian detector on the target domain. Pseudo annotations are generated
using the detector trained on the source domain, and then updated by fixing the
detector's parameters and minimizing the cross-entropy loss without
back-propagation. Training labels are generated from the pseudo annotations by
considering the similarity and complementarity between well-aligned visible and
infrared image pairs. The detector's parameters are then updated using the
generated labels by minimizing our defined multi-detection loss function with
back-propagation; the optimal parameters are obtained by iterating these
pseudo-annotation and parameter updates.
Experimental results show that our proposed unsupervised multimodal domain
adaptation method achieves significantly higher detection performance than the
approach without domain adaptation, and is competitive with the supervised
multispectral pedestrian detectors.
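The alternating scheme described above (generate pseudo-labels with the current model, then refit on them, and repeat) can be sketched in miniature. The nearest-centroid "detector", the two synthetic modalities fused by score averaging, and the fixed domain shift below are all illustrative assumptions, not the paper's actual multispectral detector or loss.

```python
import random

random.seed(1)

def make_domain(shift):
    """Synthetic two-modality data: each sample has a 'visible' and a
    'thermal' feature; `shift` models the source/target domain gap."""
    data = []
    for _ in range(200):
        y = random.random() < 0.5
        vis = random.gauss((2.0 if y else -2.0) + shift, 1.0)
        thr = random.gauss((2.0 if y else -2.0) + shift, 1.0)
        data.append(((vis, thr), 1 if y else 0))
    return data

source = make_domain(shift=0.0)
target = make_domain(shift=1.5)  # unlabelled at adaptation time

def fit_centroids(samples):
    """'Training': per-class mean of the fused (averaged) modality score."""
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for (vis, thr), y in samples:
        sums[y] += (vis + thr) / 2.0
        counts[y] += 1
    return {c: sums[c] / max(counts[c], 1) for c in (0, 1)}

def predict(centroids, feats):
    fused = (feats[0] + feats[1]) / 2.0  # fuse complementary modalities
    return min((0, 1), key=lambda c: abs(fused - centroids[c]))

centroids = fit_centroids(source)        # model trained on source domain
for _ in range(5):                       # alternate pseudo-labels / refit
    pseudo = [(feats, predict(centroids, feats)) for feats, _ in target]
    centroids = fit_centroids(pseudo)

acc = sum(predict(centroids, f) == y for f, y in target) / len(target)
print(f"target-domain accuracy after adaptation: {acc:.2f}")
```

The toy loop uses held-out true labels only for the final evaluation, mirroring the unsupervised setting: no target annotations are ever used to update the model.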