198 research outputs found
Embedded System Performance Analysis for Implementing a Portable Drowsiness Detection System for Drivers
Drowsiness on the road is a widespread problem with fatal consequences; thus,
a multitude of systems and techniques have been proposed. Among existing
methods, Ghoddoosian et al. utilized temporal blinking patterns to detect early
signs of drowsiness, but their algorithm was tested only on a powerful desktop
computer, which is not practical to apply in a moving vehicle setting. In this
paper, we propose an efficient platform to run Ghoddoosian et al.'s algorithm, detail
the performance tests we ran to determine this platform, and explain our
threshold optimization logic. After considering the Jetson Nano and Beelink
(Mini PC), we concluded that the Mini PC is the most efficient and practical
platform for running our embedded system in a vehicle. To determine this, we ran communication
speed tests and evaluated total processing times for inference operations.
Based on our experiments, the average total processing time to run the
drowsiness detection model was 94.27 ms for Jetson Nano and 22.73 ms for the
Beelink (Mini PC). Considering the portability and power efficiency of each
device, along with the processing time results, the Beelink (Mini PC) was
determined to be the most suitable. We also propose a threshold optimization
algorithm, which determines whether the driver is drowsy or alert based on the
trade-off between the sensitivity and specificity of the drowsiness detection
model. Our study will serve as a crucial next step for drowsiness detection
research and its application in vehicles. Through our experiment, we have
determined a favorable platform that can run drowsiness detection algorithms
in real-time and can be used as a foundation to further advance drowsiness
detection research. In doing so, we have bridged the gap between an existing
embedded system and its actual implementation in vehicles to bring drowsiness
technology a step closer to prevalent real-life implementation.
Comment: 26 pages, 13 figures, 4 tables
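The abstract's threshold optimization logic is not spelled out here, but one standard way to pick a decision threshold from the sensitivity/specificity trade-off is to maximize Youden's J statistic (sensitivity + specificity - 1) over candidate thresholds. The sketch below illustrates that idea only; the function name and the score/label values are made up for illustration and are not from the paper.

```python
def best_threshold(scores, labels):
    """Pick the decision threshold maximizing Youden's J = sens + spec - 1.

    scores: model outputs (higher = more likely drowsy)
    labels: ground truth (1 = drowsy, 0 = alert)
    """
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        # Predict "drowsy" whenever the score meets the threshold.
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn)  # true positive rate
        spec = tn / (tn + fp)  # true negative rate
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# Illustrative (made-up) scores and labels:
scores = [0.1, 0.4, 0.35, 0.8, 0.7, 0.9, 0.2, 0.65]
labels = [0, 0, 1, 1, 1, 1, 0, 0]
print(best_threshold(scores, labels))
```

Raising the threshold trades sensitivity (catching drowsy drivers) for specificity (avoiding false alarms); a deployed system might weight the two terms unequally since missing a drowsy driver is costlier than a false alert.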
Visual Saliency Detection in Advanced Driver Assistance Systems
Visual Saliency refers to the innate human mechanism of focusing on and
extracting important features from the observed environment. Recently, there
has been a notable surge of interest in the field of automotive research
regarding the estimation of visual saliency. While operating a vehicle, drivers
naturally direct their attention towards specific objects, employing
brain-driven saliency mechanisms that prioritize certain elements over others.
In this investigation, we present an intelligent system that combines a
drowsiness detection system for drivers with a scene comprehension pipeline
based on saliency. To achieve this, we have implemented a specialized 3D deep
network for semantic segmentation, which has been pretrained and tailored for
processing the frames captured by an automotive-grade external camera. The
proposed pipeline was hosted on an embedded platform utilizing the STA1295
core, which features dual ARM A7 cores and embeds a hardware accelerator.
Additionally, we employ an innovative biosensor embedded on the car steering
wheel to monitor the driver's drowsiness, gathering the PhotoPlethysmoGraphy
(PPG) signal of the driver. A dedicated 1D temporal deep convolutional network
has been devised to classify the collected PPG time-series, enabling us to
assess the driver's level of attentiveness. Ultimately, we compare the determined
attention level of the driver with the corresponding saliency-based scene
classification to evaluate the overall safety level. The efficacy of the
proposed pipeline has been validated through extensive experimental results.
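The abstract's "1D temporal deep convolutional network" for PPG classification is not detailed here; the sketch below only illustrates the core building block such a network stacks: a one-dimensional convolution sliding a learned filter over a time-series. The function name, kernel, and PPG-like values are illustrative assumptions, not the paper's architecture.

```python
def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation) over a time-series.

    Each output sample is the dot product of the kernel with a
    kernel-length window of the input, slid one step at a time.
    """
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# A 3-tap averaging kernel applied to a made-up PPG-like window;
# a learned network would use many trained kernels instead.
ppg = [0.0, 0.2, 0.9, 0.4, 0.1, 0.3, 1.0, 0.5]
smoothed = conv1d(ppg, [1 / 3, 1 / 3, 1 / 3])
print(smoothed)
```

In a real network, layers of such convolutions (with trained kernels and nonlinearities) extract temporal features from the PPG waveform, which a final classifier maps to an attentiveness label.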
Seamless Multimodal Biometrics for Continuous Personalised Wellbeing Monitoring
Artificially intelligent perception is increasingly present in the lives of
every one of us. Vehicles are no exception, (...) In the near future, pattern
recognition will have an even stronger role in vehicles, as self-driving cars
will require automated ways to understand what is happening around (and within)
them and act accordingly. (...) This doctoral work focused on advancing
in-vehicle sensing through the research of novel computer vision and pattern
recognition methodologies for both biometrics and wellbeing monitoring. The
main focus has been on electrocardiogram (ECG) biometrics, a trait well-known
for its potential for seamless driver monitoring. Major efforts were devoted to
achieving improved performance in identification and identity verification in
off-the-person scenarios, well-known for increased noise and variability. Here,
end-to-end deep learning ECG biometric solutions were proposed and important
topics were addressed such as cross-database and long-term performance,
waveform relevance through explainability, and interlead conversion. Face
biometrics, a natural complement to the ECG in seamless unconstrained
scenarios, was also studied in this work. The open challenges of masked face
recognition and interpretability in biometrics were tackled in an effort to
evolve towards algorithms that are more transparent, trustworthy, and robust to
significant occlusions. Within the topic of wellbeing monitoring, improved
solutions to multimodal emotion recognition in groups of people and
activity/violence recognition in in-vehicle scenarios were proposed. Lastly,
we also proposed a novel way to learn template security within end-to-end
models, dismissing additional separate encryption processes, and a
self-supervised learning approach tailored to sequential data, in order to
ensure data security and optimal performance. (...)
Comment: Doctoral thesis presented and approved on the 21st of December 2022
to the University of Port