Seamless Multimodal Biometrics for Continuous Personalised Wellbeing Monitoring
Artificially intelligent perception is increasingly present in the lives of
every one of us. Vehicles are no exception, (...) In the near future, pattern
recognition will have an even stronger role in vehicles, as self-driving cars
will require automated ways to understand what is happening around (and within)
them and act accordingly. (...) This doctoral work focused on advancing
in-vehicle sensing through the research of novel computer vision and pattern
recognition methodologies for both biometrics and wellbeing monitoring. The
main focus has been on electrocardiogram (ECG) biometrics, a trait well-known
for its potential for seamless driver monitoring. Major efforts were devoted to
achieving improved performance in identification and identity verification in
off-the-person scenarios, well-known for increased noise and variability. Here,
end-to-end deep learning ECG biometric solutions were proposed and important
topics were addressed such as cross-database and long-term performance,
waveform relevance through explainability, and interlead conversion. Face
biometrics, a natural complement to the ECG in seamless unconstrained
scenarios, was also studied in this work. The open challenges of masked face
recognition and interpretability in biometrics were tackled in an effort to
evolve towards algorithms that are more transparent, trustworthy, and robust to
significant occlusions. Within the topic of wellbeing monitoring, improved
solutions to multimodal emotion recognition in groups of people and
activity/violence recognition in in-vehicle scenarios were proposed. Lastly,
we also proposed a novel way to learn template security within end-to-end
models, dispensing with the need for separate encryption processes, and a
self-supervised learning approach tailored to sequential data, in order to
ensure data security and optimal performance. (...)
Comment: Doctoral thesis presented and approved on the 21st of December 2022
to the University of Port
Artificial Intelligence for the Edge Computing Paradigm
With modern technologies moving towards the Internet of Things, where seemingly every financial, private, commercial and medical transaction is carried out by portable and intelligent devices, Machine Learning has found its way into every possible smart device and application. However, Machine Learning cannot be used on the edge directly due to the limited capabilities of small, battery-powered modules. This thesis therefore aims to provide lightweight automated Machine Learning models applied on a standard edge device, the Raspberry Pi: one framework limits parameter tuning while automating feature extraction, and a second performs traditional Machine Learning classification on the edge and can additionally be used for image-based explainable Artificial Intelligence. A commercial Artificial Intelligence software package has also been ported to run in a client/server setup on the Raspberry Pi board, where it was incorporated into all of the Machine Learning frameworks presented in this thesis. This dissertation also introduces multiple algorithms that convert images into time series for classification and explainability, as well as novel time-series feature extraction algorithms applied to biomedical data. It further introduces the concept of the Activation Engine, a post-processing block that tunes Neural Networks without requiring particular experience in Machine Learning, and a tree-based method for multiclass classification which outperforms the One-to-Many approach while being less complex than the One-to-One method.
The results presented in this thesis exhibit high accuracy compared with the literature, while remaining efficient in terms of power consumption and inference time. Additionally, the concepts, methods and algorithms introduced are technically novel; they include:
• Feature extraction of professionally annotated, and poorly annotated time-series.
• The introduction of the Activation Engine post-processing block.
• A model for global image explainability with inference on the edge.
• A tree-based algorithm for multiclass classification.
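The thesis does not spell out its tree-based multiclass algorithm here, but the general idea it contrasts against One-to-Many (one-vs-rest) and One-to-One (one-vs-one) can be sketched: a binary tree of classifiers, where each internal node separates half of the remaining classes from the other half, needs only K-1 binary models for K classes, versus K for one-vs-rest and K(K-1)/2 for one-vs-one. The sketch below is a hypothetical illustration of that scheme using scikit-learn's `LogisticRegression` as the per-node binary model; the class-splitting rule, base model, and dataset are all assumptions, not the thesis's actual design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

def train_tree(X, y, classes):
    """Recursively train a binary tree of classifiers over `classes`."""
    if len(classes) == 1:
        return classes[0]                        # leaf: a single class remains
    left, right = classes[: len(classes) // 2], classes[len(classes) // 2 :]
    mask = np.isin(y, classes)                   # only samples of these classes
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[mask], np.isin(y[mask], left))     # binary task: left half vs right half
    return (clf, train_tree(X, y, left), train_tree(X, y, right))

def predict_tree(node, x):
    """Walk the tree from root to a leaf class for one sample."""
    while not isinstance(node, (int, np.integer)):
        clf, left, right = node
        node = left if clf.predict(x.reshape(1, -1))[0] else right
    return node

# Toy 4-class dataset standing in for edge-device sensor features.
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
tree = train_tree(X, y, sorted(set(y)))          # trains K-1 = 3 binary models
preds = np.array([predict_tree(tree, x) for x in X])
print("training accuracy:", (preds == y).mean())
```

The lower model count is what makes this style of decomposition attractive on constrained hardware such as the Raspberry Pi: fewer models means fewer parameters to store and fewer inference passes per prediction.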
Cross-Subject Emotion Recognition with Sparsely-Labeled Peripheral Physiological Data Using SHAP-Explained Tree Ensembles
Many challenges remain in emotion recognition from physiological data
despite the substantial progress made recently. In this paper, we attempted
to address two of them. First, to deal with sparsely-labeled physiological
data, we decomposed the raw signals using signal spectrum analysis and
extracted both complexity and energy features from the result; this
procedure helped reduce noise and improve the effectiveness of feature
extraction. Second, in order to improve the
explainability of the machine learning models in emotion recognition with
physiological data, we proposed Light Gradient Boosting Machine (LightGBM) and
SHapley Additive exPlanations (SHAP) for emotion prediction and model
explanation, respectively. The LightGBM model outperformed the eXtreme Gradient
Boosting (XGBoost) model on the public Database for Emotion Analysis using
Physiological signals (DEAP) with f1-scores of 0.814, 0.823, and 0.860 for
binary classification of valence, arousal, and liking, respectively, with
cross-subject validation using eight peripheral physiological signals.
Furthermore, SHAP analysis identified the most important features in emotion
recognition and revealed the relationships between the predictor variables
and the response variables in terms of their main and interaction effects.
The proposed model therefore not only performed well with peripheral
physiological data, but also gave more insight into the underlying
mechanisms of emotion recognition.
Proceedings, MSVSCC 2019
Old Dominion University Department of Modeling, Simulation & Visualization Engineering (MSVE) and the Virginia Modeling, Analysis and Simulation Center (VMASC) held the 13th annual Modeling, Simulation & Visualization (MSV) Student Capstone Conference on April 18, 2019.
The Conference featured student research and student projects that are central to MSV. Also participating in the conference were faculty members who volunteered their time to impart direct support to their students’ research, facilitated the various conference tracks, served as judges for each of the tracks, and provided overall assistance to the conference.
A shared appreciation of the conference's purpose, combined with a cohesive, collaborative effort, resulted in a successful symposium for everyone involved. These proceedings feature the works that were presented at the conference.
Capstone Conference Chair: Dr. Yuzhong Shen
Capstone Conference Student Chair: Daniel Pere
Deep Learning Based Malware Classification Using Deep Residual Network
Traditional malware detection approaches rely heavily on feature extraction procedures. In this paper, we propose a deep learning-based malware classification model using an 18-layer deep residual network. Our model takes the raw bytecode data of malware samples, converts the bytecodes to 3-channel RGB images, and applies deep learning techniques to classify the malware. Our experimental results show that the deep residual network achieved an average accuracy of 86.54% under 5-fold cross-validation. Compared to traditional methods for malware classification, our model greatly simplifies the detection and classification procedure while achieving very good classification accuracy. The dataset used in this paper for training and testing is the Malimg dataset, one of the largest malware datasets, released by the Vision Research Lab at UCSB.
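The bytecode-to-image step the abstract describes can be sketched in a few lines. The paper does not specify its exact mapping (image size, padding policy, channel layout), so those details below are assumptions: the byte stream is truncated or cyclically padded to fill a fixed `side × side × 3` grid, with each byte becoming one colour-channel intensity.

```python
import numpy as np

def bytes_to_rgb_image(raw: bytes, side: int = 64) -> np.ndarray:
    """Map a malware sample's raw bytecodes onto a square 3-channel image.

    Hypothetical mapping: pad/truncate the byte stream to side*side*3 values
    and reshape, so each byte becomes one channel intensity in [0, 255].
    """
    buf = np.frombuffer(raw, dtype=np.uint8)
    n = side * side * 3
    buf = np.resize(buf, n)            # truncate, or cyclically pad, to fit
    return buf.reshape(side, side, 3)  # H x W x RGB, ready for a CNN/ResNet

# Example: a fake byte stream beginning with the PE "MZ" magic bytes.
img = bytes_to_rgb_image(b"\x4d\x5a\x90\x00" * 1000, side=32)
print(img.shape, img.dtype)  # → (32, 32, 3) uint8
```

Once samples are rendered as fixed-size images, the classification problem reduces to standard supervised image classification, which is why an off-the-shelf residual network can be applied directly.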
Emotion and Stress Recognition Related Sensors and Machine Learning Technologies
This book includes impactful chapters which present scientific concepts, frameworks, architectures and ideas on sensing technologies and machine learning techniques. These are relevant in tackling the following challenges: (i) the field readiness and use of intrusive sensor systems and devices for capturing biosignals, including EEG sensor systems, ECG sensor systems and electrodermal activity sensor systems; (ii) the quality assessment and management of sensor data; (iii) data preprocessing, noise filtering and calibration concepts for biosignals; (iv) the field readiness and use of nonintrusive sensor technologies, including visual sensors, acoustic sensors, vibration sensors and piezoelectric sensors; (v) emotion recognition using mobile phones and smartwatches; (vi) body area sensor networks for emotion and stress studies; (vii) the use of experimental datasets in emotion recognition, including dataset generation principles and concepts, quality assurance and emotion elicitation material and concepts; (viii) machine learning techniques for robust emotion recognition, including graphical models, neural network methods, deep learning methods, statistical learning and multivariate empirical mode decomposition; (ix) subject-independent emotion and stress recognition concepts and systems, including facial expression-based systems, speech-based systems, EEG-based systems, ECG-based systems, electrodermal activity-based systems, multimodal recognition systems and sensor fusion concepts; and (x) emotion and stress estimation and forecasting from a nonlinear dynamical system perspective.
Advances in Automated Driving Systems
Electrification, automation of vehicle control, digitalization and new mobility are the mega-trends in automotive engineering, and they are strongly connected. While many demonstrations of highly automated vehicles have been made worldwide, many challenges remain in bringing automated vehicles to the market for private and commercial use. The main challenges are as follows: reliable machine perception; accepted standards for vehicle-type approval and homologation; verification and validation of functional safety, especially for SAE level 3+ systems; legal and ethical implications; acceptance of vehicle automation by occupants and society; interaction between automated and human-controlled vehicles in mixed traffic; human–machine interaction and usability; manipulation, misuse and cyber-security; and the system costs of hardware and software, and development effort. This Special Issue was prepared in the years 2021 and 2022 and includes 15 papers with original research related to recent advances in the aforementioned challenges. The topics of this Special Issue cover: machine perception for SAE L3+ driving automation; trajectory planning and decision-making in complex traffic situations; X-by-Wire system components; verification and validation of SAE L3+ systems; misuse, manipulation and cybersecurity; human–machine interactions, driver monitoring and driver-intention recognition; road infrastructure measures for the introduction of SAE L3+ systems; and solutions for interactions between human- and machine-controlled vehicles in mixed traffic.
Biometric Systems
Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.
WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM
Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructure through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-paths in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments demonstrate that the proposed models outperform state-of-the-art models, and also show that they can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves an overall accuracy of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches 98.54%, 94.25%, and 95.09% across those same environments.
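The "A" in ABiLSTM is an attention mechanism that pools the BiLSTM's per-timestep outputs into a single clip-level feature. The abstract does not give the architecture's details, so the sketch below is a hypothetical illustration of that pooling step only, in plain NumPy: the hidden states and the scoring vector `w` are random stand-ins for what a trained BiLSTM and a learned attention layer would produce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-timestep hidden states from a BiLSTM over CSI frames:
# T timesteps, each a 2*H-dimensional concatenation of the forward and
# backward hidden vectors.
T, H2 = 50, 128
hidden = rng.standard_normal((T, H2))

# Additive-style attention with one scoring vector w (random here, learned
# in practice): score each timestep, softmax the scores, then take the
# weighted sum so informative CSI frames dominate the clip-level feature.
w = rng.standard_normal(H2)
scores = hidden @ w                  # (T,) one relevance scalar per timestep
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                 # softmax: non-negative weights summing to 1
clip_feature = alpha @ hidden        # (H2,) attention-weighted summary vector

print(clip_feature.shape, round(alpha.sum(), 6))
```

Compared with taking only the BiLSTM's last hidden state, this weighting lets the classifier emphasise the CSI frames where the activity's multi-path signature is strongest, which is the usual motivation for adding attention to sequence models.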