
    Principal Component Analysis based Image Fusion Routine with Application to Stamping Split Detection

    This dissertation presents a novel thermal and visible image fusion system with application to online automotive stamping split detection. The thermal vision system scans temperature maps of highly reflective steel panels to locate abnormal temperature readings indicative of the high local wrinkling pressure that causes metal splitting. The visible vision system offsets the blurring effect of the thermal vision system caused by heat diffusion across the surface through conduction and heat losses to the surroundings through convection. The fusion of thermal and visible images combines two separate physical channels and provides a more informative result image than either original. Principal Component Analysis (PCA) is employed for image fusion to transform the original image into its eigenspace. By retaining the principal components with the most influential eigenvalues, PCA keeps the key features of the original image and reduces the noise level. A pixel-level image fusion algorithm is then developed to fuse images from the thermal and visible channels, enhance the result image at a low level and increase the signal-to-noise ratio. Finally, an automatic split detection algorithm is designed and implemented to perform online, objective automotive stamping split detection. The integrated PCA-based image fusion system for stamping split detection is developed and tested on an automotive press line, and is assessed with online thermal and visible acquisitions, demonstrating its performance. Splits of varying shape, size and number are detected under actual operating conditions.
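The pixel-level PCA fusion described above can be illustrated with a minimal sketch (an assumption-laden illustration, not the dissertation's implementation): the two source images are treated as two variables with one observation per pixel, the principal eigenvector of their covariance matrix supplies the fusion weights, and the fused image is the weighted sum of the channels.

```python
import numpy as np

def pca_fusion_weights(img_a, img_b):
    # Treat the two images as two variables; each pixel is one observation.
    data = np.stack([img_a.ravel(), img_b.ravel()], axis=1).astype(float)
    cov = np.cov(data, rowvar=False)
    # eigh returns eigenvalues in ascending order; take the principal eigenvector.
    _, eigvecs = np.linalg.eigh(cov)
    principal = eigvecs[:, -1]
    if principal.sum() < 0:            # eigenvector sign is arbitrary; keep weights positive
        principal = -principal
    return principal / principal.sum() # normalize so the weights sum to 1

def pca_fuse(img_a, img_b):
    # Pixel-level fusion: weighted sum of the thermal and visible channels.
    w = pca_fusion_weights(img_a, img_b)
    return w[0] * img_a + w[1] * img_b
```

Because the weights come from the leading eigenvector, the channel carrying more variance (more signal) is weighted more heavily in the fused result.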

    Electroencephalogram (EEG)-based systems to monitor driver fatigue: a review

    An efficient system capable of detecting driver fatigue is urgently needed to help avoid road crashes. Recently, there has been increasing interest in applying electroencephalography (EEG) to detect driver fatigue. Feature extraction and signal classification are the most critical steps in EEG signal analysis. A reliable method for feature extraction is important to obtain robust signal classification, while a robust classification algorithm will accurately assign the features to a particular class. This paper concisely reviews the pros and cons of existing techniques for feature extraction and signal classification and their fatigue detection accuracy. The integration of combined entropy features (feature extraction) with a support vector machine (SVM) and a random forest (classifiers) gives the best fatigue detection accuracy, 98.7% and 97.5% respectively. The outcomes of this study will guide future researchers in choosing suitable feature extraction and signal classification techniques for EEG data processing and shed light on directions for future research and development of driver fatigue countermeasures.
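The entropy-based feature extraction surveyed above can be illustrated with one representative feature, spectral entropy (a hedged sketch; the reviewed systems combine several entropy measures and feed them to an SVM or random forest classifier, which is omitted here):

```python
import numpy as np

def spectral_entropy(signal, eps=1e-12):
    # Power spectral density via the real FFT, normalized to a probability distribution.
    psd = np.abs(np.fft.rfft(signal)) ** 2
    p = psd / (psd.sum() + eps)
    # Shannon entropy of the normalized spectrum (in nats): low for a narrowband
    # rhythm, high for a broadband (noisy or desynchronized) signal.
    return float(-np.sum(p * np.log(p + eps)))
```

A drowsy-versus-alert classifier would compute such features per EEG epoch and channel, then pass the feature vector to the classifier.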

    Systems engineering approaches to safety in transport systems

    During driving, driver behavior monitoring may provide useful information to prevent road traffic accidents caused by driver distraction. It has been shown that 90% of road traffic accidents are due to human error, and in 75% of these cases human error is the only cause. Car manufacturers have been interested in driver monitoring research for several years, aiming to enhance general knowledge of driver behavior and to evaluate the driver's functional state, as it may drastically influence driving safety through distraction, fatigue, mental workload and attention. Fatigue and sleepiness at the wheel are well-known risk factors for traffic accidents. The Human Factor (HF) plays a fundamental role in modern transport systems. Drivers and transport operators control a vehicle towards its destination according to their own senses, physical condition, experience and ability, and safety strongly relies on the HF taking the right decisions. On the other hand, we are experiencing a gradual shift towards increasingly autonomous vehicles in which the HF still constitutes an important component, but may in fact become the "weakest link of the chain", requiring strong and effective training feedback. Studies that investigate the possibility of using biometric or biophysical signals as data sources to evaluate the interaction between human brain activity and an electronic machine belong to the Human Machine Interface (HMI) framework. The HMI can acquire human signals to analyse the specific embedded structures and recognize the behavior of the subject during his/her interaction with the machine or with virtual interfaces such as PCs or other communication systems. Based on my previous experience in planning and monitoring hazardous material transport, this work aims to create control models focused on driver behavior and changes in his/her physiological parameters.
Three case studies were considered, using the interaction between an EEG system and an external device such as a driving simulator or electronic components. The first case study concerns the detection of the driver's behavior during a driving test. The second concerns the detection of the driver's arm movements from EEG data during a driving test. The third is the setting up of a Brain Computer Interface (BCI) model able to detect head movements in human participants from the EEG signal and to control an electronic component according to the electrical brain activity produced by head-turning movements. Videos showing the experimental results are available at https://www.youtube.com/channel/UCj55jjBwMTptBd2wcQMT2tg. (XXXIV Ciclo - Informatica e Ingegneria dei Sistemi / Computer Science and Systems Engineering - Zero, Enric)

    Impact of Temperament Types and Anger Intensity on Drivers' EEG Power Spectrum and Sample Entropy: An On-road Evaluation Toward Road Rage Warning

    "Road rage", also called driving anger, is becoming an increasingly common phenomenon affecting road safety in auto era as most of previous driving anger detection approaches based on physiological indicators are often unreliable due to the less consideration of drivers\u27 individual differences. This study aims to explore the impact of temperament types and anger intensity on drivers\u27 EEG characteristics. Thirty-two drivers with valid license were enrolled to perform on-road experiments on a particularly busy route on which a variety of provoking events like cutting in line of surrounding vehicle, jaywalking, occupying road of non-motor vehicle and traffic congestion frequently happened. Then, muti-factor analysis of variance (ANOVA) and post hoc analysis were utilized to study the impact of temperament types and anger intensity on drivers\u27 power spectrum and sample entropy of θ and β waves extracted from EEG signals. The study results firstly indicated that right frontal region of the brain has close relationship with driving anger. Secondly, there existed significant main effects of temperament types on power spectrum and sample entropy of β wave while significant main effects of anger intensity on power spectrum and sample entropy of θ and β wave were all observed. Thirdly, significant interactions between temperament types and anger intensity for power spectrum and sample entropy of β wave were both noted. Fourthly, with the increase of anger intensity, the power spectrum and sample entropy both decreased sufficiently for θ wave while increased remarkably for β wave. The study results can provide a theoretical support for designing a personalized and hierarchical warning system for road rage

    Detection of Driver Drowsiness and Distraction Using Computer Vision and Machine Learning Approaches

    Drowsiness and distracted driving are leading factors in most car crashes and near-crashes. This research study explores the application of both conventional computer vision and deep learning approaches to the detection of drowsiness and distraction in drivers. In the first part of this MPhil research study, conventional computer vision approaches were studied to develop a robust drowsiness and distraction detection system based on yawning detection, head pose detection and eye blinking detection. These algorithms were implemented using existing hand-crafted features. Experiments on small image datasets were performed to evaluate and measure the detection and classification performance of the system. It was observed that hand-crafted features together with a robust classifier such as an SVM give better performance than previous approaches. Although the results were satisfactory, there are many drawbacks and challenges associated with conventional computer vision approaches, such as the definition and extraction of hand-crafted features, which make these conventional algorithms subjective in nature and less adaptive in practice. In contrast, deep learning approaches automate the feature selection process and can be trained to learn the most discriminative features without any human input. In the second half of this research study, the use of deep learning approaches for the detection of distracted driving was investigated. One advantage of the applied methodology for distraction detection is the contribution of the CNN to better pattern recognition accuracy and its ability to learn features from various regions of the human body simultaneously.
The performance of four convolutional deep net architectures (AlexNet, ResNet, MobileNet and NASNet) was compared, triplet training was investigated, and the impact of combining a support vector classifier (SVC) with a trained deep net was explored. The images used in the experiments with the deep nets are from the State Farm Distracted Driver Detection dataset hosted on Kaggle, each of which captures the entire body of a driver. The best results were obtained with NASNet trained using the triplet loss and combined with an SVC. One advantage of deep learning approaches is their ability to learn discriminative features from various regions of the human body simultaneously, which has enabled them to reach human-level accuracy.
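The triplet training mentioned above can be sketched as follows (a minimal NumPy version of the standard triplet loss; the study uses it to train deep-net embeddings before fitting the SVC, which is omitted here):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Hinge on the gap between the anchor-positive and anchor-negative
    # squared distances: push same-class embeddings together and
    # different-class embeddings at least `margin` apart.
    d_ap = np.sum((anchor - positive) ** 2, axis=-1)
    d_an = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_ap - d_an + margin)
```

Once the loss is zero for most triplets, the embedding space separates the distraction classes well enough for a simple SVC to draw the final boundaries.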

    Feature selection model based on EEG signals for assessing the cognitive workload in drivers

    In recent years, research has focused on mechanisms to assess subjects' cognitive workload when performing activities that demand high levels of concentration, such as driving a vehicle. These mechanisms have used several tools for analyzing cognitive workload, and electroencephalographic (EEG) signals have been used most frequently due to their high precision. However, one of the main challenges in using EEG signals is finding appropriate information for identifying cognitive states. Here, we present GALoRIS, a new feature selection model for pattern recognition from EEG signals based on machine learning techniques. GALoRIS combines Genetic Algorithms and Logistic Regression to create a new fitness function that identifies and selects the critical EEG features contributing to the recognition of high and low cognitive workloads, and structures a new dataset capable of optimizing the model's predictive process. We found that GALoRIS identifies data related to high and low cognitive workloads of subjects while driving a vehicle using information extracted from multiple EEG signals, reducing the original dataset by more than 50% and maximizing the model's predictive capacity, achieving a precision rate greater than 90%. This work has been funded by the Ministry of Science, Innovation and Universities of Spain under grant number TRA2016-77012-R.
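A rough sketch of the kind of wrapper the abstract describes: a genetic algorithm whose fitness is logistic-regression accuracy on the selected feature subset minus a sparsity penalty. The penalty form, population size and all hyperparameters below are illustrative assumptions, not GALoRIS's actual fitness function.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_accuracy(X, y, epochs=200, lr=0.5):
    # Tiny batch-gradient logistic regression; returns training accuracy.
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return float(((p > 0.5) == y).mean())

def fitness(mask, X, y, penalty=0.01):
    # Accuracy on the selected features minus a per-feature sparsity penalty
    # (assumed form): rewards subsets that are both predictive and small.
    if mask.sum() == 0:
        return 0.0
    return logistic_accuracy(X[:, mask.astype(bool)], y) - penalty * mask.sum()

def ga_select(X, y, pop=12, gens=15):
    # Binary chromosomes: bit i = "keep feature i". Elitist selection,
    # one-point crossover, bit-flip mutation.
    d = X.shape[1]
    population = rng.integers(0, 2, size=(pop, d))
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in population])
        parents = population[np.argsort(scores)[::-1][:pop // 2]]
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, d)
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(d) < 0.05
            children.append(np.where(flip, 1 - child, child))
        population = np.vstack([parents, children])
    scores = np.array([fitness(m, X, y) for m in population])
    return population[int(np.argmax(scores))]
```

On synthetic data where only one feature carries the label, the surviving chromosomes quickly converge to masks that keep that feature and drop most of the rest, mirroring the >50% dataset reduction reported above.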

    Cross-Subject Emotion Recognition with Sparsely-Labeled Peripheral Physiological Data Using SHAP-Explained Tree Ensembles

    Many challenges remain in emotion recognition using physiological data despite the substantial progress made recently. In this paper, we attempted to address two major challenges. First, in order to deal with sparsely-labeled physiological data, we decomposed the raw physiological data using signal spectrum analysis and extracted both complexity and energy features. This procedure helped reduce noise and improve the effectiveness of feature extraction. Second, in order to improve the explainability of machine learning models in emotion recognition with physiological data, we proposed the Light Gradient Boosting Machine (LightGBM) and SHapley Additive exPlanations (SHAP) for emotion prediction and model explanation, respectively. The LightGBM model outperformed the eXtreme Gradient Boosting (XGBoost) model on the public Database for Emotion Analysis using Physiological signals (DEAP), with F1-scores of 0.814, 0.823, and 0.860 for binary classification of valence, arousal, and liking, respectively, under cross-subject validation using eight peripheral physiological signals. Furthermore, SHAP was able to identify the most important features in emotion recognition and revealed the relationships between the predictor variables and the response variables in terms of their main effects and interaction effects. The proposed model therefore not only performed well on peripheral physiological data, but also gave more insight into the underlying mechanisms of emotion recognition.
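The cross-subject validation protocol used above can be sketched in a few lines (an assumed leave-one-subject-out scheme; the LightGBM fit and SHAP explanation would run inside the loop and are omitted):

```python
import numpy as np

def leave_one_subject_out(subject_ids):
    # Yield (held-out subject, train indices, test indices) so the model is
    # always evaluated on a subject it has never seen: cross-subject validation.
    subject_ids = np.asarray(subject_ids)
    for s in np.unique(subject_ids):
        yield s, np.flatnonzero(subject_ids != s), np.flatnonzero(subject_ids == s)

def f1_binary(y_true, y_pred):
    # Binary F1-score, the metric reported for valence/arousal/liking.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0
```

Holding out whole subjects, rather than random samples, is what makes the reported F1-scores evidence of generalization across people.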

    Improving Engagement Assessment by Model Individualization and Deep Learning

    This dissertation studies methods that improve engagement assessment for pilots. The major work addresses two challenging problems involved in the assessment: individual variation among pilots and the lack of labeled data for training assessment models. Task engagement is usually assessed by analyzing physiological measurements collected from subjects performing a task. However, physiological measurements such as electroencephalography (EEG) vary from subject to subject, so an assessment model trained for one subject may not be applicable to other subjects. We proposed a dynamic classifier selection algorithm for model individualization and compared it to two other methods: baseline normalization and similarity-based model replacement. Experimental results showed that baseline normalization and dynamic classifier selection can significantly improve cross-subject engagement assessment. For complex tasks such as piloting an airplane, labeling engagement levels for pilots is challenging. Without enough labeled data, it is very difficult for traditional methods to train valid models for effective engagement assessment. This dissertation proposes deep learning models to address this challenge. Deep learning models are capable of learning valuable feature hierarchies by taking advantage of both labeled and unlabeled data. Our results showed that deep models are better tools for engagement assessment when label information is scarce. To further verify the power of deep learning techniques with scarce labeled data, we applied the deep learning algorithm to another small data set, the ADNI data set, a public data set containing MRI and PET scans of Alzheimer's Disease (AD) patients for AD diagnosis. We developed a robust deep learning system incorporating dropout and stability selection techniques to identify the different progression stages of AD patients.
The experimental results showed that deep learning is very effective in AD diagnosis. In addition, we studied several imbalanced learning techniques that are useful when data is highly imbalanced, i.e., when majority classes have many more training samples than minority classes. Conventional machine learning techniques tend to classify all data samples into the majority classes and to perform poorly on minority classes. Imbalanced learning techniques can balance data sets before training and can improve learning performance.
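One of the simplest imbalanced learning techniques of the kind mentioned above, random oversampling of the minority classes, can be sketched as follows (an illustrative sketch, not necessarily the dissertation's specific method):

```python
import numpy as np

def random_oversample(X, y, seed=0):
    # Duplicate minority-class samples (with replacement) until every class
    # has as many samples as the largest class, so training sees balanced data.
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    parts_X, parts_y = [], []
    for c, n in zip(classes, counts):
        idx = np.flatnonzero(y == c)
        extra = (rng.choice(idx, size=n_max - n, replace=True)
                 if n < n_max else np.array([], dtype=int))
        keep = np.concatenate([idx, extra])
        parts_X.append(X[keep])
        parts_y.append(y[keep])
    return np.vstack(parts_X), np.concatenate(parts_y)
```

Resampling only the training split (never the test split) avoids leaking duplicated samples into the evaluation.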