Alternative vehicle electronic architecture for individual wheel control
Electronic control systems have become an integral part of the modern vehicle and
their installation rate is still on a sharp rise. Their application areas range from
powertrain, chassis and body control to entertainment. Each system is conventionally
controlled by a centralised controller with hard-wired links to sensors and actuators. As
systems have become more complex, a rise in the number of system components and
amount of wiring harness has followed. This leads to serious problems with safety,
reliability and space limitations. Different networking and vehicle electronic architectures
have been developed by others to ease these problems. The thesis proposes an alternative
architecture, namely the Distributed Wheel Architecture, for its potential benefits in terms of
vehicle dynamics, safety and ease of functional addition. The architecture would have a
networked controller on each wheel to perform its dynamic control including braking,
suspension and steering.
The project involves conducting a preliminary study and comparing the proposed
architecture with four alternative existing or high potential architectures. The areas of
study are functionality, complexity, and reliability.
Existing ABS, active suspension and four wheel steering systems are evaluated in
this work by simulation of their operations using road test data. They are used as
exemplary systems for modelling the new electronic architecture together with the
four alternatives. A prediction technique is developed, based on the derivation of software
pseudo code from system specifications, to estimate the microcontroller specifications of
all the system ECUs. The estimate indicates the feasibility of implementing the
architectures using current microcontrollers. Message transfer on the Controller Area
Network (CAN) of each architecture is simulated to find its associated delays, and hence
the feasibility of installing CAN in the architectures. Architecture component costs are
estimated from the costs of wires, ECUs, sensors and actuators. The number of wires is
obtained from the wiring models derived from exemplary system data. ECU peripheral
component counts are estimated from a statistical plot of component count against the
number of ECU pins for a sample of collected ECUs. Architecture component reliability is
estimated based on two
established reliability handbooks.
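The CAN feasibility check described above rests on per-frame transmission delays. As an illustration only (the thesis's actual delay simulation is not reproduced here), a minimal sketch of the textbook worst-case transmission time for a classical CAN data frame, assuming an 11-bit identifier and a 500 kbit/s bus:

```python
# Worst-case transmission time of a classical CAN data frame.
# A standard 11-bit-ID frame carries 47 overhead bits plus 8 bits per
# data byte; worst-case bit stuffing adds floor((34 + 8n - 1) / 4) bits.
def can_frame_time_us(payload_bytes, bitrate=500_000):
    n = payload_bytes
    bits = 47 + 8 * n + (34 + 8 * n - 1) // 4
    return bits / bitrate * 1e6  # microseconds

print(round(can_frame_time_us(8), 1))  # 8-byte frame at 500 kbit/s
```

Such a bound per frame, combined with priority-based arbitration, is what determines whether critical messages meet their deadlines under a given bus load.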
The results suggest that all of the five architectures could be implemented using
present microcontrollers. In addition, critical data transfer via CAN is made within time
limits under current levels of message load, indicating the possibility of installing CAN in
these architectures. The proposed architecture is expected to be costlier in terms of
components than the other architectures, while it is among the leaders in wiring
weight saving. However, it is expected to suffer from a relatively higher probability of
system component failure.
The proposed architecture is found not to be economically viable at present, but it shows
potential for reducing vehicle wiring and weight problems.
EEG-Based Emotion Recognition Using Deep Learning Network with Principal Component Based Covariate Shift Adaptation
Automatic emotion recognition is one of the most challenging tasks. To detect emotion from nonstationary EEG signals, a sophisticated learning algorithm that can represent high-level abstraction is required. This study proposes the use of a deep learning network (DLN) to discover unknown feature correlations between input signals that are crucial for the learning task. The DLN is implemented as a stacked autoencoder (SAE) using a hierarchical feature learning approach. The input features of the network are the power spectral densities of 32-channel EEG signals from 32 subjects. To alleviate the overfitting problem, principal component analysis (PCA) is applied to extract the most important components of the initial input features. Furthermore, covariate shift adaptation of the principal components is implemented to minimize the nonstationary effect of the EEG signals. Experimental results show that the DLN is capable of classifying three different levels of valence and arousal with accuracies of 49.52% and 46.03%, respectively. Principal-component-based covariate shift adaptation enhances the respective classification accuracies by 5.55% and 6.53%. Moreover, the DLN provides better performance than SVM and naive Bayes classifiers.
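The PCA-plus-covariate-shift preprocessing described above can be sketched as follows. This is a generic illustration on toy PSD features, not the paper's exact pipeline: the moving-average drift model, window size, and synthetic data are all assumptions.

```python
import numpy as np

def pca_components(X, k):
    # Center the feature matrix and project onto the top-k principal axes.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def covariate_shift_adapt(Z, window=10):
    # Subtract a moving-average baseline from each principal component,
    # one simple way to suppress slow non-stationary drift in EEG features.
    Za = np.empty_like(Z)
    for t in range(len(Z)):
        lo = max(0, t - window + 1)
        Za[t] = Z[t] - Z[lo:t + 1].mean(axis=0)
    return Za

# Toy PSD feature matrix: 100 epochs x 32 channels with a slow linear drift.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32)) + np.linspace(0, 5, 100)[:, None]
Z = pca_components(X, k=5)      # reduced features, shape (100, 5)
Za = covariate_shift_adapt(Z)   # drift-compensated components
print(Z.shape, Za.shape)
```

The adapted components would then feed the SAE; the deep network itself is omitted here.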
Comparison of EEG measurement of upper limb movement in motor imagery training system
Abstract Background One of the most promising applications of electroencephalogram (EEG)-based brain-computer interfaces is stroke rehabilitation. Whether implemented as a standalone motor imagery (MI) training system or as part of a rehabilitation robotic system, many studies have shown their benefits in restoring motor control in stroke patients. Hand movements have widely been chosen as MI tasks. Although potentially more challenging to analyze, wrist and forearm movements such as wrist flexion/extension and forearm pronation/supination should also be considered as MI tasks, because these movements are part of the main exercises given to patients in conventional stroke rehabilitation. This paper evaluates the effectiveness of such movements as MI tasks. Methods Three hand and wrist movement tasks, namely hand opening/closing, wrist flexion/extension and forearm pronation/supination, were chosen as motor imagery tasks for both hands. Eleven subjects participated in the experiment. All of them completed the hand opening/closing session. Ten subjects completed two MI task sessions, hand opening/closing and wrist flexion/extension. Five subjects completed all three MI task sessions. Each MI task comprised 8 sessions spanning a 4-week period. For classification, feature extraction based on the common spatial pattern (CSP) algorithm was used. Two variants were implemented: conventional CSP (termed WB) and an extended version with an increased number of features obtained by filtering the EEG data into five bands (termed FB). Classification was done by linear discriminant analysis (LDA) and support vector machine (SVM). Results Eight-fold cross-validation was applied to the EEG data. LDA and SVM gave comparable classification accuracy. FB achieved significantly higher classification accuracy than WB. The accuracy of classifying the wrist flexion/extension task was higher than that of the hand opening/closing task in all subjects.
Classifying the forearm pronation/supination task achieved higher accuracy than the hand opening/closing task in most subjects, but lower accuracy than the wrist flexion/extension task in all subjects. Significant improvements in classification accuracy were found in nine subjects when considering individual sessions of all MI tasks. The results of classifying hand opening/closing against wrist flexion/extension were comparable to those of classifying hand opening/closing against forearm pronation/supination. The accuracy of classifying wrist flexion/extension against forearm pronation/supination was lower than that of either pairing with the hand movement task. Conclusion The high classification accuracy of the three MI tasks supports the possibility of an EEG-based stroke rehabilitation system using these movements. Either LDA or SVM can equally be chosen as the classifier, since the difference between their accuracies is not statistically significant. Its significantly higher classification accuracy makes FB more suitable than WB for classifying MI tasks. More training sessions could potentially lead to better accuracy, as evident in most subjects in this experiment.
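The CSP feature extraction step used above can be sketched as a generic two-class implementation. This is illustrative only, run on synthetic trials; the channel count, trial dimensions, and number of filter pairs are assumptions, not the study's settings.

```python
import numpy as np

def csp_filters(X1, X2, n_pairs=2):
    # X1, X2: trials x channels x samples for the two MI classes.
    def avg_cov(X):
        # Trace-normalised spatial covariance, averaged over trials.
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Solve the generalized eigenvalue problem C1 w = lambda (C1 + C2) w.
    evals, evecs = np.linalg.eig(np.linalg.solve(C1 + C2, C1))
    order = np.argsort(evals.real)
    # Keep filters from both ends: maximal variance ratio for each class.
    idx = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return evecs.real[:, idx]

def csp_features(W, X):
    # Log-variance of the spatially filtered trials, the usual CSP feature.
    return np.array([np.log(np.var(W.T @ x, axis=1)) for x in X])

rng = np.random.default_rng(1)
X1 = rng.normal(size=(20, 8, 128))        # synthetic class-1 trials
X2 = 2.0 * rng.normal(size=(20, 8, 128))  # synthetic class-2 trials
W = csp_filters(X1, X2)
F = csp_features(W, np.concatenate([X1, X2]))
print(W.shape, F.shape)  # (8, 4) (40, 4)
```

The FB variant in the paper would repeat this per frequency band and concatenate the resulting feature vectors before LDA or SVM classification.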
Automatic Speech Discrimination Assessment Methods Based on Event-Related Potentials (ERP)
Speech discrimination is used by audiologists in diagnosing and determining treatment for hearing loss patients. Usually, assessing speech discrimination requires subjective responses. A method based on event-related potentials (ERPs), measured by electroencephalography (EEG), could provide objective speech discrimination. In this work we propose a visual-ERP-based method to assess speech discrimination using pictures that represent word meaning. The proposed method was implemented with three strategies, each with a different number of pictures and test sequences. Machine learning was adopted to classify between the task conditions based on features extracted from the EEG signals. The results of the proposed method were compared with those of a similar visual-ERP-based method using letters and of a method based on the auditory mismatch negativity (MMN) component. The P3 component and the late positive potential (LPP) component were observed in the two visual-ERP-based methods, while MMN was observed during the MMN-based method. Two of the three strategies of the proposed method, along with the MMN-based method, achieved approximately 80% average classification accuracy using a combination of support vector machine (SVM) and common spatial pattern (CSP). Potentially, these methods could serve as a pre-screening tool to make speech discrimination assessment more accessible, particularly in areas with a shortage of audiologists.
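ERP components such as P3 and LPP are conventionally extracted by baseline-corrected epoch averaging. A minimal sketch on synthetic data follows; the sampling rate, baseline window, and the P3-like peak are assumptions for demonstration, not values from the study.

```python
import numpy as np

def erp_average(epochs, fs, baseline_ms=200):
    # Average time-locked epochs after subtracting each epoch's
    # pre-stimulus baseline (mean of the first baseline_ms milliseconds).
    nbase = int(baseline_ms * fs / 1000)
    corrected = epochs - epochs[:, :nbase].mean(axis=1, keepdims=True)
    return corrected.mean(axis=0)

fs = 250  # assumed sampling rate (Hz)
rng = np.random.default_rng(2)
t = np.arange(fs) / fs  # 1-second epochs
# Hypothetical data: a P3-like positive deflection near 300 ms plus noise.
p3 = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
epochs = p3 + rng.normal(scale=1.0, size=(40, fs))
erp = erp_average(epochs, fs)
print(erp.shape)
```

Features for the SVM/CSP classifier would then be taken from windows around such component peaks across channels.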
A game-based neurofeedback training system to enhance cognitive performance in healthy elderly subjects and in patients with amnestic mild cognitive impairment
Real-time EEG-based happiness detection system
We propose the use of real-time EEG signals to classify happy and unhappy emotions elicited by pictures and classical music. We use PSD as the feature and SVM as the classifier. The average accuracies of the subject-dependent and subject-independent models are approximately 75.62% and 65.12%, respectively. Considering each pair of channels, the temporal pair (T7 and T8) gives better results than the other areas. Considering different frequency bands, the high-frequency bands (beta and gamma) give better results than the low-frequency bands. Considering different time durations for emotion elicitation, the result for 30 seconds does not differ significantly from the result for 60 seconds. Based on all of these results, we implement a real-time EEG-based happiness detection system using only one pair of channels. Furthermore, we develop games based on the happiness detection system to help users recognize and control their happiness.
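The PSD band-power features described above can be illustrated with a simple periodogram computation. The sampling rate and the synthetic T7/T8 signals here are assumptions for demonstration, not the study's recordings.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    # Periodogram PSD estimate; average power inside the [lo, hi] Hz band.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 128  # assumed sampling rate (Hz)
t = np.arange(fs * 2) / fs  # 2-second window
# Synthetic T7/T8 pair: a 20 Hz (beta) and a 40 Hz (gamma) oscillation.
t7 = np.sin(2 * np.pi * 20 * t)
t8 = np.sin(2 * np.pi * 40 * t)
features = [band_power(t7, fs, 13, 30),   # beta power on T7
            band_power(t8, fs, 30, 45)]   # gamma power on T8
print(len(features))
```

A per-band feature vector like this, computed over a sliding window on one channel pair, is what a real-time SVM classifier would consume.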