
    EEG-based cognitive control behaviour assessment: an ecological study with professional air traffic controllers

    Several models defining different types of cognitive human behaviour are available. For this work, we selected the Skill, Rule and Knowledge (SRK) model proposed by Rasmussen in 1983, which is broadly used in safety-critical domains such as aviation. At present, there are no tools able to assess at which level of cognitive control an operator is dealing with a given task, that is, whether he or she is performing the task as an automated routine (skill level), as a procedure-based activity (rule level), or as a problem-solving process (knowledge level). Several studies have tried to model the SRK behaviours from a Human Factors perspective. Despite such studies, there is no evidence that these behaviours have been evaluated from a neurophysiological point of view, for example by considering brain activity variations across the different SRK levels. The proposed study therefore aimed to investigate the use of neurophysiological signals to assess cognitive control behaviours according to the SRK taxonomy. The results of the study, performed on 37 professional Air Traffic Controllers, demonstrated that specific brain features can characterize and discriminate the different SRK levels, thereby enabling an objective assessment of the degree of cognitive control behaviour in realistic settings.
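    A minimal sketch of how such an assessment could look in practice (not the authors' actual pipeline; the band-power features, labels, and classifier are assumptions for illustration): epoch-wise EEG features are fed to a standard classifier to discriminate the three SRK levels.

    # Minimal sketch (not the authors' pipeline): classify EEG band-power
    # features into the three SRK levels with a standard linear classifier.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical data: one feature vector per task epoch (e.g. theta/alpha
    # band power per channel), labelled 0 = skill, 1 = rule, 2 = knowledge.
    n_epochs, n_features = 300, 64
    X = rng.normal(size=(n_epochs, n_features))
    y = rng.integers(0, 3, size=n_epochs)

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=5)      # chance level is ~1/3
    print("mean CV accuracy:", scores.mean())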

    Online Multi-Stage Deep Architectures for Feature Extraction and Object Recognition

    Multi-stage visual architectures have recently found success in achieving high classification accuracies over image datasets with large variations in pose, lighting, and scale. Inspired by techniques currently at the forefront of deep learning, such architectures are typically composed of one or more layers of preprocessing, feature encoding, and pooling to extract features from raw images. Training these components traditionally relies on large sets of patches extracted from a potentially large image dataset. In this context, high-dimensional feature space representations are often helpful for obtaining the best classification performance and providing a higher degree of invariance to object transformations. Large datasets with high-dimensional features complicate the implementation of visual architectures in memory-constrained environments. This dissertation constructs online learning replacements for the components within a multi-stage architecture and demonstrates that the proposed replacements (namely fuzzy competitive clustering, an incremental covariance estimator, and a multi-layer neural network) can offer performance competitive with their offline batch counterparts while providing a reduced memory footprint. The online nature of this solution allows for the development of a method for adjusting parameters within the architecture via stochastic gradient descent. Testing over multiple datasets shows the potential benefits of this methodology when appropriate priors on the initial parameters are unknown. Alternatives to batch-based decompositions for a whitening preprocessing stage, which take advantage of natural image statistics and allow simple dictionary learners to work well in the problem domain, are also explored. Expansions of the architecture using additional pooling statistics and multiple layers are presented and indicate that larger codebook sizes are not the only step toward higher classification accuracies. Experimental results from these expansions further indicate the important role of sparsity and appropriate encodings within multi-stage visual feature extraction architectures.
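    A minimal sketch of one such online component, an incremental covariance estimator usable for whitening (a generic Welford-style update assumed here for illustration, not necessarily the dissertation's estimator): streaming patches update the mean and covariance, from which a ZCA whitening transform is formed without holding all patches in memory.

    import numpy as np

    class IncrementalCovariance:
        def __init__(self, dim):
            self.n = 0
            self.mean = np.zeros(dim)
            self.m2 = np.zeros((dim, dim))   # sum of outer products of deviations

        def update(self, x):
            """Welford-style update with one patch (1-D vector)."""
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += np.outer(delta, x - self.mean)

        def covariance(self):
            return self.m2 / max(self.n - 1, 1)

        def zca_whitener(self, eps=1e-5):
            vals, vecs = np.linalg.eigh(self.covariance())
            return vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T

    # Usage on a stream of hypothetical 8x8 grayscale patches:
    est = IncrementalCovariance(64)
    for _ in range(1000):
        est.update(np.random.rand(64))
    W = est.zca_whitener()
    whitened = (np.random.rand(64) - est.mean) @ W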

    OPML: A One-Pass Closed-Form Solution for Online Metric Learning

    To achieve a low computational cost when performing online metric learning for large-scale data, we present a one-pass closed-form solution, namely OPML, in this paper. The proposed OPML first adopts a one-pass triplet construction strategy, which aims to use only a very small number of triplets to approximate the representation ability of the whole set of triplets obtained by batch-manner methods. Then, OPML employs a closed-form solution to update the metric for newly arriving samples, which leads to a low space (i.e., O(d)) and time (i.e., O(d^2)) complexity, where d is the feature dimensionality. In addition, an extension of OPML (namely COPML) is further proposed to enhance robustness when, in real cases, the first several samples come from the same class (i.e., the cold-start problem). In the experiments, we systematically evaluated our methods (OPML and COPML) on three typical tasks, including UCI data classification, face verification, and abnormal event detection in videos, in order to fully evaluate the proposed methods across different sample sizes, feature dimensionalities, and feature extraction approaches (i.e., hand-crafted and deeply learned). The results show that OPML and COPML obtain promising performance at a very low computational cost. The effectiveness of COPML under the cold-start setting is also experimentally verified.
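    A minimal sketch of online triplet-based metric learning (assumed for illustration; this is a generic hinge-loss gradient step on a transform L with M = L^T L, not OPML's closed-form update): each triplet updates the metric in O(d^2) time and memory, and M stays positive semi-definite by construction.

    import numpy as np

    def online_triplet_update(L, anchor, pos, neg, margin=1.0, lr=0.01):
        # Mahalanobis distance d_M(x, y) = (x - y)^T M (x - y) with M = L^T L.
        dp, dn = anchor - pos, anchor - neg
        loss = dp @ L.T @ L @ dp - dn @ L.T @ L @ dn + margin
        if loss > 0:                               # hinge: update only on violations
            grad = 2.0 * L @ (np.outer(dp, dp) - np.outer(dn, dn))
            L = L - lr * grad
        return L

    # Usage on a stream of hypothetical triplets in d = 32 dimensions:
    d = 32
    L = np.eye(d)
    rng = np.random.default_rng(0)
    for _ in range(500):
        a = rng.normal(size=d)
        p = a + 0.1 * rng.normal(size=d)           # same-class neighbour
        n = rng.normal(size=d)                     # different-class sample
        L = online_triplet_update(L, a, p, n)
    M = L.T @ L                                    # learned metric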

    The Use of EEG Signals For Biometric Person Recognition

    This work is devoted to investigating EEG-based biometric recognition systems. One potential advantage of using EEG signals for person recognition is the difficulty of generating artificial signals with biometric characteristics, which makes spoofing EEG-based biometric systems a challenging task. However, more work needs to be done to overcome certain drawbacks that currently prevent the adoption of EEG biometrics in real-life scenarios: 1) the usually large number of sensors required, 2) the still relatively low recognition rates (compared with some other biometric modalities), and 3) the template ageing effect. The existing shortcomings of EEG biometrics and their possible solutions are addressed in the thesis from three main perspectives: pre-processing, feature extraction, and pattern classification. In pre-processing, task (stimuli) sensitivity and noise removal are investigated and discussed in separate chapters. For feature extraction, four novel features are proposed; for pattern classification, a new quality filtering method and a novel instance-based learning algorithm are described in their respective chapters. A self-collected database (Mobile Sensor Database) is employed to investigate some important biometric-specific effects (e.g. the template ageing effect; using a low-cost sensor for recognition). In the research on pre-processing, a training data accumulation scheme is developed, which improves recognition performance by combining the data of different mental tasks for training; a new wavelet-based de-noising method is also developed, and its effectiveness in person identification is found to be considerable. Two novel features based on Empirical Mode Decomposition and the Hilbert Transform are developed, which provided the best biometric performance among all the newly proposed features and the other state-of-the-art features reported in the thesis; the other two newly developed wavelet-based features, while having slightly lower recognition accuracies, were computationally more efficient. The quality filtering algorithm is designed to employ the most informative EEG signal segments: experimental results indicate that using a small subset of the available data for feature training can yield a reasonable improvement in identification rate. The proposed instance-based template reconstruction learning algorithm showed significant effectiveness when tested on both the publicly available and the self-collected databases.
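    A minimal sketch of wavelet-based de-noising for a single EEG channel (a standard soft-thresholding scheme assumed for illustration, not the thesis's specific method; the wavelet, decomposition level, and threshold rule are assumptions): detail coefficients are shrunk and the cleaned signal reconstructed.

    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Universal threshold estimated from the finest detail coefficients.
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
        coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(signal)]

    # Usage on a hypothetical noisy 4-second EEG segment sampled at 256 Hz:
    t = np.arange(0, 4, 1 / 256)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    clean = wavelet_denoise(eeg)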

    Particle size distribution based on deep learning instance segmentation

    Deep learning has become one of the most important topics in Computer Science, and it has recently proved to deliver outstanding performance in the field of Computer Vision, ranging from image classification and object detection to instance segmentation and panoptic segmentation. However, most of these results were obtained on large, publicly available datasets that exhibit a low level of scene complexity. Less is known about applying deep neural networks to images acquired in industrial settings, where data is available in limited amounts. Moreover, comparing an image-based measurement boosted by deep learning to an established reference method can pave the way towards a shift in industrial measurements. This thesis hypothesizes that the particle size distribution can be estimated by employing a deep neural network to segment the particles of interest. The analysis was performed on two deep neural networks, comparing the results of the instance segmentation and the resulting size distributions. First, the data was manually labelled by selecting apatite and phlogopite particles, formulating the problem as a two-class instance segmentation task. Next, models were trained based on the two architectures and then used to predict instances of particles on previously unseen images. Finally, accumulating the sizes of the predicted particles yields a particle size distribution for a given dataset. The final results validated the hypothesis to some extent and showed that tackling difficult and complex challenges in industry by leveraging state-of-the-art deep learning networks leads to promising results. The system was able to correctly identify most of the particles, even in challenging situations. The resulting particle size distribution was also compared to a reference measurement obtained by the laser diffraction method, but further research and experiments are required to properly compare the two methods. The two evaluated architectures yielded strong results with relatively small amounts of annotated data.
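    A minimal sketch of the final accumulation step (assumed for illustration, not the thesis pipeline; mask format, pixel size, and binning are assumptions): predicted instance masks are converted to equivalent-circle diameters and binned into a cumulative particle size distribution.

    import numpy as np

    def equivalent_diameters(masks, pixel_size_um=1.0):
        """masks: boolean array of shape (n_instances, H, W)."""
        areas_px = masks.reshape(masks.shape[0], -1).sum(axis=1)
        return 2.0 * np.sqrt(areas_px / np.pi) * pixel_size_um

    def cumulative_distribution(diameters, bins):
        counts, _ = np.histogram(diameters, bins=bins)
        return np.cumsum(counts) / counts.sum()    # fraction of particles per size bin

    # Usage with hypothetical predictions for one image:
    masks = np.random.rand(20, 256, 256) > 0.995   # stand-in for model output
    d_um = equivalent_diameters(masks, pixel_size_um=3.2)
    psd = cumulative_distribution(d_um, bins=np.linspace(0, 200, 21))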