
    Deep learning for automated sleep monitoring

    Wearable electroencephalography (EEG) is a technology that is revolutionising the longitudinal monitoring of neurological and mental disorders, improving the quality of life of patients and accelerating the relevant research. As sleep disorders and other conditions related to sleep quality affect a large part of the population, monitoring sleep at home over extended periods of time could have a significant impact on the quality of life of people who suffer from these conditions. Annotating the sleep architecture of patients, known as sleep stage scoring, is an expensive and time-consuming process that cannot scale to a large number of people. Combining wearable EEG with automated sleep stage scoring is a potential solution to this problem. In this thesis, we propose and evaluate two deep learning algorithms for automated sleep stage scoring using a single channel of EEG. In our first method, we use time-frequency analysis to extract features that closely follow the guidelines used by human experts, combined with an ensemble of stacked sparse autoencoders as our classification algorithm. In our second method, we propose a convolutional neural network (CNN) architecture for automatically learning filters that are specific to the problem of sleep stage scoring. We achieved state-of-the-art results (mean F1-score 84%; range 82-86%) with our first method and comparably good results with the second (mean F1-score 81%; range 79-83%). Both methods effectively account for the skewed performance usually found in the literature due to sleep stage duration imbalance. We also propose a filter analysis and visualisation methodology for understanding the filters that CNNs learn. Our results indicate that our CNN was able to robustly learn filters that closely follow the sleep scoring guidelines.
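    As an illustration of this kind of architecture, below is a minimal sketch of a 1-D CNN for single-channel EEG epochs in PyTorch. The epoch length (30 s at 100 Hz), layer sizes, and stage proportions are illustrative assumptions, not the configuration used in the thesis.

```python
# Minimal sketch of a 1-D CNN for single-channel EEG sleep stage scoring.
# Epoch length (30 s at 100 Hz), layer sizes, and stage proportions are
# illustrative assumptions, not the architecture used in the thesis.
import torch
import torch.nn as nn

class SleepCNN(nn.Module):
    def __init__(self, n_stages: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8),  # learn frequency-selective filters
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=8),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(64),
            nn.ReLU(),
            nn.Linear(64, n_stages),  # e.g. W, N1, N2, N3, REM
        )

    def forward(self, x):  # x: (batch, 1, 3000) = 30 s at 100 Hz
        return self.classifier(self.features(x))

model = SleepCNN()
epochs = torch.randn(8, 1, 3000)   # a batch of 8 EEG epochs
logits = model(epochs)             # (8, 5) class scores

# A class-weighted loss is one common way to counter the stage-duration
# imbalance mentioned above; the proportions here are hypothetical.
stage_freq = torch.tensor([0.30, 0.05, 0.40, 0.15, 0.10])
loss_fn = nn.CrossEntropyLoss(weight=1.0 / stage_freq)
```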

    Spotless? Perceived Cleanliness in Service Environments

    This dissertation presents research on customers’ perceptions of cleanliness in service environments. The research addresses a gap in the literature on cleanliness examined from a customer perspective, and adds to the understanding of environmental cues that influence perceived cleanliness. Part one of the dissertation includes the operationalisation of the concept of perceived cleanliness and the development of an instrument to measure perceived cleanliness. Results showed that perceived cleanliness consists of three dimensions: cleaned, fresh, and uncluttered. Next, the Cleanliness Perceptions Scale (CP-scale) was developed and validated in different service environments, resulting in a 12-item questionnaire that can be used to measure perceived cleanliness in service environments. Part two includes the experimental research on the effects of different environmental cues on perceived cleanliness. It furthermore explores to what extent the effects of these environmental cues on perceived cleanliness can be explained by the concept of priming. The experiments demonstrated that particular environmental cues influence perceived cleanliness: the visible presence of cleaning staff, light colour, light scent, and uncluttered architecture positively influence customers’ perceptions of cleanliness in service environments. Empirical support was also found for priming as one of the mechanisms involved in these effects. Part three reflects on the implications of the dissertation for theory and practice. The research provides knowledge that is relevant for the fields of facility management, service marketing, social psychology, and environmental psychology. The dissertation improves the understanding of the concept of perceived cleanliness by enabling scholars and practitioners to measure the concept and the effects of particular environmental cues in service environments.

    A hybrid algorithm for Bayesian network structure learning with application to multi-label learning

    We present a novel hybrid algorithm for Bayesian network structure learning, called H2PC. It first reconstructs the skeleton of a Bayesian network and then performs a Bayesian-scoring greedy hill-climbing search to orient the edges. The algorithm is based on divide-and-conquer constraint-based subroutines to learn the local structure around a target variable. We conduct two series of experimental comparisons of H2PC against Max-Min Hill-Climbing (MMHC), which is currently the most powerful state-of-the-art algorithm for Bayesian network structure learning. First, we use eight well-known Bayesian network benchmarks with various data sizes to assess the quality of the learned structure returned by the algorithms. Our extensive experiments show that H2PC outperforms MMHC in terms of goodness of fit to new data and quality of the network structure with respect to the true dependence structure of the data. Second, we investigate H2PC's ability to solve the multi-label learning problem. We provide theoretical results to characterize and identify graphically the so-called minimal label powersets that appear as irreducible factors in the joint distribution under the faithfulness condition. The multi-label learning problem is then decomposed into a series of multi-class classification problems, where each multi-class variable encodes a label powerset. H2PC is shown to compare favorably to MMHC in terms of global classification accuracy over ten multi-label data sets covering different application domains. Overall, our experiments support the conclusion that local structure learning with H2PC, in the form of local neighborhood induction, is a theoretically well-motivated and empirically effective learning framework that is well suited to multi-label learning. The source code (in R) of H2PC as well as all data sets used for the empirical tests are publicly available.
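    The published H2PC code is in R; purely to illustrate the hybrid "constraint-based skeleton, then score-based search" idea it builds on, here is a sketch in Python using pgmpy. The data file, the significance threshold, and the marginal-only independence test are simplifying assumptions, not the authors' implementation (H2PC's HPC subroutine tests conditional independence around each target variable).

```python
# Sketch of hybrid Bayesian network structure learning: a constraint-based
# skeleton restricts which edges a BIC-scored greedy hill climb may add and
# orient. Simplified stand-in for H2PC, not the authors' R implementation.
from itertools import combinations

import pandas as pd
from scipy.stats import chi2_contingency
from pgmpy.estimators import HillClimbSearch, BicScore

df = pd.read_csv("discrete_data.csv")  # placeholder: columns are discrete variables

# Step 1 (constraint-based): keep only edges between dependent variable pairs.
# Real H2PC uses conditional independence tests; marginal chi-square tests
# serve as a simple stand-in here.
skeleton = []
for a, b in combinations(df.columns, 2):
    _, p_value, _, _ = chi2_contingency(pd.crosstab(df[a], df[b]))
    if p_value < 0.05:
        skeleton.append((a, b))

# Step 2 (score-based): greedy hill climbing with BIC, restricted to edges
# of the skeleton (in either orientation).
allowed = skeleton + [(b, a) for a, b in skeleton]
dag = HillClimbSearch(df).estimate(scoring_method=BicScore(df), white_list=allowed)
print(sorted(dag.edges()))
```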

    Subspace Representations and Learning for Visual Recognition

    Pervasive and affordable sensor and storage technology enables the acquisition of an ever-rising amount of visual data. The ability to extract semantic information by interpreting, indexing and searching visual data is impacting domains such as surveillance, robotics, intelligence, human-computer interaction, navigation, healthcare, and several others. This further stimulates the investigation of automated extraction techniques that are more efficient, and robust against the many sources of noise affecting the already complex visual data, which carries the semantic information of interest. We address the problem by designing novel visual data representations, based on learning data subspace decompositions that are invariant against noise, while being informative for the task at hand. We use this guiding principle to tackle several visual recognition problems, including detection and recognition of human interactions from surveillance video, face recognition in unconstrained environments, and domain generalization for object recognition. By interpreting visual data with a simple additive noise model, we consider the subspaces spanned by the model portion (model subspace) and the noise portion (variation subspace). We observe that decomposing the variation subspace against the model subspace gives rise to the so-called parity subspace. Decomposing the model subspace against the variation subspace instead gives rise to what we name the invariant subspace. We extend the use of kernel techniques for the parity subspace. This enables modeling the highly non-linear temporal trajectories describing human behavior, and performing detection and recognition of human interactions. In addition, we introduce supervised low-rank matrix decomposition techniques for learning the invariant subspace for two other tasks. We learn invariant representations for face recognition from grossly corrupted images, and we learn object recognition classifiers that are invariant to the so-called domain bias. Extensive experiments using the benchmark datasets publicly available for each of the three tasks show that learning representations based on subspace decompositions invariant to the sources of noise leads to results comparable to or better than the state of the art.
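    One plausible reading of "decomposing one subspace against another" is projecting one subspace onto the orthogonal complement of the other. The numpy sketch below shows that intuition on random sample matrices; the inputs and ranks are hypothetical, and the thesis's supervised low-rank and kernel formulations are substantially richer.

```python
# Sketch of the subspace-decomposition intuition: project one subspace onto
# the orthogonal complement of the other. Inputs and ranks are hypothetical;
# the thesis's supervised low-rank decompositions are more involved.
import numpy as np

def orthonormal_basis(X, rank):
    """Orthonormal basis for the leading `rank`-dimensional column subspace of X."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank]

def decompose_against(A, B):
    """Component of subspace A orthogonal to subspace B (columns orthonormal)."""
    residual = A - B @ (B.T @ A)   # project out B, column by column
    Q, _ = np.linalg.qr(residual)  # re-orthonormalise what remains
    return Q

rng = np.random.default_rng(0)
model_samples = rng.standard_normal((100, 40))  # spans the "model" subspace
noise_samples = rng.standard_normal((100, 40))  # spans the "variation" subspace

M = orthonormal_basis(model_samples, rank=5)
V = orthonormal_basis(noise_samples, rank=5)

parity = decompose_against(V, M)     # variation decomposed against the model
invariant = decompose_against(M, V)  # model decomposed against the variation
```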

    Cognitive Robots for Social Interactions

    One of my goals is to work towards developing cognitive robots, especially with regard to improving the functionalities that facilitate interaction with human beings and their surrounding objects. Any cognitive system designed to serve human beings must be capable of processing social signals and, eventually, of enabling efficient prediction and planning of appropriate responses. My main focus during my PhD study is to bridge the gap between the motoric space and the visual space. The discovery of mirror neurons ([RC04]) shows that the visual perception of human motion (visual space) is directly associated with the motor control of the human body (motor space). This discovery poses a large number of challenges in different fields such as computer vision, robotics and neuroscience. One of the fundamental challenges is understanding the mapping between the 2D visual space and 3D motoric control, and further developing building blocks (primitives) of human motion in the visual space as well as in the motor space. First, I present my study on the visual-motoric mapping of human actions. This study aims at mapping human actions in 2D videos to a 3D skeletal representation. Second, I present an automatic algorithm to decompose motion capture (MoCap) sequences into synergies, along with the times at which they are executed (or "activated") for each joint. Third, I propose using Granger causality as a tool to study coordinated actions performed by at least two units. Recent scientific studies suggest that the above "action mirroring circuit" might be tuned to action coordination rather than single action mirroring. Fourth, I present the extraction of key poses in visual space. These key poses facilitate further study of the "action mirroring circuit". I conclude the dissertation by describing future directions in the study of cognitive robotics.
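    Since the abstract proposes Granger causality for analysing coordination between two units, here is a minimal sketch with statsmodels on synthetic trajectories. The signals are fabricated for illustration: y lags x by two frames, so x should Granger-cause y. This is a generic Granger test, not the dissertation's specific analysis pipeline.

```python
# Minimal Granger-causality check between two synthetic motion signals,
# standing in for the coordinated-action analysis described above.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
x = rng.standard_normal(500)                        # one agent's joint trajectory
y = np.roll(x, 2) + 0.1 * rng.standard_normal(500)  # the other agent, reacting
                                                    # with a 2-frame lag (np.roll
                                                    # wraps at the boundary, which
                                                    # is negligible here)

# Column order matters: the test asks whether the SECOND column
# Granger-causes the first, at lags 1 through maxlag.
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=4)
```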