106 research outputs found

    PhysioGait: Context-Aware Physiological Context Modeling for Person Re-identification Attack on Wearable Sensing

    Full text link
    Person re-identification is a critical privacy breach in publicly shared healthcare data. We investigate the possibility of a new type of privacy threat on publicly shared, privacy-insensitive, large-scale wearable sensing data. In this paper, we investigate user-specific biometric signatures in terms of two contextual biometric traits: physiological (photoplethysmography and electrodermal activity) and physical (accelerometer) contexts. To this end, we propose PhysioGait, a context-aware physiological signal model consisting of a Multi-Modal Siamese Convolutional Neural Network (mmSNN), which learns spatial and temporal information individually and performs sensor fusion under a Siamese cost with the objective of predicting a person's identity. We evaluated the PhysioGait attack model using four datasets collected in real time (three under IRB #HP-00064387 and one publicly available) and two combined datasets, achieving 89%-93% accuracy in re-identifying persons. Comment: Accepted in IEEE MSN 2022. arXiv admin note: substantial text overlap with arXiv:2106.1190
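    The mmSNN described in the abstract (per-modality encoders, sensor fusion, and a Siamese objective over identity pairs) can be pictured with a short PyTorch sketch. This is only an illustration under assumed layer sizes, window shapes, and modality channel counts; it is not the authors' implementation.

```python
# Illustrative sketch of a multi-modal Siamese CNN in the spirit of PhysioGait's
# mmSNN: per-modality 1-D CNN encoders (PPG, EDA, accelerometer), feature-level
# fusion, and a contrastive (Siamese) loss on identity pairs.
# Layer sizes and window lengths are assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """1-D CNN that maps one sensor window (channels, time) to an embedding."""
    def __init__(self, in_channels, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        return self.fc(self.net(x).squeeze(-1))

class SiameseFusionNet(nn.Module):
    """Encodes PPG, EDA and accelerometer windows, fuses them into one embedding."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.ppg = ModalityEncoder(1, embed_dim)
        self.eda = ModalityEncoder(1, embed_dim)
        self.acc = ModalityEncoder(3, embed_dim)
        self.fusion = nn.Linear(3 * embed_dim, embed_dim)

    def forward(self, ppg, eda, acc):
        z = torch.cat([self.ppg(ppg), self.eda(eda), self.acc(acc)], dim=1)
        return F.normalize(self.fusion(z), dim=1)

def contrastive_loss(z1, z2, same_person, margin=1.0):
    """Pull embeddings of the same person together, push different people apart."""
    d = F.pairwise_distance(z1, z2)
    return (same_person * d.pow(2) +
            (1 - same_person) * F.relu(margin - d).pow(2)).mean()
```

    Training would feed pairs of sensor windows through the shared network and minimise the contrastive loss, so that windows from the same person map close together in the embedding space.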

    Intelligent Biosignal Analysis Methods

    Get PDF
    This book describes recent efforts to improve intelligent systems for automatic biosignal analysis. It focuses on machine learning and deep learning methods used for the classification of different organism states and disorders based on biomedical signals such as EEG, ECG, HRV, and others.

    Fear Classification using Affective Computing with Physiological Information and Smart-Wearables

    Get PDF
    International Mention in the doctoral degree. Among the 17 Sustainable Development Goals proposed within the 2030 Agenda and adopted by all United Nations member states, the fifth SDG is a call for action to effectively turn gender equality into a fundamental human right and an essential foundation for a better world. It includes the eradication of all types of violence against women. From a technological perspective, the range of available solutions intended to prevent this social problem is very limited. Moreover, most of the solutions are based on a panic-button approach, leaving aside the usage and integration of current state-of-the-art technologies such as the Internet of Things (IoT), affective computing, cyber-physical systems, and smart sensors. Thus, the main purpose of this research is to provide new insight into the design and development of tools to prevent and combat risky Gender-based Violence situations and even aggressions, from a technological perspective but without leaving aside the different sociological considerations directly related to the problem. To achieve such an objective, we rely on the application of affective computing from a realistic point of view, i.e. targeting the generation of systems and tools capable of being implemented and used nowadays or within an achievable time frame. This pragmatic vision is channelled through: 1) an exhaustive study of the existing technological tools and mechanisms oriented to the fight against Gender-based Violence, 2) the proposal of a new smart-wearable system intended to deal with some of the technological limitations encountered, 3) a novel fear-related emotion classification approach to disentangle the relation between emotions and physiology, and 4) the definition and release of a new multi-modal dataset for emotion recognition in women.

    Firstly, different fear classification systems using a reduced set of physiological signals are explored and designed. This is done by employing open datasets together with a combination of time-, frequency- and non-linear-domain techniques. The design process is governed by trade-offs between physiological considerations and embedded capabilities; the latter is of paramount importance due to the edge-computing focus of this research. Two results are highlighted in this first task: the fear classification system designed on the DEAP dataset, which achieved an AUC of 81.60% and a G-mean of 81.55% on average for a subject-independent approach using only two physiological signals; and the fear classification system designed on the MAHNOB dataset, which achieved an AUC of 86.00% and a G-mean of 73.78% on average for a subject-independent approach using only three physiological signals and a Leave-One-Subject-Out configuration. A detailed comparison with other emotion recognition systems proposed in the literature is presented, which shows that the obtained metrics are in line with the state of the art.

    Secondly, Bindi is presented. This is an end-to-end autonomous multimodal system leveraging affective IoT through auditory and physiological commercial off-the-shelf smart sensors, hierarchical multisensorial fusion, and a secured server architecture to combat Gender-based Violence by automatically detecting risky situations with a multimodal intelligence engine and then triggering a protection protocol. Specifically, this research focuses on the hardware and software design of one of the two edge-computing devices within Bindi: a bracelet integrating three physiological sensors, actuators, power-monitoring integrated circuits, and a System-on-Chip with wireless capabilities. Within this context, different embedded design space explorations are presented: embedded filtering evaluation, online physiological signal quality assessment, feature extraction, and power consumption analysis. The reported results in all these processes are successfully validated and, for some of them, even compared against standard physiological measurement equipment. Among the results obtained for the embedded design and implementation of the Bindi bracelet, it should be highlighted that its low power consumption yields a battery life of approximately 40 hours when using a 500 mAh battery.

    Finally, the particularities of our use case and the scarcity of open multimodal datasets dealing with emotional immersive technology, a labelling methodology considering the gender perspective, a balanced stimuli distribution regarding the target emotions, and recovery processes based on the physiological signals of the volunteers to quantify and isolate the emotional activation between stimuli, led us to the definition and elaboration of the Women and Emotion Multi-modal Affective Computing (WEMAC) dataset. This is a multimodal dataset in which 104 women who had never experienced Gender-based Violence visualised different emotion-related stimuli in a laboratory environment. The previous binary fear classification systems were improved and applied to this novel multimodal dataset; for instance, the proposed multimodal fear recognition system using this dataset reports up to 60.20% accuracy and a 67.59% F1-score. These values represent a competitive result in comparison with the state of the art on similar multi-modal use cases. In general, this PhD thesis has opened a new research line within the research group in which it was developed. Moreover, this work has established a solid base from which to expand knowledge and continue research targeting the generation of both mechanisms to help vulnerable groups and socially oriented technology.

    Doctoral Programme in Electrical, Electronic and Automatic Engineering, Universidad Carlos III de Madrid. Committee — Chair: David Atienza Alonso; Secretary: Susana Patón Álvarez; Member: Eduardo de la Torre Arnan.
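    As a rough illustration of the subject-independent evaluation described above (a reduced physiological feature set scored with a Leave-One-Subject-Out protocol), a minimal scikit-learn sketch might look as follows. The feature set, classifier, and AUC scoring are assumptions made for illustration, not the thesis' actual pipeline or its embedded implementation.

```python
# Minimal sketch of a subject-independent (Leave-One-Subject-Out) fear vs. no-fear
# classifier from physiological features. Illustrative assumptions only.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def time_domain_features(window):
    """Very small time-domain feature set for one physiological signal window."""
    return np.array([window.mean(), window.std(), np.ptp(window),
                     np.percentile(window, 75) - np.percentile(window, 25)])

def loso_auc(X, y, subjects):
    """Train on all subjects but one, test on the held-out subject, average the AUC.

    Assumes every held-out subject has both classes present.
    """
    aucs = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        clf.fit(X[train_idx], y[train_idx])
        scores = clf.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], scores))
    return float(np.mean(aucs))
```

    Holding out whole subjects, rather than random samples, is what makes the reported AUC and G-mean values subject-independent.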

    Developing Transferable Deep Models for Mobile Health

    Get PDF
    Human behavior is one of the key facets of health. A major portion of healthcare spending in the US is attributed to chronic diseases, which are linked to behavioral risk factors such as smoking, drinking, and unhealthy eating. Mobile devices that are integrated into people's everyday lives make it possible to get a closer look into behavior. Two of the most commonly used sensing modalities are Ecological Momentary Assessments (EMAs), surveys about mental states, environment, and other factors, and wearable sensors that capture high-frequency contextual and physiological signals. One of the main visions of mobile health (mHealth) is sensor-based behavior modification. Contextual data collected from participants is typically used to train a risk prediction model for adverse events such as smoking, which can then be used to inform intervention design.

    However, an mHealth study involves several design choices, such as the demographics of the participants, the type of sensors used, and the questions included in the EMA. This results in two technical challenges to using machine learning models effectively across mHealth studies. The first is domain shift, where the data distribution varies across studies, so models trained on one study have sub-optimal performance on a different study. Domain shift is common in wearable sensor data since there are several sources of variability, such as sensor design, the placement of the sensor on the body, and the demographics of the users. The second challenge is covariate-space shift, where the input space changes across datasets. This is common across EMA datasets since questions can vary between studies.

    This thesis studies the problems of covariate-space shift and domain shift in mHealth data. First, I study the problem of domain shift caused by differences in sensor type and placement in ECG and PPG signals. I propose a self-supervised-learning-based domain adaptation method that captures the physiological structure of these signals to improve the transfer performance of predictive models. Second, I present a method to find a common input representation, irrespective of the fine-grained questions in EMA datasets, to overcome the problem of covariate-space shift. The next challenge to the deployment of ML models in health is explainability. I explore the problem of bridging the gap between explainability methods and domain experts and present a method to generate plausible, relevant, and convincing explanations. Ph.D.
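    A hedged sketch of the first idea, self-supervised pre-training on unlabeled signals from a new sensor domain before reusing the encoder for a downstream predictive model, is shown below in PyTorch. The pretext task here (original vs. time-reversed windows) is a stand-in chosen for brevity; the thesis' actual method for capturing physiological structure is not reproduced.

```python
# Sketch of self-supervised pre-training on unlabeled wearable signals
# (e.g. PPG from a new sensor or placement) before fine-tuning a task model.
# The pretext task -- detecting whether a window was time-reversed -- is an
# illustrative stand-in, not the specific method proposed in the thesis.
import torch
import torch.nn as nn

class SignalEncoder(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, embed_dim, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):                      # x: (batch, 1, time)
        return self.conv(x).squeeze(-1)        # (batch, embed_dim)

def pretrain_step(encoder, head, batch, optimizer):
    """One self-supervised step: classify original vs. time-reversed windows."""
    flipped = torch.flip(batch, dims=[-1])
    x = torch.cat([batch, flipped], dim=0)
    y = torch.cat([torch.zeros(len(batch)), torch.ones(len(batch))]).long()
    loss = nn.functional.cross_entropy(head(encoder(x)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

    Here `head` would be a small classifier such as `nn.Linear(64, 2)`. After pre-training on the new domain's unlabeled data, the encoder is kept and a task-specific head is fine-tuned on the available labeled data, which is the general shape of transfer via self-supervision.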

    Deep learning for automated sleep monitoring

    Get PDF
    Wearable electroencephalography (EEG) is a technology that is revolutionising the longitudinal monitoring of neurological and mental disorders, improving the quality of life of patients and accelerating the relevant research. As sleep disorders and other conditions related to sleep quality affect a large part of the population, monitoring sleep at home over extended periods of time could have a significant impact on the quality of life of people who suffer from these conditions. Annotating the sleep architecture of patients, known as sleep stage scoring, is an expensive and time-consuming process that cannot scale to a large number of people. Using wearable EEG and automating sleep stage scoring is a potential solution to this problem.

    In this thesis, we propose and evaluate two deep learning algorithms for automated sleep stage scoring using a single channel of EEG. In our first method, we use time-frequency analysis to extract features that closely follow the guidelines human experts follow, combined with an ensemble of stacked sparse autoencoders as our classification algorithm. In our second method, we propose a convolutional neural network (CNN) architecture for automatically learning filters that are specific to the problem of sleep stage scoring. We achieved state-of-the-art results (mean F1-score 84%; range 82-86%) with our first method and comparably good results with the second (mean F1-score 81%; range 79-83%). Both methods effectively account for the skewed performance usually found in the literature due to sleep stage duration imbalance. We also propose a filter analysis and visualisation methodology for CNNs to understand the filters they learn. Our results indicate that our CNN was able to robustly learn filters that closely follow the sleep scoring guidelines. Open Access
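    For the second method, a compact 1-D CNN over raw single-channel EEG epochs is the general shape of the approach. The sketch below uses an assumed configuration (30 s epochs at 100 Hz, five stages, illustrative layer sizes); it is not the thesis' exact architecture.

```python
# Illustrative 1-D CNN for single-channel EEG sleep stage scoring: filters are
# learned from the raw epoch rather than hand-crafted from time-frequency
# features. Epoch length and layer sizes are assumptions.
import torch
import torch.nn as nn

N_STAGES = 5  # W, N1, N2, N3, REM

class SleepCNN(nn.Module):
    def __init__(self, n_stages=N_STAGES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=50, stride=6), nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(32, 64, kernel_size=8), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, n_stages),
        )

    def forward(self, x):          # x: (batch, 1, 3000) -- one 30 s epoch at 100 Hz
        return self.classifier(self.features(x))

model = SleepCNN()
logits = model(torch.randn(4, 1, 3000))   # (4, 5) per-stage scores
```

    Class-balanced sampling or a weighted cross-entropy loss is one simple way to counter the stage-duration imbalance mentioned above; inspecting the first convolutional layer's kernels is the natural entry point for the kind of filter analysis the thesis describes.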