8 research outputs found

    Characterization and processing of novel neck photoplethysmography signals for cardiorespiratory monitoring

    Epilepsy is a neurological disorder causing serious brain seizures that severely affect the patients' quality of life. Sudden unexpected death in epilepsy (SUDEP), for which no evident cause of death is found on post-mortem examination, is a common cause of mortality. The mechanisms leading to SUDEP are uncertain, but centrally mediated apneic respiratory dysfunction, which induces dangerous hypoxemia, plays a key role. Continuous physiological monitoring appears to be the only reliable solution for SUDEP prevention. However, current seizure-detection systems lack sensitivity and produce an intolerable number of false alarms. A wearable system capable of measuring several physiological signals from the same body location could efficiently overcome these limitations. In this framework, a neck wearable apnea detection device (WADD), sensing airflow through tracheal sounds, was designed. Despite its promising performance, it is still necessary to integrate an oximeter sensor into the system to measure blood oxygen saturation (SpO2) from neck photoplethysmography (PPG) signals and hence support the apnea detection decision. The neck is a novel PPG measurement site that has not yet been thoroughly explored due to numerous challenges. This research work aims to characterize neck PPG signals in order to fully exploit this alternative pulse oximetry location for precise monitoring of cardiorespiratory biomarkers. In this thesis, neck PPG signals were recorded, for the first time in the literature, in a series of experiments under different artifact and respiratory conditions. Morphological and spectral characteristics were analyzed in order to identify potential singularities of the signals. The most common neck PPG artifacts that critically corrupt signal quality, as well as other breathing states of interest, were thoroughly characterized in terms of their most discriminative features. An algorithm was further developed to differentiate artifacts from clean PPG signals. Both the proposed characterization and the classification model can be useful tools for researchers to denoise neck PPG signals and exploit them in a variety of clinical contexts. In addition, it was demonstrated that, unlike other body parts, the neck also offers the possibility of extracting the Jugular Venous Pulse (JVP) non-invasively. Overall, the thesis showed how the neck could be an optimal location for multi-modal monitoring in the context of diseases affecting respiration, since it not only allows the sensing of airflow-related signals, but the breathing-frequency component of the PPG also appeared more prominent there than at the standard finger location. This property enabled the extraction of relevant features and the development of a promising algorithm for near-real-time apnea detection. These findings could be of great importance for SUDEP prevention, facilitating the investigation of its associated mechanisms and risk factors and ultimately helping to reduce epilepsy mortality.
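
    As an illustration of the respiratory component mentioned above, the following Python sketch estimates breathing rate from a PPG segment by locating the dominant peak of its power spectral density within a typical respiratory band. This is a minimal sketch, not the thesis's actual algorithm; the sampling rate, band limits, and synthetic signal are assumptions.

    import numpy as np
    from scipy.signal import welch

    def estimate_breathing_rate(ppg, fs=100.0, band=(0.1, 0.5)):
        """Estimate breathing rate (breaths/min) from the low-frequency
        component of a PPG segment. Band limits are illustrative only."""
        # Remove the DC offset so the respiratory peak is not masked.
        ppg = np.asarray(ppg, dtype=float) - np.mean(ppg)
        # Welch periodogram with a window long enough to resolve ~0.1 Hz.
        f, pxx = welch(ppg, fs=fs, nperseg=min(len(ppg), int(60 * fs)))
        # Keep only the respiratory band and pick the dominant peak.
        mask = (f >= band[0]) & (f <= band[1])
        f_resp = f[mask][np.argmax(pxx[mask])]
        return 60.0 * f_resp  # Hz -> breaths per minute

    # Synthetic example: 1 Hz cardiac pulse modulated by 0.25 Hz breathing.
    fs = 100.0
    t = np.arange(0, 120, 1 / fs)
    ppg = np.sin(2 * np.pi * t) * (1 + 0.3 * np.sin(2 * np.pi * 0.25 * t)) + 0.2 * np.sin(2 * np.pi * 0.25 * t)
    print(round(estimate_breathing_rate(ppg, fs), 1))  # ~15 breaths per minute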

    Advanced Signal Processing in Wearable Sensors for Health Monitoring

    Smart wearable devices on a miniature scale are becoming increasingly widely available, typically in the form of smartwatches and other connected devices. Consequently, devices that assist in measurements such as electroencephalography (EEG), electrocardiography (ECG), electromyography (EMG), blood pressure (BP), photoplethysmography (PPG), heart rhythm, respiration rate, apnoea, and motion detection are becoming more widely available and play a significant role in healthcare monitoring. The industry is placing great emphasis on making these devices and technologies available on smart devices such as phones and watches. However, a persistent issue is that recorded data are usually noisy, contain many artefacts, and are affected by external factors such as movement and physical conditions. In order to obtain accurate and meaningful indicators, the signals have to be processed and conditioned so that the measurements are free from noise and disturbances. In this context, many researchers have utilized recent technological advances in wearable sensors and signal processing to develop smart and accurate wearable devices for clinical applications. The processing and analysis of physiological signals is a key issue for these smart wearable devices. Consequently, ongoing work in this field includes research on filtering, quality checking, signal transformation and decomposition, feature extraction and, most recently, machine learning-based methods.
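
    As a minimal sketch of the filtering and quality-checking steps mentioned above (not any particular device's pipeline), the snippet below band-pass filters a raw PPG trace and flags windows whose amplitude deviates strongly from the recording's typical level; the cut-off frequencies, window length, and threshold are assumptions for illustration.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass_ppg(raw, fs, low=0.5, high=8.0, order=3):
        """Zero-phase band-pass filter keeping cardiac and respiratory content."""
        b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
        return filtfilt(b, a, raw)

    def flag_artefact_windows(sig, fs, win_s=4.0, z_thresh=3.0):
        """Simple quality check: mark windows whose peak-to-peak amplitude
        deviates strongly from the recording's median (robust z-score)."""
        n = int(win_s * fs)
        windows = [sig[i:i + n] for i in range(0, len(sig) - n + 1, n)]
        p2p = np.array([w.max() - w.min() for w in windows])
        med = np.median(p2p)
        mad = np.median(np.abs(p2p - med)) + 1e-12
        return np.abs(p2p - med) / (1.4826 * mad) > z_thresh  # True = suspect window

    # Example on a synthetic 60 s recording with one motion-artefact burst.
    fs = 50.0
    t = np.arange(0, 60, 1 / fs)
    raw = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(len(t))
    raw[1000:1200] += 5.0 * np.random.randn(200)  # simulated motion artefact
    clean = bandpass_ppg(raw, fs)
    print(flag_artefact_windows(clean, fs).astype(int))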

    Signal Processing Using Non-invasive Physiological Sensors

    This book focuses on non-invasive biomedical sensors for monitoring physiological parameters from the human body for potential future therapies and healthcare solutions. Today, a critical factor in providing a cost-effective healthcare system is improving patients' quality of life and mobility, which can be achieved by developing non-invasive sensor systems that can be deployed at the point of care, used at home, or integrated into wearable devices for long-term data collection. Another factor that plays an integral part in a cost-effective healthcare system is the signal processing of the data recorded with non-invasive biomedical sensors. In this book, we aimed to attract researchers who are interested in the application of signal processing methods to different biomedical signals, such as the electroencephalogram (EEG), electromyogram (EMG), functional near-infrared spectroscopy (fNIRS), electrocardiogram (ECG), galvanic skin response, pulse oximetry, and photoplethysmogram (PPG). We encouraged new signal processing methods, or the novel application of existing ones to physiological signals, to help healthcare providers make better decisions.

    IberSPEECH 2020: XI Jornadas en TecnologĂ­a del Habla and VII Iberian SLTech

    IberSPEECH2020 is a two-day event bringing together the best researchers and practitioners in speech and language technologies in Iberian languages to promote interaction and discussion. The organizing committee has planned a wide variety of scientific and social activities, including technical paper presentations, keynote lectures, presentations of projects, laboratory activities, recent PhD theses, discussion panels, a round table, and awards for the best thesis and papers. The program of IberSPEECH2020 includes a total of 32 contributions that will be presented across 5 oral sessions, a PhD session, and a projects session. To ensure the quality of all the contributions, each submitted paper was reviewed by three members of the scientific review committee. All the papers in the conference will be accessible through the International Speech Communication Association (ISCA) Online Archive. Paper selection was based on the scores and comments provided by the scientific review committee, which includes 73 researchers from different institutions (mainly from Spain and Portugal, but also from France, Germany, Brazil, Iran, Greece, Hungary, the Czech Republic, Ukraine, and Slovenia). Furthermore, it has been confirmed that extended versions of selected papers will be published in a special issue of the MDPI journal Applied Sciences, "IberSPEECH 2020: Speech and Language Technologies for Iberian Languages", with full open access. In addition to the regular paper sessions, the IberSPEECH2020 scientific program features the ALBAYZIN evaluation challenge session.

    Affective state recognition in Virtual Reality from electromyography and photoplethysmography using head-mounted wearable sensors.

    The three core components of Affective Computing (AC) are emotion expression recognition, emotion processing, and emotional feedback. Affective states are typically characterized in a two-dimensional space consisting of arousal, i.e., the intensity of the emotion felt, and valence, i.e., the degree to which the current emotion is pleasant or unpleasant. These fundamental properties of emotion can be measured not only with subjective ratings from users but also with physiological and behavioural measures, which potentially provide an objective evaluation across users. Multiple combinations of measures are utilised in AC for a range of applications, including education, healthcare, marketing, and entertainment. As the use of immersive Virtual Reality (VR) technologies grows, there is a rapidly increasing need for robust affect recognition in VR settings. However, the integration of affect detection methodologies with VR remains an unmet challenge due to constraints posed by current VR technologies, such as head-mounted displays. This EngD project is designed to overcome some of these challenges by effectively integrating valence and arousal recognition methods into VR technologies and by testing their reliability in seated and room-scale fully immersive VR conditions. The aim of this EngD research project is to identify how affective states are elicited in VR and how they can be measured efficiently, without constraining movement or decreasing the sense of presence in the virtual world. Through a three-year collaboration with Emteq labs Ltd, a wearable technology company, we assisted in the development of a novel multimodal affect detection system specifically tailored to the requirements of VR. This thesis describes the architecture of the system, the research studies that enabled its development, and the future challenges. The studies conducted validated the reliability of the proposed system, including the VR stimulus design, data measures, and processing pipeline. This work could inform future studies in the field of AC in VR and assist in the development of novel applications and healthcare interventions.
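
    As a purely illustrative sketch of the arousal dimension described above (not Emteq labs' system nor the project's actual pipeline), the Python snippet below derives two simple physiological features, heart rate from PPG peak intervals and facial-EMG intensity, and feeds them to a generic classifier; the feature choices, training data, and labels are synthetic assumptions.

    import numpy as np
    from scipy.signal import find_peaks
    from sklearn.linear_model import LogisticRegression

    def ppg_heart_rate(ppg, fs):
        """Mean heart rate (beats/min) from PPG peak intervals."""
        peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))  # refractory ~150 bpm max
        if len(peaks) < 2:
            return 0.0
        return 60.0 / (np.mean(np.diff(peaks)) / fs)

    def emg_intensity(emg):
        """Root-mean-square amplitude of a zero-mean EMG window."""
        return float(np.sqrt(np.mean(np.square(emg - np.mean(emg)))))

    # Entirely synthetic training data: rows of [heart_rate, emg_rms],
    # labels 1 = high arousal, 0 = low arousal.
    X = np.array([[62, 0.05], [65, 0.06], [90, 0.30], [95, 0.35]])
    y = np.array([0, 0, 1, 1])
    clf = LogisticRegression().fit(X, y)

    # New observation: ~90 bpm pulse and relatively tense facial muscles.
    fs = 100.0
    t = np.arange(0, 10, 1 / fs)
    ppg = np.sin(2 * np.pi * 1.5 * t)
    emg = 0.3 * np.random.randn(len(t))
    x_new = [[ppg_heart_rate(ppg, fs), emg_intensity(emg)]]
    print(clf.predict(x_new))  # expected: 1 (high arousal)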