Wearable Biosensor: How to improve the efficacy in data transmission in respiratory monitoring system?
Respiratory rate is an important vital sign across many health conditions, and technological developments for measuring it have become imperative for healthcare professionals. This paper presents an approach to respiratory monitoring that aims to improve the accuracy and efficacy of the monitored data. We use multiple types of sensors at various locations on the body to continuously transmit real-time data, which is processed to calculate the respiration rate. Variations in the respiration rate help identify the patient's current health condition and support diagnosis and further medical treatment. Software tools such as the Keil μVision IDE, Mbed Studio IDE, and Energia IDE are used to compile and build the system architecture and display information. EasyEDA is used to provide pin map details and the complete architecture information.
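The abstract does not detail how the respiration rate is computed from the transmitted samples; a minimal sketch of one common approach, counting breath peaks in a detrended chest-movement signal (function and parameter names are hypothetical, not from the paper), is:

```python
import numpy as np
from scipy.signal import find_peaks

def respiration_rate_bpm(signal, fs, min_breath_interval_s=1.5):
    """Estimate breaths per minute from a chest-movement signal.

    signal: 1-D array of sensor samples (e.g. accelerometer output)
    fs: sampling frequency in Hz
    """
    # Remove the DC offset so slow baseline drift does not mask breath peaks
    x = signal - np.mean(signal)
    # One peak per breath; enforce a minimum spacing between detected peaks
    peaks, _ = find_peaks(x, distance=int(min_breath_interval_s * fs))
    duration_min = len(signal) / fs / 60.0
    return len(peaks) / duration_min

# Synthetic 60 s recording at 25 Hz containing a 15 breaths/min oscillation
fs = 25
t = np.arange(0, 60, 1 / fs)
chest = np.sin(2 * np.pi * (15 / 60) * t)
print(respiration_rate_bpm(chest, fs))  # → 15.0
```

In practice the raw sensor stream would need band-pass filtering and artifact rejection before peak counting; the sketch only shows the core rate computation.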
A systematic review of physiological signals based driver drowsiness detection systems.
Driving a vehicle is a complex, multidimensional, and potentially risky activity that demands full mobilization of physiological and cognitive abilities. Drowsiness, often caused by stress, fatigue, or illness, degrades the cognitive capabilities that driving depends on and causes many accidents. Drowsiness-related road accidents are associated with trauma, physical injuries, and fatalities, and often bring economic loss; such crashes are most common among young people and night-shift workers. Real-time, accurate driver drowsiness detection is necessary to bring down the drowsy-driving accident rate. Many researchers have worked on systems that detect drowsiness using features related to the vehicle, the driver's behavior, and physiological measures. Given the rising use of physiological measures, this study presents a comprehensive and systematic review of recent techniques for detecting driver drowsiness from physiological signals. Different sensors augmented with machine learning are utilized, which subsequently yields better results. These techniques are analyzed with respect to several aspects, such as the data-collection sensor, the environment (controlled or dynamic), and the experimental setup (real traffic or driving simulator). By investigating the types of sensors involved in the experiments, the study discusses the advantages and disadvantages of existing work, points out research gaps, and offers future research directions for drowsiness-detection techniques based on physiological signals. [Abstract copyright: © The Author(s), under exclusive licence to Springer Nature B.V. 2022.]
Obtrusiveness of smartphone applications for sleep health
Unobtrusiveness is one of the main issues concerning health-related systems. Many developers affirm that their systems do not burden users; however, this is not always achieved. This article evaluates the obtrusiveness of various systems developed to improve sleep quality. The systems analyzed are related to sleep hygiene, which has become an interesting topic for researchers, physicians and people in general, mainly because it is now part of the methods used to estimate a person's health status. A set of design elements is presented as key to achieving unobtrusiveness. We propose a scale to measure the level of unobtrusiveness and use it to evaluate several systems, with a focus on smartphone applications.
An Empirical Study Comparing Unobtrusive Physiological Sensors for Stress Detection in Computer Work.
Several unobtrusive sensors have been tested in studies to capture physiological reactions to stress in workplace settings. Lab studies tend to focus on assessing sensors during a specific computer task, while in situ studies tend to offer a generalized view of sensors' efficacy for workplace stress monitoring, without discriminating between tasks. Given the variation in workplace computer activities, this study investigates the efficacy of unobtrusive sensors for stress measurement across a variety of tasks. We present a comparison of five physiological measurements obtained in a lab experiment, where participants completed six different computer tasks while we measured their stress levels using a chest band (ECG, respiration), a wristband (PPG and EDA), and an emerging thermal imaging method (perinasal perspiration). We found that thermal imaging can detect increased stress for most participants across all tasks, while wrist and chest sensors were less generalizable across tasks and participants. We summarize the costs and benefits of each sensor stream, and show how some computer use scenarios present usability and reliability challenges for stress monitoring with certain physiological sensors. We provide recommendations for researchers and system builders for measuring stress with physiological sensors during workplace computer use.
State of the art of audio- and video-based solutions for AAL
Working Group 3. Audio- and Video-based AAL Applications

Europe is facing increasingly crucial challenges regarding health and social care due to demographic change and the current economic context, and the recent COVID-19 pandemic has stressed this situation even further, highlighting the need to take action. Active and Assisted Living (AAL) technologies offer a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL refers to the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply persons in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety.

The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems share some common characteristics: they are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner, and they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness.
Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed there, replacing many non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements and overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals, due to the richness of the information they convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, and to meet high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.
This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users.
The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted.
The report ends with an overview of the challenges, hindrances and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, it illustrates the current procedural and technological approaches to coping with acceptability, usability and trust in AAL technology, surveying strategies and approaches to co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential arising from the silver economy is overviewed.
Digital phenotyping through multimodal, unobtrusive sensing
The growing adoption of multimodal wearable and mobile devices, such as smartphones and wrist-worn watches, has generated an increase in the collection of physiological and behavioural data at scale. For the first time, this digital phenotyping data enables researchers to make inferences regarding users' physical and mental health at population scale. However, translating this data into actionable insights requires computational approaches that turn unlabelled, multimodal time-series sensor data into validated measures that can be interpreted at scale.
This thesis describes the derivation of novel computational methods that leverage digital phenotyping data from wearable devices in large-scale populations to infer physical behaviours. These methods combine insights from signal processing, data mining and machine learning with domain knowledge in physical activity and sleep epidemiology. First, the inference of sleeping windows in free-living conditions through a heart rate sensing approach is explored. This algorithm is particularly valuable in the absence of ground truth or sleep diaries, given its simplicity, adaptability and capacity for personalization. I then explore multistage sleep classification through combined movement and cardiac wearable sensing and machine learning. Further, I demonstrate that postural changes detected through wrist accelerometers can inform habitual behaviours and are valuable complements to traditional, intensity-based physical activity metrics. I then leverage the concomitant responses of heart rate to physical activity, captured through multimodal wearable sensors, in a self-supervised training task. The resulting embeddings are shown to be useful for the downstream classification of demographic factors, BMI, energy expenditure and cardiorespiratory fitness. Finally, I describe a deep learning model for the adaptive inference of cardiorespiratory fitness (VO2max) using wearable data in free-living conditions. I demonstrate the robustness of the model in a large UK population and show the model's adaptability by evaluating its performance in a subset of the population with repeated measures ~6 years after the original recordings.
Together, this work increases the potential of multimodal wearable and mobile sensors for physical activity and behavioural inferences in population studies. In particular, this thesis showcases the potential of using wearable devices to make valuable physical activity, sleep and fitness inferences in large cohort studies. Given the nature of the data collected, and the fact that most of this data is currently generated by commercial providers rather than research institutes, laying the foundations for responsible data governance and ethical use of these technologies will be critical to building trust and enabling the development of the field of digital phenotyping.

I was funded by GlaxoSmithKline and the Engineering and Physical Sciences Research Council. I was also supported by the Alan Turing Institute through their Enrichment Scheme.
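The heart-rate-based sleep-window inference described above is given only at a high level; as a toy illustration of the general idea, one could flag the longest contiguous run of heart-rate samples below a personalized threshold (all names and values are hypothetical, not the thesis algorithm):

```python
import numpy as np

def longest_low_hr_window(hr, threshold):
    """Return (start, end) indices of the longest contiguous run of
    heart-rate samples below a personalized threshold -- a crude proxy
    for the main sleep window."""
    below = hr < threshold
    best, start = (0, 0), None
    for i, flag in enumerate(np.append(below, False)):  # sentinel closes runs
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start > best[1] - best[0]:
                best = (start, i)
            start = None
    return best

# Hourly mean heart rate over part of a day: lower during the night
hr = np.array([75, 78, 72, 70, 68, 60, 58, 55, 54, 56, 57, 62, 74, 76, 73])
threshold = np.quantile(hr, 0.4)  # per-person cut-off (hypothetical)
print(longest_low_hr_window(hr, threshold))  # → (5, 11)
```

The thesis method is considerably more sophisticated (it adapts to each wearer and handles free-living noise); the sketch only conveys why heart rate alone can delimit a plausible sleep window.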
Multimodal Signal Processing for Diagnosis of Cardiorespiratory Disorders
This thesis addresses the use of multimodal signal processing to develop algorithms for the automated processing of two cardiorespiratory disorders. The aim of the first application was to reduce the false alarm rate in an intensive care unit. The goal was to detect five critical arrhythmias by processing multimodal signals including photoplethysmography, arterial blood pressure, and Lead II and augmented right arm electrocardiogram (ECG). A hierarchical approach was used to process the signals, together with a custom signal processing technique for each arrhythmia type. Sleep disorders are a prevalent health issue, currently costly and inconvenient to diagnose, as they normally require an overnight hospital stay by the patient. In the second application, we designed automated signal processing algorithms for the diagnosis of sleep apnoea, with a main focus on ECG signal processing. We estimated the ECG-derived respiratory (EDR) signal using different methods: QRS-complex area, principal component analysis (PCA) and kernel PCA. We proposed two algorithms (segmented PCA and approximated PCA) for EDR estimation to enable applying the PCA method to overnight recordings and to address its computational and memory requirements. We compared the EDR information against the chest respiratory effort signals. The performance was evaluated using three machine learning algorithms, linear discriminant analysis (LDA), extreme learning machine (ELM) and support vector machine (SVM), on two databases: the MIT PhysioNet database and the St. Vincent's database. The results showed that the QRS-area method for EDR estimation combined with the LDA classifier performed best, and that EDR signals contain respiratory information useful for discriminating sleep apnoea.
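The QRS-area method for EDR estimation is only named in the abstract; a simplified sketch of the underlying idea, integrating the ECG around each detected R peak so that respiration-induced amplitude modulation shows up in the per-beat series (function name and window length are illustrative, not the thesis implementation), is:

```python
import numpy as np

def edr_qrs_area(ecg, r_peaks, fs, half_window_s=0.05):
    """ECG-derived respiration via the QRS-area idea: the area of each QRS
    complex is modulated by breathing, so the per-beat areas form a
    surrogate respiratory signal sampled once per heartbeat."""
    half = int(half_window_s * fs)
    areas = []
    for r in r_peaks:
        lo, hi = max(0, r - half), min(len(ecg), r + half)
        areas.append(np.abs(ecg[lo:hi]).sum() / fs)  # area around the complex
    return np.asarray(areas)

# Synthetic demo: one R peak per second, amplitudes modulated at 0.25 Hz
fs = 250
ecg = np.zeros(10 * fs)
r_peaks = np.arange(fs // 2, len(ecg), fs)
ecg[r_peaks] = 1.0 + 0.3 * np.sin(2 * np.pi * 0.25 * r_peaks / fs)
edr = edr_qrs_area(ecg, r_peaks, fs)  # traces the 0.25 Hz respiratory rhythm
```

Because the EDR series is sampled at the (irregular) heart rate, a real pipeline would resample it to a uniform grid before spectral analysis.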
As a final step, heart rate variability (HRV) and cardiopulmonary coupling (CPC) features were extracted and combined with the EDR features, and temporal optimisation techniques were applied. The cross-validation results of the minute-by-minute apnoea classification achieved an accuracy of 89%, a sensitivity of 90%, a specificity of 88%, and an AUC of 0.95, comparable to the best results reported in the literature.
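As a rough illustration of this evaluation (not the thesis code; synthetic per-minute features stand in for the real EDR/HRV/CPC feature vectors, and the numbers produced are unrelated to the thesis results), the reported metrics for an LDA classifier can be computed like this:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)                       # 1 = apnoea minute
X = rng.normal(size=(n, 6)) + 1.5 * y[:, None]  # stand-ins for EDR/HRV/CPC features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
pred = lda.predict(X_te)
score = lda.predict_proba(X_te)[:, 1]

# Confusion-matrix counts and the four reported metrics
tp = np.sum((pred == 1) & (y_te == 1)); tn = np.sum((pred == 0) & (y_te == 0))
fp = np.sum((pred == 1) & (y_te == 0)); fn = np.sum((pred == 0) & (y_te == 1))
print("accuracy   :", (tp + tn) / len(y_te))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("AUC        :", roc_auc_score(y_te, score))
```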
Machine Learning Models for Mental Stress Classification based on Multimodal Biosignal Input
Mental stress is a largely prevalent condition directly or indirectly responsible for almost half of all work-related diseases. Work-related stress is the second most impactful occupational health problem in Europe, behind musculoskeletal diseases. When mental health is adequately handled, a worker's well-being, performance, and productivity can be considerably improved.
This thesis presents machine learning models to classify mental stress experienced by computer users using physiological signals: heart rate, acquired using a smartwatch's photoplethysmography sensor; respiration, derived from a smartphone accelerometer placed on the chest; and trapezius electromyography, using proprietary electromyography sensors. Two interactive protocols were implemented to collect data from 12 individuals. Time- and frequency-domain features were extracted from the heart rate and electromyography signals, and statistical and temporal features were extracted from the derived respiration signal.

Three algorithms, Support Vector Machine, Random Forest, and K-Nearest-Neighbor, were employed for mental stress classification. Different input modalities were tested for the machine learning models: one for each physiological signal and a multimodal one combining all of them. Random Forest obtained the best mean accuracy (98.5%) for the respiration model, whereas K-Nearest-Neighbor attained higher mean accuracies for the heart rate (89.0%) and left, right, and total electromyography (98.9%, 99.2%, and 99.3%) models. The KNN algorithm was also able to achieve 100% mean accuracy for the multimodal
model. A possible future approach would be to validate these models in real time.
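The per-modality versus multimodal comparison described in this abstract can be sketched as follows, with synthetic features standing in for the real heart-rate, respiration, and EMG features (feature dimensions and effect sizes are arbitrary assumptions, not the thesis data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 240
y = rng.integers(0, 2, n)  # 0 = baseline, 1 = stress
# Hypothetical feature blocks, one per physiological signal
modalities = {
    "heart rate": rng.normal(size=(n, 4)) + 0.8 * y[:, None],
    "respiration": rng.normal(size=(n, 5)) + 1.0 * y[:, None],
    "EMG": rng.normal(size=(n, 6)) + 1.2 * y[:, None],
}
modalities["multimodal"] = np.hstack(list(modalities.values()))

models = {
    "SVM": SVC(),
    "RF": RandomForestClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
}
for mod_name, X in modalities.items():
    for clf_name, clf in models.items():
        pipe = make_pipeline(StandardScaler(), clf)  # scale before fitting
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"{mod_name:11s} {clf_name:3s} mean accuracy: {acc:.2f}")
```

With real data the comparison would use the thesis's extracted time/frequency and statistical features per modality; the cross-validated grid of (modality, classifier) accuracies is the part this sketch reproduces.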