
    Radar signal processing for sensing in assisted living: the challenges associated with real-time implementation of emerging algorithms

    This article covers radar signal processing for sensing in the context of assisted living (AL), presented through three example applications: human activity recognition (HAR) for activities of daily living (ADL), respiratory disorders, and sleep stage (SS) classification. The common challenge of classification is discussed within a framework of measurements/preprocessing, feature extraction, and classification algorithms for supervised learning. The specific challenges of the three applications from a signal processing standpoint are then detailed through their particular data processing and ad hoc classification strategies. The focus here is on recent trends in activity recognition (multidomain, multimodal, and fusion), healthcare applications based on vital signs (super-resolution techniques), and the outstanding challenges that remain. Finally, the article explores the challenges associated with the real-time implementation of signal processing and classification algorithms.
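
As a rough illustration of the measurements/feature-extraction/classification framework discussed above, the sketch below derives toy spectral features from a 1-D signal and classifies with a nearest-centroid rule. The signal model, the two features, and the classifier are illustrative assumptions for this listing, not the algorithms surveyed in the article.

```python
import numpy as np

def extract_features(signal, fs=100.0):
    """Toy features for a 1-D radar return (a crude stand-in for
    micro-Doppler features): dominant frequency (Hz) and the fraction
    of spectral power below 1 Hz."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    dominant = freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin
    low_fraction = spec[freqs < 1.0].sum() / spec.sum()
    return np.array([dominant, low_fraction])

def fit_centroids(X, y):
    """Supervised training step: one mean feature vector per class."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def classify(centroids, features):
    """Predict the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda c: np.linalg.norm(features - centroids[c]))
```

A fast limb motion (here mocked as a 2 Hz tone) and a near-static posture (0.3 Hz) then separate cleanly in this toy feature space.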

    Multispectral Video Fusion for Non-contact Monitoring of Respiratory Rate and Apnea

    Continuous monitoring of respiratory activity is desirable in many clinical applications to detect respiratory events. Non-contact monitoring of respiration can be achieved with near- and far-infrared spectrum cameras. However, current technologies are not sufficiently robust to be used in clinical applications; for example, they fail to estimate an accurate respiratory rate (RR) during apnea. We present a novel algorithm based on multispectral data fusion that aims at estimating RR also during apnea. The algorithm addresses the RR estimation and apnea detection tasks independently: respiratory information is extracted from multiple sources and fed into an RR estimator and an apnea detector, whose results are fused into a final respiratory activity estimation. We evaluated the system retrospectively using data from 30 healthy adults who performed diverse controlled breathing tasks while lying supine in a dark room and reproduced central and obstructive apneic events. Combining respiratory information from multiple multispectral cameras improved the root mean square error (RMSE) of the RR estimation from up to 4.64 breaths/min on monospectral data down to 1.60 breaths/min. The median F1 scores for classifying obstructive (0.75 to 0.86) and central apnea (0.75 to 0.93) also improved. Furthermore, the independent handling of apnea detection led to a more robust system (RMSE of 4.44 vs. 7.96 breaths/min). Our findings may represent a step towards the use of cameras for vital sign monitoring in medical applications.
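
The fusion step and the RMSE metric reported above can be sketched minimally. The quality-weighted averaging below is an assumed stand-in for the paper's fusion rule, which the abstract does not specify.

```python
import math

def fuse_rr(estimates, qualities):
    """Quality-weighted fusion of per-source respiratory-rate estimates
    (breaths/min); sources with zero quality are ignored."""
    total_q = sum(qualities)
    if total_q == 0:
        raise ValueError("no usable respiratory source")
    return sum(e * q for e, q in zip(estimates, qualities)) / total_q

def rmse(predicted, reference):
    """Root mean square error between estimated and reference RR series,
    as used to report accuracy in breaths/min."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference))
                     / len(reference))
```

With equal qualities the fusion reduces to a plain mean; a low-quality source (e.g., one that lost the subject) is simply down-weighted rather than discarded outright.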

    Video Respiration Monitoring: Towards Remote Apnea Detection in the Clinic


    State of the art of audio- and video-based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL refers to the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply persons in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. 
Indeed, cameras and microphones are far less obtrusive than the wearable sensors that may hinder one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can derive from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. 
It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time, and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.

    An inclusive survey of contactless wireless sensing: a technology used for remotely monitoring vital signs has the potential to combat COVID-19

    With the Coronavirus pandemic showing no signs of abating, companies and governments around the world are spending millions of dollars to develop contactless sensor technologies that minimize the need for physical interactions between patients and healthcare providers. As a result, healthcare research is rapidly progressing towards innovative contactless technologies, especially for infants and elderly people who suffer from chronic diseases requiring continuous, real-time monitoring. The fusion of sensing technology and wireless communication has emerged as a strong research direction because wearing sensor devices is undesirable for patients, causing anxiety and discomfort. Furthermore, physical contact exacerbates the spread of contagious diseases, which may lead to catastrophic consequences. For this reason, research has moved towards sensor-less, contactless technology: wireless signals are transmitted, and the reflected signals are analyzed and processed using techniques such as frequency modulated continuous wave (FMCW) radar or channel state information (CSI). It thus becomes possible to monitor and measure a subject's vital signs remotely, without physical contact and without asking them to wear sensor devices. In this paper, we overview and explore state-of-the-art research in the field of contactless sensor technology in medicine, where we explain, summarize, and classify a plethora of contactless sensor technologies and techniques with the highest impact on contactless healthcare. Moreover, we overview the enabling hardware technologies and discuss the main challenges faced by these systems. This work is funded by the Scientific and Technological Research Council of Turkey (TÜBITAK) under grant 119E39.
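
As a worked example of the FMCW principle mentioned above: for a linear frequency sweep, the range of a reflecting subject follows directly from the measured beat frequency via R = c * f_b * T / (2B). The sweep parameters used below are hypothetical, not taken from any surveyed system.

```python
C = 299_792_458.0  # speed of light in m/s

def fmcw_range(beat_freq_hz, bandwidth_hz, sweep_time_s):
    """Target range for a linear FMCW sweep: R = c * f_b * T / (2 * B),
    where f_b is the beat frequency between the transmitted and the
    received (delayed) chirp, B the sweep bandwidth, and T the sweep time."""
    return C * beat_freq_hz * sweep_time_s / (2.0 * bandwidth_hz)
```

For instance, with a hypothetical 4 GHz sweep over 40 microseconds, a 100 kHz beat frequency corresponds to a subject roughly 15 cm from the radar, which gives a feel for the range resolution these chips work at.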

    Doppler Radar Techniques for Distinct Respiratory Pattern Recognition and Subject Identification.

    Ph.D. thesis, University of Hawaiʻi at Mānoa, 2017.

    Contact and remote breathing rate monitoring techniques: a review

    Breathing rate monitoring is a must for hospitalized patients with the current coronavirus disease 2019 (COVID-19). In this paper we review recent implementations of breathing monitoring techniques, covering both contact and remote approaches. With non-contact monitoring, the patient is not tied to an instrument, which improves patient comfort and enhances the accuracy of the extracted breathing activity, since the distress generated by a contact device is avoided. Remote breathing monitoring also allows screening of people infected with COVID-19 by detecting abnormal respiratory patterns. However, non-contact methods have some disadvantages, such as higher set-up complexity compared to contact ones. On the other hand, many reported contact methods are implemented mainly with discrete components, while numerous integrated solutions have been reported for non-contact techniques, such as continuous wave (CW) Doppler radar and ultrawideband (UWB) pulsed radar. These radar chips are discussed, and their measured performances are summarized and compared.
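
A minimal sketch of how a CW Doppler radar of the kind reviewed above recovers breathing rate, assuming quadrature (I/Q) receiver outputs and arctangent demodulation. The carrier wavelength, the respiratory band limits, and the signal model below are illustrative assumptions, not any specific chip's design.

```python
import numpy as np

def breathing_rate_from_phase(i_sig, q_sig, fs):
    """Estimate breathing rate (breaths/min) from the I/Q channels of a
    CW Doppler radar: arctangent demodulation recovers the phase, which
    is proportional to chest displacement (phi = 4*pi*x/lambda), then an
    FFT peak search in an assumed respiratory band (0.1-0.7 Hz) picks
    the dominant breathing frequency."""
    phase = np.unwrap(np.arctan2(q_sig, i_sig))
    phase -= phase.mean()
    spec = np.abs(np.fft.rfft(phase))
    freqs = np.fft.rfftfreq(len(phase), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.7)
    return 60.0 * freqs[band][np.argmax(spec[band])]
```

On a simulated 5 mm chest motion at 15 breaths/min under a ~24 GHz carrier, the estimator recovers the rate to within the FFT bin resolution (1 breath/min for a 60 s window).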

    Non-contact video-based assessment of the respiratory function using a RGB-D camera

    A fully automatic, non-contact method for the assessment of the respiratory function is proposed using an RGB-D camera-based technology. The proposed algorithm relies on the depth channel of the camera to estimate the movements of the body's trunk during breathing. It runs in constant time, O(1), as the acquisition relies only on the mean depth value of the target regions, using the color channels solely to locate them automatically. This simplicity allows the extraction of real-time values of the respiration, as well as synchronous assessment on multiple body parts. Two experiments were performed: the first on 10 users with a single region and a fixed breathing frequency, and the second on 20 users with simultaneous acquisition in two regions. The breathing rate was then computed and compared with a reference measurement. The results show a non-statistically significant bias of 0.11 breaths/min and 96% limits of agreement of -2.21/2.34 breaths/min for the breath-by-breath assessment. The overall real-time assessment shows an RMSE of 0.21 breaths/min. We have shown that this method is suitable for applications where respiration needs to be monitored in non-ambulatory and static environments. This research was funded by Ministerio de Ciencia e Innovación with grant number PID2020-116011.
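
The mean-depth acquisition described above can be sketched as follows. The rectangular ROI format, the FFT-based rate estimator, and the respiratory band are assumptions for illustration, not the paper's exact algorithm (which works breath-by-breath).

```python
import numpy as np

def roi_mean_depth(depth_frame, roi):
    """Mean depth inside a rectangular ROI (x, y, w, h) of one depth frame;
    a single scalar per frame is all the method needs, hence O(1) tracking
    once the ROI has been located via the color channels."""
    x, y, w, h = roi
    return float(depth_frame[y:y + h, x:x + w].mean())

def breath_rate_from_depth(mean_depths, fps):
    """Breaths/min from the per-frame mean-depth series via an FFT peak
    search in an assumed respiratory band (0.1-0.7 Hz)."""
    sig = np.asarray(mean_depths, dtype=float)
    sig = sig - sig.mean()                      # remove the static depth offset
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    band = (freqs >= 0.1) & (freqs <= 0.7)
    return 60.0 * freqs[band][np.argmax(spec[band])]
```

Running two ROIs (e.g., chest and abdomen) through the same pipeline gives the synchronous multi-region assessment mentioned in the abstract at no extra asymptotic cost.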