
    Extraction of Heart Rate from Multimodal Video Streams of Neonates using Methods of Machine Learning

    The World Health Organization estimates that more than one-tenth of births are premature. Premature births carry an increased mortality risk compared with full-term births; in fact, preterm birth complications are the leading cause of perinatal mortality. These complications range from respiratory distress to cardiovascular disorders. Changes in vital signs often precede these major complications, so continuous monitoring of these signals is crucial, and heart rate monitoring is particularly important. Nowadays, the standard method of monitoring this vital sign requires adhesive electrodes or sensors attached to the infant. These contact-based methods can damage the infant's skin, possibly leading to infections. Within this context, there is a need to move towards remote heart rate monitoring methods. This thesis introduces a new region-of-interest selection method to improve remote heart rate monitoring in neonatology through Photoplethysmography Imaging. The heart rate assessment is based on the standard photoplethysmography principle, which exploits the subtle fluctuations of visible or infrared light reflected from the skin surface over the cardiac cycle; a camera is used instead of contact-based sensors. Specifically, this thesis presents an alternative to manual region-of-interest selection using Machine Learning, aiming to improve the robustness of Photoplethysmography Imaging. The method comprises a highly efficient Fully Convolutional Neural Network that selects six different body regions within each video frame. The network was built upon a ResNet backbone and a custom upsampling network. Additionally, a new post-processing method was developed to refine the body segmentation results, using a sequence of morphological operations and centre-of-mass analysis. The region-of-interest selection method was validated on clinical data, demonstrating good agreement (78%) between the estimated heart rate and the reference heart rate.
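    The core photoplethysmography-imaging step described above, recovering heart rate from subtle intensity fluctuations within a skin region of interest, can be illustrated with a minimal sketch. The example below assumes a precomputed per-frame mean green-channel signal from a selected ROI (the thesis's ROI selection network is not reproduced here); it band-pass filters the signal around plausible neonatal heart rates and takes the dominant spectral peak. The signal names and the neonatal band limits are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(roi_means, fs):
    """Estimate heart rate (bpm) from per-frame mean ROI intensities.

    roi_means : 1-D array, mean green-channel value of the skin ROI per frame
    fs        : video frame rate in Hz
    """
    # Remove the slowly varying illumination baseline.
    x = roi_means - np.mean(roi_means)

    # Band-pass around plausible neonatal heart rates (~100-200 bpm,
    # i.e. 1.7-3.3 Hz); these limits are an assumption for illustration.
    low, high = 1.7 / (fs / 2), 3.3 / (fs / 2)
    b, a = butter(3, [low, high], btype="band")
    x = filtfilt(b, a, x)

    # Dominant spectral peak -> pulse frequency -> beats per minute.
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 1.7) & (freqs <= 3.3)
    pulse_hz = freqs[band][np.argmax(power[band])]
    return 60.0 * pulse_hz

# Example with a synthetic 2.5 Hz (150 bpm) pulse buried in noise:
fs = 30.0
t = np.arange(0, 20, 1 / fs)
signal = 0.2 * np.sin(2 * np.pi * 2.5 * t) + np.random.normal(0, 0.5, t.size)
print(round(estimate_heart_rate(signal, fs)))  # ~150
```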

    State of the art of audio- and video based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing increasingly pressing challenges in health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need for action. Active and Assisted Living (AAL) technologies are a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL refers to the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply people in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, owing to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed there, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements, and overall condition of assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals, due to the richness of the information they convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, and to meet high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethics-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, hindrances and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to coping with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential of the silver economy is overviewed.

    Continuous sensing and quantification of body motion in infants: A systematic review

    Abnormal body motion in infants may be associated with neurodevelopmental delay or critical illness. In contrast to continuous monitoring of the basic vital signs, infant body motion is assessed only through discrete, periodic clinical observations by caregivers, leaving infants unobserved for long stretches in between. One step towards filling this gap is to introduce and compare sensing technologies suitable for continuous quantification of infant body motion. We therefore conducted this systematic review of infant body motion quantification following the PRISMA method (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). In this review, we introduce and compare several sensing technologies for motion quantification across different clinical applications, discuss the pros and cons of each, and highlight the clinical value and prospects of infant motion monitoring. Finally, we provide suggestions for specific needs in clinical practice, which clinical users can draw on for their own implementations. Our findings suggest that motion quantification can improve the performance of vital sign monitoring and can add clinical value to the diagnosis of complications in infants.
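    As a concrete illustration of the video-based motion quantification this review surveys, a common baseline is a frame-differencing motion index: the fraction of pixels that change appreciably between consecutive frames. The sketch below is a minimal, hypothetical example of that idea using OpenCV; the threshold value is an assumption for illustration, not a parameter from any reviewed study.

```python
import cv2
import numpy as np

def motion_index(video_path, diff_threshold=15):
    """Per-frame motion index: fraction of pixels whose grey level
    changes by more than `diff_threshold` between consecutive frames.
    The threshold is an illustrative assumption."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError(f"cannot read {video_path}")
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    indices = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)
        indices.append(np.mean(diff > diff_threshold))
        prev = gray
    cap.release()
    return np.array(indices)  # one value per frame transition, in [0, 1]
```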

    Continuous Camera-Based Premature-Infant Monitoring Algorithms for NICU

    Non-contact visual monitoring of vital signs in neonatology has been demonstrated by several recent studies in ideal scenarios where the baby is calm and there is no medical or parental intervention. Like contact monitoring methods (e.g., ECG, pulse oximetry), camera-based solutions suffer from motion artifacts; during care and the infants' active periods, the calculated values therefore typically differ substantially from the real ones. Our main contribution to existing remote camera-based techniques is to detect and classify such situations with a high level of confidence. Our algorithms do not merely evaluate quiet periods; they provide continuous monitoring. Altogether, the proposed algorithms can measure pulse rate and breathing rate, and recognize situations such as medical intervention or very active subjects, using only a single camera, while the system does not exceed the computational capabilities of average CPU-GPU-based hardware. The performance of the algorithms was evaluated on our database collected at the 1st Dept. of Neonatology of Pediatrics, Dept. of Obstetrics and Gynecology, Semmelweis University, Budapest, Hungary.
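    The paper's key idea is to flag periods (care events, very active infants) when camera-based estimates are unreliable, rather than reporting spurious values. A minimal sketch of that gating idea is shown below: suppress the rate estimate whenever a motion index exceeds a threshold. The function name, threshold, and windowing are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def gate_estimates(rates, motion, motion_limit=0.02):
    """Mask vital-sign estimates during high-motion periods.

    rates        : per-window pulse/breathing-rate estimates
    motion       : per-window motion index aligned with `rates`
    motion_limit : illustrative threshold above which estimates
                   are treated as motion-corrupted
    Returns the rates with unreliable windows set to NaN, plus a
    boolean validity mask that a downstream display could use.
    """
    rates = np.asarray(rates, dtype=float)
    valid = np.asarray(motion) < motion_limit
    gated = np.where(valid, rates, np.nan)
    return gated, valid

# Example: the third window is motion-corrupted and gets masked.
gated, valid = gate_estimates([148, 152, 240, 150], [0.01, 0.005, 0.3, 0.008])
print(gated)  # [148. 152.  nan 150.]
```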

    AI-enabled remote monitoring of vital signs for COVID-19: methods, prospects and challenges

    The COVID-19 pandemic has overwhelmed the existing healthcare infrastructure in many parts of the world. Healthcare professionals are not only over-burdened but also at high risk of nosocomial transmission from COVID-19 patients. Screening and monitoring the health of a large number of susceptible or infected individuals is a challenging task. Although professional medical attention and hospitalization are necessary for high-risk COVID-19 patients, home isolation is an effective strategy for low- and medium-risk patients, as well as for those who are at risk of infection and have been quarantined. However, this necessitates effective techniques for remotely monitoring patients' symptoms. Recent advances in Machine Learning (ML) and Deep Learning (DL) have strengthened the power of imaging techniques and can be used to remotely perform several tasks that previously required the physical presence of a medical professional. In this work, we study the prospects of vital signs monitoring for COVID-19-infected as well as quarantined individuals using DL and image/signal-processing techniques, many of which can be deployed with the simple cameras and sensors available on a smartphone or personal computer, without the need for specialized equipment. We demonstrate the potential of ML-enabled workflows for several vital signs, such as heart and respiratory rates, cough, blood pressure, and oxygen saturation. We also discuss the challenges involved in implementing ML-enabled techniques.
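    To make one of the surveyed workflows concrete, the sketch below estimates respiratory rate from a one-dimensional chest-motion signal (e.g., the mean vertical optical-flow component over the torso) by counting breathing peaks. The signal source and parameter choices are illustrative assumptions; the paper surveys many such pipelines rather than prescribing this one.

```python
import numpy as np
from scipy.signal import find_peaks

def respiratory_rate(chest_signal, fs):
    """Breaths per minute from a 1-D chest-motion signal sampled at fs Hz.

    Peaks are required to be at least 1.5 s apart, i.e. at most
    40 breaths/min -- an illustrative physiological assumption.
    """
    x = chest_signal - np.mean(chest_signal)
    peaks, _ = find_peaks(x, distance=int(1.5 * fs), prominence=np.std(x) * 0.5)
    duration_min = len(chest_signal) / fs / 60.0
    return len(peaks) / duration_min

# Synthetic 0.25 Hz (15 breaths/min) signal with additive noise:
fs = 30.0
t = np.arange(0, 60, 1 / fs)
sig = np.sin(2 * np.pi * 0.25 * t) + np.random.normal(0, 0.2, t.size)
print(round(respiratory_rate(sig, fs)))  # ~15
```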

    Video-based infant discomfort detection


    Non-contact measures to monitor hand movement of people with rheumatoid arthritis using a monocular RGB camera

    Hand movements play an essential role in a person's ability to interact with the environment. In hand biomechanics, the range of joint motion is a crucial metric for quantifying changes due to degenerative pathologies such as rheumatoid arthritis (RA). RA is a chronic condition in which the immune system mistakenly attacks the joints, particularly those in the hands. Optoelectronic motion capture systems are the gold-standard tools to quantify such changes but are challenging to adopt outside laboratory settings. Deep learning applied to standard video data can capture RA participants in their natural environments, potentially supporting objectivity in remote consultation. The three main research aims of this thesis were 1) to assess the extent to which current deep learning architectures, validated for quantifying the motion of other body segments, can be applied to hand kinematics using monocular RGB cameras; 2) to localise where in videos the hand motions of interest are to be found; and 3) to assess the validity of 1) and 2) for determining disease status in RA. First, hand kinematics for twelve healthy participants, captured with OpenPose, were benchmarked against those captured using an optoelectronic system, showing acceptable instrument errors below 10°. Then, a gesture classifier was tested to segment video recordings of twenty-two healthy participants, achieving an accuracy of 93.5%. Finally, OpenPose and the classifier were applied to videos of RA participants performing hand exercises to determine disease status. The inferred disease activity agreed with the in-person ground truth in nine out of ten instances, outperforming virtual consultations, which agreed in only six out of ten. These results suggest the approach is more effective than disease-activity estimates made by human experts during video consultations, and they lay the foundation for a tool that RA participants can use to observe their disease activity from home.
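    The range-of-motion metric at the heart of this thesis reduces to computing joint angles from the 2-D hand keypoints that a pose estimator such as OpenPose returns. The sketch below computes the angle at a joint from three keypoints; the keypoint layout is a generic assumption, and the thesis's full pipeline (OpenPose inference, gesture segmentation) is not reproduced here.

```python
import numpy as np

def joint_angle(p_prev, p_joint, p_next):
    """Angle in degrees at `p_joint` formed by three 2-D keypoints,
    e.g. MCP-PIP-DIP landmarks of one finger from a pose estimator."""
    v1 = np.asarray(p_prev, float) - np.asarray(p_joint, float)
    v2 = np.asarray(p_next, float) - np.asarray(p_joint, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# A fully extended finger gives ~180 degrees; flexion lowers the angle.
print(joint_angle((0, 0), (1, 0), (2, 0)))  # 180.0
print(joint_angle((0, 0), (1, 0), (1, 1)))  # 90.0
```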