
    Distributed Computing and Monitoring Technologies for Older Patients

    This book summarizes various approaches for the automatic detection of health threats to older patients living alone at home. The text begins by briefly describing those who would most benefit from healthcare supervision. The book then summarizes possible scenarios for monitoring an older patient at home, deriving the common functional requirements for monitoring technology. Next, the work identifies the state of the art of technological monitoring approaches that are practically applicable to geriatric patients. A survey is presented on a range of interdisciplinary fields such as smart homes, telemonitoring, ambient intelligence, ambient assisted living, gerontechnology, and aging-in-place technology. The book discusses relevant experimental studies, highlighting the application of sensor fusion, signal processing and machine learning techniques. Finally, the text discusses future challenges, offering a number of suggestions for further research directions.

    State of the art of audio- and video based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help facing these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairment. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals as well as to assess their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can derive from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate setting where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake in real world settings of AAL technologies. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in the AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.
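
The abstract above notes that video-based AAL applications can assess vital parameters such as heart rate from a camera. As a rough illustration of the principle (not a method taken from the report), the sketch below estimates heart rate from the average green-channel intensity of a skin region in an RGB video; the region of interest, frame layout and frequency band are assumptions made for this example.

```python
# Illustrative sketch of camera-based heart-rate estimation: average the green
# channel over a skin region and find the dominant frequency in a plausible
# heart-rate band. Frame layout and band limits are assumptions.

import numpy as np

def heart_rate_bpm(frames: np.ndarray, fps: float) -> float:
    """frames: (T, H, W, 3) RGB video of a skin region (e.g. the forehead)."""
    green = frames[..., 1].reshape(frames.shape[0], -1).mean(axis=1)
    green -= green.mean()                      # remove the DC component

    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    power = np.abs(np.fft.rfft(green)) ** 2

    band = (freqs >= 0.7) & (freqs <= 4.0)     # roughly 42-240 beats/min
    return 60.0 * freqs[band][np.argmax(power[band])]
```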

    Small business innovation research. Abstracts of 1988 phase 1 awards

    Non-proprietary proposal abstracts of Phase 1 Small Business Innovation Research (SBIR) projects supported by NASA are presented. Projects in the fields of aeronautical propulsion, aerodynamics, acoustics, aircraft systems, materials and structures, teleoperators and robots, computer sciences, information systems, data processing, spacecraft propulsion, bioastronautics, satellite communication, and space processing are covered.

    Development of a prototype sensor-integrated urine bag for real-time measuring.

    Urine output is a rapid bedside test of kidney function, and reduced output is a common biomarker for acute kidney injury (AKI). The consensus definition uses a urine output of <0.5 ml/kg/hour for ≥6 hours to define AKI. If a patient is suspected to have this problem, urine output needs to be monitored hourly, a task that consumes a lot of time and is easily affected by human error. Moreover, available evidence in the literature indicates that more frequent patient monitoring could influence clinical decision making and patient outcomes. However, it is not feasible for nurses to dedicate their time to minute-by-minute manual measurements. To date, no reliable device has been adopted into clinical routine. From the literature, only a few automated devices were found that can automatically monitor urine output; these could reduce nurse workload while enhancing work performance, but they still have limitations in measuring human urine. This thesis presents the development and testing of such a device. The research aimed at building a prototype that could measure small amounts of urine output and transmit the information wirelessly to a Cloud database using inexpensive and relatively simple components. The concept is to provide real-time measurement and generate data records in the Cloud database without requiring any intervention by the nurse. The initial experiment measured small amounts of liquid using a drop-volume calculation technique. An optical sensor was placed in a medical dropper to count drops; the Mean Absolute Percent Error from this test was ±3.96% for measuring 35 ml of liquid compared with the ISO standard. The second prototype was developed with multiple sensors, including a photo-interrupter sensor, an infrared proximity sensor, and an ultrasonic sensor, to detect dripping and urine flow. However, the optical sensor still provided the best accuracy of all. The final prototype combines an optical sensor, which detects drops to calculate urine flow rate and volume, with a weight scale that measures the weight of urine collected in a commercial urine meter. The prototype also provides alerts in two scenarios: when urine production does not meet the target and when the urine container is almost full, the system automatically generates alarms that warn the nurse. A series of experimental tests was conducted under the consultation of medical professionals to verify proper operation and measurement accuracy. The results improved on the previous prototype: the mean error of this version is 1.975%, or approximately ±1.215 ml, when measuring 35 ml of urine at the average urine density (1.020). These tests confirm the potential of the device to assist nurses in monitoring urine output with accurate measurement. As far as can be ascertained, the use of Cloud-based technology for this purpose has not previously been reported in the literature. These results illustrate the capability, suitability and limitations of the chosen technology.
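
    As a rough illustration of the drop-volume idea described in the abstract (not the actual prototype firmware), the sketch below converts optical drop counts into volume and hourly flow rate and raises the two alerts mentioned: low urine output and a nearly full container. The drop factor, container capacity and the 90% fill level are assumptions for this example; only the 0.5 ml/kg/hour threshold comes from the abstract.

```python
# Illustrative sketch of drop-volume urine monitoring with the two alert
# conditions described above. Calibration constants are assumptions.

from dataclasses import dataclass, field

DROPS_PER_ML = 20.0            # assumed calibration of the dropper chamber
AKI_THRESHOLD_ML_KG_H = 0.5    # consensus oliguria threshold from the abstract
CONTAINER_CAPACITY_ML = 500.0  # assumed urine-meter capacity

@dataclass
class UrineMonitor:
    patient_weight_kg: float
    drop_times: list = field(default_factory=list)  # seconds since start

    def record_drop(self, t_seconds: float) -> None:
        """Called by the optical (photo-interrupter) sensor for each drop."""
        self.drop_times.append(t_seconds)

    def volume_ml(self) -> float:
        """Total collected volume estimated from the drop count."""
        return len(self.drop_times) / DROPS_PER_ML

    def hourly_rate_ml(self, now_s: float) -> float:
        """Volume produced over the last hour."""
        recent = [t for t in self.drop_times if now_s - t <= 3600.0]
        return len(recent) / DROPS_PER_ML

    def alerts(self, now_s: float) -> list:
        """The two alert scenarios: low output and a nearly full container."""
        msgs = []
        rate_ml_kg_h = self.hourly_rate_ml(now_s) / self.patient_weight_kg
        if rate_ml_kg_h < AKI_THRESHOLD_ML_KG_H:
            msgs.append("urine output below 0.5 ml/kg/h over the last hour")
        if self.volume_ml() > 0.9 * CONTAINER_CAPACITY_ML:
            msgs.append("urine container almost full")
        return msgs
```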

    Detection and prediction of falls among elderly people using walkers

    Falls among elderly people are a major health burden, especially because of their long-term consequences, and existing research already describes how exactly elderly people fall and the reasons for falls. We aimed to develop a means that could not only detect falls and send alerts to relatives and doctors, addressing one of the biggest fears of the elderly, namely falling without being able to call for help, but also implement a fall prevention system. This prevention system is based on “relatively safe walking patterns” that our system tries to detect during walking. During the work we used SensorTag 2.0 CC2650 sensors, an iPhone and an Apple Watch to collect motion data (gyroscope, accelerometer and magnetometer) and compared the accuracy of each device. We chose the iPhone and Apple Watch so that we could use the Core ML framework to integrate the neural network model generated with Keras into a prototype app. The iPhone app detects falls reliably, but data needs to be collected more accurately to improve the machine learning model and, in turn, fall prediction. The Apple Watch app does not perform acceptably, despite a well-prepared Keras model, and requires revision.
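
    As a minimal sketch of the Keras-to-Core-ML workflow the abstract describes (the thesis does not specify the network architecture, window length or labels, so those are assumptions here), a small accelerometer-window classifier could be trained in Keras and converted with coremltools for use in the iOS/watchOS prototype app.

```python
# Minimal sketch of the Keras -> Core ML path. Window length, architecture
# and class labels are assumptions. Requires tensorflow and coremltools.

import tensorflow as tf
import coremltools as ct

WINDOW = 100  # assumed: 100 samples (~2 s at 50 Hz) of accelerometer data

# Assumed tiny classifier over (x, y, z) accelerometer windows:
# class 0 = normal walking pattern, class 1 = fall.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 3)),
    tf.keras.layers.Conv1D(16, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Training would use labelled motion windows collected from the devices, e.g.:
# model.fit(windows, labels, epochs=20)

# Convert the Keras model to Core ML so it can run in the iOS/watchOS app.
mlmodel = ct.convert(model, convert_to="mlprogram")
mlmodel.save("FallDetector.mlpackage")
```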

    SUSTAINABLE AND MOBILITY TECHNOLOGIES FOR ASSISTIVE HEALTHCARE AND MONITORING

    Ph.D. (Doctor of Philosophy)

    The 10th Jubilee Conference of PhD Students in Computer Science


    Widefield Computational Biophotonic Imaging for Spatiotemporal Cardiovascular Hemodynamic Monitoring

    Cardiovascular disease is the leading cause of mortality, resulting in 17.3 million deaths per year globally. Although cardiovascular disease accounts for approximately 30% of deaths in the United States, many deleterious events can be mitigated or prevented if detected and treated early. Indeed, early intervention and healthier behaviour adoption can reduce the relative risk of first heart attacks by up to 80% compared to those who do not adopt new healthy behaviours. Cardiovascular monitoring is a vital component of disease detection, mitigation, and treatment. The cardiovascular system is an incredibly dynamic system that constantly adapts to internal and external stimuli. Monitoring cardiovascular function and response is vital for disease detection and monitoring. Biophotonic technologies provide unique solutions for cardiovascular assessment and monitoring in naturalistic and clinical settings. These technologies leverage the properties of light as it enters and interacts with the tissue, providing safe and rapid sensing that can be performed in many different environments. Light entering into human tissue undergoes a complex series of absorption and scattering events according to both the illumination and tissue properties. The field of quantitative biomedical optics seeks to quantify physiological processes by analysing the remitted light characteristics relative to the controlled illumination source. Drawing inspiration from contact-based biophotonic sensing technologies such as pulse oximetry and near infrared spectroscopy, we explored the feasibility of widefield hemodynamic assessment using computational biophotonic imaging. Specifically, we investigated the hypothesis that computational biophotonic imaging can assess spatial and temporal properties of pulsatile blood flow across large tissue regions. This thesis presents the design, development, and evaluation of a novel photoplethysmographic imaging system for assessing spatial and temporal hemodynamics in major pulsatile vasculature through the sensing and processing of subtle light intensity fluctuations arising from local changes in blood volume. This system co-integrates methods from biomedical optics, electronic control, and biomedical image and signal processing to enable non-contact widefield hemodynamic assessment over large tissue regions. A biophotonic optical model was developed to quantitatively assess transient blood volume changes in a manner that does not require a priori information about the tissue's absorption and scattering characteristics. A novel automatic blood pulse waveform extraction method was developed to encourage passive monitoring. This spectral-spatial pixel fusion method uses physiological hemodynamic priors to guide a probabilistic framework for learning pixel weights across the scene. Pixels are combined according to their signal weight, resulting in a single waveform. Widefield hemodynamic imaging was assessed in three biomedical applications using the aforementioned developed system. First, spatial vascular distribution was investigated across a sample with highly varying demographics for assessing common pulsatile vascular pathways. Second, non-contact biophotonic assessment of the jugular venous pulse waveform was assessed, demonstrating clinically important information about cardiac contractility function in a manner which is currently assessed through invasive catheterization. 
Lastly, non-contact biophotonic assessment of cardiac arrhythmia was demonstrated, leveraging the system's ability to extract strong hemodynamic signals for assessing subtle fluctuations in the waveform. This research demonstrates that this novel approach for computational biophotonic hemodynamic imaging offers new cardiovascular monitoring and assessment techniques, which can enable new scientific discoveries and clinical detection related to cardiovascular function.
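
As a simplified illustration of the pixel-fusion idea described above (not the author's probabilistic framework), the sketch below weights each pixel's temporal signal by its spectral power in an assumed cardiac band and combines the pixels into a single pulse waveform; the band limits and the power-ratio weighting are assumptions made for this example.

```python
# Simplified stand-in for spectral-spatial pixel fusion: weight each pixel by
# its power in an assumed cardiac band (0.8-3 Hz) and fuse into one waveform.

import numpy as np

def fuse_pixels(frames: np.ndarray, fps: float,
                band=(0.8, 3.0)) -> np.ndarray:
    """frames: (T, H, W) intensity video of the skin region."""
    T, H, W = frames.shape
    signals = frames.reshape(T, H * W).astype(float)
    signals -= signals.mean(axis=0)            # remove DC per pixel

    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    spectra = np.abs(np.fft.rfft(signals, axis=0)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])

    # Weight = fraction of each pixel's power inside the cardiac band (a crude
    # signal-quality prior standing in for the learned pixel weights).
    total = spectra.sum(axis=0) + 1e-12
    weights = spectra[in_band].sum(axis=0) / total
    weights /= weights.sum()

    return signals @ weights                   # single fused pulse waveform
```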