
    Fall prevention intervention technologies: A conceptual framework and survey of the state of the art

    In recent years, an ever-increasing range of technology-based applications has been developed with the goal of assisting in the delivery of more effective and efficient fall prevention interventions. Whilst a number of studies have surveyed technologies for a particular sub-domain of fall prevention, no existing research surveys the full spectrum of fall prevention interventions and characterises the range of technologies that have augmented this landscape. This study presents a conceptual framework and survey of the state of the art of technology-based fall prevention systems, derived from a systematic template analysis of studies presented in contemporary research literature. The framework proposes four broad categories of fall prevention intervention system: pre-fall prevention; post-fall prevention; fall injury prevention; and cross-fall prevention. Other categories include application type, technology deployment platform, information sources, deployment environment, user interface type, and collaborative function. After presenting the conceptual framework, a detailed survey of the state of the art is presented as a function of the proposed framework. A number of research challenges emerge from the survey of the research literature, including a need for: new systems that focus on overcoming extrinsic falls risk factors; systems that support the environmental risk assessment process; and systems that enable patients and practitioners to develop more collaborative relationships and engage in shared decision making during falls risk assessment and prevention activities. In response to these challenges, recommendations and future research directions are proposed to overcome each respective challenge. (The Royal Society, grant Ref: RG13082)

    Automatic Assessment of Environmental Hazards for Fall Prevention Using Smart-Cameras

    As technology advances in the field of Computer Vision, new applications will emerge. One device that has emerged is the smart-camera: a camera attached to an embedded system that can perform routines a regular camera cannot, such as object or event detection. In this thesis we describe a smart-camera system we designed, implemented, and evaluated for fall-prevention monitoring of at-risk people while in bed, whether a hospital patient, a nursing home resident, or an elderly person living at home. The camera gives a nurse or caregiver environmental awareness of the at-risk person and notifies them when that person performs an action that could lead to a hazardous event. The camera uses Haar cascade facial detection, Histogram of Oriented Gradients (HOG) person detection, and Mixture of Gaussians (MOG) background subtraction while operating. Regions of interest are drawn by the user in a graphical user interface (GUI); the camera looks within these regions to find a face, a standing person, or simply a change in the image. When the camera makes one of these three detections in a drawn region, a notification is sent via the cloud to the smartphone of the nurse or caregiver responsible for that at-risk person. Given a properly placed camera and properly drawn regions, notifications can be sent when the at-risk person is doing something that demands the attention of the nurse or caregiver, such as getting out of bed. The smart-camera does have drawbacks: it is likely to give alerts when visitors are in the room, and it does not know to pause notifications when a nurse, doctor, or caregiver enters the room.
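
    A minimal sketch of how the three detectors named above (Haar cascade face detection, HOG person detection, and MOG background subtraction) can be combined over a user-drawn region using OpenCV. The region coordinates, thresholds, camera index, and the notification placeholder are illustrative assumptions, not the thesis implementation.

```python
# A sketch combining the three detectors the thesis names; not the thesis code.
# REGION, the thresholds, the camera index, and the notify placeholder are assumed.
import cv2

REGION = (100, 50, 300, 250)   # x, y, w, h of a user-drawn region (assumed)

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
mog = cv2.createBackgroundSubtractorMOG2()

def check_region(frame):
    """Return a label if any of the three detections fires inside REGION."""
    x, y, w, h = REGION
    roi = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)

    if len(face_cascade.detectMultiScale(gray, 1.1, 5)) > 0:
        return "face"                      # Haar cascade face detection
    people, _ = hog.detectMultiScale(roi, winStride=(8, 8))
    if len(people) > 0:
        return "standing person"           # HOG person detection
    if cv2.countNonZero(mog.apply(roi)) > 0.2 * w * h:
        return "change in image"           # MOG background subtraction
    return None

cap = cv2.VideoCapture(0)                  # assumed camera index
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    event = check_region(frame)
    if event is not None:
        # Placeholder for the cloud push notification to the caregiver's phone.
        print(f"notify caregiver: {event} detected in region")
```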

    Personalized functional health and fall risk prediction using electronic health records and in-home sensor data

    Research has shown the importance of Electronic Health Records (EHR) and in-home sensor data for continuous health tracking and health risk prediction. With increased computational capabilities and advances in machine learning techniques, we have new opportunities to use multi-modal health big data to develop accurate health tracking models. This dissertation describes the development, evaluation, and testing of systems for predicting functional health and fall risk in community-dwelling older adults using health data and machine learning techniques. In an initial study, we focused on organizing and de-identifying EHR data for analysis in accordance with HIPAA regulations. The dataset contained nine years of structured and unstructured EHR data obtained from TigerPlace, a senior living facility in Columbia, MO. The de-identification of these data was done using custom automated algorithms, and the de-identified EHR data were used in several studies described in this dissertation. We then developed personalized functional health tracking models using geriatric assessments in the EHR data. Studies show that higher levels of functional health in older adults lead to a higher quality of life and improve the ability to age in place. Although geriatric assessments capture several aspects of functional health, there is limited research on longitudinally tracking the personalized functional health of older adults using a combination of these assessments. In this study, data from 150 older adult residents were used to develop a composite functional health prediction model using Activities of Daily Living (ADL), Instrumental Activities of Daily Living (IADL), the Mini-Mental State Examination (MMSE), the Geriatric Depression Scale (GDS), and the Short Form 12 (SF12). Tracking functional health objectively could help clinicians make intervention decisions in case of functional health deterioration. We next constructed models for fall risk prediction in older adults using geriatric assessments, demographic data, and GAITRite assessment data. A 6-month fall risk prediction model was developed with data from 93 older adult residents. Explainable AI techniques were used to explain the model predictions, such as which specific features increased the risk of falling in a particular prediction; such explanations provide valuable insights for targeted interventions. In another study, we developed deep neural network models to predict fall risk from de-identified nursing notes from 162 older adult residents at TigerPlace. Clinical nursing notes have been shown to contain valuable information related to fall risk factors, and this analysis provides the groundwork for future experiments to predict fall risk in older adults using clinical notes. In addition to using EHR data to predict functional health and fall risk in older adults, two studies were conducted to predict falls and functional health from in-home sensor data. Models for in-home fall detection using depth sensor imagery have been used successfully at TigerPlace; however, the model is prone to false fall alarms in several scenarios, such as pillows thrown on the floor and pets jumping from couches. A secondary fall analysis was performed by analyzing fall alert videos to further identify and remove false alarms. In the final study, we used in-home sensor data streaming from depth sensors and bed sensors to predict functional health and absolute geriatric assessment values. These prediction models can be used to estimate the functional health of residents in the absence of the sparse and infrequent geriatric assessments, providing continuous tracking of functional health in older adults from the streaming in-home sensor data.
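
    The 6-month fall-risk model with per-prediction explanations described above could look roughly like the following sketch. The feature set, the synthetic data, and the use of a linear model (so that local contributions can be read directly from the coefficients) are assumptions standing in for the dissertation's actual pipeline and explainable-AI tooling.

```python
# A sketch of a 6-month fall-risk classifier with a per-prediction explanation.
# The feature names, the synthetic data, and the linear model (whose local
# contributions can be read as coefficient * standardised value) are assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["ADL", "IADL", "MMSE", "GDS", "SF12", "gait_speed"]
X = pd.DataFrame(rng.normal(size=(93, len(features))), columns=features)
y = (X["GDS"] - X["gait_speed"] > 0).astype(int)   # synthetic "fell within 6 months" label

scaler = StandardScaler().fit(X)
clf = LogisticRegression().fit(scaler.transform(X), y)

# Local explanation for one resident: contribution of each standardised feature
# to the log-odds of falling.
x_std = scaler.transform(X.iloc[[0]])[0]
contributions = dict(zip(features, np.round(clf.coef_[0] * x_std, 3)))
print("predicted fall probability:",
      round(clf.predict_proba(scaler.transform(X.iloc[[0]]))[0, 1], 3))
print("feature contributions to log-odds:", contributions)
```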

    Early diagnosis of frailty: Technological and non-intrusive devices for clinical detection

    This work analyses different concepts for frailty diagnosis based on affordable standard technology such as smartphones or wearable devices. The goal is to provide ideas that go beyond classical diagnostic tools such as magnetic resonance imaging or tomography, thus changing the paradigm: enabling the detection of frailty without expensive facilities, in an ecological way for both patients and medical staff, and even with continuous monitoring. Fried's five-point phenotype model of frailty, along with a model based on trials and several classical physical tests, was used for device classification. This work provides a starting point for future researchers who will have to bridge the gap separating elderly people from technology and medical tests in order to provide feasible, accurate and affordable tools for frailty monitoring for a wide range of users. This work was sponsored by the Spanish Ministry of Science, Innovation and Universities and the European Regional Development Fund (ERDF) across projects RTC-2017-6321-1 AEI/FEDER, UE, TEC2016-76021-C2-2-R AEI/FEDER, UE and PID2019-107270RB-C21/AEI/10.13039/501100011033, UE.
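
    For reference, Fried's five-point phenotype mentioned above scores five criteria (unintentional weight loss, exhaustion, weak grip, slow walking speed, low physical activity) and maps the total to robust, pre-frail or frail. A small sketch follows; the per-criterion thresholds are device- and population-specific, so they are left out here and the booleans are taken as given.

```python
# Fried's five-point frailty phenotype: five criteria are each scored
# present/absent and the total maps to robust (0), pre-frail (1-2) or frail (3+).
from dataclasses import dataclass

@dataclass
class FriedCriteria:
    unintentional_weight_loss: bool
    self_reported_exhaustion: bool
    weak_grip_strength: bool
    slow_walking_speed: bool
    low_physical_activity: bool

    def score(self) -> int:
        return sum([self.unintentional_weight_loss, self.self_reported_exhaustion,
                    self.weak_grip_strength, self.slow_walking_speed,
                    self.low_physical_activity])

    def category(self) -> str:
        s = self.score()
        if s >= 3:
            return "frail"
        return "pre-frail" if s >= 1 else "robust"

print(FriedCriteria(False, True, True, False, False).category())   # -> pre-frail
```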

    A Framework for Students Profile Detection

    Some of the biggest problems facing Higher Education Institutions are student drop-out and academic disengagement. Physical or psychological disabilities, social-economic or academic marginalization, and emotional and affective problems are some of the factors that can lead to them. The problem is worsened by the shortage of educational resources that could bridge the communication gap between the faculty staff and the affective needs of these students. This dissertation focuses on the development of a framework capable of collecting analytic data from an array of emotions, affects and behaviours, acquired either through human observation, such as by a teacher in a classroom or a psychologist, or through electronic sensors and automatic analysis software, such as eye-tracking devices, emotion detection through facial expression recognition software, automatic gait and posture detection, and others. The framework establishes guidance for compiling the gathered data in an ontology, to enable the extraction of patterns and outliers via machine learning, which assists in profiling students in critical situations such as disengagement, attention deficit, drop-out, and other sociological issues. Consequently, it is possible to set real-time alerts when these profile conditions are detected, so that appropriate experts can verify the situation and employ effective procedures. The goal is that, by providing insightful real-time cognitive data and facilitating the profiling of students' problems, a faster personalized response to help the student is enabled, allowing improvements in academic performance.
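
    A hedged sketch of the pattern/outlier-extraction step the framework describes: flag students whose combined affective and behavioural features look anomalous so that an expert can verify the alert. The feature names, the synthetic data, and the choice of IsolationForest are illustrative assumptions, not the framework's design.

```python
# A sketch of the outlier-extraction step: flag students whose combined
# affective/behavioural features look anomalous so an expert can review them.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
features = ["attention", "arousal", "valence", "absences", "posture_shifts"]
students = pd.DataFrame(rng.normal(size=(200, len(features))), columns=features)

detector = IsolationForest(contamination=0.05, random_state=1).fit(students)
students["alert"] = detector.predict(students) == -1   # -1 marks an outlier

print(students[students["alert"]].index.tolist())      # students flagged for expert review
```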

    A mobile cloud computing framework integrating multilevel encoding for performance monitoring in telerehabilitation

    Recent years have witnessed a surge in telerehabilitation and remote healthcare systems, enabled by emerging low-cost wearable devices that monitor biological and biokinematic aspects of human beings. Although such telerehabilitation systems utilise cloud computing features and provide automatic biofeedback and performance evaluation, there is a demand for overall optimisation that allows these systems to operate with low battery consumption, low computational power, and even weak or no network connections. This paper proposes a novel multilevel data encoding scheme satisfying these requirements in mobile cloud computing applications, particularly in the field of telerehabilitation. We introduce an architecture for a telerehabilitation platform utilising the proposed encoding scheme integrated with various types of sensors. The platform is usable not only by patients to experience telerehabilitation services but also by therapists to acquire essential support from an analysis-oriented decision support system (AODSS) for more thorough analysis and for making further decisions on treatment.
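
    The abstract does not detail the multilevel encoding itself, but the general trade-off it targets (a little extra computation on the device for a much smaller payload and battery footprint) can be illustrated with a simple delta encoding of integer sensor samples. This is an assumption for illustration only, not the paper's scheme.

```python
# Not the paper's multilevel scheme; just an illustration of spending a little
# computation on the device to shrink the payload sent to the cloud.
# Delta-encoding integer sensor samples keeps most values small and compressible.
from typing import List

def delta_encode(samples: List[int]) -> List[int]:
    """Keep the first sample, then store only the differences between samples."""
    if not samples:
        return []
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(encoded: List[int]) -> List[int]:
    out: List[int] = []
    acc = 0
    for i, d in enumerate(encoded):
        acc = d if i == 0 else acc + d
        out.append(acc)
    return out

raw = [1021, 1023, 1024, 1022, 1020, 1019]   # e.g. accelerometer ADC counts
enc = delta_encode(raw)
assert delta_decode(enc) == raw
print(enc)                                   # [1021, 2, 1, -2, -2, -1]
```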

    State of the art of audio- and video-based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications.
    Europe is facing increasingly pressing challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives.
    In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.
    This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to coping with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential of the silver economy is overviewed.

    Sensing and Signal Processing in Smart Healthcare

    In the last decade, we have witnessed the rapid development of electronic technologies that are transforming our daily lives. Such technologies are often integrated with various sensors that facilitate the collection of human motion and physiological data and are equipped with wireless communication modules such as Bluetooth, radio frequency identification, and near-field communication. In smart healthcare applications, designing ergonomic and intuitive human–computer interfaces is crucial, because a system that is not easy to use creates a huge obstacle to adoption and may significantly reduce the efficacy of the solution. Signal and data processing is another important consideration in smart healthcare applications, because it must ensure high accuracy with a high level of confidence in order for the applications to be useful to clinicians in making diagnosis and treatment decisions. This Special Issue is a collection of 10 articles selected from a total of 26 contributions. These contributions span the areas of signal processing and smart healthcare systems and were mostly contributed by authors from Europe, including Italy, Spain, France, Portugal, Romania, Sweden, and the Netherlands; authors from China, Korea, Taiwan, Indonesia, and Ecuador are also included.

    Development of a human fall detection system based on depth maps

    Assistive-care products are increasingly in demand with recent developments in health-sector technologies. Several studies are concerned with improving and eliminating barriers to providing quality health care to all people, especially the elderly who live alone and those who cannot leave their homes for various reasons, such as disability or excess weight. Among such technologies, human fall detection systems play an important role in daily life, because falls are the main obstacle to elderly people living independently and a major health concern due to the aging population. The three basic approaches used to develop human fall detection systems are wearable devices, ambient sensors, and non-invasive vision-based devices using live cameras. Most such systems are based on wearable or ambient sensors, which are very often rejected by users due to high false-alarm rates and the difficulty of carrying them during daily activities. Thus, this study proposes a non-invasive human fall detection system based on the height, velocity, statistical analysis, fall risk factors and position of the subject, using depth information from the Microsoft Kinect sensor. Classification of a human fall versus other activities of daily life is accomplished using the height and velocity of the subject extracted from the depth information, after considering the fall risk level of the user. Acceleration and activity detection are also employed if velocity and height fail to classify the activity. Finally, the position of the subject is identified for fall confirmation, or statistical analysis is conducted to verify the fall event. In the experiments, the proposed system achieved an average accuracy of 98.3% with a sensitivity of 100% and a specificity of 97.7%, and it accurately distinguished all fall events from other activities of daily life.
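
    A minimal sketch of the height-and-velocity logic described above, assuming the subject's highest point above the floor has already been segmented from each Kinect depth frame. The sampling rate, both thresholds, and the example height sequences are illustrative assumptions, not the paper's parameters.

```python
# A sketch of the height/velocity rule: a fall is flagged when the subject's top
# point ends up near the floor after a rapid descent. detect_fall() assumes an
# upstream step has measured per-sample heights from the depth frames.
SAMPLE_RATE_HZ = 10             # assumed rate of the height track
HEIGHT_THRESHOLD_M = 0.45       # subject's top point is near the floor
VELOCITY_THRESHOLD_MPS = -1.2   # rapid downward motion

def detect_fall(heights_m):
    """heights_m: per-sample height of the subject's highest point above the floor."""
    dt = 1.0 / SAMPLE_RATE_HZ
    for prev, curr in zip(heights_m, heights_m[1:]):
        velocity = (curr - prev) / dt             # m/s, negative when descending
        if curr < HEIGHT_THRESHOLD_M and velocity < VELOCITY_THRESHOLD_MPS:
            return True                           # low final height plus rapid descent
    return False

print(detect_fall([1.70, 1.60, 1.35, 1.00, 0.60, 0.30, 0.25]))  # fall -> True
print(detect_fall([1.70, 1.65, 1.55, 1.40, 1.25, 1.10, 0.95,
                   0.80, 0.68, 0.58, 0.50, 0.46, 0.43]))        # slow, controlled descent -> False
```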

    Deriving information from spatial sampling floor-based personnel detection system

    Field of study: Electrical and Computer Engineering. Thesis supervisor: Dr. Harry W. Tyrer. May 2017.
    Research has identified a clear link between human gait characteristics and different medical conditions; a change in certain gait parameters may therefore be predictive of adverse events in older adults, such as physical functional decline and falls. We describe an unobtrusive system that continuously monitors gait during the daily activities of elderly people. Early assessment of gait decline benefits the senior by providing an indication of fall risk. We developed a low-cost floor-based personnel detection system, which we call a smart carpet, consisting of a sensor pad placed under a carpet whose electronics read walking activity. The smart carpet system is used as a component of an automated health monitoring system, which helps enable independent living for elderly people and provides a practical environment that improves quality of life, reduces healthcare costs and promotes independence. In this dissertation, we extended the functionality of the smart carpet to improve its ability to detect falls and estimate gait parameters, and compared it to the GAITRite system. We counted the number of people walking on the carpet in order to distinguish multiple people from a fall event. Additionally, we studied the characteristics and behavior of the sensor's scavenged signal. Results showed that our system detects falls, using computational intelligence techniques, with 96.2% accuracy, 81% sensitivity and 97.8% specificity. The system reliably estimates the gait parameters of walking speed, stride length and stride time, with percentage errors of 1.43%, -4.32%, and -5.73%, respectively. Our system can count the number of people on the carpet with high accuracy; we ran tests with up to four people. We were also able to use computational features of the generated waveform, by extracting Mel Frequency Cepstral Coefficients (MFCC) and applying computational intelligence, to distinguish different people with an average accuracy of 82%, given that the experiments were performed within the same day.
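
    A hedged sketch of the MFCC feature extraction mentioned above, applied to a one-dimensional carpet-sensor waveform instead of audio, with a nearest-neighbour classifier standing in for the computational intelligence techniques used in the dissertation. The sampling rate, the synthetic recordings, and the classifier choice are assumptions.

```python
# MFCC features from a 1-D sensor waveform, used to tell two walkers apart.
# The sampling rate, synthetic recordings, and classifier are illustrative.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

SR = 1000   # assumed sensor sampling rate in Hz

def mfcc_features(signal: np.ndarray) -> np.ndarray:
    """Average MFCC vector over the recording, as a fixed-length feature."""
    mfcc = librosa.feature.mfcc(y=signal.astype(float), sr=SR, n_mfcc=13)
    return mfcc.mean(axis=1)

# Synthetic stand-ins for labelled walking recordings from two people
rng = np.random.default_rng(0)
recordings = [rng.normal(size=SR * 3) for _ in range(10)]   # 3 s each
labels = [0] * 5 + [1] * 5

X = np.stack([mfcc_features(r) for r in recordings])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(clf.predict(X[:1]))   # which person the first recording most resembles
```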