18 research outputs found

    Development of a Real-Time, Simple and High-Accuracy Fall Detection System for Elderly Using 3-DOF Accelerometers

    © 2018, King Fahd University of Petroleum & Minerals. Falls represent a major problem for elderly people aged 60 or above. Many monitoring systems are currently available to detect falls; however, there is still a need for a system of optimal effectiveness. In this paper, we propose a low-cost fall detection system that precisely detects when an elderly person accidentally falls. The fall detection algorithm compares the measured acceleration against lower and upper fall threshold values to accurately detect a fall event. A post-fall recognition module, combining posture recognition and vertical velocity estimation, has been added to the proposed method to enhance performance and accuracy. In case of a fall, the device instantly transmits location information to designated contacts via SMS and voice call. A smartphone application ensures that the notifications are delivered to the elderly person’s relatives so that medical attention can be provided with minimal delay. The system was tested by volunteers and achieved 100% sensitivity and accuracy; testing with public datasets confirmed the same sensitivity and accuracy as on our recorded datasets.
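    The lower/upper acceleration-threshold scheme described above can be illustrated with a short sketch. The threshold values, window length and posture test below are assumptions chosen for illustration, not the authors' exact parameters.

```python
import math

# Illustrative sketch of a lower/upper acceleration-threshold fall test with a
# post-fall posture check. All constants are assumed values for illustration.

LOWER_FALL_THRESHOLD_G = 0.4   # free-fall phase: magnitude drops well below 1 g (assumed)
UPPER_FALL_THRESHOLD_G = 2.5   # impact phase: magnitude spikes above this value (assumed)
LYING_TILT_DEG = 60.0          # post-fall posture: torso tilted far from vertical (assumed)


def magnitude(ax, ay, az):
    """Acceleration magnitude in g from one 3-DOF accelerometer sample."""
    return math.sqrt(ax * ax + ay * ay + az * az)


def is_lying(sample):
    """Post-fall posture check: angle between the sensed gravity vector and the z axis."""
    ax, ay, az = sample
    m = magnitude(ax, ay, az) or 1e-9
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, az / m))))
    return tilt > LYING_TILT_DEG


def detect_fall(samples, window=50):
    """samples: time-ordered list of (ax, ay, az) tuples in g.
    Flags a fall when a free-fall dip is followed by an impact spike
    and the final posture looks like lying."""
    for i, s in enumerate(samples):
        if magnitude(*s) < LOWER_FALL_THRESHOLD_G:           # free-fall dip
            for t in samples[i:i + window]:
                if magnitude(*t) > UPPER_FALL_THRESHOLD_G:   # impact spike
                    return is_lying(samples[-1])
    return False
```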

    New Fast Fall Detection Method Based on Spatio-Temporal Context Tracking of Head by Using Depth Images

    © 2015 by the authors; licensee MDPI, Basel, Switzerland. To deal with the projection problem that arises in fall detection with two-dimensional (2D) grey or color images, this paper proposes a robust fall detection method based on spatio-temporal context tracking over three-dimensional (3D) depth images captured by the Kinect sensor. In the pre-processing procedure, the parameters of the Single-Gauss-Model (SGM) are estimated and the coefficients of the floor plane equation are extracted from the background images. Once a human subject appears in the scene, the silhouette is extracted by the SGM and the coefficients of an ellipse fitted to the foreground are used to determine the head position. The dense spatio-temporal context (STC) algorithm is then applied to track the head position, and the distance from the head to the floor plane is calculated in every following frame of the depth image. When this distance falls below an adaptive threshold, the centroid height of the human is used as a second criterion to decide whether a fall incident has happened. Lastly, four groups of experiments with different falling directions are performed. Experimental results show that the proposed method can detect fall incidents occurring in different orientations with low computational complexity.
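    A minimal sketch of the head-to-floor distance test described above is given below, assuming the floor-plane coefficients come from the background-modelling step; the threshold values are illustrative, not the paper's settings.

```python
import numpy as np

# Head-to-floor and centroid-height test over a plane a*x + b*y + c*z + d = 0.
# Thresholds are assumed values; the paper uses an adaptive head threshold.


def point_to_plane_distance(point, plane):
    """Distance from a 3D point to the plane a*x + b*y + c*z + d = 0."""
    a, b, c, d = plane
    x, y, z = point
    return abs(a * x + b * y + c * z + d) / np.linalg.norm([a, b, c])


def is_fall(head_xyz, centroid_xyz, plane,
            head_threshold=0.4, centroid_threshold=0.3):
    """First criterion: the tracked head drops close to the floor plane.
    Second criterion: the centroid height confirms the body is on the ground."""
    if point_to_plane_distance(head_xyz, plane) < head_threshold:
        return point_to_plane_distance(centroid_xyz, plane) < centroid_threshold
    return False


# Example with a horizontal floor z = 0 and a subject lying on it.
floor = (0.0, 0.0, 1.0, 0.0)
print(is_fall(head_xyz=(0.5, 1.2, 0.2), centroid_xyz=(0.6, 1.0, 0.15), plane=floor))  # True
```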

    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends related to this field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years, in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of falls and their consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and for users’ acceptance and compliance, compared with other sensor technologies such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors discussed in this paper.

    Fall prevention intervention technologies: A conceptual framework and survey of the state of the art

    In recent years, an ever-increasing range of technology-based applications has been developed with the goal of assisting in the delivery of more effective and efficient fall prevention interventions. While a number of studies have surveyed technologies for a particular sub-domain of fall prevention, no existing research surveys the full spectrum of fall prevention interventions and characterises the range of technologies that have augmented this landscape. This study presents a conceptual framework and survey of the state of the art of technology-based fall prevention systems, derived from a systematic template analysis of studies presented in the contemporary research literature. The framework proposes four broad categories of fall prevention intervention system: Pre-fall prevention; Post-fall prevention; Fall injury prevention; and Cross-fall prevention. Other categories include Application type, Technology deployment platform, Information sources, Deployment environment, User interface type, and Collaborative function. After presenting the conceptual framework, a detailed survey of the state of the art is presented as a function of the proposed framework. A number of research challenges emerge from surveying the research literature, including the need for: new systems that focus on overcoming extrinsic falls risk factors; systems that support the environmental risk assessment process; and systems that enable patients and practitioners to develop more collaborative relationships and engage in shared decision making during falls risk assessment and prevention activities. In response to these challenges, recommendations and future research directions are proposed to overcome each respective challenge. Funded by The Royal Society, grant ref. RG13082.

    An Improved Feature-Based Method for Fall Detection

    Aiming to improve the efficiency and accuracy of fall detection, this paper fuses traditional feature-based methods with a Support Vector Machine (SVM). The proposed method provides two major improvements. Firstly, classic features are adopted and, together with machine learning, form an improved and efficient fall detection method. Secondly, thresholds whose definition previously required extensive experiments are now learned by the program itself. Compared with the currently popular end-to-end deep learning methods, the improved feature-based method fused with machine learning shows great advantages in time efficiency because of the significant reduction in input parameters. Additionally, with the help of the SVM, the thresholds need no manual definition, which saves a lot of time and makes the method more precise. Our approach is evaluated on a public dataset, the TST fall detection dataset v2. The results show that our approach achieves an accuracy of 93.56%, which is better than other typical methods. Furthermore, the approach can be used in real-time video surveillance because of its time efficiency and robustness.
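    The core idea of replacing hand-tuned thresholds with a learned decision boundary can be sketched as follows. The feature names and the synthetic data are illustrative assumptions, not the paper's actual features or dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# An SVM learns the separating boundary that manual thresholds would otherwise encode.
rng = np.random.default_rng(0)

# Hypothetical hand-crafted features per clip:
# [bounding-box aspect-ratio change, centroid vertical velocity, final centroid height]
X_fall = rng.normal([1.8, -1.2, 0.3], 0.2, size=(100, 3))
X_adl = rng.normal([1.0, -0.2, 1.0], 0.2, size=(100, 3))   # activities of daily living
X = np.vstack([X_fall, X_adl])
y = np.array([1] * 100 + [0] * 100)                        # 1 = fall, 0 = non-fall

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```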

    Fall detection using history triple features

    Accurate identification and timely handling of involuntary events, such as falls, play a crucial part in effective assistive environment systems. Fall detection, in particular, is quite critical, especially in households of lonely elderly people. However, the task of visually identifying a fall is challenging, as there is a variety of daily activities that can be mistakenly characterized as falls. To tackle this issue, various feature extraction methods that aim to effectively distinguish unintentional falls from other everyday activities have been proposed. In this study, we examine the capability of the History Triple Features technique, based on the Trace transform, to provide noise-robust features that are invariant to different variations, for the spatiotemporal representation of fall occurrences. The aim is to effectively detect falls among other household-related activities that usually take place indoors. For the evaluation of the algorithm, video sequences from two realistic fall detection datasets of different nature have been used: one constructed using a ceiling-mounted depth camera and the other using an RGB camera placed at arbitrary positions in different rooms. After forming the feature vectors, we train a support vector machine using a radial basis function kernel. Results show a very good response of the algorithm, achieving 100% on both datasets and indicating the suitability of the technique to the specific task.
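    For orientation, the Radon transform is the special case of the Trace transform in which the functional applied along each line is a plain integral. The sketch below computes a trace-style descriptor of a single binary silhouette frame; the silhouettes, angle count and max-reduction are illustrative assumptions and do not reproduce the full History Triple Features pipeline (which applies further diametric and circus functionals).

```python
import numpy as np
from skimage.transform import radon

# Trace-style descriptor of one silhouette frame via the Radon transform
# (line integrals over a fan of angles), reduced to one scalar per angle.


def trace_features(silhouette, n_angles=36):
    """silhouette: 2D binary array (person mask for one frame)."""
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(silhouette.astype(float), theta=angles)  # one column per angle
    # Reduce the line integrals at each angle to their maximum, giving a
    # compact, posture-sensitive descriptor.
    return sinogram.max(axis=0)


# Crude upright vs. lying silhouettes yield clearly different feature profiles.
upright = np.zeros((64, 64)); upright[8:56, 28:36] = 1.0
lying = np.zeros((64, 64)); lying[28:36, 8:56] = 1.0
print(trace_features(upright)[:5])
print(trace_features(lying)[:5])
```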

    A Survey on RGBD-based Fall Detection

    Falling is a high-risk event that occurs frequently among the elderly, whose physical condition declines as they age, and it can cause serious physical and psychological harm. To address this problem, research on systems that automatically detect falls in real time has been growing, and in recent years reviews of fall detection based on RGB-D depth cameras have also increased. Unlike other sensing approaches, vision-based detection using depth information reduces the errors caused by illumination and shadows and yields richer raw data from which more features can be extracted. This paper reviews the implementation strategies and current state of existing detection algorithms, summarizes the publicly available datasets, and discusses the trends, challenges and difficulties of fall detection, which will benefit subsequent research.

    Development of a human fall detection system based on depth maps

    Assistive-care products are increasingly in demand with recent developments in health-related technologies. Several studies are concerned with improving and eliminating barriers to providing quality health care to all people, especially the elderly who live alone and those who cannot leave their homes for various reasons, such as disability or excess weight. Among these, human fall detection systems play an important role in daily life, because falls are the main obstacle to elderly people living independently and are also a major health concern for an aging population. The three basic approaches used to develop human fall detection systems are wearable devices, ambient devices, and non-invasive vision-based devices using live cameras. Most such systems are based on wearable or ambient sensors, which are very often rejected by users due to high false-alarm rates and the difficulty of carrying them during daily activities. Thus, this study proposes a non-invasive human fall detection system based on the height, velocity, statistical analysis, fall risk factors and position of the subject, using depth information from the Microsoft Kinect sensor. Falls are distinguished from other activities of daily life using the height and velocity of the subject extracted from the depth information, after considering the fall risk level of the user. Acceleration and activity detection are also employed if velocity and height fail to classify the activity. Finally, the position of the subject is identified to confirm the fall, or statistical analysis is conducted to verify the fall event. From the experimental results, the proposed system achieved an average accuracy of 98.3%, with a sensitivity of 100% and a specificity of 97.7%, and accurately distinguished all fall events from other activities of daily life.
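    The height-and-velocity cue described above can be sketched as follows, assuming a per-frame estimate of the subject's highest point above the floor is obtained from the Kinect depth maps; the frame rate and thresholds are illustrative assumptions, not the study's parameters.

```python
import numpy as np

# Flag a fall when a rapid drop in height ends with the subject near the floor.
FRAME_RATE_HZ = 30.0
HEIGHT_THRESHOLD_M = 0.45      # assumed: head close to the floor after a fall
VELOCITY_THRESHOLD_MPS = -1.0  # assumed: fast downward motion preceding impact


def classify_frame_sequence(heights):
    """heights: per-frame subject height above the floor plane, in metres."""
    heights = np.asarray(heights, dtype=float)
    velocity = np.gradient(heights) * FRAME_RATE_HZ   # vertical velocity in m/s
    fast_drop = velocity.min() < VELOCITY_THRESHOLD_MPS
    near_floor = heights[-1] < HEIGHT_THRESHOLD_M
    return fast_drop and near_floor


# Example: subject standing (~1.7 m) then dropping to ~0.2 m within half a second.
trace = np.concatenate([np.full(30, 1.7), np.linspace(1.7, 0.2, 15), np.full(30, 0.2)])
print(classify_frame_sequence(trace))   # True under the assumed thresholds
```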

    State of the art of audio- and video-based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can derive from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate setting where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential of the silver economy is overviewed.

    Personalized data analytics for internet-of-things-based health monitoring

    The Internet-of-Things (IoT) has great potential to fundamentally alter the delivery of modern healthcare, enabling healthcare solutions outside the limits of conventional clinical settings. It can offer ubiquitous monitoring to at-risk population groups and allow diagnostic care, preventive care, and early intervention in everyday life. These services can have profound impacts on many aspects of health and well-being. However, this field is still in its infancy, and the use of IoT-based systems in real-world healthcare applications introduces new challenges. Healthcare applications necessitate satisfactory quality attributes such as reliability and accuracy due to their mission-critical nature, while at the same time IoT-based systems mostly operate over constrained, shared sensing, communication, and computing resources. There is a need to investigate this synergy between IoT technologies and healthcare applications from a user-centered perspective. Such a study should examine the role and requirements of IoT-based systems in real-world health monitoring applications. Moreover, the conventional computing architectures and data analytic approaches introduced for IoT systems are insufficient when targeting health and well-being purposes, as they are unable to overcome the limitations of IoT systems while fulfilling the needs of healthcare applications. This thesis aims to address these issues by proposing an intelligent use of data and computing resources in IoT-based systems, which can lead to high performance and satisfy these stringent requirements. For this purpose, the thesis first delves into the state-of-the-art IoT-enabled healthcare systems proposed for in-home and in-hospital monitoring. The findings are analyzed and categorized into different domains from a user-centered perspective. The selection of home-based applications focuses on the monitoring of the elderly, who require more remote care and support than other groups of people. In contrast, the hospital-based applications cover the role of existing IoT solutions in patient monitoring and hospital management systems. The objectives and requirements of each domain are then investigated and discussed. This thesis proposes personalized data analytic approaches to fulfill the requirements and meet the objectives of IoT-based healthcare systems. In this regard, a new computing architecture is introduced, using computing resources in different layers of IoT to provide a high level of availability and accuracy for healthcare services. This architecture allows the hierarchical partitioning of machine learning algorithms in these systems and enables adaptive system behavior with respect to the user's condition. In addition, personalized data fusion and modeling techniques are presented, exploiting multivariate and longitudinal data in IoT systems to improve the quality attributes of healthcare applications. First, a real-time missing-data-resilient decision-making technique is proposed for health monitoring systems. The technique tailors the various data resources in IoT systems to accurately estimate health decisions despite missing data in the monitoring. Second, a personalized model is presented, enabling the detection of variations and events in long-term monitoring systems. The model evaluates the sleep quality of users according to their own historical data. Finally, the performance of the computing architecture and the techniques is evaluated in this thesis using two case studies.
The first case study consists of real-time arrhythmia detection in electrocardiography signals collected from patients suffering from cardiovascular diseases. The second case study is continuous maternal health monitoring during pregnancy and postpartum, including a real human-subject trial carried out with twenty pregnant women over seven months.
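    As a generic illustration of making a monitoring decision resilient to missing data, the sketch below falls back to recent history when a sensor value is absent. The function names, vital-sign limits and imputation rule are assumptions for illustration only, not the thesis's actual technique or model.

```python
import numpy as np

# Generic missing-data-tolerant decision rule: impute an absent reading from
# recent history before applying a simple alert threshold.


def impute(value, history):
    """Fall back to the median of recent readings when the current one is missing."""
    if value is not None and not np.isnan(value):
        return value
    return float(np.nanmedian(history)) if len(history) else float("nan")


def health_alert(heart_rate, resp_rate, hr_history, rr_history,
                 hr_limit=110.0, rr_limit=24.0):
    """Raise an alert when either vital sign (after imputation) exceeds its limit."""
    hr = impute(heart_rate, hr_history)
    rr = impute(resp_rate, rr_history)
    return hr > hr_limit or rr > rr_limit


# Example: the respiratory-rate sample is missing, so recent history is used instead.
print(health_alert(118.0, None, hr_history=[92, 95, 97], rr_history=[18, 19, 17]))  # True
```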