
    Fall Detection Using Neural Networks

    Falls inside the home are a major concern facing the aging population. Monitoring the home environment to detect a fall can prevent the profound consequences of a delayed emergency response. One option is a camera-based fall detection system; conceptual designs range from 3D positional monitoring (multi-camera setups) to classification of body position and limb speed. Research shows varying degrees of success with such concepts when a multi-camera setup is used. However, camera-based systems are inherently intrusive and costly to implement. In this research, we use a sound-based system to detect fall events. Acoustic sensors monitor various sound events and feed a trained machine learning model that predicts fall events. Audio samples from the sensors are converted to frequency-domain images using the Mel-Frequency Cepstral Coefficients (MFCC) method, and these images are used by a trained convolutional neural network to predict a fall. A publicly available dataset of household sounds is used to train the model. By varying the model's complexity, we found an optimal architecture that achieves high performance while being computationally less intensive than other models with similar performance. We deployed this model on an NVIDIA Jetson Nano Developer Kit.
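    The abstract describes the pipeline (audio clip, MFCC "image", convolutional classifier) only at a high level. The sketch below illustrates that kind of pipeline, assuming librosa and Keras; the sampling rate, clip length, MFCC count, layer sizes, and the synthetic input clip are illustrative assumptions rather than the paper's settings.

# Minimal sketch of an MFCC + CNN audio fall detector (illustrative sizes only).
import numpy as np
import librosa
import tensorflow as tf

SR = 16000          # assumed sampling rate (Hz)
N_MFCC = 40         # assumed number of MFCC coefficients
CLIP_SECONDS = 2.0  # assumed analysis window length

def waveform_to_mfcc_image(y: np.ndarray) -> np.ndarray:
    """Convert a mono waveform to a fixed-size MFCC 'image' with a channel axis."""
    target_len = int(SR * CLIP_SECONDS)
    y = np.pad(y[:target_len], (0, max(0, target_len - len(y))))  # crop/pad to fixed length
    mfcc = librosa.feature.mfcc(y=y, sr=SR, n_mfcc=N_MFCC)        # shape (n_mfcc, frames)
    return mfcc[..., np.newaxis]                                  # add channel axis for Conv2D

def build_model(input_shape) -> tf.keras.Model:
    """Small CNN classifying MFCC images as fall / non-fall."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),           # P(fall)
    ])

if __name__ == "__main__":
    clip = np.random.randn(int(SR * CLIP_SECONDS)).astype(np.float32)  # stand-in for a real recording
    features = waveform_to_mfcc_image(clip)
    model = build_model(features.shape)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    print("fall probability:", float(model.predict(features[np.newaxis, ...])[0, 0]))  # untrained output

    A model of roughly this shape could then be converted (for example, to TensorRT or TensorFlow Lite) for inference on an edge device such as the Jetson Nano; that conversion step is not shown here.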

    Exploring the Landscape of Ubiquitous In-home Health Monitoring: A Comprehensive Survey

    Ubiquitous in-home health monitoring systems have become popular in recent years due to the rise of digital health technologies and the growing demand for remote health monitoring. These systems enable individuals to increase their independence by allowing them to monitor their health from home and giving them more control over their well-being. In this study, we perform a comprehensive survey of this topic by reviewing a large body of literature in the area. We investigate these systems from various aspects, namely sensing technologies, communication technologies, intelligent and computing systems, and application areas. Specifically, we provide an overview of in-home health monitoring systems and identify their main components. We then present each component and discuss its role within in-home health monitoring systems. In addition, we provide an overview of the practical use of ubiquitous technologies in the home for health monitoring. Finally, we identify the main challenges and limitations based on the existing literature and provide eight recommendations for potential future research directions. We conclude that, despite extensive research on the various components needed, the development of effective in-home health monitoring systems still requires further investigation. (35 pages, 5 figures)

    EEG-based performance-driven adaptive automated hazard alerting system in security surveillance support

    Computer-vision technologies have emerged to assist security surveillance. However, automated alert/alarm systems often apply a low-beta threshold (a liberal response criterion) to avoid misses, which generates excessive false alarms. This study proposed an adaptive hazard diagnosis and alarm system with alert threshold levels that adjust to environmental scenarios and to the operator's hazard recognition performance. We recorded electroencephalogram (EEG) data during hazard recognition tasks. The linear ballistic accumulator model was used to decompose response time into several psychological subcomponents, which were estimated with a Markov chain Monte Carlo algorithm and compared across different types of hazardous scenarios. Participants were most cautious about falling hazards, followed by electricity hazards, and had the least conservative attitude toward structural hazards. Participants were classified into three performance-level subgroups using a latent profile analysis based on task accuracy. We applied the transfer learning paradigm to classify these subgroups from time-frequency representations of their EEG data. Additionally, two continual learning strategies were investigated to ensure robust adaptation of the model when predicting participants' performance levels in different hazardous scenarios. These findings can be leveraged in real-world brain-computer interface applications, fostering human trust in automation and promoting the successful implementation of alarm technologies.
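    The abstract mentions transfer learning on time-frequency representations of EEG to classify performance-level subgroups, without implementation details. The sketch below shows one common way to set up such a pipeline, assuming a single EEG channel, a SciPy spectrogram, and an ImageNet-pretrained MobileNetV2 backbone; the sampling rate, epoch length, image size, and backbone choice are assumptions, not the authors' setup.

# Minimal transfer-learning sketch: EEG epoch -> time-frequency image -> pretrained CNN.
# Illustrative only; backbone, image size and class count are assumptions, not the paper's setup.
import numpy as np
from scipy.signal import spectrogram
import tensorflow as tf

FS = 256           # assumed EEG sampling rate (Hz)
N_CLASSES = 3      # three performance-level subgroups reported in the abstract
IMG_SIZE = 96      # assumed input resolution for the backbone

def epoch_to_image(eeg_epoch: np.ndarray) -> np.ndarray:
    """Turn a single-channel EEG epoch into a 3-channel time-frequency image."""
    _, _, sxx = spectrogram(eeg_epoch, fs=FS, nperseg=64, noverlap=32)
    sxx = np.log1p(sxx)                               # compress dynamic range
    sxx = (sxx - sxx.min()) / (np.ptp(sxx) + 1e-9)    # scale to [0, 1]
    img = tf.image.resize(sxx[..., np.newaxis], (IMG_SIZE, IMG_SIZE))
    rgb = tf.repeat(img, 3, axis=-1).numpy()          # replicate to 3 channels for the backbone
    return rgb * 2.0 - 1.0                            # scale to [-1, 1] as MobileNetV2 expects

def build_classifier() -> tf.keras.Model:
    base = tf.keras.applications.MobileNetV2(
        input_shape=(IMG_SIZE, IMG_SIZE, 3), include_top=False, weights="imagenet")
    base.trainable = False                            # freeze pretrained features
    return tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])

if __name__ == "__main__":
    epoch = np.random.randn(FS * 4)                   # 4 s of synthetic single-channel EEG
    model = build_classifier()
    print(model.predict(epoch_to_image(epoch)[np.newaxis, ...]).shape)  # (1, 3), untrained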

    Highly-efficient fog-based deep learning AAL fall detection system

    Falls are among the most concerning accidents in the aged population due to their high frequency and serious repercussions; thus, quick assistance is critical to avoid serious health consequences. Several Ambient Assisted Living (AAL) solutions rely on Internet of Things (IoT), Cloud Computing, and Machine Learning (ML) technologies. Recently, Deep Learning (DL) has been adopted for its high potential to improve the accuracy of fall detection. In addition, performing ML inference (detecting falls) on fog devices avoids the cloud's drawback of high network latency, which is unsuitable for delay-sensitive applications such as fall detectors. However, current fall detection systems lack DL inference on the fog, there is no evidence of its use in real environments, and the complex challenge of deployment is not documented. Since DL requires considerable resources and fog nodes are resource-limited, highly efficient deployment and resource usage are critical. We present an innovative, highly efficient intelligent system based on a fog-cloud computing architecture to detect falls in a timely manner using DL techniques deployed on resource-constrained devices (fog nodes). We employ a wearable tri-axial accelerometer to collect patient monitoring data. In the fog, we propose a smart IoT gateway architecture to support the remote deployment and management of DL models. We deploy two DL models (LSTM and GRU), employing virtualization to optimize resources, and evaluate their performance and inference time. The results prove the effectiveness of our fall detection system, which provides a more timely and accurate response than traditional fall detectors, higher efficiency, 98.75% accuracy, lower delay, and improved service.
    This research was supported by the Ecuadorian Government through the Secretary of Higher Education, Science, Technology, and Innovation (SENESCYT) and has received funding from the European Union's Horizon 2020 research and innovation program as part of the ACTIVAGE project under Grant 732679.
    Sarabia-Jácome, D.; Usach, R.; Palau Salvador, C. E.; Esteve Domingo, M. (2020). Highly-efficient fog-based deep learning AAL fall detection system. Internet of Things, 11:1-19. https://doi.org/10.1016/j.iot.2020.100185
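    The paper reports deploying LSTM and GRU models on fog nodes, but the abstract gives no implementation. Below is a minimal sketch of a recurrent fall detector over tri-axial accelerometer windows, assuming Keras; the window length, layer sizes, and the synthetic input are illustrative assumptions, not the authors' configuration.

# Minimal sketch of a recurrent fall detector over tri-axial accelerometer windows.
# Window length, units, and the synthetic input are illustrative assumptions.
import numpy as np
import tensorflow as tf

WINDOW = 128   # assumed samples per window (e.g. ~2.5 s at 50 Hz)
CHANNELS = 3   # x, y, z acceleration

def build_recurrent_detector(cell: str = "lstm") -> tf.keras.Model:
    """Single recurrent layer (LSTM or GRU) followed by a sigmoid fall/no-fall head."""
    rnn = tf.keras.layers.LSTM(32) if cell == "lstm" else tf.keras.layers.GRU(32)
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
        rnn,
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # P(fall) for this window
    ])

if __name__ == "__main__":
    model = build_recurrent_detector("gru")
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    window = np.random.randn(1, WINDOW, CHANNELS).astype("float32")  # synthetic sensor window
    print("fall probability:", float(model.predict(window)[0, 0]))   # untrained output

    On a resource-constrained fog node, such a model would typically be converted to a lighter runtime (for example, TensorFlow Lite) and packaged in a container before deployment; those steps are omitted here.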

    State of the art of audio- and video-based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need to take action. Active and Assisted Living (AAL) technologies are a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL refers to the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply persons in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than wearable sensors, which may hinder one's activities. In addition, a single camera placed in a room can record most of the activities performed there, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements and overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as high legal and ethical standards, are in high demand. After reviewing the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethics-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, hindrances and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials offered by the silver economy are overviewed.

    Elderly Fall Detection Systems: A Literature Survey

    Falling is among the most damaging events elderly people may experience. With the ever-growing aging population, there is an urgent need for the development of fall detection systems. Thanks to the rapid development of sensor networks and the Internet of Things (IoT), human-computer interaction using sensor fusion has been regarded as an effective method to address the problem of fall detection. In this paper, we provide a literature survey of work conducted on elderly fall detection using sensor networks and IoT. Although various existing studies focus on fall detection with individual sensors, such as wearables and depth cameras, the performance of these systems is still not satisfactory, as they mostly suffer from high false alarm rates. The literature shows that fusing the signals of different sensors can result in higher accuracy and fewer false alarms, while improving the robustness of such systems. We approach this survey from different perspectives, including data collection, data transmission, sensor fusion, data analysis, security, and privacy. We also review the available benchmark data sets that have been used to quantify the performance of the proposed methods. The survey is meant to provide researchers in the field of elderly fall detection using sensor networks with a summary of the progress achieved to date and to identify areas where further effort would be beneficial.
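    The survey's central argument is that fusing signals from different sensors lowers false alarm rates. The sketch below illustrates the simplest form of this idea, decision-level fusion of per-sensor confidence scores; the sensor names, weights, and alarm threshold are chosen purely for illustration and are not taken from the survey.

# Minimal decision-level sensor-fusion sketch: combine per-sensor fall confidences.
# Weights and threshold are illustrative assumptions, not values from the survey.
from dataclasses import dataclass

@dataclass
class SensorReading:
    name: str
    fall_confidence: float  # each detector's own P(fall) in [0, 1]
    weight: float           # trust placed in this sensor

def fuse(readings: list[SensorReading], threshold: float = 0.6) -> bool:
    """Weighted average of per-sensor confidences; raise an alarm only above the threshold."""
    total_weight = sum(r.weight for r in readings)
    score = sum(r.fall_confidence * r.weight for r in readings) / total_weight
    return score >= threshold

if __name__ == "__main__":
    readings = [
        SensorReading("wrist_accelerometer", fall_confidence=0.9, weight=0.5),
        SensorReading("depth_camera", fall_confidence=0.2, weight=0.5),
    ]
    # The accelerometer alone would alarm, but the camera disagrees, so no alarm is raised.
    print("raise alarm:", fuse(readings))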