859 research outputs found

    A novel monitoring system for fall detection in older people

    Indexed in Scopus. This work was supported in part by CORFO - CENS 16CTTS-66390 through the National Center on Health Information Systems; in part by the National Commission for Scientific and Technological Research (CONICYT) through the Program STIC-AMSUD 17STIC-03 "MONITORing for ehealth", FONDEF ID16I10449 "Intelligent system for the management and analysis of bed allocation in the public healthcare network", and MEC80170097 "Scientific collaboration network between national and international universities for structuring the doctorate and master's in medical informatics at the Universidad de Valparaíso". The work of V. H. C. De Albuquerque was supported by the Brazilian National Council for Research and Development (CNPq) under Grant 304315/2017-6.
    Each year, more than 30% of people over 65 years old suffer a fall. Unfortunately, falls can cause physical and psychological damage, especially for those who live alone and are unable to get help. Several studies have aimed to alert caregivers to potential falls of older people using different types of sensors and algorithms. In this paper, we present a novel non-invasive monitoring system for fall detection in older people who live alone. Our proposal uses very-low-resolution thermal sensors to classify a fall and then alert the care staff. We also analyze the performance of three recurrent neural networks for fall detection: long short-term memory (LSTM), gated recurrent unit (GRU), and bidirectional LSTM (Bi-LSTM). As with many learning algorithms, we performed a training phase using different test subjects. After several tests, we observed that the Bi-LSTM approach outperforms the other techniques, reaching 93% accuracy in fall detection. We believe the bidirectional design of Bi-LSTM gives excellent results because its output is influenced by both past and future information, in contrast to LSTM and GRU. The information obtained using this system does not compromise the user's privacy, which constitutes an additional advantage of this alternative. © 2013 IEEE. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=842305
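
    The paper's implementation is not reproduced here; the minimal sketch below only illustrates the kind of Bi-LSTM classifier over sequences of very-low-resolution thermal frames described above. The 8x8 frame size, 30-frame sequence length, layer sizes, and dummy data are illustrative assumptions, not values from the paper.

    # Minimal sketch of a Bi-LSTM fall classifier over sequences of
    # very-low-resolution thermal frames (sizes are illustrative, not from the paper).
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    SEQ_LEN = 30          # assumed number of thermal frames per sample
    FRAME_PIXELS = 8 * 8  # assumed 8x8 thermal sensor, flattened per frame

    model = models.Sequential([
        layers.Input(shape=(SEQ_LEN, FRAME_PIXELS)),
        layers.Bidirectional(layers.LSTM(64)),   # Bi-LSTM reads the sequence forward and backward
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # 1 = fall, 0 = no fall
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Dummy data standing in for labelled thermal sequences from test subjects.
    X = np.random.rand(200, SEQ_LEN, FRAME_PIXELS).astype("float32")
    y = np.random.randint(0, 2, size=(200,))
    model.fit(X, y, epochs=2, batch_size=16, verbose=0)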

    Multi-occupancy Fall Detection using Non-Invasive Thermal Vision Sensor


    Vision-based Human Fall Detection Systems using Deep Learning: A Review

    Human falls are among the most critical health issues, especially for older and disabled people living alone. The elderly population is increasing steadily worldwide, so human fall detection is becoming an important technique for assistive living, where deep learning and computer vision are widely used. In this review article, we discuss state-of-the-art deep learning (DL)-based non-intrusive (vision-based) fall detection techniques. We also present a survey of fall detection benchmark datasets. For a clear understanding, we briefly discuss the different metrics used to evaluate the performance of fall detection systems. The article also gives future directions for vision-based human fall detection techniques.
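
    The evaluation metrics the review refers to are commonly derived from the confusion matrix; as an illustration (not code from the article), the short sketch below computes accuracy, sensitivity, specificity, and precision from true and predicted fall labels.

    # Illustrative computation of common fall-detection metrics from a confusion matrix.
    from typing import Sequence

    def fall_detection_metrics(y_true: Sequence[int], y_pred: Sequence[int]) -> dict:
        """Compute accuracy, sensitivity, specificity, and precision (1 = fall, 0 = no fall)."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        return {
            "accuracy": (tp + tn) / max(tp + tn + fp + fn, 1),
            "sensitivity": tp / max(tp + fn, 1),   # recall on falls: missed falls are costly
            "specificity": tn / max(tn + fp, 1),   # how well non-falls are recognised
            "precision": tp / max(tp + fp, 1),     # how many alarms were real falls
        }

    print(fall_detection_metrics([1, 0, 1, 0, 1], [1, 0, 0, 0, 1]))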

    State of the art of audio- and video-based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL refers to the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply people in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives.
    In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed in that room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements, and overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.
    This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time, and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethically aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products, and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, hindrances, and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.
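
    As a purely illustrative sketch of the "privacy preservation by design" idea mentioned above, and not a pipeline prescribed by the report, a video-based AAL sensor might reduce each raw frame to a coarse, blurred representation on-device so that identity details are discarded before any further processing or transmission; the function name and target resolution below are assumptions.

    # Illustrative privacy-by-design preprocessing for a video-based AAL sensor:
    # raw frames are reduced on-device to a coarse, blurred grayscale representation
    # so that identity details are discarded before further processing or transmission.
    # (Sketch only; not a pipeline prescribed by the report.)
    import cv2

    def anonymise_frame(frame, target_size=(32, 24)):
        """Return a low-resolution, blurred grayscale version of a video frame."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (11, 11), 0)         # remove fine facial detail
        coarse = cv2.resize(blurred, target_size, interpolation=cv2.INTER_AREA)
        return coarse

    cap = cv2.VideoCapture(0)          # assumed local camera; raw frames never stored or streamed
    ok, frame = cap.read()
    if ok:
        safe = anonymise_frame(frame)  # only this coarse representation is kept
    cap.release()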

    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends in this field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years, in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of falling and the consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and users' acceptance and compliance, compared with other sensor technologies such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper discusses.
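
    The review expects that fusing radar and RGB-D information will improve performance; one hypothetical, minimal form of decision-level fusion is sketched below, where per-sensor fall probabilities are combined by a weighted average and thresholded. The weights and threshold are illustrative assumptions, not values from the paper.

    # Hypothetical decision-level fusion of radar and RGB-D fall-detection scores.
    # Each sensor-specific classifier outputs a fall probability; the scores are
    # combined by a weighted average and thresholded. Weights/threshold are illustrative.

    def fuse_fall_scores(p_radar: float, p_rgbd: float,
                         w_radar: float = 0.5, w_rgbd: float = 0.5,
                         threshold: float = 0.5) -> bool:
        """Return True if the fused score indicates a fall event."""
        fused = w_radar * p_radar + w_rgbd * p_rgbd
        return fused >= threshold

    # Example: radar is fairly confident, RGB-D less so; the fused score triggers an alert.
    print(fuse_fall_scores(p_radar=0.8, p_rgbd=0.45))  # True (fused score = 0.625)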

    Latest research trends in gait analysis using wearable sensors and machine learning: a systematic review

    Gait is the locomotion attained through the movement of the limbs, and gait analysis examines its patterns (normal/abnormal) across the gait cycle. It contributes to the development of various applications in the medical, security, sports, and fitness domains to improve overall outcomes. Among the many available technologies, two emerging technologies play a central role in modern-day gait analysis: A) wearable sensors, which provide a convenient, efficient, and inexpensive way to collect data, and B) Machine Learning Methods (MLMs), which enable high-accuracy gait feature extraction for analysis. Given their prominent roles, this paper presents a review of the latest trends in gait analysis using wearable sensors and Machine Learning (ML). It explores recent papers along with their publication details and key parameters such as sampling rates, MLMs, wearable sensors, the number of sensors, and their locations. Furthermore, the paper provides recommendations for selecting an MLM, a wearable sensor, and its location for a specific application. Finally, it suggests some future directions for gait analysis and its applications.
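
    To make the wearable-sensor/MLM pipeline concrete, the sketch below (illustrative only, not taken from any reviewed work) windows a tri-axial accelerometer stream, extracts simple statistical features per window, and trains a standard classifier; the 100 Hz sampling rate, 2-second window, and random placeholder data are assumptions.

    # Illustrative gait-analysis pipeline: window a tri-axial accelerometer stream,
    # extract simple statistical features per window, and train a classifier.
    # Sampling rate, window length, and labels are assumptions for the sketch.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    FS = 100                 # assumed sampling rate (Hz)
    WINDOW = 2 * FS          # 2-second windows

    def extract_features(window: np.ndarray) -> np.ndarray:
        """Mean, std, and peak-to-peak amplitude per axis for one (WINDOW, 3) window."""
        return np.concatenate([window.mean(axis=0),
                               window.std(axis=0),
                               window.max(axis=0) - window.min(axis=0)])

    # Dummy signal standing in for recorded accelerometer data (normal vs. abnormal gait).
    signal = np.random.randn(60 * FS, 3)                       # one minute of data
    windows = [signal[i:i + WINDOW] for i in range(0, len(signal) - WINDOW, WINDOW)]
    X = np.vstack([extract_features(w) for w in windows])
    y = np.random.randint(0, 2, size=len(X))                   # placeholder gait labels

    clf = RandomForestClassifier(n_estimators=100).fit(X, y)
    print(clf.predict(X[:3]))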

    Assessing Berry Number for Grapevine Yield Estimation by Image Analysis: Case Study with the Red Variety “Syrah”

    Master's in Viticulture and Enology Engineering (double degree) / Instituto Superior de Agronomia, Universidade de Lisboa / Faculdade de Ciências, Universidade do Porto. Yield estimation provides information that helps growers make decisions to optimize crop growth and to organize harvest operations in the field and in the cellar. In most vineyard estates, yield is forecast using manual methods; however, image analysis methods, which are less invasive, lower cost, and more representative, are now being developed. The main objective of this work was to estimate yield from data obtained in the frame of the Vinbot project during the 2019 season. In this thesis, images of the grapevine variety Syrah taken in the laboratory and in the vineyards of the Instituto Superior de Agronomia in Lisbon were analyzed. In the laboratory, the images were taken manually with an RGB camera, while in the field the vines were imaged either manually or by the Vinbot robot. From these images, the number of visible berries was counted with MATLAB. From the laboratory values, the relationships between the number of visible berries and the actual bunch weight and berry number were studied. From the data obtained in the field, the visibility of the berries at different levels of defoliation and the relationship between the area of visible bunches and the visible berries were analyzed. Berry-by-berry occlusion showed a value of 6.4% at pea-size, 14.5% at veraison, and 25% at maturation. In addition, high and significant determination coefficients were obtained between actual yield and visible berries. The comparison of the yield estimated using the regression models with the actual yield showed an underestimation at all three phenological stages. The low accuracy of the developed models shows that the use of algorithms based on the visible berry number in the images to estimate yield still needs further research.
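
    The thesis performed the berry counting in MATLAB; as a language-agnostic illustration of the regression-based estimation it describes, the Python sketch below fits a linear model between visible berry counts and yield and applies the stage-specific occlusion rates quoted in the abstract. All calibration counts and weights are made-up numbers for the example, not data from the thesis.

    # Illustrative yield estimation from image-based visible berry counts: a linear
    # regression relates visible berries to actual yield, and the stage-specific
    # occlusion rates from the abstract can be used to estimate total berry number.
    # All counts/weights below are made-up numbers, not data from the thesis.
    import numpy as np

    OCCLUSION = {"pea_size": 0.064, "veraison": 0.145, "maturation": 0.25}

    # Made-up calibration data: visible berries counted on images vs. actual yield (kg/vine).
    visible_berries = np.array([120, 180, 240, 300, 360])
    actual_yield_kg = np.array([1.1, 1.6, 2.2, 2.7, 3.3])
    slope, intercept = np.polyfit(visible_berries, actual_yield_kg, deg=1)

    def estimate_yield_kg(visible_count: int) -> float:
        """Estimate yield per vine (kg) from the visible berry count via the fitted regression."""
        return slope * visible_count + intercept

    def estimate_total_berries(visible_count: int, stage: str) -> float:
        """Correct the visible count for the stage-dependent berry occlusion."""
        return visible_count / (1.0 - OCCLUSION[stage])

    print(round(estimate_yield_kg(250), 2))                   # estimated yield, kg/vine
    print(round(estimate_total_berries(250, "veraison"), 1))  # occlusion-corrected berry count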