
    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends related to this research field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of them falling and the consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and users' acceptance and compliance compared with other sensor technologies, such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper provides.

    Is the timed-up and go test feasible in mobile devices? A systematic review

    The number of older adults is increasing worldwide, and it is expected that by 2050 over 2 billion individuals will be more than 60 years old. Older adults are exposed to numerous pathological problems such as Parkinson's disease, amyotrophic lateral sclerosis, post-stroke conditions, and orthopedic disturbances. Several physiotherapy methods that involve measurement of movements, such as the Timed-Up and Go test, can be used to support efficient and effective evaluation of pathological symptoms and promotion of health and well-being. In this systematic review, the authors aim to determine how the inertial sensors embedded in mobile devices are employed for the measurement of the different parameters involved in the Timed-Up and Go test. The main contribution of this paper consists of the identification of the different studies that utilize the sensors available in mobile devices for the measurement of the results of the Timed-Up and Go test. The results show that the motion sensors embedded in mobile devices can be used for these types of studies, and the most commonly used sensors are the magnetometer, accelerometer, and gyroscope available in off-the-shelf smartphones. The features analyzed in this paper are categorized as quantitative, quantitative + statistical, dynamic balance, gait properties, state transitions, and raw statistics. These features rely on the accelerometer and gyroscope sensors and facilitate recognition of daily activities, accidents such as falling, and some diseases, as well as the measurement of the subject's performance during the test execution.
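The core measurement this review describes, timing a Timed-Up and Go trial from a smartphone's accelerometer, can be sketched in a few lines. The sampling rate and activity threshold below are illustrative assumptions, not values drawn from the reviewed studies:

```python
import math

GRAVITY = 9.81   # m/s^2
FS = 50          # assumed sampling rate in Hz (hypothetical)
THRESHOLD = 1.5  # assumed deviation from gravity in m/s^2 marking movement

def tug_duration(accel_samples):
    """Estimate Timed-Up and Go duration in seconds from triaxial
    accelerometer samples given as (ax, ay, az) tuples in m/s^2.

    The duration is taken as the span between the first and last sample
    whose magnitude deviates from gravity by more than THRESHOLD.
    """
    active = [i for i, (ax, ay, az) in enumerate(accel_samples)
              if abs(math.sqrt(ax * ax + ay * ay + az * az) - GRAVITY) > THRESHOLD]
    if not active:
        return 0.0
    return (active[-1] - active[0]) / FS

# Synthetic trial: 1 s at rest, 8 s of movement, 1 s at rest, at 50 Hz.
rest = [(0.0, 0.0, 9.81)] * FS
movement = [(0.0, 0.0, 12.0)] * (8 * FS)
trial = rest + movement + rest
print(tug_duration(trial))  # → 7.98 (the detected movement span)
```

Real systems would add filtering and segment the individual phases (stand up, walk, turn, sit down) rather than thresholding raw magnitude, but the principle is the same.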

    Human action recognition and mobility assessment in smart environments with RGB-D sensors

    This research activity is focused on the development of algorithms and solutions for smart environments exploiting RGB and depth sensors. In particular, the addressed topics refer to the mobility assessment of a subject and to human action recognition. Regarding the first topic, the goal is to implement algorithms for the extraction of objective parameters that can support the assessment of mobility tests performed by healthcare staff. The first proposed algorithm regards the extraction of six joints on the sagittal plane using depth data provided by the Kinect sensor. The accuracy in terms of estimation of torso and knee angles in the sit-to-stand phase is evaluated considering a marker-based stereophotogrammetric system as a reference. A second algorithm is proposed to simplify the test implementation in a home environment and to allow the extraction of a greater number of parameters from the execution of the Timed Up and Go test. Kinect data are combined with those of an accelerometer through a synchronization algorithm, constituting a setup that can also be used for other applications that benefit from the joint usage of RGB, depth, and inertial data. Fall detection algorithms exploiting the same configuration as the Timed Up and Go test are therefore proposed. Regarding the second topic, the goal is to classify human actions that can be carried out in a home environment.
Two algorithms for human action recognition are therefore proposed, which exploit the skeleton joints provided by Kinect and a multi-class SVM, achieving results comparable with the state of the art on the publicly available CAD-60, KARD, and MSR Action3D datasets.
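As a rough illustration of the skeleton-based pipeline described above, the sketch below classifies flattened joint-position vectors. A nearest-centroid rule stands in for the thesis's multi-class SVM, and the joint coordinates and action labels are synthetic, not real Kinect skeletons:

```python
import random

N_JOINTS = 20        # Kinect v1 tracks 20 skeleton joints
DIM = N_JOINTS * 3   # flattened (x, y, z) per joint

def make_sample(center, rnd):
    # One action execution: joint coordinates jittered around a class centroid.
    return [center + rnd.gauss(0, 0.1) for _ in range(DIM)]

def centroid(samples):
    return [sum(v[i] for v in samples) / len(samples) for i in range(DIM)]

def classify(x, centroids):
    # Predict the action whose centroid is closest in squared Euclidean distance.
    def sqdist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda label: sqdist(centroids[label]))

rnd = random.Random(0)
train = {label: [make_sample(c, rnd) for _ in range(20)]
         for label, c in [("sit", 0.0), ("stand", 1.0), ("wave", -1.0)]}
centroids = {label: centroid(samples) for label, samples in train.items()}

probe = make_sample(1.0, rnd)      # a new "stand" execution
print(classify(probe, centroids))  # prints: stand
```

A real implementation would extract invariant features from the joints (e.g. normalized relative positions) and train an SVM with cross-validation on the benchmark datasets named above.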


    Recent Advances in Motion Analysis

    The advances in the technology and methodology for human movement capture and analysis over the last decade have been remarkable. Besides acknowledged approaches for kinematic, dynamic, and electromyographic (EMG) analysis carried out in the laboratory, more recently developed devices, such as wearables, inertial measurement units, ambient sensors, and cameras or depth sensors, have been adopted on a wide scale. Furthermore, computational intelligence (CI) methods, such as artificial neural networks, have recently emerged as promising tools for the development and application of intelligent systems in motion analysis. Thus, the synergy of classic instrumentation and novel smart devices and techniques has created unique capabilities in the continuous monitoring of motor behaviors in different fields, such as clinics, sports, and ergonomics. However, real-time sensing, signal processing, human activity recognition, and the characterization and interpretation of motion metrics and behaviors from sensor data still represent a challenging problem, not only in laboratories but also at home and in the community. This book addresses open research issues related to the improvement of classic approaches and the development of novel technologies and techniques in the domain of motion analysis in all its various fields of application.

    Elderly Fall Detection Systems: A Literature Survey

    Falling is among the most damaging events elderly people may experience. With the ever-growing aging population, there is an urgent need for the development of fall detection systems. Thanks to the rapid development of sensor networks and the Internet of Things (IoT), human-computer interaction using sensor fusion has been regarded as an effective method to address the problem of fall detection. In this paper, we provide a literature survey of work conducted on elderly fall detection using sensor networks and IoT. Although various existing studies focus on fall detection with individual sensors, such as wearables and depth cameras, the performance of these systems is still not satisfactory, as they mostly suffer from high false-alarm rates. The literature shows that fusing the signals of different sensors can result in higher accuracy and fewer false alarms, while improving the robustness of such systems. We approach this survey from different perspectives, including data collection, data transmission, sensor fusion, data analysis, security, and privacy. We also review the available benchmark data sets that have been used to quantify the performance of the proposed methods. The survey is meant to provide researchers in the field of elderly fall detection using sensor networks with a summary of progress achieved up to date and to identify areas where further effort would be beneficial.
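The fusion argument made in this survey, that combining modalities suppresses single-sensor false alarms, can be illustrated with a minimal sketch. The thresholds and the two-sensor AND rule below are illustrative assumptions, not a method from any surveyed system:

```python
import math

IMPACT_G = 2.5      # assumed wearable impact threshold in g (hypothetical)
FLOOR_HEIGHT = 0.4  # assumed "subject near the floor" height in m (hypothetical)

def wearable_impact(accel_g):
    """True if any triaxial sample (in g) exceeds the impact threshold."""
    return any(math.sqrt(ax * ax + ay * ay + az * az) > IMPACT_G
               for ax, ay, az in accel_g)

def camera_on_floor(subject_height_m):
    """True if a depth camera reports the subject near floor level."""
    return subject_height_m < FLOOR_HEIGHT

def fused_fall(accel_g, subject_height_m):
    # Requiring both modalities to agree suppresses single-sensor false
    # alarms: sitting down hard trips the wearable but not the camera.
    return wearable_impact(accel_g) and camera_on_floor(subject_height_m)

hard_sit = [(0.0, 0.0, 3.0)]      # impact spike, but subject still upright
print(fused_fall(hard_sit, 1.2))  # → False (camera vetoes the alarm)
print(fused_fall(hard_sit, 0.2))  # → True (impact and subject on the floor)
```

Production systems fuse richer evidence (posture trajectories, inactivity after impact, audio) and weight sensors probabilistically rather than using a hard AND, but the false-alarm-reduction mechanism is the one shown.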

    State of the art of audio- and video based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications
It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help facing these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairment. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness.
Indeed, cameras and microphones are far less obtrusive than wearable sensors, which may hinder one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL.
It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that challenge traditional cameras, such as low-latency, high-speed, and high-dynamic-range settings. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
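The event representation described above, a stream of (time, location, sign) tuples, is commonly bridged to frame-based vision algorithms by accumulating events over a time window. A minimal sketch, with a toy sensor resolution and synthetic events:

```python
from collections import namedtuple

# Each event carries a timestamp, a pixel location, and a polarity encoding
# the sign of the brightness change.
Event = namedtuple("Event", ["t", "x", "y", "polarity"])

WIDTH, HEIGHT = 4, 3  # toy sensor resolution for illustration

def accumulate(events, t_start, t_end):
    """Sum signed polarities per pixel over the window [t_start, t_end)."""
    frame = [[0] * WIDTH for _ in range(HEIGHT)]
    for e in events:
        if t_start <= e.t < t_end:
            frame[e.y][e.x] += e.polarity
    return frame

stream = [
    Event(10, 1, 1, +1),   # brightness increase at pixel (1, 1)
    Event(20, 1, 1, +1),   # another increase at the same pixel
    Event(30, 2, 0, -1),   # brightness decrease at pixel (2, 0)
    Event(999, 3, 2, +1),  # falls outside the window below
]
frame = accumulate(stream, 0, 100)
print(frame)  # → [[0, 0, -1, 0], [0, 2, 0, 0], [0, 0, 0, 0]]
```

The window length trades latency against signal: shorter windows preserve the microsecond-scale timing the survey highlights, while longer windows yield denser frames for conventional algorithms.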

    Instrumentation and validation of a robotic cane for transportation and fall prevention in patients with affected mobility

    Integrated master's dissertation in Engineering Physics (specialization in Devices, Microsystems and Nanotechnologies). Walking is known to be the human being's primitive form of locomotion, and it brings many benefits that motivate a healthy and active lifestyle. However, there are health conditions that make walking difficult, which can consequently result in worse health and lead to a greater risk of falls. Thus, the development of a fall detection and prevention system integrated into a walking aid would be essential to reduce these fall events and improve people's quality of life. To overcome these needs and limitations, this dissertation aims to validate and instrument a cane-type robot, called the Anti-fall Robotic Cane (ARCane), designed to incorporate a fall detection system and an actuation mechanism that allow the prevention of falls while assisting gait. Therefore, a state-of-the-art review concerning robotic canes was carried out to acquire a broad and in-depth knowledge of the components, mechanisms and strategies used, as well as the experimental protocols, main results, limitations and challenges of existing devices.
In a first stage, an objective was set to (i) enhance the product's mission statement; (ii) study the consumer needs; and (iii) update the target specifications of the ARCane, continuing the team's previous work, to obtain a product with a market-compatible design and engineering that meets the needs and desires of the ARCane users. The hardware architecture of the ARCane was then established, and the electronic components that will instrument the control, sensory, actuator and power units were discussed and afterwards subjected to interoperability tests to validate the singular and collective functioning of the cane components. Regarding the motion control of robotic canes, an innovative, cost-effective and intuitive motion control system was developed, providing recognition of the user's movement intention and identification of the user's gait phases. This implementation was validated with six healthy volunteers who carried out gait trials with the ARCane, in order to test its operability in a real-context environment. An accuracy of 97% was achieved for user motion intention recognition and 90% for user gait phase recognition, using the proposed motion control system. Finally, a fall detection method and a fall prevention mechanism were devised for future implementation in the ARCane, based on methods applied to robotic canes in the literature. An improvement of the fall detection method was also proposed in order to overcome its associated limitations, as well as detection devices to be implemented in the ARCane to achieve a complete fall detection system.

    A multimodal dataset of real world mobility activities in Parkinson’s disease

    Parkinson's disease (PD) is a neurodegenerative disorder characterised by motor symptoms such as gait dysfunction and postural instability. Technological tools to continuously monitor outcomes could capture the hour-by-hour symptom fluctuations of PD. Development of such tools is hampered by the lack of labelled datasets from home settings. To this end, we propose REMAP (REal-world Mobility Activities in Parkinson's disease), a human rater-labelled dataset collected in a home-like setting. It includes people with and without PD performing sit-to-stand transitions and turns in gait. These discrete activities are captured from periods of free-living (unobserved, unstructured) and during clinical assessments. The PD participants withheld their dopaminergic medications for a time (causing increased symptoms), so their activities are labelled as being "on" or "off" medications. Accelerometry from wrist-worn wearables and skeleton pose video data are included. We present an open dataset, in which the data is coarsened to reduce re-identifiability, and a controlled dataset available on application which contains more refined data. A use-case for the data to estimate sit-to-stand speed and duration is illustrated.
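The sit-to-stand use-case mentioned above can be sketched from a vertical hip-keypoint trajectory. The 10–90% excursion rule, the frame rate, and the synthetic trajectory below are illustrative assumptions, not the REMAP authors' method:

```python
FS = 30  # assumed video frame rate in Hz (hypothetical)

def sts_duration_and_speed(hip_heights):
    """Estimate sit-to-stand duration (s) and mean vertical speed (m/s)
    from a per-frame hip-keypoint height trajectory in metres.

    The transition is taken as the span where the height first crosses 10%
    and then 90% of the seated-to-standing excursion.
    """
    lo, hi = min(hip_heights), max(hip_heights)
    start_h = lo + 0.1 * (hi - lo)
    end_h = lo + 0.9 * (hi - lo)
    start = next(i for i, h in enumerate(hip_heights) if h >= start_h)
    end = next(i for i, h in enumerate(hip_heights) if h >= end_h)
    duration = (end - start) / FS
    speed = (hip_heights[end] - hip_heights[start]) / duration if duration else 0.0
    return duration, speed

# Synthetic trajectory: seated at 0.5 m, a 1 s linear rise, standing at 0.9 m.
seated = [0.5] * FS
rise = [0.5 + 0.4 * i / FS for i in range(FS + 1)]
standing = [0.9] * FS
duration, speed = sts_duration_and_speed(seated + rise + standing)
print(round(duration, 2), round(speed, 2))
```

On real pose data the trajectory would be smoothed first, and the coarsened open dataset would yield less precise estimates than the controlled one.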