
    Assessing the Kinect’s capabilities to perform a time-based clinical test for fall risk assessment in older people

    The Choice Stepping Reaction Time (CSRT) task is a time-based clinical test that has been shown to reliably predict falls in older adults. Its current mode of delivery involves a custom-made dance mat device, a measurement tool that can reliably obtain step data to discriminate between fallers and non-fallers. One pitfall of this test is that the technology in use limits the freedom to design adaptive exercises suitable for older adults. In this paper, we describe a Kinect-based system that measures stepping performance through a hybrid version of the CSRT task. This study focuses on assessing the system’s capability to reliably measure a time-based clinical test of fall risk. Results showed favorable correspondence and agreement between the two systems, suggesting that this platform could be useful in clinical practice.
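    The abstract reports "correspondence and agreement" between the dance mat and the Kinect system without naming the method; a standard choice for such paired comparisons is a Bland-Altman analysis. The sketch below, with hypothetical step-time data, illustrates how the bias and 95% limits of agreement could be computed.

```python
import numpy as np

def bland_altman(mat_times, kinect_times):
    """Bland-Altman agreement between paired step-time measurements (s)
    from the dance mat and the Kinect-based system."""
    a, b = np.asarray(mat_times, float), np.asarray(kinect_times, float)
    diff = a - b
    bias = diff.mean()                  # systematic offset between systems
    sd = diff.std(ddof=1)               # spread of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits

# Hypothetical paired CSRT step times in seconds
mat = [0.92, 1.10, 0.85, 1.31, 0.98]
kin = [0.95, 1.08, 0.88, 1.29, 1.02]
bias, (lo, hi) = bland_altman(mat, kin)
print(f"bias = {bias:.3f} s, 95% LoA = [{lo:.3f}, {hi:.3f}] s")
```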

    Automating the timed up and go test using a depth camera

    Fall prevention is a human, economic and social issue. The Timed Up and Go (TUG) test is widely used to identify individuals with a high fall risk. However, this test has been criticized because its outcome depends too strongly on the conditions in which it is performed and on the healthcare professionals administering it. We used the Microsoft Kinect ambient sensor to automate this test, in order to reduce the subjectivity of outcome measures and to provide additional information about patient performance. Each phase of the TUG test was automatically identified from the depth images of the Kinect. Our algorithms accurately measured and assessed the elements usually measured by healthcare professionals. Specifically, average TUG test durations provided by our system differed by only 0.001 s from those measured by clinicians. In addition, our system automatically extracted several additional parameters that allowed us to accurately discriminate between low and high fall risk individuals. These additional parameters notably relate to the gait and turn pattern, the sitting position and the duration of each phase. Our algorithms, coupled with the Kinect ambient sensor, can therefore be used to reliably automate the TUG test and to perform a more objective, robust and detailed assessment of fall risk.
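    The paper does not spell out its phase-segmentation algorithm here; as a minimal illustration of the idea, the sketch below estimates the overall TUG duration from a hypothetical depth-camera track of the subject's distance from the chair. The function name, threshold and data are assumptions, not the authors' method.

```python
import numpy as np

def tug_duration(t, dist, move_thresh=0.05):
    """Estimate TUG duration from frame timestamps t (s) and the subject's
    horizontal distance from the chair dist (m): the test runs from the
    first frame away from the chair to the last frame away from it."""
    t, dist = np.asarray(t, float), np.asarray(dist, float)
    away = dist > move_thresh                        # frames off the chair
    start = t[np.argmax(away)]                       # first frame away
    end = t[len(away) - 1 - np.argmax(away[::-1])]   # last frame away
    return end - start

# Hypothetical 1 Hz track: stand up, walk out, turn, walk back, sit down
t = np.arange(12)
d = [0.0, 0.1, 1.0, 2.0, 3.0, 3.0, 2.0, 1.0, 0.3, 0.1, 0.0, 0.0]
print(f"TUG duration: {tug_duration(t, d):.1f} s")
```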

    Evaluation of home-based rehabilitation sensing systems with respect to standardised clinical tests

    With the increased demand for tele-rehabilitation, many autonomous home-based rehabilitation systems have appeared recently. Many of these systems, however, suffer from a lack of patient acceptance and engagement, or fail to provide satisfactory accuracy; both are needed for appropriate diagnostics. This paper first provides a detailed discussion of current sensor-based home rehabilitation systems with respect to four recently established criteria for wide acceptance and long engagement. A methodological procedure is then proposed for evaluating the accuracy of portable sensing home-based rehabilitation systems, in line with medically approved tests and recommendations. For the experiments, we deploy an in-house low-cost sensing system meeting the four criteria of acceptance to demonstrate the effectiveness of the proposed evaluation methodology. We observe that the deployed sensor system has limitations in sensing fast movement. Indicators of enhanced motivation and engagement are recorded through the questionnaire responses, with more than 83% of respondents supporting the system’s motivation and engagement enhancement. The evaluation results demonstrate that the deployed system is fit for purpose, with statistically significant (ρc > 0.99, R² > 0.94, ICC > 0.96) and unbiased correlation to the gold standard.
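    Of the three agreement statistics quoted above, Lin's concordance correlation coefficient ρc is the least commonly found in standard libraries; a minimal sketch of its computation (illustrative data, not from the paper) follows.

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient rho_c: agreement of
    paired measurements with the identity line y = x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxy = ((x - mx) * (y - my)).mean()       # population covariance
    return 2 * sxy / (x.var() + y.var() + (mx - my) ** 2)

# Hypothetical paired readings: gold standard vs. portable sensor
gold = [10.1, 12.0, 9.8, 11.5, 10.7]
sensor = [10.3, 11.8, 9.9, 11.2, 10.9]
print(f"rho_c = {lin_ccc(gold, sensor):.3f}")
```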

    A depth camera motion analysis framework for tele-rehabilitation: motion capture and person-centric kinematics analysis

    With the increasing importance given to tele-rehabilitation, there is a growing need for accurate, low-cost and portable motion capture systems that do not require specialist assessment venues. This paper proposes a novel framework for motion capture using only a single depth camera, which is portable and cost-effective compared to most industry-standard optical systems, without compromising on accuracy. Novel signal processing and computer vision algorithms are proposed to determine motion patterns of interest from infrared and depth data. To demonstrate the proposed framework’s suitability for rehabilitation, we developed a gait analysis application that depends on the underlying motion capture sub-system. Each subject’s individual kinematic parameters are calculated and stored to monitor individual progress over the course of clinical therapy. Experiments were conducted on 14 subjects, 5 healthy and 9 stroke survivors. The results show very close agreement of the resulting joint angles with a 12-camera VICON system, a mean error of at most 1.75% in detecting gait events with respect to the manually generated ground truth, and significant performance improvements in accuracy and execution time compared to a previous Kinect-based system.
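    The joint angles compared against VICON are computed from tracked 3-D joint positions; a generic sketch of that step (hypothetical joint coordinates, not the paper's algorithm) is shown below.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, from the 3-D positions of three
    joints, e.g. hip-knee-ankle for the knee angle."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical joint positions in metres (camera coordinates)
hip, knee, ankle = [0.00, 0.90, 2.50], [0.05, 0.50, 2.55], [0.05, 0.10, 2.50]
print(f"knee angle: {joint_angle(hip, knee, ankle):.1f} deg")
```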

    Human action recognition and mobility assessment in smart environments with RGB-D sensors

    This research activity focuses on the development of algorithms and solutions for smart environments exploiting RGB and depth sensors. In particular, the topics addressed are the mobility assessment of a subject and human action recognition. Regarding the first topic, the goal is to implement algorithms for the extraction of objective parameters that can support the assessment of mobility tests performed by healthcare staff. The first proposed algorithm extracts six joints on the sagittal plane using depth data provided by the Kinect sensor. Its accuracy in estimating torso and knee angles during the sit-to-stand phase is evaluated against a marker-based stereophotogrammetric system used as a reference. A second algorithm is proposed to simplify administration of the Timed Up and Go test in the home environment and to allow the extraction of a greater number of parameters from its execution. Kinect data are combined with accelerometer data through a synchronization algorithm, constituting a setup that can also be used for other applications that benefit from the joint use of RGB, depth and inertial data. Fall detection algorithms exploiting the same configuration as the Timed Up and Go test are also proposed. Regarding the second topic, the goal is to classify human actions that can be carried out in a home environment. Two algorithms for human action recognition are therefore proposed, which exploit the skeleton joints provided by Kinect and a multi-class SVM to recognize actions from publicly available datasets, achieving results comparable with the state of the art on the CAD-60, KARD and MSR Action3D datasets.
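    The second part of the thesis pairs skeleton-joint features with a multi-class SVM; a minimal scikit-learn sketch of that kind of pipeline is given below. The feature layout, placeholder data and hyperparameters are assumptions for illustration, not the thesis's actual configuration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one row per action sample, features built from
# skeleton joints (e.g. joint coordinates normalised and flattened)
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 45))      # 60 samples, 15 joints x 3 coordinates
y = rng.integers(0, 3, size=60)    # 3 action classes

# Multi-class SVM (scikit-learn trains one-vs-one classifiers internally)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:45], y[:45])
print("held-out accuracy:", clf.score(X[45:], y[45:]))
```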

    Activity monitoring and behaviour analysis using RGB-depth sensors and wearable devices for ambient assisted living applications

    In developed countries, the percentage of elderly people is growing steadily, a consequence of improvements in quality of life and advances in the medical field. With ageing, people have a higher probability of being affected by age-related diseases, which can be classified into three main groups: physical, perceptual and mental. The direct consequence is growing healthcare costs and a non-negligible financial sustainability issue that the EU will have to face in the coming years. One possible solution to this challenge is to exploit the advantages provided by technology. This paradigm is called Ambient Assisted Living (AAL) and concerns different areas, such as mobility support, health and care, privacy and security, and social environment and communication. In this thesis, two different types of sensors, RGB-depth cameras and wearable devices, are studied to design affordable solutions that show the potential of technology in the AAL scenario. The first application is a fall detection system that uses the distance between the subject and the camera to monitor people inside the covered area and triggers an alarm when it recognizes a fall. An alternative implementation of the same solution synchronizes the information provided by a depth camera and a wearable device to classify the activities performed by the user into two groups: Activities of Daily Living and falls. To assess fall risk in the elderly, the second application uses the same sensor configuration to measure kinematic parameters of the body during a specific assessment test called the Timed Up and Go. Finally, the third application monitors the user's movements during intake activities: the drinking gesture is recognized using the depth information to track hand movements, whereas the RGB stream is exploited to classify relevant objects placed on the table, such as glasses or plates.
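    The fall detection applications combine a depth-tracked posture cue with accelerometer data; the toy sketch below illustrates one such rule over synchronised streams. The thresholds, window length and function are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np

def detect_fall(acc_mag, height, fs=30, acc_thresh=2.5, height_thresh=0.4):
    """Toy fall detector: an acceleration-magnitude spike (in g) followed
    within ~1 s by a low depth-tracked body height (m) suggests a fall."""
    acc_mag, height = np.asarray(acc_mag, float), np.asarray(height, float)
    for i in np.flatnonzero(acc_mag > acc_thresh):   # candidate impacts
        window = height[i:i + fs]                    # ~1 s after impact
        if window.size and window.min() < height_thresh:
            return True                              # lying posture found
    return False

# Hypothetical synchronised streams sampled at 3 Hz
acc = [1.0, 1.0, 3.1, 1.2, 1.0]     # acceleration spike at index 2
hgt = [1.6, 1.6, 1.2, 0.3, 0.3]     # body height drops after the impact
print(detect_fall(acc, hgt, fs=3))  # -> True
```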

    Skeleton Timed Up and Go
