111 research outputs found

    Augmented reality system with application in physical rehabilitation

    The aging phenomenon increases the demand for physiotherapy services, with costs driven up by long rehabilitation periods. Traditional rehabilitation methods rely on the subjective assessment of physiotherapists, without supporting training data. To overcome these shortcomings and improve the efficiency of rehabilitation, Augmented Reality (AR) is used: a promising technology that provides immersive interaction with real and virtual objects. AR devices can capture body posture and scan the real environment, which has led to a growing number of AR applications focused on physical rehabilitation. This MSc thesis presents an AR platform used to implement a physical rehabilitation plan for stroke patients. Gait training is a significant part of physical rehabilitation for stroke patients, and AR is a promising solution for training assessment, informing both patients and physiotherapists about the exercises to be done and the results achieved. As part of the MSc work, an iOS application was developed on the Unity 3D platform. The application immerses patients in a mixed environment that combines real-world and virtual objects. The human-computer interface consists of an iPhone used as a head-mounted 3D display and a set of wireless sensors for measuring physiological and motion parameters. The position and velocity of the patient are recorded by a smart carpet whose capacitive sensors are connected to a computation unit with Wi-Fi communication capabilities. The AR training scenario and the corresponding experimental results are part of the thesis.
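
    The smart carpet's velocity estimation described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the cell size, the activation data layout and the function name are all assumptions.

```python
import math

def carpet_velocity(activations, cell_size_m=0.05):
    """Estimate average walking speed (m/s) from a time-ordered list of
    (timestamp_s, row, col) capacitive-cell activations on a smart carpet.
    Hypothetical data layout: one tuple per detected footfall centroid."""
    if len(activations) < 2:
        return 0.0
    distance = 0.0
    # Accumulate the path length between successive activated cells.
    for (t0, r0, c0), (t1, r1, c1) in zip(activations, activations[1:]):
        distance += math.hypot(r1 - r0, c1 - c0) * cell_size_m
    elapsed = activations[-1][0] - activations[0][0]
    return distance / elapsed if elapsed > 0 else 0.0
```

With 5 cm cells, a subject crossing ten cells per second would be reported as walking at 0.5 m/s.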

    Activity monitoring and behaviour analysis using RGB-depth sensors and wearable devices for ambient assisted living applications

    Nowadays, in developed countries, the percentage of elderly people is growing, a consequence of improvements in quality of life and advances in the medical field. With ageing, people are more likely to be affected by age-related diseases, which fall into three main groups: physical, perceptual and mental. The direct consequence is growing healthcare costs and a non-negligible financial sustainability issue that the EU will have to face in the coming years. One possible solution to this challenge is to exploit the advantages provided by technology. This paradigm is called Ambient Assisted Living (AAL) and covers different areas, such as mobility support, health and care, privacy and security, and social environment and communication. In this thesis, two types of sensors, RGB-Depth cameras and wearable devices, are studied to design affordable solutions and to show the potential of technology in the AAL scenario. The first application is a fall detection system that uses the distance between the target and a depth camera to monitor people inside the covered area, triggering an alarm when it recognizes a fall. An alternative implementation synchronizes the information provided by the depth camera and a wearable device to classify the activities performed by the user into two groups: Activities of Daily Living (ADL) and falls. To assess fall risk in the elderly, the second application uses the same sensor configuration to measure kinematic parameters of the body during a specific assessment test called the Timed Up and Go. Finally, the third application monitors the user's movements during intake activity: the drinking gesture is recognized using depth information to track hand movements, while the RGB stream is exploited to classify relevant objects placed on the table, such as glasses or plates.
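
    The two-source fall classification described above can be sketched as a simple two-stage rule: the depth camera confirms the body dropped low, and the wearable confirms an impact. The thresholds and function name below are illustrative assumptions, not the thesis's calibrated values.

```python
def classify_event(height_m, accel_peak_g,
                   height_thresh=0.45, accel_thresh=2.5):
    """Toy depth + wearable fusion rule.

    height_m      -- lowest tracked height of the body centroid (depth camera)
    accel_peak_g  -- peak acceleration magnitude from the wearable, in g

    A fall is flagged only when BOTH modalities agree; otherwise the
    event is labelled as an Activity of Daily Living (ADL)."""
    if height_m < height_thresh and accel_peak_g > accel_thresh:
        return "fall"
    return "ADL"
```

Requiring agreement between the two sensors is what reduces false alarms: sitting down quickly spikes the accelerometer but not the height test, while bending to the floor passes the height test but not the acceleration test.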

    Human gait assessment using a 3D marker-less multimodal motion capture system

    Gait analysis is the measurement, processing and systematic interpretation of biomechanical parameters that characterize human locomotion. It supports the identification of movement limitations and the development of rehabilitation procedures, and accurate gait analysis is important in sports analysis, medicine and rehabilitation. Although gait analysis is performed in laboratories in many countries, it faces several issues: (i) the high cost of precise motion capture systems; (ii) the scarcity of qualified personnel to operate them; (iii) the expertise required to interpret their results; (iv) the space required to install and store these systems; (v) difficulties related to the measurement protocols of each system; (vi) limited availability; and (vii) the use of markers, which can be a barrier for some clinical use cases (e.g. patients recovering from orthopedic surgery). In this work, we present a lower-cost and more accessible system that integrates multiple Microsoft Kinect sensors and multiple Shimmer inertial sensors to capture human gait. The novel multimodal system combines data from inertial sensors and 3D depth cameras and outputs spatiotemporal gait variables. Our relatively low-cost marker-less multimodal motion capture system generates a complete 360-degree skeleton view. We compare our system with the VICON system (the gold standard in motion capture) via spatiotemporal gait variables: gait cycle time, stride time, gait length (distance between two strides), stride length, and velocity. The system was also evaluated on knee and hip joint angle measurement accuracy. The results show high correlation for spatiotemporal variables and joint angles inside the 95% bootstrap prediction interval when compared with VICON
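
    The spatiotemporal variables listed above can be derived from successive heel-strike events of one foot, as in this minimal sketch (the event format and function name are assumptions, not the paper's pipeline):

```python
import math

def stride_metrics(heel_strikes):
    """Compute average stride time (s), stride length (m) and velocity
    (m/s) from a list of (timestamp_s, x_m, y_m) heel strikes of the
    same foot, as tracked e.g. from a skeleton's ankle joint."""
    times, lengths = [], []
    for (t0, x0, y0), (t1, x1, y1) in zip(heel_strikes, heel_strikes[1:]):
        times.append(t1 - t0)                    # one stride's duration
        lengths.append(math.hypot(x1 - x0, y1 - y0))  # ground-plane distance
    stride_time = sum(times) / len(times)
    stride_length = sum(lengths) / len(lengths)
    return stride_time, stride_length, stride_length / stride_time
```

Gait cycle time for one leg equals the stride time; step-level variables would use the strikes of alternating feet instead.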

    Real-time Assessment and Visual Feedback for Patient Rehabilitation Using Inertial Sensors

    The need for rehabilitation exercises has been increasing continuously and is projected to keep growing, driven by an aging population, recovery from surgery, injury and illness, and people's living and working lifestyles. This research tackles one of the most critical issues faced by exercise administrators: adherence to home exercise programs, a significant problem that has motivated extensive research on the psychology of the people involved. Here, a solution is proposed to increase adherence to such programs through automated real-time assessment with constant visual feedback, providing a game-like environment and recording the sessions for analysis. Inertial sensors, namely an accelerometer and a gyroscope, are used to implement a rule-based framework for human activity recognition that measures the ankle joint angle. The system is also privacy-preserving, as it stores only the sensor recordings and an avatar that can be live-fed or replayed for treatment analysis, saving time and cost. The results obtained from tests on four healthy human subjects show that, with proper tuning of the rule parameters, both the quality and the quantity of the exercises can be assessed in real time
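
    Estimating a joint angle from an accelerometer and a gyroscope is commonly done with a complementary filter; the sketch below illustrates that general idea, not the paper's specific rule-based framework, and the coefficient value is an assumption.

```python
def complementary_filter(gyro_dps, accel_angle_deg, dt=0.01, alpha=0.98):
    """Track a joint angle (deg) over time by fusing two noisy sources:
    the integrated gyroscope rate (smooth but drifts) and the
    accelerometer-derived tilt angle (noisy but drift-free).

    gyro_dps        -- angular rate samples in deg/s
    accel_angle_deg -- tilt angle samples derived from gravity, in deg
    alpha           -- weight of the gyro path vs. the accel correction"""
    angle = accel_angle_deg[0]          # initialise from the accelerometer
    out = []
    for g, a in zip(gyro_dps, accel_angle_deg):
        angle = alpha * (angle + g * dt) + (1 - alpha) * a
        out.append(angle)
    return out
```

With alpha near 1, short-term motion is dominated by the gyroscope while the accelerometer slowly pulls the estimate back, cancelling drift.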

    Human action recognition and mobility assessment in smart environments with RGB-D sensors

    This research activity is focused on the development of algorithms and solutions for smart environments exploiting RGB and depth sensors. The addressed topics are the mobility assessment of a subject and human action recognition. Regarding the first topic, the goal is to implement algorithms for the extraction of objective parameters that can support the assessment of mobility tests performed by healthcare staff. The first proposed algorithm extracts six joints on the sagittal plane using depth data provided by the Kinect sensor; its accuracy in estimating torso and knee angles in the sit-to-stand phase is evaluated against a marker-based stereophotogrammetric system. A second algorithm is proposed to simplify the test implementation in a home environment and to allow the extraction of a greater number of parameters from the execution of the Timed Up and Go test. Kinect data are combined with accelerometer data through a synchronization algorithm, constituting a setup that can also be used for other applications that benefit from the joint use of RGB, depth and inertial data. Fall detection algorithms exploiting the same configuration as the Timed Up and Go test are therefore proposed. Regarding the second topic, the goal is to classify human actions that can be carried out in a home environment. Two action recognition algorithms are proposed, which exploit Kinect skeleton joints and a multi-class SVM to recognize actions from publicly available datasets, achieving results comparable with the state of the art on CAD-60, KARD and MSR Action3D.


    An inertial motion capture framework for constructing body sensor networks

    Motion capture is the process of measuring and subsequently reconstructing the movement of an animated object or being in virtual space. Virtual reconstructions of human motion play an important role in numerous application areas such as animation, medical science and ergonomics. While optical motion capture systems are the industry standard, inertial body sensor networks are becoming viable alternatives due to portability, practicality and cost. This thesis presents an innovative inertial motion capture framework for constructing body sensor networks through software environments, smartphones and web technologies. The first component of the framework is a unique inertial motion capture software environment aimed at providing an improved experimentation environment, accompanied by programming scaffolding and a driver development kit, for users interested in studying or engineering body sensor networks. The software environment provides a bespoke 3D engine for kinematic motion visualisations and a set of tools for hardware integration. The software environment is used to develop the hardware behind a prototype motion capture suit focused on low power consumption and hardware-centricity. Additional commercially available inertial measurement units are also integrated to demonstrate the functionality of the software environment while providing the framework with additional sources of motion data. The smartphone is the most ubiquitous computing technology, and its worldwide uptake has prompted many advances in wearable inertial sensing. Smartphones contain gyroscopes, accelerometers and magnetometers, the combination of sensors commonly found in inertial measurement units. This thesis presents a mobile application that investigates whether the smartphone is capable of inertial motion capture by constructing a novel omnidirectional body sensor network.
    This thesis also proposes a novel use for web technologies through the development of the Motion Cloud, a repository and gateway for inertial data. Web technologies have the potential to replace motion capture file formats with online repositories and to set a new standard for how motion data is stored. From a single inertial measurement unit to a more complex body sensor network, the proposed architecture is extendable and facilitates the integration of any inertial hardware configuration. The Motion Cloud's data can be accessed through an application programming interface or through a web portal that provides users with the functionality for visualising and exporting the motion data
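
    A repository like the Motion Cloud would store motion samples in a web-friendly serialisation rather than a binary capture file. The abstract does not specify the schema, so the record layout, field names and function below are purely hypothetical, shown only to illustrate the idea:

```python
import json
import time

def motion_record(device_id, quat, accel, t=None):
    """Serialise one hypothetical inertial sample as JSON: the sensor's
    orientation as a unit quaternion plus raw acceleration, timestamped.
    A stream of such records could replace a motion capture file."""
    return json.dumps({
        "device": device_id,              # which IMU in the network
        "t": t if t is not None else time.time(),
        "quaternion": quat,               # [w, x, y, z] orientation
        "accel_m_s2": accel,              # [ax, ay, az] acceleration
    })
```

Because each record is self-describing, a body sensor network of any size can be represented as an interleaved stream of per-device records, which matches the framework's goal of supporting arbitrary hardware configurations.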

    Fusion of Unobtrusive Sensing Solutions for Sprained Ankle Rehabilitation Exercises Monitoring in Home Environments

    The ability to monitor Sprained Ankle Rehabilitation Exercises (SPAREs) in home environments can help therapists ascertain whether exercises have been performed as prescribed. Whilst wearable devices provide advantages such as high accuracy and precision during monitoring, disadvantages such as limited battery life and users forgetting to charge and wear the devices often hinder their use. In addition, video cameras, notable for high frame rates and granularity, are not privacy-friendly. Therefore, this paper proposes the use and fusion of privacy-friendly Unobtrusive Sensing Solutions (USSs) for data collection and processing during SPAREs in home environments. The present work monitors SPAREs such as dorsiflexion, plantarflexion, inversion, and eversion using radar and thermal sensors. The main contributions of this paper include (i) privacy-friendly monitoring of SPAREs in a home environment, (ii) fusion of SPAREs data from homogeneous and heterogeneous USSs, and (iii) analysis and comparison of results from single, homogeneous, and heterogeneous USSs. Experimental results indicated the advantages of using heterogeneous USSs and data fusion. Cluster-based analysis of the data gleaned from the sensors indicated an average classification accuracy of 96.9% with classifiers including Neural Network, AdaBoost, and Support Vector Machine
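
    Fusing features from heterogeneous sensors such as radar and thermal cameras is often done at the feature level by normalising each modality before concatenation, so neither dominates the classifier. The sketch below shows that generic pattern under assumed names; it is not the paper's exact pipeline.

```python
def fuse_features(radar_feats, thermal_feats):
    """Feature-level fusion of two sensor modalities: z-normalise each
    feature vector independently, then concatenate the results into a
    single vector for a downstream classifier (SVM, AdaBoost, ...)."""
    def znorm(v):
        m = sum(v) / len(v)
        # Population standard deviation; fall back to 1.0 for constant
        # features so the division is always defined.
        sd = (sum((x - m) ** 2 for x in v) / len(v)) ** 0.5 or 1.0
        return [(x - m) / sd for x in v]
    return znorm(radar_feats) + znorm(thermal_feats)
```

Without the per-modality normalisation, a sensor whose features have a larger numeric range (e.g. raw radar Doppler bins vs. thermal pixel ratios) would dominate distance-based classifiers after concatenation.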

    An Integrated Multi-Sensor Approach for the Remote Monitoring of Parkinson’s Disease

    The increasing prevalence of neurological diseases, due to the trend in population aging, demands new strategies for disease management. In Parkinson's disease (PD), these strategies should aim at improving diagnosis accuracy and the frequency of clinical follow-up by means of decentralized, cost-effective solutions. In this context, a system suitable for the remote monitoring of PD subjects is presented. It integrates two approaches investigated in our previous works, each appropriate for the movement analysis of a specific part of the body: low-cost optical devices for the upper limbs and wearable sensors for the lower ones. The system performs automated assessments of six motor tasks of the Unified Parkinson's Disease Rating Scale, and it is equipped with a gesture-based human-machine interface designed to facilitate user interaction and system management. The usability of the system has been evaluated by means of standard questionnaires, and the accuracy of the automated assessment has been verified experimentally. The results demonstrate that the proposed solution represents a substantial improvement in PD assessment with respect to the former two approaches treated separately, and a new example of an accurate, feasible and cost-effective means for the decentralized management of PD