197 research outputs found

    The Design and Evaluation of a Kinect-Based Postural Symmetry Assessment and Training System

    The increased risk of falling and the reduced ability of the elderly to perform daily physical activities raise concern about monitoring and correcting basic everyday movement. In this thesis, a Kinect-based system was designed to assess one of the most important factors in balance control of the human body during the Sit-to-Stand (STS) movement: postural symmetry in the mediolateral direction. A symmetry score, calculated from data obtained by a Kinect RGB-D camera, was proposed to reflect the degree of mediolateral postural symmetry and was used to drive real-time audio feedback, designed in MAX/MSP, that helps users adjust themselves to perform the STS movement more symmetrically. The symmetry score was verified by computing its Spearman correlation coefficient against data obtained from an Inertial Measurement Unit (IMU) sensor, yielding an average value of 0.732. Five healthy adults (four male, one female) with normal balance abilities and no musculoskeletal disorders participated in the experiment, and the results showed that the low-cost Kinect-based system has the potential to train users to perform a more mediolaterally symmetrical STS movement. Dissertation/Thesis: Masters Thesis, Electrical Engineering, 201
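The symmetry-score idea above can be sketched in a few lines. The score formula and the joint coordinates it takes are illustrative assumptions (the thesis's exact formula is not given here); the Spearman correlation used for the IMU verification is standard:

```python
from statistics import mean

def rank(values):
    # Average ranks (ties handled) for the Spearman computation.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the ranks.
    rx, ry = rank(x), rank(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def symmetry_score(left_x, right_x, trunk_x):
    # Hypothetical mediolateral symmetry score: 1.0 when the trunk midline
    # (e.g. the spine joint's x-coordinate from the Kinect skeleton) sits
    # exactly between the left and right joints, decreasing toward 0 as it
    # drifts laterally. Not the thesis's actual definition.
    midpoint = (left_x + right_x) / 2
    half_width = abs(right_x - left_x) / 2 or 1e-9
    return max(0.0, 1.0 - abs(trunk_x - midpoint) / half_width)
```

In a live system such a score would be recomputed per frame and mapped to the MAX/MSP audio feedback, while `spearman` would be run offline on the paired Kinect and IMU series.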

    Activity monitoring and behaviour analysis using RGB-depth sensors and wearable devices for ambient assisted living applications

    Nowadays, in developed countries, the percentage of elderly people is growing. This situation is a consequence of improvements in quality of life and developments in the medical field. With ageing, people have a higher probability of being affected by age-related diseases, which can be classified in three main groups: physical, perceptual and mental. The direct consequence is growing healthcare system costs and a non-negligible financial sustainability issue that the EU will have to face in the coming years. One possible solution to this challenge is exploiting the advantages provided by technology. This paradigm is called Ambient Assisted Living (AAL) and concerns different areas, such as mobility support, health and care, privacy and security, social environment and communication. In this thesis, two different types of sensors, RGB-Depth cameras and wearable devices, are used to show through several applications the potential of technology in the AAL scenario and to design affordable solutions. The first application is a fall detection system that uses the distance between the target and the camera to monitor people inside the covered area, triggering an alarm when it recognizes a fall. An alternative implementation of the same solution synchronizes the information provided by a depth camera and a wearable device to classify the activities performed by the user into two groups: Activities of Daily Living and falls. In order to assess fall risk in the elderly, the second application uses the same sensor configuration to measure kinematic parameters of the body during a specific clinical assessment test called Timed Up and Go. Finally, the third application monitors the user's movements during food intake to evaluate whether the subject is following a correct diet. In particular, the drinking gesture is recognized by the system using depth information to track hand movements, whereas the RGB stream is exploited to classify important objects placed on the table, such as glasses or plates.
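The depth-based fall detection described above (an alarm when the sensor-to-person distance pattern indicates a fall) can be caricatured as a threshold test on the tracked body height over time. The function name and thresholds below are illustrative assumptions, not the thesis's actual parameters:

```python
def detect_fall(heights, fps, height_thresh=0.5, speed_thresh=1.0):
    # heights: per-frame height (in metres) of the tracked body centroid
    # above the floor plane, derived from the depth stream.
    # A fall is flagged when the centroid ends up below `height_thresh`
    # after a drop faster than `speed_thresh` m/s, which separates falls
    # from slow, deliberate transitions such as sitting down.
    for i in range(1, len(heights)):
        drop_speed = (heights[i - 1] - heights[i]) * fps
        if heights[i] < height_thresh and drop_speed > speed_thresh:
            return True
    return False
```

The thesis's second variant fuses this depth cue with wearable accelerometer data; here the speed term plays that role in a single-sensor sketch.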

    Building an Understanding of Human Activities in First Person Video using Fuzzy Inference

    Activities of Daily Living (ADLs) are the activities that people perform every day in their home as part of their typical routine. The in-home, automated monitoring of ADLs has broad utility for intelligent systems that enable independent living for the elderly and for mentally or physically disabled individuals. With rising interest in electronic health (e-Health) and mobile health (m-Health) technology, opportunities abound for the integration of activity monitoring systems into these newer forms of healthcare. In this dissertation we propose a novel system for describing ADLs based on video collected from a wearable camera. Most in-home activities are naturally defined by interaction with objects. We leverage these object-centric activity definitions to develop a set of rules for a Fuzzy Inference System (FIS) that uses video features and the identification of objects to identify and classify activities. Further, we demonstrate that the use of a FIS enhances the reliability of the system and provides greater explainability and interpretability of results than popular machine-learning classifiers, owing to the linguistic nature of fuzzy systems.
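As a rough illustration of how an object-centric FIS can classify activities, here is a minimal two-rule Mamdani-style sketch. The membership functions, rule set, and activity labels are invented for the example and are not the dissertation's actual rules:

```python
def tri(x, a, b, c):
    # Triangular membership function rising from a, peaking at b, falling to c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def high(v):
    return tri(v, 0.4, 1.0, 1.6)   # "high" on a [0, 1] input, saturating at 1

def low(v):
    return tri(v, -0.6, 0.0, 0.6)  # "low" on a [0, 1] input

def infer_activity(object_conf, motion):
    # Hypothetical two-rule Mamdani-style inference over two video features:
    #   R1: IF object confidence is HIGH and motion is LOW  -> "eating"
    #   R2: IF object confidence is LOW  and motion is HIGH -> "walking"
    # Rule strength is the min of the antecedent memberships; the strongest
    # rule wins. Returning the strengths keeps the decision inspectable,
    # which is the interpretability argument made in the abstract.
    rules = {
        "eating": min(high(object_conf), low(motion)),
        "walking": min(low(object_conf), high(motion)),
    }
    return max(rules, key=rules.get), rules
```

A real system would have one rule per activity and feed `object_conf` from an object detector running on the egocentric video.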

    Automatic Fall Risk Detection based on Imbalanced Data

    In recent years, declining birthrates and ageing populations have gradually turned many countries into ageing societies. Among accidents that occur in the elderly, falls are a critical problem that can quickly cause physical harm. In this paper, we propose a pose estimation-based fall detection algorithm to detect fall risks. We use body ratio, acceleration and deflection as key features instead of the raw body keypoint coordinates. Since fall data are rare in real-world situations, we train and evaluate our approach in a highly imbalanced data setting. We assess not only different methods of handling imbalanced data but also different machine learning algorithms. After oversampling the training data, the K-Nearest Neighbors (KNN) algorithm achieves the best performance. The F1 scores for the three classes, Normal, Fall, and Lying, are 1.00, 0.85 and 0.96, which is comparable to previous research. The experiments show that our approach is more interpretable thanks to the key features derived from skeleton information. Moreover, it can be applied in multi-person scenarios and is robust to moderate occlusion.
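A minimal sketch of the pipeline described above: oversample the minority classes, then classify with KNN. Plain random oversampling stands in for whatever oversampling method the paper actually used, and the three-element feature vectors merely mimic the (body ratio, acceleration, deflection) features:

```python
import random
from collections import Counter

def oversample(X, y, seed=0):
    # Random oversampling: duplicate minority-class samples until every
    # class matches the majority-class count. A simple stand-in for the
    # paper's oversampling step.
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    Xo, yo = list(X), list(y)
    for label, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == label]
        for _ in range(target - n):
            i = rng.choice(idx)
            Xo.append(X[i])
            yo.append(label)
    return Xo, yo

def knn_predict(X_train, y_train, x, k=3):
    # Plain K-Nearest Neighbours: squared Euclidean distance, majority vote.
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(xi, x)), yi)
        for xi, yi in zip(X_train, y_train)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

In practice the features would be standardized before the distance computation; that step is omitted to keep the sketch short.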

    State of the art of audio- and video-based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications.
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need to take action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply persons in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness.
    Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed there, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements, and overall conditions of assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL.
    It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time, and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, (v) activity and behaviour recognition, (vi) activity and personal assistance, (vii) gesture recognition, (viii) fall detection and prevention, (ix) mobility assessment and frailty recognition, and (x) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, hindrances and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to coping with acceptability, usability and trust in AAL technology, surveying strategies and approaches to co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential of the silver economy is overviewed.

    Robotic biofeedback for post-stroke gait rehabilitation: a scoping review

    This review aims to recommend directions for future research on robotic biofeedback for prompt post-stroke gait rehabilitation by investigating the technical and clinical specifications of biofeedback systems (BSs), including their complementary use with assistive devices and/or physiotherapist-oriented cues. A literature search was conducted from January 2019 to September 2022 on the Cochrane, Embase, PubMed, PEDro, Scopus, and Web of Science databases. Data regarding the technical (sensors, biofeedback parameters, actuators, control strategies, assistive devices, physiotherapist-oriented cues) and clinical (participants' characteristics, protocols, outcome measures, BSs' effects) specifications of BSs were extracted from the relevant studies. A total of 31 studies were reviewed, including 660 stroke survivors. Most studies reported visual biofeedback driven by the comparison between real-time kinetic or spatiotemporal data from wearable sensors and a threshold. Most studies achieved statistically significant improvements in sensor-based and clinical outcomes between at least two evaluation time points. Future research should study the effectiveness of using multiple wearable sensors and actuators to provide personalized biofeedback to users with multiple sensorimotor deficits. There is room to explore BSs complementing different assistive devices and physiotherapist-oriented cues according to users' needs. There is a lack of randomized controlled studies exploring the post-stroke stage and the mental and sensory effects of BSs.
    This work has been supported in part by the FEDER Funds through the COMPETE 2020—Programa Operacional Competitividade e Internacionalização (POCI) and P2020 with the Reference Project SmartOs Grant POCI-01-0247-FEDER-039868, and by FCT national funds, under the national support to R&D units grant, through the reference project UIDB/04436/2020 and UIDP/04436/2020, under scholarship reference 2020.05709.BD, and under Stimulus of Scientific Employment with the grant 2020.03393.CEECIND.
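The threshold-driven biofeedback most reviewed studies reported reduces to comparing a streamed gait parameter against a therapist-set target. The function below is a generic sketch; the parameter names and the deadband are assumptions, not a protocol from any reviewed study:

```python
def biofeedback_cue(sensor_value, threshold, deadband=0.05):
    # Compare a real-time gait parameter (e.g. a stance-time symmetry index
    # computed from a wearable IMU) against a therapist-set threshold and
    # return which visual cue to display. The deadband avoids flickering
    # between cues when the value hovers around the threshold.
    if sensor_value >= threshold + deadband:
        return "on_target"
    if sensor_value >= threshold - deadband:
        return "near_target"
    return "off_target"
```

In a session loop this would run once per gait cycle, with the returned label mapped to the visual display (the most common actuator among the reviewed systems).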

    Sensing with Earables: A Systematic Literature Review and Taxonomy of Phenomena

    Earables have emerged as a unique platform for ubiquitous computing by augmenting ear-worn devices with state-of-the-art sensing. This new platform has spurred a wealth of new research exploring what can be detected on a wearable, small form factor. As a sensing platform, the ears are less susceptible to motion artifacts and are located in close proximity to a number of important anatomical structures including the brain, blood vessels, and facial muscles which reveal a wealth of information. They can be easily reached by the hands, and the ear canal itself is affected by mouth, face, and head movements. We have conducted a systematic literature review of 271 earable publications from the ACM and IEEE libraries. These were synthesized into an open-ended taxonomy of 47 different phenomena that can be sensed in, on, or around the ear. Through analysis, we identify 13 fundamental phenomena from which all other phenomena can be derived, and discuss the different sensors and sensing principles used to detect them. We comprehensively review the phenomena in four main areas of (i) physiological monitoring and health, (ii) movement and activity, (iii) interaction, and (iv) authentication and identification. This breadth highlights the potential that earables have to offer as a ubiquitous, general-purpose platform.

    Gaze Strategies During Obstacle Negotiation in the Presence of Distractors: a Virtual Reality Based Assessment

    Vision actively influences gait, and older adults show altered visual patterns compared to their younger counterparts when approaching challenges in the travel path. Attentional distractors influence motor strategies during obstacle crossing. In rehabilitation, treadmills and virtual reality (VR) are commonly used to train gait. VR technology allows repeatable, safe and fully controlled tasks and is well accepted by patients. Gaze behaviour when watching videos of first-person walking is similar to that adopted in the real world, and gaze training is effective in improving gait accuracy. Therefore, integrating gaze monitoring into existing VR-based gait rehabilitation protocols could both give insight into the visuo-motor strategies adopted in challenging conditions and improve the effectiveness of gait rehabilitation. The research presented in this thesis assessed the visuo-motor strategies of young and older adults during obstacle crossing in a projected VR environment with visual distractors. The first part of the project tested a set-up allowing such an assessment with a remote eye-tracking system. The remote eye-tracker proved reliable for treadmill walking, with gaze measurements comparable to those obtained in static conditions. The second part of the project studied the effect of distractors during obstacle avoidance by having young and older adults walk on a treadmill while navigating a purposely designed VR world. Young and older adults showed different visual scan patterns of the scene. This study highlights the visuo-motor strategies of young and older adults in a set-up similar to those recently used in gait rehabilitation and showed that the two populations are distinguishable by their visual strategy. This and further investigations are important for better addressing gait rehabilitation interventions.
