352 research outputs found

    An IoT based Virtual Coaching System (VSC) for Assisting Activities of Daily Life

    Nowadays, aging of the population is becoming one of the main concerns of the world. It is estimated that the number of people aged over 65 will increase from 461 million to 2 billion in 2050. This substantial increase in the elderly population will have significant consequences for the social and health care system. Therefore, in the context of Ambient Intelligence (AmI), Ambient Assisted Living (AAL) has emerged as a new research area to address problems related to the aging of the population. AAL technologies based on embedded devices have proven effective in alleviating the social- and health-care issues related to the continuous growth of the average age of the population. Many smart applications, devices and systems have been developed to monitor the health status of the elderly, substitute for them in the accomplishment of activities of daily life (especially in the presence of some impairment or disability), alert their caregivers in case of necessity and help them recognize risky situations. Such assistive technologies basically rely on the communication and interaction between body sensors, smart environments and smart devices. However, in this context, less effort has been spent on designing smart solutions for empowering and supporting the self-efficacy of people with neurodegenerative diseases and the elderly in general. This thesis fills the gap by presenting a low-cost, non-intrusive, and ubiquitous Virtual Coaching System (VCS) to support people in the acquisition of new behaviors (e.g., taking pills, drinking water, finding the right key, avoiding motor blocks) necessary to cope with needs derived from a change in their health status and a degradation of their cognitive capabilities as they age. VCS is based on the concept of extended mind introduced by Clark and Chalmers in 1998, who proposed the idea that objects within the environment function as a part of the mind.
    In my revisiting of the concept of extended mind, the VCS is composed of a set of smart objects that exploit Internet of Things (IoT) technology and machine learning-based algorithms in order to identify the needs of the users and react accordingly. In particular, the system exploits smart tags to transform objects commonly used by people (e.g., a pillbox, a bottle of water, keys) into smart objects; it monitors their usage according to the users' needs, and it incrementally guides users in the acquisition of new behaviors related to those needs. To implement VCS, this thesis explores different research directions and challenges. First of all, it addresses the definition of a ubiquitous, non-invasive and low-cost indoor monitoring architecture exploiting the IoT paradigm. Secondly, it deals with the necessity of developing solutions for implementing coaching actions and consequently monitoring human activities by analyzing the interaction between people and smart objects. Finally, it focuses on the design of low-cost localization systems for indoor environments, since knowing the position of a person provides VCS with essential information to acquire information on performed activities and to prevent risky situations. In the end, the outcomes of these research directions have been integrated into a healthcare application scenario to implement a wearable system that prevents freezing of gait in people affected by Parkinson's Disease.
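As a loose illustration of the kind of coaching rule such a smart-object system could apply, the sketch below checks scheduled object usage (e.g., a pillbox) against usage events reported by smart tags; the object names, schedule format, and grace period are hypothetical, not taken from the thesis:

```python
from datetime import datetime, timedelta

def check_adherence(schedule, usage_events, now, grace_minutes=30):
    """Return reminders for scheduled object uses that have not occurred.

    schedule: list of (object_name, datetime) pairs, e.g. the pillbox at 09:00.
    usage_events: list of (object_name, datetime) pairs reported by smart tags.
    """
    reminders = []
    grace = timedelta(minutes=grace_minutes)
    for obj, due in schedule:
        if due + grace > now:
            continue  # not yet overdue, nothing to do
        used = any(name == obj and due <= ts <= due + grace
                   for name, ts in usage_events)
        if not used:
            reminders.append(f"Reminder: use {obj} (scheduled {due:%H:%M})")
    return reminders
```

In a real deployment the reminders would feed the coaching loop (e.g., a prompt on a wearable), and the grace period would likely adapt to the user's routine.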

    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends related to this research field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years, in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of falling and the consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and users' acceptance and compliance, compared with other sensor technologies such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper discusses.
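Decision-level fusion of the two sensing modalities can be as simple as a weighted average of per-sensor confidences; the sketch below is a generic illustration (weights and threshold are illustrative, not taken from any reviewed system):

```python
def fuse_scores(radar_score, depth_score, w_radar=0.5, w_depth=0.5):
    """Score-level fusion: weighted average of per-sensor fall
    confidences, each assumed to lie in [0, 1]."""
    return w_radar * radar_score + w_depth * depth_score

def fall_alarm(radar_score, depth_score, threshold=0.5):
    """Raise an alarm when the fused confidence crosses the threshold.
    A more conservative alternative is OR-fusion: alarm if either
    sensor alone exceeds its own threshold."""
    return fuse_scores(radar_score, depth_score) >= threshold
```

The weights would normally be tuned per deployment, e.g. down-weighting the RGB-D stream when the person is outside the camera's field of view.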

    Activity monitoring and behaviour analysis using RGB-depth sensors and wearable devices for ambient assisted living applications

    Nowadays, in developed countries, the percentage of elderly people is growing. This situation is a consequence of improvements in people's quality of life and developments in the medical field. Because of ageing, people have a higher probability of being affected by age-related diseases, classified in three main groups: physical, perceptual and mental. Therefore, the direct consequence is a growth of healthcare system costs and a non-negligible financial sustainability issue which the EU will have to face in the next years. One possible solution to tackle this challenge is exploiting the advantages provided by technology. This paradigm is called Ambient Assisted Living (AAL) and concerns different areas, such as mobility support, health and care, privacy and security, social environment and communication. In this thesis, two different types of sensors will be used to show the potentialities of technology in the AAL scenario. RGB-Depth cameras and wearable devices will be studied to design affordable solutions. The first one is a fall detection system that uses the distance information between the target and the camera to monitor people inside the covered area. The application triggers an alarm when it recognizes a fall. An alternative implementation of the same solution synchronizes the information provided by a depth camera and a wearable device to classify the activities performed by the user into two groups: activities of daily living and falls. In order to assess fall risk in the elderly, the second proposed application uses the previous sensor configuration to measure kinematic parameters of the body during a specific assessment test called Timed Up and Go. Finally, the third application monitors the user's movements during an intake activity. In particular, the drinking gesture can be recognized by the system using the depth information to track the hand movements, whereas the RGB stream is exploited to classify important objects placed on a table.
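A minimal sketch of the kind of rule-based classification the abstract describes, combining the depth camera's tracked centroid height with the wearable's acceleration magnitude over a time window; all threshold values are invented for illustration, not the thesis's actual parameters:

```python
def classify_window(heights_m, accel_mags_g, drop_thresh=0.6, peak_thresh=2.5):
    """Toy rule: a fall shows a large drop in the tracked centroid height
    (from the depth camera) together with an acceleration spike (from the
    wearable). Returns "fall" or "ADL" (activity of daily living)."""
    drop = max(heights_m) - min(heights_m)   # total height drop in the window
    peak = max(accel_mags_g)                 # peak acceleration magnitude, in g
    return "fall" if (drop >= drop_thresh and peak >= peak_thresh) else "ADL"
```

Requiring both cues to agree is what the synchronized implementation buys: either sensor alone produces false alarms (e.g., sitting down quickly, or dropping the wearable).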

    Efficient Human Activity Recognition in Large Image and Video Databases

    Vision-based human action recognition has attracted considerable interest in recent research for its applications to video surveillance, content-based search, healthcare, and interactive games. Most existing research deals with building informative feature descriptors, designing efficient and robust algorithms, proposing versatile and challenging datasets, and fusing multiple modalities. Often, these approaches build on certain conventions such as the use of motion cues to determine video descriptors, application of off-the-shelf classifiers, and single-factor classification of videos. In this thesis, we deal with important but overlooked issues such as efficiency, simplicity, and scalability of human activity recognition in different application scenarios: controlled video environments (e.g., indoor surveillance), unconstrained videos (e.g., YouTube), depth or skeletal data (e.g., captured by Kinect), and person images (e.g., Flickr). In particular, we are interested in answering questions like: (a) is it possible to efficiently recognize human actions in controlled videos without temporal cues? (b) given that large-scale unconstrained video data are often of a high dimension, low sample size (HDLSS) nature, how can human actions be efficiently recognized in such data? (c) considering the rich 3D motion information available from depth or motion capture sensors, is it possible to recognize both the actions and the actors using only the motion dynamics of the underlying activities? and (d) can motion information from monocular videos be used for automatically determining saliency regions for recognizing actions in still images?

    Vision-based representation and recognition of human activities in image sequences

    Magdeburg, Univ., Fak. für Elektrotechnik und Informationstechnik, Diss., 2013, by Samy Sadek Mohamed Bakhee

    Vision-based human action recognition using machine learning techniques

    The focus of this thesis is on the automatic recognition of human actions in videos. Human action recognition is defined as the automatic understanding of what actions a human performs in a video. This is a difficult problem due to many challenges including, but not limited to, variations in human shape and motion, occlusion, cluttered backgrounds, moving cameras, illumination conditions, and viewpoint variations. To start with, the most popular and prominent state-of-the-art techniques are reviewed, evaluated, compared, and presented. Based on the literature review, these techniques are categorized into handcrafted feature-based and deep learning-based approaches. The proposed action recognition framework is then based on these handcrafted and deep learning-based techniques, which are adopted throughout the thesis by embedding novel algorithms for action recognition in both the handcrafted and deep learning domains. First, a new method based on the handcrafted approach is presented. This method addresses one of the major challenges, known as “viewpoint variations”, by presenting a novel feature descriptor for multiview human action recognition. This descriptor employs region-based features extracted from the human silhouette. The proposed approach is quite simple and achieves state-of-the-art results without compromising the efficiency of the recognition process, which shows its suitability for real-time applications. Second, two innovative methods based on the deep learning approach are presented, to go beyond the limitations of the handcrafted approach. The first method is based on transfer learning, using a pre-trained deep learning model as a source architecture to solve the problem of human action recognition. It is experimentally confirmed that a deep Convolutional Neural Network model already trained on a large-scale annotated dataset is transferable to the action recognition task with a limited training dataset. 
    The comparative analysis also confirms its superior performance over handcrafted feature-based methods in terms of accuracy on the same datasets. The second method is based on an unsupervised deep learning approach. This method employs Deep Belief Networks (DBNs) with restricted Boltzmann machines for action recognition in unconstrained videos. The proposed method automatically extracts a suitable feature representation without any prior knowledge, using an unsupervised deep learning model. The effectiveness of the proposed method is confirmed by high recognition results on the challenging UCF Sports dataset. Finally, the thesis is concluded with important discussions and research directions in the area of human action recognition.
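The restricted Boltzmann machine training underlying a DBN can be sketched with one-step contrastive divergence (CD-1); this is a generic, simplified textbook version (biases omitted, mean-field reconstruction), not the thesis's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=10, lr=0.1, seed=0):
    """Train a binary RBM with one step of contrastive divergence (CD-1).
    data: (n_samples, n_visible) binary matrix. Returns the weight matrix."""
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    for _ in range(epochs):
        # positive phase: hidden probabilities given the data
        h_prob = sigmoid(data @ W)
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # negative phase: reconstruct visible units, re-infer hidden units
        v_recon = sigmoid(h_sample @ W.T)
        h_recon = sigmoid(v_recon @ W)
        # CD-1 weight update: data statistics minus reconstruction statistics
        W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
    return W
```

In a DBN, the hidden activations of one trained RBM become the input data for the next, stacking layers of increasingly abstract features before any supervised fine-tuning.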

    Smart Sensing Technologies for Personalised Coaching

    People living in both developed and developing countries face serious health challenges related to sedentary lifestyles. It is therefore essential to find new ways to improve health so that people can live longer and age well. With an ever-growing number of smart sensing systems developed and deployed across the globe, experts are primed to help coach people toward healthier behaviors. The increasing accountability associated with app- and device-based behavior tracking not only provides timely and personalized information and support but also gives us an incentive to set goals and do more. This book presents some of the recent efforts made towards the automatic and autonomous identification and coaching of troublesome behaviors to procure lasting, beneficial behavioral changes.

    Machine learning approaches to video activity recognition: from computer vision to signal processing

    244 p. The research presented focuses on classification techniques for two different, though related, tasks, such that the second can be considered part of the first: the recognition of human actions in videos and sign language recognition. In the first part, the starting hypothesis is that transforming the signals of a video with the Common Spatial Patterns (CSP) algorithm, commonly used in electroencephalography systems, can produce new features that are useful for the subsequent classification of the videos with supervised classifiers. Different experiments have been carried out on several databases, including one created during this research from the point of view of a humanoid robot, with the intention of deploying the developed recognition system to improve human-robot interaction. In the second part, the techniques developed earlier have been applied to sign language recognition; in addition, a method based on the decomposition of signs into their components is proposed, adding the possibility of better explainability. The final goal is to develop a sign language tutor capable of guiding users through the learning process, making them aware of the errors they make and the reasons for those errors.
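The CSP algorithm mentioned in the abstract can be sketched as whitening of the composite covariance followed by an eigendecomposition; this is the standard textbook formulation, not necessarily the exact variant used in the thesis:

```python
import numpy as np

def csp_filters(X1, X2):
    """Common Spatial Patterns via whitening + eigendecomposition.

    X1, X2: arrays of shape (trials, channels, samples) for the two classes.
    Returns a (channels, channels) matrix of spatial filters, ordered so the
    first rows maximise class-1 variance and the last rows class-2 variance.
    """
    def avg_cov(X):
        # trace-normalised covariance, averaged over trials
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)

    C1, C2 = avg_cov(X1), avg_cov(X2)
    # whiten the composite covariance C1 + C2
    d, U = np.linalg.eigh(C1 + C2)
    P = np.diag(1.0 / np.sqrt(d)) @ U.T
    # diagonalise the whitened class-1 covariance
    e, B = np.linalg.eigh(P @ C1 @ P.T)
    order = np.argsort(e)[::-1]
    return B[:, order].T @ P
```

Projecting each trial through the first and last few filters and taking the log-variance of the result yields the low-dimensional features typically fed to a supervised classifier.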

    Personalized data analytics for internet-of-things-based health monitoring

    The Internet-of-Things (IoT) has great potential to fundamentally alter the delivery of modern healthcare, enabling healthcare solutions outside the limits of conventional clinical settings. It can offer ubiquitous monitoring to at-risk population groups and allow diagnostic care, preventive care, and early intervention in everyday life. These services can have profound impacts on many aspects of health and well-being. However, this field is still in its infancy, and the use of IoT-based systems in real-world healthcare applications introduces new challenges. Healthcare applications necessitate satisfactory quality attributes such as reliability and accuracy due to their mission-critical nature, while at the same time IoT-based systems mostly operate over constrained, shared sensing, communication, and computing resources. There is a need to investigate this synergy between IoT technologies and healthcare applications from a user-centered perspective. Such a study should examine the role and requirements of IoT-based systems in real-world health monitoring applications. Moreover, the conventional computing architectures and data analytic approaches introduced for IoT systems are insufficient when used to target health and well-being purposes, as they are unable to overcome the limitations of IoT systems while fulfilling the needs of healthcare applications. This thesis aims to address these issues by proposing an intelligent use of data and computing resources in IoT-based systems, which can lead to high-level performance and satisfy the stringent requirements. For this purpose, this thesis first delves into the state-of-the-art IoT-enabled healthcare systems proposed for in-home and in-hospital monitoring. The findings are analyzed and categorized into different domains from a user-centered perspective. 
    The selection of home-based applications focuses on the monitoring of the elderly, who require more remote care and support compared to other groups of people. In contrast, the hospital-based applications include the role of existing IoT in patient monitoring and hospital management systems. Then, the objectives and requirements of each domain are investigated and discussed. This thesis proposes personalized data analytic approaches to fulfill the requirements and meet the objectives of IoT-based healthcare systems. In this regard, a new computing architecture is introduced, using computing resources in different layers of IoT to provide a high level of availability and accuracy for healthcare services. This architecture allows the hierarchical partitioning of machine learning algorithms in these systems and enables adaptive system behavior with respect to the user's condition. In addition, personalized data fusion and modeling techniques are presented, exploiting multivariate and longitudinal data in IoT systems to improve the quality attributes of healthcare applications. First, a real-time missing-data-resilient decision-making technique is proposed for health monitoring systems. The technique tailors various data resources in IoT systems to accurately estimate health decisions despite missing data in the monitoring. Second, a personalized model is presented, enabling variation and event detection in long-term monitoring systems. The model evaluates the sleep quality of users according to their own historical data. Finally, the performance of the computing architecture and the techniques is evaluated in this thesis using two case studies. The first case study consists of real-time arrhythmia detection in electrocardiography signals collected from patients suffering from cardiovascular diseases. The second case study is continuous maternal health monitoring during pregnancy and postpartum. It includes a real human-subject trial carried out with twenty pregnant women over seven months.
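One common way to make a fused health decision resilient to missing sensor data, in the spirit of the first technique above, is to drop unavailable inputs and renormalize the remaining weights; the sketch below is a generic illustration, not the thesis's actual algorithm:

```python
def robust_decision(scores, weights, threshold=0.5):
    """Weighted decision that tolerates missing inputs: sensors reporting
    None are dropped and the remaining weights are renormalised, so the
    decision degrades gracefully instead of failing outright."""
    available = [(s, w) for s, w in zip(scores, weights) if s is not None]
    if not available:
        return None  # no evidence at all: abstain rather than guess
    total_w = sum(w for _, w in available)
    fused = sum(s * w for s, w in available) / total_w
    return fused >= threshold
```

Abstaining when every input is missing (rather than defaulting to "healthy") matters in mission-critical monitoring, where silence should be distinguishable from a negative finding.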