421 research outputs found

    On the determination of human affordances


    Vision-based Human Fall Detection Systems using Deep Learning: A Review

    Human falls are among the most critical health issues, especially for elderly and disabled people living alone. With the elderly population increasing steadily worldwide, human fall detection is becoming an important technique for assistive living, and deep learning and computer vision have been widely applied to this end. In this review article, we discuss state-of-the-art deep learning (DL)-based non-intrusive (vision-based) fall detection techniques. We also present a survey of fall detection benchmark datasets and, for clarity, briefly discuss the metrics used to evaluate the performance of fall detection systems. The article closes with future directions for vision-based human fall detection techniques.
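    The evaluation metrics referred to above are the standard binary classification measures. As a minimal illustrative sketch (not taken from any of the reviewed systems), the usual accuracy, precision, recall and F1 scores for a fall/no-fall classifier can be computed as follows:

```python
# Minimal sketch: standard metrics for a binary fall/no-fall classifier.
# Labels: 1 = fall, 0 = no fall. All names are illustrative only.

def fall_detection_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0   # a.k.a. sensitivity
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Example: six clips, two prediction errors (one miss, one false alarm).
print(fall_detection_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1]))
```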

    A protected discharge facility for the elderly: design and validation of a working proof-of-concept

    With the increasing share of the elderly population worldwide, the need for assistive technologies to support clinicians in monitoring their health conditions is becoming more and more relevant. As a quantitative tool, geriatricians recently proposed the notion of the frail elderly, which rapidly became a key element of clinical practice for estimating well-being in the aging population. The evaluation of frailty is commonly based on self-reported outcomes and occasional physician evaluations, and may therefore be biased. Another important aspect in the elderly population is hospitalization, a risk factor for the patient's well-being and a burden on public costs. Hospitalization is the main cause of functional decline, especially in older adults. Reducing hospitalization time may both improve elderly health conditions and reduce hospital costs. Furthermore, a gradual transition from a hospital environment to a home-like one can help wean the patient from hospitalization towards discharge to their home. The advent of new technologies allows for the design and implementation of smart environments to monitor elderly health status and activities, fulfilling all the requirements of patient health and safety. From these starting points, in this thesis I present data-driven methodologies to automatically evaluate one of the main aspects contributing to frailty estimation, i.e., the motility of the subject. First I describe a model of a protected discharge facility, realized in collaboration with and within E.O. Ospedali Galliera (Genoa, Italy), where patients can be monitored by a system of sensors while physicians and nurses have the opportunity to monitor them remotely. This sensorised facility is being developed to assist elderly users after they have been discharged from the hospital and before they are ready to go back home, with the perspective of coaching them towards a healthy lifestyle. The facility is equipped with a variety of sensors (vision, depth, ambient and wearable sensors, and medical devices), but in my thesis I primarily focus on RGB-D sensors and present visual computing tools to automatically estimate motility features. I provide an extensive system assessment carried out over three different experimental sessions with the help of both young and healthy aging volunteers. The results I present are in agreement with the assessment manually performed by physicians, showing the potential of my approach to complement current evaluation protocols.
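    The abstract does not detail how the motility features are computed; as a hypothetical sketch of the simplest such feature, an average movement speed can be derived from the per-frame 3D position of a torso joint reported by an RGB-D skeleton tracker (the joint choice and frame rate below are assumptions, not details from the thesis):

```python
# Hypothetical sketch: average walking speed from per-frame 3D positions
# of a torso joint, as delivered by an RGB-D skeleton tracker.
import numpy as np

def average_speed(spine_xyz, fps=30.0):
    """spine_xyz: (T, 3) array of a torso joint's 3D position in metres.
    Returns the mean speed in m/s over the sequence."""
    displacements = np.linalg.norm(np.diff(spine_xyz, axis=0), axis=1)
    return displacements.sum() / (len(displacements) / fps)

# Synthetic example: a subject moving at ~1 m/s along x, captured at 30 fps.
t = np.arange(90) / 30.0
track = np.stack([t, np.zeros_like(t), np.full_like(t, 2.5)], axis=1)
print(round(average_speed(track), 2))  # ~1.0
```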

    Learned Spatio-Temporal Texture Descriptors for RGB-D Human Action Recognition

    Since the recent arrival of the Kinect, action recognition with depth images has attracted wide attention from researchers, and various descriptors have been proposed, among which Local Binary Pattern (LBP) texture descriptors possess the property of appearance invariance. However, LBP and its variants are mostly hand-crafted, demanding strong prior knowledge from engineers and not being discriminative enough for recognition tasks. To this end, this paper develops compact spatio-temporal texture descriptors, i.e., 3D-compact LBP (3D-CLBP) and local depth patterns (3D-CLDP), for color and depth videos, in the light of compact binary face descriptor learning in face recognition. Extensive experiments performed on three standard datasets, 3D Online Action, MSR Action Pairs and MSR Daily Activity 3D, demonstrate that our method is superior to most comparable methods in terms of performance and can capture spatio-temporal texture cues in videos.
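    For context, the hand-crafted baseline that 3D-CLBP generalises is the classic 8-neighbour LBP code, sketched below. The paper's contribution replaces this fixed binarisation with learned compact projections, which this sketch does not implement:

```python
# Baseline sketch: the classic hand-crafted 8-neighbour LBP code.
# Each pixel is encoded by thresholding its 3x3 neighbourhood against
# the centre value. Not the paper's learned descriptor.
import numpy as np

def lbp_code(patch):
    """patch: 3x3 array. Returns the 8-bit LBP code of the centre pixel."""
    center = patch[1, 1]
    # Clockwise neighbour order starting from the top-left pixel.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(1 << i for i, n in enumerate(neighbours) if n >= center)

patch = np.array([[6, 5, 2],
                  [7, 5, 1],
                  [9, 3, 8]])
print(lbp_code(patch))  # 211: bits set where neighbour >= centre (5)
```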

    The Design and Evaluation of a Kinect-Based Postural Symmetry Assessment and Training System

    The increased risk of falling and the reduced ability to perform other daily physical activities in the elderly raise concerns about monitoring and correcting basic everyday movement. In this thesis, a Kinect-based system was designed to assess one of the most important factors in the balance control of the human body during the Sit-to-Stand (STS) movement: postural symmetry in the mediolateral direction. A symmetry score, calculated from data obtained with a Kinect RGB-D camera, was proposed to reflect the degree of mediolateral postural symmetry and was used to drive real-time audio feedback, designed in MAX/MSP, that helps users adjust themselves to perform the STS movement more symmetrically. The symmetry score was validated by computing the Spearman correlation coefficient against data obtained from an Inertial Measurement Unit (IMU) sensor, yielding an average value of 0.732. Five healthy adults, four males and one female, with normal balance abilities and no musculoskeletal disorders, participated in the experiment; the results showed that the low-cost Kinect-based system has the potential to train users to perform a more symmetrical movement in the mediolateral direction during STS.
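    The abstract does not give the exact formula of the symmetry score; the sketch below shows one plausible mediolateral symmetry measure from Kinect shoulder positions, together with the Spearman-correlation validation step against an IMU reference (the score definition and all names are assumptions, not the thesis' method):

```python
# Hedged sketch: a possible mediolateral symmetry score from Kinect
# skeleton joints, validated against an IMU reference by rank correlation.
import numpy as np
from scipy.stats import spearmanr

def ml_symmetry_score(left_x, right_x, mid_x):
    """Per-frame mediolateral symmetry in [0, 1]; 1 = perfectly symmetric.
    Inputs: x coordinates (mediolateral axis) of the left/right shoulders
    and of the spine midpoint from the Kinect skeleton."""
    dl = np.abs(left_x - mid_x)
    dr = np.abs(right_x - mid_x)
    return 1.0 - np.abs(dl - dr) / np.maximum(dl + dr, 1e-6)

# Validation step as described: rank-correlate the Kinect-derived score
# with an IMU-derived reference series (values here are synthetic).
kinect_scores = np.array([0.90, 0.80, 0.85, 0.60, 0.70])
imu_scores    = np.array([0.88, 0.82, 0.80, 0.55, 0.72])
rho, _ = spearmanr(kinect_scores, imu_scores)
print(round(rho, 3))
```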

    Towards a framework for socially interactive robots

    In recent decades, research in the field of social robotics has grown considerably. The development of different types of robots and their roles within society are gradually expanding. Robots endowed with social skills are intended for a range of applications; for example, as interactive teachers and educational assistants, to support diabetes management in children, to help elderly people with special needs, as interactive actors in the theatre, or even as assistants in hotels and shopping centres. The RSAIT research team has been working in several areas of robotics, in particular control architectures, robot exploration and navigation, machine learning and computer vision. The work presented in this research aims to add a new layer to the previous development: the human-robot interaction layer, which focuses on the social capabilities a robot should display when interacting with people, such as expressing and perceiving emotions, sustaining high-level dialogue, learning models of other agents, establishing and maintaining social relationships, using natural means of communication (gaze, gestures, etc.), showing distinctive personality and character, and learning social competences. In this doctoral thesis, we try to contribute our grain of sand to the basic questions that arise when we think about social robots: (1) How do we humans communicate with (or operate) social robots? and (2) How do social robots act with us? Along those lines, the work has been developed in two phases: in the first, we focused on exploring, from a practical point of view, several ways humans use to communicate with robots naturally; in the second, we further investigated how social robots should act with the user. Regarding the first phase, we developed three natural user interfaces intended to make interaction with social robots more natural. To test these interfaces, two applications with different uses were developed: guide robots and a humanoid robot control system for entertainment purposes. Working on those applications allowed us to endow our robots with some basic skills, such as navigation, robot-to-robot communication, and speech recognition and understanding capabilities. In the second phase, on the other hand, we focused on identifying and developing the basic behaviour modules that this kind of robot needs in order to be socially believable and trustworthy while acting as a social agent. A framework for socially interactive robots has been developed that allows robots to express different types of emotions and to display natural, human-like body language according to the task at hand and the environmental conditions. The validation of the different development stages of our social robots has been carried out through public performances. Exposing our robots to the public in those performances has become an essential tool to qualitatively measure the social acceptance of the prototypes we are developing. In the same way that robots need a physical body to interact with the environment and become intelligent, social robots need to participate socially in the real tasks for which they have been developed, in order to improve their sociability.

    Context-aware gestural interaction in the smart environments of the ubiquitous computing era

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Technology is becoming pervasive, and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have started to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interactions. Many issues are still open in this emerging domain; in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces. This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organization of context information. The framework is conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity and the number of gestures in taxonomies, and thereby improve usability. To validate this framework, a proof-of-concept has been developed: a prototype implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests have been conducted to assess the gesture recognition accuracy and the usability of the interfaces developed following the proposed framework. The results show that the method provides optimal gesture recognition from very different viewpoints, whilst the usability tests have yielded high scores. Further investigation of the context information has been performed, tackling the problem of user status, understood here as human activity, for which a technique based on an innovative application of electromyography is proposed. The tests show that the proposed technique achieves good activity recognition accuracy. The context is also treated as system status. In ubiquitous computing, the system can adopt different paradigms: wearable, environmental and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.
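    The abstract does not describe the view-invariant recognition method itself; one common ingredient of view invariance, shown in the speculative sketch below, is re-expressing a pointing (deictic) direction in a body-centred frame so that it no longer depends on the camera viewpoint (the joint names and frame construction are assumptions, not the thesis' method):

```python
# Speculative sketch: view normalisation of a deictic (pointing) gesture
# by re-expressing the forearm direction in a body-centred frame.
import numpy as np

def body_frame(l_shoulder, r_shoulder, spine, head):
    """Rows of the returned 3x3 matrix are the body axes expressed in
    camera coordinates: mediolateral (x), vertical (y), forward (z)."""
    x = r_shoulder - l_shoulder
    x = x / np.linalg.norm(x)
    up = head - spine                      # rough vertical direction
    z = np.cross(x, up)
    z = z / np.linalg.norm(z)              # forward axis
    y = np.cross(z, x)                     # re-orthogonalised vertical
    return np.stack([x, y, z])

def pointing_in_body_frame(elbow, wrist, frame):
    """Forearm direction re-expressed in the body frame, so the same
    physical gesture yields the same vector from any camera viewpoint."""
    v = wrist - elbow
    v = v / np.linalg.norm(v)
    return frame @ v
```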

    BERT for Activity Recognition Using Sequences of Skeleton Features and Data Augmentation with GAN

    Recently, the scientific community has placed great emphasis on the recognition of human activity, especially in the area of health and care for the elderly. There are already practical applications of activity and unusual-condition recognition that use body sensors such as wrist-worn devices or neck pendants. These relatively simple devices may be prone to errors, can be uncomfortable to wear, may be forgotten or not worn, and are unable to detect more subtle conditions such as incorrect postures. Therefore, other proposed methods are based on the use of images and videos to carry out human activity recognition, even in open spaces and with multiple people. However, the resulting increase in the size and complexity of image data requires the most recent advanced machine learning and deep learning techniques. This paper presents an attention-based deep learning approach to the recognition of activities from multiple frames. Feature extraction is performed by estimating the pose of the human skeleton, and classification is performed using a neural network based on Bidirectional Encoder Representations from Transformers (BERT). The algorithm was trained on the public UP-Fall dataset, generating more balanced artificial data with a Generative Adversarial Network (GAN), and evaluated with real data, outperforming other activity recognition methods on the same dataset. This research was supported in part by the Chilean Research and Development Agency (ANID) under Project FONDECYT 1191188, the National University of Distance Education under Projects 2021V/-TAJOV/00 and OPTIVAC 096-034091 2021V/PUNED/008, and the Ministry of Science and Innovation of Spain under Project PID2019-108377RB-C32.
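    As a minimal sketch of the described pipeline (per-frame skeleton keypoints classified by a BERT-style bidirectional transformer encoder with a [CLS] token), the following PyTorch model illustrates the idea; the layer sizes, 17-keypoint layout and all hyperparameters are assumptions rather than the paper's configuration, and the GAN-based augmentation is omitted:

```python
# Minimal sketch: BERT-style (bidirectional transformer encoder) classifier
# over sequences of 2D skeleton keypoints. Hyperparameters are assumptions.
import torch
import torch.nn as nn

class SkeletonBERT(nn.Module):
    def __init__(self, n_joints=17, dim=128, n_classes=12, max_len=64):
        super().__init__()
        self.embed = nn.Linear(n_joints * 2, dim)        # (x, y) per joint
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # [CLS] token
        self.pos = nn.Parameter(torch.zeros(1, max_len + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, poses):               # poses: (B, T, n_joints * 2)
        x = self.embed(poses)
        cls = self.cls.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1)      # prepend [CLS]
        x = x + self.pos[:, : x.size(1)]    # learned positional encoding
        x = self.encoder(x)                 # bidirectional self-attention
        return self.head(x[:, 0])           # classify from [CLS] state

model = SkeletonBERT()
logits = model(torch.randn(8, 64, 34))      # 8 clips, 64 frames each
print(logits.shape)                         # torch.Size([8, 12])
```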