
    Applying vision-based pose estimation in a telerehabilitation application

    In this paper, ExerCam, an augmented reality mirror application that uses vision-based human pose detection, is presented. ExerCam needs no special controllers or sensors, as it works with a simple RGB camera (webcam type), which makes the application fully accessible and low cost. The application also includes a web-based system for managing patients, tasks, and games, with which a therapist can manage their patients in a ubiquitous and fully remote way. The article concludes that the application is viable as a telerehabilitation tool: it provides a task mode for calculating range of motion (ROM) and a game mode to encourage patients to improve their performance during therapy, with positive results obtained in this respect. This work has been partially supported by the Spanish State Research Agency under project grant AriSe2: FINe (Ref. PID2020-116329GB-C22/AEI/10.13039/501100011033), the Spanish Government project “GenoVision” (Ref. BFU2017-88300-C2-2-R), the “Research Programme for Groups of Scientific Excellence in the Region of Murcia” of the Seneca Foundation (Agency for Science and Technology in the Region of Murcia, 19895/GERM/15), and HORECOV2 (Ref. 2I20SAE00082_HORECOV2) of the Fondo Europeo de Desarrollo Regional (FEDER).
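    As a rough sketch of how such a ROM task might work, the snippet below derives an elbow flexion angle from three 2D keypoints and tracks the spread of angles over a session. The keypoint names, coordinates, and ROM definition are illustrative assumptions, not details taken from ExerCam.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 2D keypoints a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to avoid domain errors from floating-point noise.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Hypothetical keypoints (pixel coordinates) from an RGB-camera pose estimator.
shoulder, elbow, wrist = (320, 180), (340, 260), (300, 330)
flexion = joint_angle(shoulder, elbow, wrist)

# ROM over a session: the spread between the extreme flexion angles observed.
angles = [150.0, 142.5, 97.0, 63.2, 88.4, flexion]
rom = max(angles) - min(angles)
print(f"current elbow angle: {flexion:.1f}°, session ROM: {rom:.1f}°")
```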

    Multimodal Data Analysis of Dyadic Interactions for an Automated Feedback System Supporting Parent Implementation of Pivotal Response Treatment

    Parents fulfill a pivotal role in the early childhood development of social and communication skills. In children with autism, the development of these skills can be delayed. Applied behavior analysis (ABA) techniques have been created to aid in skill acquisition. Among these, pivotal response treatment (PRT) has been empirically shown to foster improvements. Research into PRT implementation has also shown that parents can be trained to be effective interventionists for their children. The current difficulty in PRT training is how to disseminate training to the parents who need it, and how to support and motivate practitioners after training. Evaluation of a parent’s fidelity of implementation is often undertaken using video probes that depict the dyadic interaction between parent and child during PRT sessions. These videos are time consuming for clinicians to process and often result in only minimal feedback for the parents. Current trends in technology could be utilized to alleviate the manual cost of extracting data from the videos, affording greater opportunities for clinician-created feedback as well as automated assessments. The naturalistic context of the video probes, along with the dependence on ubiquitous recording devices, creates a difficult scenario for classification tasks. The domain of the PRT video probes can be expected to have high levels of both aleatory and epistemic uncertainty. Addressing these challenges requires examination of the multimodal data along with implementation and evaluation of classification algorithms. This is explored through the use of a new dataset of PRT videos. The relationship between the parent and the clinician is important: the clinician can provide support and help build self-efficacy, in addition to providing knowledge and modeling of treatment procedures. Facilitating this relationship alongside automated feedback not only provides the opportunity to present expert feedback to the parent, but also allows the clinician to aid in personalizing the classification models. By utilizing a human-in-the-loop framework, clinicians can help address the uncertainty in the classification models by providing additional labeled samples. This allows the system to improve classification and provides a person-centered approach to extracting multimodal data from PRT video probes. (Doctoral dissertation, Computer Science, 201)
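    A minimal sketch of the human-in-the-loop idea described above, assuming a generic classifier and synthetic features: the model repeatedly flags the video segments it is least confident about, a clinician supplies labels for them, and the model is refit. All names and data here are hypothetical stand-ins, not the dissertation's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in features for labeled and unlabeled PRT video segments (synthetic).
X_train = rng.normal(size=(40, 8))
y_train = rng.integers(0, 2, 40)
X_pool = rng.normal(size=(200, 8))

model = LogisticRegression().fit(X_train, y_train)

for _ in range(3):
    # Uncertainty sampling: pick the segments the model is least sure about.
    proba = model.predict_proba(X_pool)[:, 1]
    query = np.argsort(np.abs(proba - 0.5))[:5]
    # In a real system the clinician would label these; we simulate labels here.
    clinician_labels = rng.integers(0, 2, len(query))
    X_train = np.vstack([X_train, X_pool[query]])
    y_train = np.concatenate([y_train, clinician_labels])
    X_pool = np.delete(X_pool, query, axis=0)
    model = LogisticRegression().fit(X_train, y_train)
```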

    Intelligent Sensors for Human Motion Analysis

    The book, "Intelligent Sensors for Human Motion Analysis," contains 17 articles published in the Special Issue of the Sensors journal. These articles deal with many aspects related to the analysis of human movement. New techniques and methods for pose estimation, gait recognition, and fall detection have been proposed and verified. Some of them will trigger further research, and some may become the backbone of commercial systems

    Automated early prediction of cerebral palsy: interpretable pose-based assessment for the identification of abnormal infant movements

    Cerebral Palsy (CP) is currently the most common chronic motor disability occurring in infants, affecting an estimated 1 in every 400 babies born in the UK each year. Techniques that can lead to an early diagnosis of CP have therefore been an active area of research, with some very promising results using tools such as the General Movements Assessment (GMA). Using video recordings of infant motor activity, assessors can classify an infant’s neurodevelopmental status based upon specific characteristics of the observed infant movement. However, these assessments are heavily dependent upon the availability of highly skilled assessors. As such, we explore the feasibility of automated prediction of CP using machine learning techniques to analyse infant motion. We examine the viability of several new pose-based features for the analysis and classification of infant body movement from video footage. We extensively evaluate the effectiveness of the extracted features using several proposed classification frameworks, and also reimplement the leading methods from the literature for direct comparison using shared datasets, establishing a new state-of-the-art. We introduce the RVI-38 video dataset, which we use to further inform the design and establish the robustness of our proposed complementary pose-based motion features. Finally, given the importance of explainable AI for clinical applications, we propose a new classification framework that also incorporates a visualisation module to further aid interpretability. Our proposed pose-based framework segments extracted features to detect movement abnormalities spatiotemporally, allowing us to identify and highlight body parts exhibiting abnormal movement characteristics and subsequently provide intuitive feedback to clinicians. We suggest that our novel pose-based methods offer significant benefits over other approaches in both the analysis of infant motion and the explainability of the associated data. Our engineered features, which map directly to the assessment criteria in the clinical guidelines, demonstrate state-of-the-art performance across multiple datasets, and our feature extraction methods and associated visualisations significantly improve model interpretability.
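    To make the idea of pose-based motion features concrete, here is a minimal sketch that turns per-frame joint positions into simple per-joint movement statistics and flags joints with atypical variability. These particular features and thresholds are illustrative assumptions, not the engineered features of the paper.

```python
import numpy as np

def motion_features(poses, fps=30.0):
    """poses: (T, J, 2) array of per-frame 2D joint positions.
    Returns simple per-joint movement statistics (illustrative only)."""
    vel = np.diff(poses, axis=0) * fps            # (T-1, J, 2) velocities
    speed = np.linalg.norm(vel, axis=-1)          # (T-1, J) joint speeds
    return {
        "mean_speed": speed.mean(axis=0),         # average motion per joint
        "speed_var": speed.var(axis=0),           # variability of motion
        "stillness": (speed < 1.0).mean(axis=0),  # fraction of near-still frames
    }

# Hypothetical clip: 300 frames, 17 joints, 2D coordinates (random walk).
poses = np.cumsum(np.random.default_rng(1).normal(size=(300, 17, 2)), axis=0)
feats = motion_features(poses)

# Highlighting body parts: flag joints whose variability deviates from the rest.
z = (feats["speed_var"] - feats["speed_var"].mean()) / feats["speed_var"].std()
print("atypical joints:", np.where(np.abs(z) > 2.0)[0])
```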

    Video-based sports activity recognition for children

    Large-scale action recognition datasets contain more instances of adults than children, and models trained on these datasets may not perform well for children. In this study, we test whether current state-of-the-art deep learning models exhibit a systemic bias when decoding the activity performed by an adult versus a child. We collected a sports activity recognition dataset with child and adult labels, and fine-tuned a state-of-the-art action recognition classifier on two different segments of our dataset, containing only children or only adults. Our results show that the cross-condition generalization performance of the resulting networks is not symmetric: the child-specific segment is harder to generalize than the adult-specific segment. The dataset and the code are made publicly available.
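    A hedged sketch of the experimental recipe: fine-tune a pretrained 3D-CNN action classifier on one segment (children or adults) and evaluate it on the other. The model choice (torchvision's r3d_18), class count, and data interface below are assumptions, as the abstract does not specify them.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

NUM_SPORTS = 10  # hypothetical number of sport classes in the dataset

# Pretrained 3D-CNN action classifier; swap the head for our label set.
model = r3d_18(weights=R3D_18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_SPORTS)

optim = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def finetune_step(clips, labels):
    """clips: (B, 3, T, H, W) video tensors from ONE segment (children or adults)."""
    optim.zero_grad()
    loss = loss_fn(model(clips), labels)
    loss.backward()
    optim.step()
    return loss.item()

@torch.no_grad()
def accuracy(clips, labels):
    """Evaluate on the held-out condition to probe cross-condition generalization."""
    return (model(clips).argmax(dim=1) == labels).float().mean().item()
```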

    Engagement recognition within robot-assisted autism therapy

    Autism is a neurodevelopmental condition typically diagnosed in early childhood, characterized by challenges in using language, understanding abstract concepts, communicating effectively, and building social relationships. The utilization of social robots in autism therapy represents a significant area of research, and an increasing number of studies explore the use of social robots as mediators between therapists and children diagnosed with autism. Assessing a child’s engagement can enhance the effectiveness of robot-assisted interventions while also providing an objective metric for later analysis. The thesis begins with a comprehensive multiple-session study involving 11 children diagnosed with autism and Attention Deficit Hyperactivity Disorder (ADHD). This study employs multi-purpose robot activities designed to target various aspects of autism, and yields both quantitative and qualitative findings based on four behavioural measures obtained from video recordings of the sessions. Statistical analysis reveals that adaptive therapy provides a longer engagement duration than non-adaptive therapy sessions. Engagement is a key element in evaluating autism therapy sessions, as it is needed for acquiring knowledge and practising the new skills necessary for social and cognitive development. With the aim of creating an engagement recognition model, this research work also involves the manual labelling of the collected videos to generate the QAMQOR dataset. This dataset comprises 194 therapy sessions, spanning over 48 hours of video recordings, and includes demographic information for 34 children diagnosed with ASD; videos of 23 children with autism were collected from previous records. The QAMQOR dataset was evaluated using standard machine learning and deep learning approaches. However, the development of an accurate engagement recognition model remains challenging due to the unique personal characteristics of each individual with autism. To address this challenge and improve recognition accuracy, this PhD work also explores a data-driven model using transfer learning techniques. Our study contributes to addressing the challenges machine learning faces in recognizing engagement among children with autism, such as diverse engagement activities, multimodal raw data, and the resources and time required for data collection. This research contributes to the growing field of social robots in autism therapy by clarifying the importance of adaptive therapy and providing valuable insights into engagement recognition. The findings serve as a foundation for further advancements in personalized and effective robot-assisted interventions for individuals with autism.
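    One plausible shape for the transfer-learning approach mentioned above: freeze a generically pretrained backbone and adapt a small per-child classification head on that child's labeled data. The backbone, head size, and input format below are assumptions for illustration, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

# Pretrained backbone used as a frozen feature extractor (a generic stand-in,
# not the QAMQOR pipeline itself).
backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

# Small per-child head adapted with that child's labeled frames.
head = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 2))
optim = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def adapt_step(frames, engaged):
    """frames: (B, 3, 224, 224) images; engaged: (B,) binary engagement labels."""
    with torch.no_grad():
        feats = backbone(frames)      # reuse generic visual features
    optim.zero_grad()
    loss = loss_fn(head(feats), engaged)
    loss.backward()
    optim.step()
    return loss.item()
```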

    Serious Games and Mixed Reality Applications for Healthcare

    Virtual reality (VR) and augmented reality (AR) have long histories in the healthcare sector, offering the opportunity to develop a wide range of tools and applications aimed at improving the quality of care and the efficiency of services for professionals and patients alike. The best-known examples of VR–AR applications in the healthcare domain include surgical planning and medical training by means of simulation technologies. Techniques used in surgical simulation have also been applied to cognitive and motor rehabilitation, pain management, and patient and professional education. Serious games are games whose main goal is not entertainment but a serious purpose, ranging from the acquisition of knowledge to interactive training. These games are attracting growing attention in healthcare because of their several benefits: motivation, interactivity, adaptation to user competence level, flexibility in time, repeatability, and continuous feedback. Recently, healthcare has also become one of the biggest adopters of mixed reality (MR), which merges real and virtual content to generate novel environments where physical and digital objects not only coexist but are also capable of interacting with each other in real time, encompassing both VR and AR applications. This Special Issue aims to gather and publish original scientific contributions exploring opportunities and addressing challenges in both the theoretical and applied aspects of VR–AR and MR applications in healthcare.

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model deals with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in a setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal to be considered.
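    As a toy numerical illustration of evaluating how effective the recognition phase is, the snippet below simulates true attentional states, the gaze cue the robot recognizes, and the robot's final attribution, then reports lens-model-style agreement statistics. The numbers and variable names are invented for illustration and are not the paper's actual evaluation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-interaction records: the human's true attentional state,
# the gaze cue recognized by the robot, and the robot's final attribution.
true_state = rng.integers(0, 2, 500)                        # 1 = attending
recognized = np.where(rng.random(500) < 0.85,               # 85% correct cue reads
                      true_state, 1 - true_state)
attribution = np.where(rng.random(500) < 0.90,              # attribution follows cue
                       recognized, 1 - recognized)

# Lens-model style quantities: how well recognition tracks the expressed
# signal, and how well the final attribution tracks the true state.
recognition_acc = (recognized == true_state).mean()
functional_validity = np.corrcoef(true_state, attribution)[0, 1]

print(f"gaze recognition accuracy: {recognition_acc:.2f}")
print(f"state-attribution correlation: {functional_validity:.2f}")
```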