
    A Survey of Applications and Human Motion Recognition with Microsoft Kinect

    Microsoft Kinect, a low-cost motion sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands without any other peripheral equipment. As such, it has attracted intense interest in research and development on Kinect technology. In this paper, we present a comprehensive survey of Kinect applications and of the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review the use of Kinect technology in a variety of areas, including healthcare, education and performing arts, robotics, sign language recognition, retail services, workplace safety training, and 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor, together with the depth sensing technologies they use, and review the literature on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers investigating better methods for human motion recognition as well as lower-level computer vision tasks such as segmentation, object detection, and human pose estimation.
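    Many of the motion recognition techniques surveyed operate on per-frame joint angles computed from the tracked skeleton. The sketch below is purely illustrative and not taken from the survey: it computes the angle at one joint from three 3D joint positions; the joint choice and coordinate values are hypothetical.

        # Illustrative sketch: a basic skeletal feature used by many Kinect motion
        # recognition methods -- the angle at a joint formed by its two adjacent
        # segments (e.g., shoulder-elbow-wrist from the Kinect skeleton).
        import numpy as np

        def joint_angle(parent, joint, child):
            """Angle in degrees at `joint` between the segments to `parent` and `child`."""
            a = np.asarray(parent) - np.asarray(joint)
            b = np.asarray(child) - np.asarray(joint)
            cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
            return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

        # Example with made-up camera-space positions in metres.
        print(joint_angle([0.20, 0.50, 2.0], [0.30, 0.30, 2.0], [0.35, 0.10, 1.9]))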

    Real-time Assessment and Visual Feedback for Patient Rehabilitation Using Inertial Sensors

    The need for rehabilitation exercises has been increasing continuously and is projected to keep growing, driven by an aging population, recovery from surgery, injury, and illness, and people's living and working lifestyles. This research aims to tackle one of the most critical issues faced by exercise administrators: adherence. Non-adherence to home exercise programs in particular has been a significant problem, prompting extensive research on the psychology of the people involved. In this research, a solution is provided to increase adherence to such programs through automated real-time assessment with constant visual feedback, providing a game-like environment and recording the sessions for analysis. Inertial sensors, namely an accelerometer and a gyroscope, have been used to implement a rule-based framework for human activity recognition that measures the ankle joint angle. The system is also secure, as it stores only the recorded data and an avatar that can be live-streamed or replayed for treatment analysis, saving time and cost. The results obtained from testing on four healthy human subjects show that, with proper choice of rule parameters, both the quality and quantity of the exercises can be assessed in real time.
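    The abstract does not give the filtering details. The following is a minimal sketch, assuming a standard complementary filter, of how a joint angle could be estimated from accelerometer and gyroscope samples and checked against a simple rule; the sample period, filter weight, and range threshold are assumptions.

        # Hypothetical sketch: fuse accelerometer tilt and integrated gyroscope rate
        # into a joint angle, then apply a rule to judge one exercise repetition.
        import math

        def complementary_filter(accel_samples, gyro_samples, dt=0.02, alpha=0.98):
            """accel_samples: (ax, ay, az) in g; gyro_samples: angular rate in deg/s."""
            angle, angles = 0.0, []
            for (ax, ay, az), rate in zip(accel_samples, gyro_samples):
                accel_angle = math.degrees(math.atan2(ay, az))      # tilt from gravity
                angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
                angles.append(angle)
            return angles

        def repetition_ok(angles, min_range=30.0):
            """Rule: the joint must sweep at least `min_range` degrees in the repetition."""
            return (max(angles) - min(angles)) >= min_range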

    Recognizing specific errors in human physical exercise performance with Microsoft Kinect

    The automatic assessment of human physical activity performance is useful for a number of beneficial systems, including in-home rehabilitation monitoring systems and Reactive Virtual Trainers (RVTs). RVTs have the potential to replace expensive personal trainers, promote healthy activity, and help teach correct form to prevent injury. Additionally, unobtrusive sensor technologies for human tracking, especially those that incorporate depth sensing such as Microsoft Kinect, have become effective, affordable, and commonplace. The work of this thesis contributes towards the development of RVT systems by using RGB-D and tracked skeletal data collected with Microsoft Kinect to assess human performance of physical exercises. I collected data from eight volunteers performing three exercises: jumping jacks, arm circles, and arm curls. I labeled each exercise repetition as either correct or as exhibiting one or more of a set of predefined erroneous forms. I trained a statistical model on the labeled samples and developed a system that recognizes specific structural and temporal errors in a test set of unlabeled samples. I obtained classification accuracies for multiple implementations and assessed the effectiveness of various features of the skeletal data as well as various prediction models.
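    As a rough illustration of this kind of per-repetition classification (not the thesis pipeline itself), the sketch below trains a support vector classifier on fixed-length feature vectors summarizing each repetition; the feature dimensions and error labels are invented for the example.

        # Hypothetical sketch: classify repetitions as correct or as a predefined error
        # from summary features (e.g., statistics of joint angles over the repetition).
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        X = np.random.rand(80, 12)                                     # placeholder features
        y = np.random.choice(["correct", "elbows_bent", "arms_too_low"], size=80)

        clf = SVC(kernel="rbf", C=1.0)
        print(cross_val_score(clf, X, y, cv=5).mean())                 # cross-validated accuracy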

    Motion and emotion estimation for robotic autism intervention.

    Robots have recently emerged as a novel approach to treating autism spectrum disorder (ASD). A robot can be programmed to interact with children with ASD in order to reinforce positive social skills in a non-threatening environment. In prior work, robots were employed in interaction sessions with children with ASD, but their sensory and learning abilities were limited, and a human therapist was heavily involved in “puppeteering” the robot. The objective of this work is to create a next-generation autism robot that includes several interactive and decision-making capabilities not found in prior technology. Two of the main features this robot would need are the ability to quantitatively estimate the patient’s motion performance and to correctly classify their emotions. This would allow for the potential diagnosis of autism and help patients with autism practice their skills. Therefore, in this thesis, we engineered components for a human-robot interaction system and validated them in experiments with the robots Baxter and Zeno, the sensors Empatica E4 and Kinect, and the open-source pose estimation software OpenPose. The Empatica E4 wristband is a wearable device that collects physiological measurements in real time from a test subject. Measurements were collected from patients with ASD during human-robot interaction activities. Using these data and attentiveness labels from a trained coder, a classifier was developed that predicts the patient’s level of engagement. The classifier outputs this prediction to a robot or supervising adult, supporting decisions during intervention activities that keep the attention of the patient with autism. The CMU Perceptual Computing Lab’s OpenPose software package enables body, face, and hand tracking using an RGB camera (e.g., a web camera) or an RGB-D camera (e.g., Microsoft Kinect). Integrating OpenPose with a robot allows the robot to collect information on user motion intent and perform motion imitation. In this work, we developed such a teleoperation interface with the Baxter robot. Finally, a novel algorithm, called Segment-based Online Dynamic Time Warping (SoDTW), and an associated metric are proposed to help in the diagnosis of ASD. Social Robot Zeno, a childlike robot developed by Hanson Robotics, was used to test this algorithm and metric. Using the proposed algorithm, it is possible to classify a subject’s motion into different speeds or to use the resulting SoDTW score to evaluate the subject’s abilities.
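    The thesis proposes a segment-based, online variant of dynamic time warping whose details are not given in the abstract. The sketch below shows only the classic DTW distance between two one-dimensional motion signals, to convey the kind of score being computed; the example sequences are arbitrary.

        # Illustrative sketch of classic DTW (not SoDTW): distance between two
        # motion signals, e.g., a joint angle sampled over time.
        import numpy as np

        def dtw_distance(x, y):
            n, m = len(x), len(y)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(x[i - 1] - y[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Lower scores mean the subject's motion matches the reference more closely.
        print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 2, 1, 0]))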

    Occlusion-Aware Multi-View Reconstruction of Articulated Objects for Manipulation

    The goal of this research is to develop algorithms that use multiple views to automatically recover complete 3D models of articulated objects in unstructured environments and thereby enable a robotic system to further manipulate those objects. First, an algorithm called Procrustes-Lo-RANSAC (PLR) is presented. Structure-from-motion techniques are used to capture 3D point cloud models of an articulated object in two different configurations. Procrustes analysis, combined with a locally optimized RANSAC sampling strategy, facilitates a straightforward geometric approach to recovering the joint axes, as well as classifying them automatically as either revolute or prismatic. The algorithm does not require prior knowledge of the object, nor does it make any assumptions about the planarity of the object or scene. Second, with such an articulated model, a robotic system is able to manipulate the object along its joint axes at a specified grasp point in order to exercise its degrees of freedom, or to move its end effector to a particular position even if that point is not visible in the current view. This is one of the main advantages of the occlusion-aware approach: because the models capture all sides of the object, the robot has knowledge of parts of the object that are not visible in the current view. Experiments with a PUMA 500 robotic arm demonstrate the effectiveness of the approach on a variety of real-world objects containing both revolute and prismatic joints. Third, we improve the proposed approach by using an RGB-D sensor (Microsoft Kinect) that yields a depth value for each pixel directly, rather than requiring correspondences to establish depth. The KinectFusion algorithm is applied to produce a single high-quality, geometrically accurate 3D model from which the rigid links of the object are segmented and aligned, allowing the joint axes to be estimated using the geometric approach. The improved algorithm does not require artificial markers attached to objects, yields much denser 3D models, and reduces the computation time.
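    The Procrustes step at the core of PLR amounts to recovering the rigid transform that aligns corresponding 3D points of a link between the two object configurations. A minimal sketch of that alignment (the standard Kabsch solution) is shown below; the Lo-RANSAC sampling and joint-axis classification stages are omitted.

        # Illustrative sketch: rigid alignment of corresponding 3D points,
        # the building block behind Procrustes-based link alignment.
        import numpy as np

        def rigid_align(P, Q):
            """P, Q: (N, 3) corresponding points. Returns R, t with Q ~ P @ R.T + t."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = cQ - R @ cP
            return R, t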

    Design and Development of ReMoVES Platform for Motion and Cognitive Rehabilitation

    Exergames have recently gained popularity and scientific credibility in the field of assistive computing technology for human well-being. The ReMoVES platform, developed by the author, provides motor and cognitive exergames to be performed by elderly or disabled people in conjunction with traditional rehabilitation. Data acquisition during the exercises takes place through Microsoft Kinect, Leap Motion, and a touchscreen monitor. The therapist is provided with feedback on patients' activity over time in order to assess their weaknesses and correct inaccurate movement habits. This work describes the technical characteristics of the ReMoVES platform, designed to be used at multiple locations such as rehabilitation centers or the patient's home while relying on a centralized data collection server. The system includes 15 exergames, developed from scratch by the author, with the aim of promoting motor and cognitive activity through patient entertainment. The ReMoVES platform differs from similar solutions in its automatic data processing features that support the therapist. Three methods are presented: one based on classic data analysis, one on Support Vector Machine classification, and one on Recurrent Neural Networks. The results show that it is possible to distinguish patient gaming sessions with adequate performance from those with incorrect movements, with an accuracy of up to 92%. The system has been used with real patients, and a dataset has been made available to the scientific community. The aim is to encourage the dissemination of such data to lay the foundations for comparison between similar studies.
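    The abstract does not describe the network architecture. As a hedged sketch, a small recurrent classifier of the kind mentioned could label a session, represented as a fixed-length sequence of per-frame features, as adequate or as showing incorrect movement; all shapes, hyperparameters, and data below are placeholders, not the ReMoVES implementation.

        # Hypothetical sketch: an LSTM that classifies an exergame session from a
        # sequence of per-frame features (e.g., Kinect joint coordinates).
        import numpy as np
        import tensorflow as tf

        T, F = 200, 12                                     # frames per session, features per frame
        X = np.random.rand(64, T, F).astype("float32")     # placeholder sessions
        y = np.random.randint(0, 2, size=64)               # 1 = adequate, 0 = incorrect movement

        model = tf.keras.Sequential([
            tf.keras.Input(shape=(T, F)),
            tf.keras.layers.LSTM(32),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        model.fit(X, y, epochs=5, batch_size=16, verbose=0)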