
    Automatic Labeled LiDAR Data Generation based on Precise Human Model

    Full text link
    Following improvements in deep neural networks, state-of-the-art networks have been proposed for human recognition using point clouds captured by LiDAR. However, the performance of these networks strongly depends on the training data. A key issue in collecting training data is labeling: human annotation is necessary to obtain ground-truth labels, but it incurs huge costs. We therefore propose an automatic labeled-data generation pipeline in which any parameters or data-generation environments can be changed. Our approach uses a human model named Dhaiba and a background of Miraikan, and consequently generates realistic artificial data. We present 500k+ data samples generated by the proposed pipeline. This paper also describes the specification of the pipeline and the details of the data, with evaluations of various approaches. Comment: Accepted at ICRA201
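    The abstract's core idea is that labels come for free when the scene is generated rather than captured: every synthetic point is already known to belong to the human model or the background. The paper gives no implementation details, so the following is only a minimal sketch of that labeling step under assumed inputs (arrays of human-model points and background points); the function name and shapes are illustrative, not from the paper.

```python
import numpy as np

def label_synthetic_scan(human_points, background_points):
    """Merge synthetic human-model points with background points and
    emit per-point labels (1 = human, 0 = background). The labels are
    known by construction because the scene was generated, not annotated."""
    cloud = np.vstack([human_points, background_points])
    labels = np.concatenate([np.ones(len(human_points), dtype=int),
                             np.zeros(len(background_points), dtype=int)])
    return cloud, labels

# toy scan: 100 human-model points, 400 background points
cloud, labels = label_synthetic_scan(np.zeros((100, 3)), np.ones((400, 3)))
print(cloud.shape, int(labels.sum()))  # (500, 3) 100
```

    Because labeling is a by-product of generation, the pipeline can be re-run with any changed parameters or environments and still produce fully labeled data at no extra annotation cost.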

    Multi-set canonical correlation analysis for 3D abnormal gait behaviour recognition based on virtual sample generation

    Get PDF
    Small sample datasets and two-dimensional (2D) approaches are challenges for vision-based abnormal gait behaviour recognition (AGBR). The lack of a three-dimensional (3D) structure of the human body limits 2D-based methods in abnormal-gait virtual sample generation (VSG). In this paper, 3D AGBR based on VSG and multi-set canonical correlation analysis (3D-AGRBMCCA) is proposed. First, unstructured point cloud data of gait are obtained using a structured light sensor. A 3D parametric body model is then deformed to fit the point cloud data in both shape and posture. The features of the point cloud data are thereby converted to a high-level structured representation of the body. The parametric body model is used for VSG based on the estimated body pose and shape data: symmetry virtual samples, pose-perturbation virtual samples, and various body-shape virtual samples with multiple views are generated to extend the training samples. The spatial-temporal features of the abnormal gait behaviour from different views, body pose and shape parameters are then extracted by a convolutional neural network based Long Short-Term Memory network. These are projected onto a uniform pattern space using deep-learning-based multi-set canonical correlation analysis. Experiments on four publicly available datasets show the proposed system performs well under various conditions.
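    Two of the virtual-sample types the abstract names, symmetry samples and pose-perturbation samples, can be sketched concretely. The paper generates them from a fitted parametric body model; the simplified version below operates directly on joint positions instead, so the mirroring plane, noise scale, and array shapes are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def virtual_gait_samples(joints, n_perturb=3, sigma=0.01, seed=0):
    """joints: (T, J, 3) gait sequence of T frames with J 3D joints.
    Returns one symmetry sample (mirrored across the sagittal x=0 plane)
    plus n_perturb pose-perturbation samples (small Gaussian jitter)."""
    rng = np.random.default_rng(seed)
    joints = np.asarray(joints, dtype=float)
    mirrored = joints * np.array([-1.0, 1.0, 1.0])   # flip the x-axis
    perturbed = [joints + rng.normal(0.0, sigma, joints.shape)
                 for _ in range(n_perturb)]
    return [mirrored] + perturbed

# toy sequence: 10 frames, 25 joints -> 1 symmetry + 3 perturbation samples
samples = virtual_gait_samples(np.zeros((10, 25, 3)))
print(len(samples))  # 4
```

    Each generated sequence keeps the original's temporal length, so it can be fed to the same sequence model as the real training samples.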

    Abnormal gait detection by means of LSTM

    Get PDF
    This article presents a system for detecting three types of abnormal walk patterns caused by neurological diseases: Parkinsonian gait, hemiplegic gait, and spastic diplegic gait. A Kinect sensor is used to extract the Skeleton of a person during their walk; four types of feature bases are then calculated, each generating a different sequence from the 25 joint points the Skeleton provides. For each type of calculated base, a recurrent neural network (RNN) is trained, specifically a Long Short-Term Memory (LSTM) network. In addition, a graphical user interface allows data acquisition, training, and testing of the trained networks. Of the four trained networks, 98.1% accuracy is obtained with the database calculated from the distance of each point provided by the Skeleton to the Hip-Center point.
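    The best-performing feature base described above, the distance of every Skeleton joint to the Hip-Center, is simple to compute. A minimal sketch follows, assuming the Hip-Center is joint index 0 and frames are arrays of 25 (x, y, z) joints; the function name and the joint ordering are illustrative assumptions, not taken from the article.

```python
import numpy as np

def hip_center_distances(skeleton_frames):
    """For each frame of 25 (x, y, z) joints, return the Euclidean
    distance of every joint to the Hip-Center joint (assumed index 0).
    Output shape (T, 25): one distance sequence per joint, suitable
    as input to a sequence model such as an LSTM."""
    frames = np.asarray(skeleton_frames, dtype=float)  # (T, 25, 3)
    hip = frames[:, 0:1, :]                            # Hip-Center per frame
    return np.linalg.norm(frames - hip, axis=2)        # (T, 25)

# toy walk sequence: 4 frames, 25 joints
seq = np.random.default_rng(0).normal(size=(4, 25, 3))
feats = hip_center_distances(seq)
print(feats.shape)  # (4, 25)
```

    A distance-based representation of this kind is invariant to where the subject stands in the room, which plausibly explains why it outperformed the other bases.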

    Markerless Gait Classification Employing 3D IR-UWB Physiological Motion Sensing

    Get PDF
    Human gait refers to the propulsion achieved by the effort of human limbs, a reflex progression resulting from the rhythmic reciprocal bursts of flexor and extensor activity. Several quantitative models are followed by health professionals to diagnose gait abnormality. Marker-based gait quantification is considered a gold standard by the research and health communities: it reconstructs motion in 3D and provides parameters to measure gait. However, it is an expensive and intrusive technique, susceptible to soft-tissue artefacts, incorrect marker positioning, and skin-sensitivity problems. Hence, markerless, swiftly deployable, non-intrusive, camera-less prototypes would be a game-changing possibility, and an example is proposed here. This paper illustrates a 3D gait motion analyser employing impulse radio ultra-wideband (IR-UWB) wireless technology. The prototype can measure 3D motion and determine quantitative parameters with respect to anatomical reference planes. Knee angles have been calculated from the gait by applying vector algebra. Simultaneously, the model has been corroborated with a popular markerless camera-based 3D motion capturing system, the Kinect sensor. Bland and Altman (B&A) statistics have been applied to the results of the proposed prototype and the Kinect sensor to verify measurement agreement. Finally, the proposed prototype has been combined with popular supervised machine-learning techniques such as k-nearest neighbour (kNN) and support vector machine (SVM), and the deep-learning technique of a deep neural multilayer perceptron (DMLP) network, to automatically recognize gait abnormalities, with promising results presented.
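    The knee-angle computation mentioned in the abstract is standard vector algebra: the angle at the knee between the thigh vector (knee to hip) and the shank vector (knee to ankle). The paper does not publish its exact formulation, so the following is only a generic sketch of that calculation with assumed 3D joint coordinates.

```python
import numpy as np

def knee_angle(hip, knee, ankle):
    """Angle (degrees) at the knee between the thigh and shank vectors,
    computed from three 3D joint positions via the dot-product formula."""
    thigh = np.asarray(hip, dtype=float) - np.asarray(knee, dtype=float)
    shank = np.asarray(ankle, dtype=float) - np.asarray(knee, dtype=float)
    cosang = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    # clip guards against tiny floating-point overshoot outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# fully extended leg: hip straight above knee, ankle straight below
print(knee_angle([0, 1, 0], [0, 0, 0], [0, -1, 0]))  # 180.0
```

    A fully extended leg gives 180 degrees, and the angle decreases as the knee flexes, which is the convention usually plotted in gait cycles.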

    A Survey of Applications and Human Motion Recognition with Microsoft Kinect

    Get PDF
    Microsoft Kinect, a low-cost motion sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands without any other peripheral equipment. As such, it has commanded intense interest in research and development on the Kinect technology. In this paper, we present a comprehensive survey of Kinect applications and the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review the applications of the Kinect technology in a variety of areas, including healthcare, education and performing arts, robotics, sign language recognition, retail services, workplace safety training, as well as 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth sensing technologies used, and review the literature on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers to investigate better methods for human motion recognition and lower-level computer vision tasks such as segmentation, object detection and human pose estimation.

    Fall prediction using behavioural modelling from sensor data in smart homes.

    Get PDF
    The number of methods for identifying potential fall risk is growing as the rate of elderly fallers continues to rise in the UK. Assessments for identifying risk of falling are usually performed in hospitals and other laboratory environments; however, these are costly and cause inconvenience for the subject and health services. Replacing these intrusive testing methods with a passive in-home monitoring solution would provide a less time-consuming and cheaper alternative. As sensors become more readily available, machine learning models can be applied to the large amount of data they produce. This can support activity recognition, fall detection, prediction and risk determination. In this review, the growing complexity of sensor data, the required analysis, and the machine learning techniques used to determine risk of falling are explored. The current research on using passive monitoring in the home is discussed, while the viability of active monitoring using vision-based and wearable sensors is considered. Methods of fall detection, prediction and risk determination are then compared.

    Special issue on smart interactions in cyber-physical systems: Humans, agents, robots, machines, and sensors

    Get PDF
    In recent years, there has been increasing interaction between humans and non-human systems as we move beyond the industrial age and the information age and into the fourth-generation society. The ability to distinguish between human and non-human capabilities has become more difficult to discern. Given this, cyber-physical systems (CPSs) are rapidly being integrated with human functionality, and humans have become increasingly dependent on CPSs to perform their daily routines. The constant indicators of a future where human and non-human CPSs consistently interact and allow each other to navigate through a set of non-trivial goals make this an interesting and rich area of research, discovery, and practical work. The evidence of convergence has rapidly gained clarity, demonstrating that we can use complex combinations of sensors, artificial intelligence, and data to augment human life and knowledge. To expand the knowledge in this area, we should explain how to model, design, validate, implement, and experiment with these complex systems of interaction, communication, and networking, which will be developed and explored in this special issue. This special issue includes ideas of the future that are relevant for understanding, discerning, and developing the relationship between humans and non-human CPSs, as well as the practical nature of systems that facilitate the integration between humans, agents, robots, machines, and sensors (HARMS).
    Fil: Kim, Donghan. Kyung Hee University.
    Fil: Rodriguez, Sebastian Alberto. Universidad Tecnológica Nacional; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Tucumán; Argentina.
    Fil: Matson, Eric T.. Purdue University; United States.
    Fil: Kim, Gerard Jounghyun. Korea University.