
    Motion capture sensing techniques used in human upper limb motion: a review

    Purpose – Motion capture (MoCap) systems have been used to measure human body segments in several applications, including film special effects, health care, outer-space and under-water navigation systems, sea-water exploration, human-machine interaction, and learning software that helps teachers of sign language. The purpose of this paper is to help researchers select a specific MoCap system for various applications and develop new algorithms related to upper limb motion.
    Design/methodology/approach – This paper provides an overview of the different sensors used in MoCap and the techniques used for estimating human upper limb motion.
    Findings – Existing MoCap systems suffer from several issues depending on the type of MoCap used: drift and placement of inertial sensors, occlusion and jitter in the Kinect, noise in electromyography signals, and, for multiple-camera systems, the requirement of a well-structured, calibrated environment and the time-consuming task of placing markers.
    Originality/value – This paper outlines the issues and challenges of MoCap systems for measuring human upper limb motion and provides an overview of the techniques to overcome them.
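The inertial-sensor drift issue mentioned in the findings is commonly bounded by fusing gyroscope and accelerometer estimates. As a minimal illustrative sketch (not from the paper; the function name, bias value, and blend weight are hypothetical), a complementary filter for a single joint angle:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer estimates of a joint angle (degrees).

    Integrating the gyro alone accumulates bias drift over time; the
    accelerometer-derived angle is noisy but drift-free, so a weighted
    blend keeps the estimate bounded.
    """
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Synthetic example: a stationary limb at 30 deg, gyro with 0.5 deg/s bias.
true_angle = 30.0
angle = 0.0
for _ in range(1000):                 # 10 s of samples at 100 Hz
    gyro_rate = 0.0 + 0.5             # true rate is zero; 0.5 deg/s is bias
    angle = complementary_filter(angle, gyro_rate, true_angle, dt=0.01)
```

Pure gyro integration would drift by 5 degrees over these 10 seconds; the blended estimate converges close to the true angle, with a small residual offset caused by the bias.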

    Real-time computation of distance to dynamic obstacles with multiple depth sensors

    We present an efficient method for evaluating distances between dynamic obstacles and a number of points of interest (e.g., placed on the links of a robot) when using multiple depth cameras. A depth-space-oriented discretization of the Cartesian space is introduced that best represents the workspace monitored by a depth camera, including occluded points. A depth grid map can be initialized offline from the arrangement of the multiple depth cameras, and its peculiar search characteristics allow fusing the information given by the multiple sensors online in a very simple and fast way. The real-time performance of the proposed approach is demonstrated in collision avoidance experiments in which two Kinect sensors monitor a human-robot coexistence task.
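At its core, the method answers nearest-obstacle distance queries for points of interest. A brute-force sketch of that underlying query (the depth grid map that accelerates it in the paper is omitted; the function name and toy scene are hypothetical):

```python
import math

def min_distance(point, obstacle_points):
    """Minimum Euclidean distance from one point of interest (e.g., on a
    robot link) to a set of obstacle points observed by depth cameras.
    The paper precomputes a depth grid map to avoid this linear scan;
    only the basic query is shown here.
    """
    return min(math.dist(point, q) for q in obstacle_points)

# Toy scene: an obstacle "wall" of points on the plane x = 1.0 m.
obstacles = [(1.0, y * 0.1, z * 0.1) for y in range(-5, 6) for z in range(-5, 6)]
d = min_distance((0.0, 0.0, 0.0), obstacles)   # nearest wall point is (1, 0, 0)
```

A real-time system would repeat this query for every control point on the robot at each sensor frame, which is why the paper's grid-based acceleration matters.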

    Structured Light-Based 3D Reconstruction System for Plants.

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends of recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). The paper demonstrates the ability to produce 3D models of whole plants from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any part of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants with a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than 13 mm of error for plant size, leaf size and internode distance.
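The reported leaf-detection scores follow the standard precision/recall definitions. A minimal sketch, with hypothetical true/false positive and false negative counts chosen only to be consistent with the reported 0.97 recall and 0.89 precision (the paper's actual counts are not given here):

```python
def precision_recall(tp, fp, fn):
    """Standard detection metrics from true positives, false positives
    and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts: 97 leaves detected correctly, 12 spurious
# detections, 3 leaves missed.
p, r = precision_recall(tp=97, fp=12, fn=3)
```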

    Analyzing interference between RGB-D cameras for human motion tracking

    Multi-camera RGB-D systems are becoming popular as sensor setups in Computer Vision applications, but they are prone to interference between devices, which compromises their accuracy. This paper extends previous work on the analysis of the noise introduced by interference with new and more realistic camera configurations and different brands of devices. As expected, the detected noise increases as distance and angle grow, and becomes worse when interference is present. Finally, we evaluate the effectiveness of the proposed solution of using DC vibration motors to mitigate interference. The results of this study are being used to assess the effect of interference when applying these setups to human motion tracking.
    Funding: Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. Plan Propio de Investigación de la UMA. Junta de Andalucía, project TEP2012-53.
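Per-pixel depth noise of the kind analyzed in such studies is typically quantified as the standard deviation of repeated depth readings at a fixed pixel over a static scene. A hedged sketch with synthetic readings (not the paper's data; values in millimetres are invented for illustration):

```python
import statistics

def depth_noise(depth_samples_mm):
    """Per-pixel depth noise: sample standard deviation of repeated
    depth readings at one pixel while the scene is static."""
    return statistics.stdev(depth_samples_mm)

# Illustrative synthetic readings: a second camera's projected pattern
# overlapping the first one's raises the jitter at the same pixel.
single_camera = [2000, 2001, 1999, 2000, 2002, 1998]
with_interference = [2000, 2010, 1985, 2020, 1990, 2005]
```

Comparing the two standard deviations at many pixels, distances and angles is the kind of measurement such an interference analysis aggregates.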

    3-D Hand Pose Estimation from Kinect's Point Cloud Using Appearance Matching

    We present a novel appearance-based approach for pose estimation of a human hand using the point clouds provided by the low-cost Microsoft Kinect sensor. Both the free-hand case, in which the hand is isolated from the surrounding environment, and the hand-object case, in which the different types of interaction are classified, have been considered. The hand-object case is clearly the more challenging task, as it has to deal with multiple tracks. The approach proposed here belongs to the class of partial pose estimation, where the estimated pose in a frame is used to initialize the next one. The pose estimate is obtained by applying a modified version of the Iterative Closest Point (ICP) algorithm to synthetic models, yielding the rigid transformation that aligns each model with the input data. The proposed framework uses a "pure" point cloud as provided by the Kinect sensor, without any other information such as RGB values or normal vector components. For this reason, the proposed method can also be applied to data obtained from other types of depth sensors or RGB-D cameras.
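ICP is the core of the pose estimation step described above. As an illustrative sketch only (a plain 2D point-to-point ICP, not the authors' modified 3D variant; the function name and toy data are invented), each iteration matches every source point to its nearest target point and then solves the best rigid transform in closed form:

```python
import math

def icp_2d(source, target, iters=20):
    """Minimal point-to-point ICP in 2D: iterate nearest-neighbour
    matching and a closed-form rigid (Procrustes) alignment."""
    src = [list(p) for p in source]
    for _ in range(iters):
        # 1. Nearest-neighbour correspondences (brute force).
        matched = [min(target, key=lambda q, p=p: math.dist(p, q)) for p in src]
        # 2. Centroids of both matched sets.
        cs = [sum(c) / len(src) for c in zip(*src)]
        ct = [sum(c) / len(matched) for c in zip(*matched)]
        # 3. Closed-form rotation angle from centred dot and cross sums.
        sxx = sum((p[0] - cs[0]) * (q[0] - ct[0]) + (p[1] - cs[1]) * (q[1] - ct[1])
                  for p, q in zip(src, matched))
        sxy = sum((p[0] - cs[0]) * (q[1] - ct[1]) - (p[1] - cs[1]) * (q[0] - ct[0])
                  for p, q in zip(src, matched))
        theta = math.atan2(sxy, sxx)
        c, s = math.cos(theta), math.sin(theta)
        # 4. Rotate about the source centroid, translate onto the target centroid.
        src = [[c * (p[0] - cs[0]) - s * (p[1] - cs[1]) + ct[0],
                s * (p[0] - cs[0]) + c * (p[1] - cs[1]) + ct[1]] for p in src]
    return src

# Toy example: target is the source rotated by 5 degrees and shifted.
a = math.radians(5)
source = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
target = [(math.cos(a) * x - math.sin(a) * y + 0.05,
           math.sin(a) * x + math.cos(a) * y - 0.03) for x, y in source]
aligned = icp_2d(source, target, iters=5)
```

With a small initial misalignment, the nearest-neighbour correspondences are correct from the first iteration and the alignment is recovered exactly; this dependence on a good initial pose is why the paper's tracking framework re-initializes each frame from the previous one.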