
    A Data-driven, Piecewise Linear Approach to Modeling Human Motions

    Motion capture, or mocap, is a prevalent technique for capturing and analyzing human articulations. Mocap data are now one of the primary sources of realistic human motion for computer animation as well as for education, training, sports medicine, video games, and special effects in movies. As more applications come to rely on high-quality mocap data and huge amounts of mocap data become available, there is a pressing need for more effective and robust motion capture techniques, better ways of organizing motion databases, and more efficient methods for compressing motion sequences. I propose a data-driven, segment-based, piecewise linear modeling approach that exploits the redundancy and local linearity exhibited by human motions to describe them with a small number of parameters. This approach models human motions with a collection of low-dimensional local linear models. I first segment motion sequences into subsequences, i.e. motion segments, of simple behaviors. Motion segments of similar behaviors are then grouped together and modeled with a single local linear model. I demonstrate the approach's utility on four challenging problems: estimating human motion from a reduced marker set, estimating missing markers, motion retrieval, and motion compression.
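
    The local linear model described above can be sketched as fitting a low-dimensional PCA subspace to each motion segment, then encoding poses as a few subspace coefficients. This is a minimal illustration, not the thesis's actual pipeline; the dimensions, toy data, and function names are assumptions.

```python
import numpy as np

def fit_local_linear_model(segment, k):
    """Fit a k-dimensional linear model (PCA) to one motion segment.

    segment: (T, D) array of T poses with D degrees of freedom.
    Returns the mean pose and the top-k principal directions.
    """
    mean = segment.mean(axis=0)
    centered = segment - mean
    # SVD of the centered poses gives the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def encode(segment, mean, basis):
    return (segment - mean) @ basis.T   # (T, k) low-dim parameters

def decode(coeffs, mean, basis):
    return coeffs @ basis + mean        # reconstructed (T, D) poses

# Toy segment: poses that truly lie on a 2-D subspace of a 10-D pose space,
# mimicking the local linearity of a simple behavior.
rng = np.random.default_rng(0)
latent = rng.normal(size=(50, 2))
mixing = rng.normal(size=(2, 10))
segment = latent @ mixing + 5.0

mean, basis = fit_local_linear_model(segment, k=2)
recon = decode(encode(segment, mean, basis), mean, basis)
err = np.abs(recon - segment).max()   # near zero: 2 parameters per pose suffice
```

    Compression follows directly: storing the k coefficients per frame plus one mean and basis per segment replaces the full D-dimensional pose stream.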

    Learning Articulated Motions From Visual Demonstration

    Many functional elements of human homes and workplaces consist of rigid components connected through one or more sliding or rotating linkages. Examples include doors and drawers of cabinets and appliances, laptops, and swivel office chairs. A robotic mobile manipulator would benefit from the ability to acquire kinematic models of such objects from observation. This paper describes a method by which a robot can acquire an object model by capturing depth imagery of the object as a human moves it through its range of motion. We envision that, in the future, a machine newly introduced to an environment could be shown by its human user the articulated objects particular to that environment, inferring from these "visual demonstrations" enough information to actuate each object independently of the user. Our method employs sparse (markerless) feature tracking, motion segmentation, component pose estimation, and articulation learning; it does not require prior object models. Using the method, a robot can observe an object being exercised, infer a kinematic model incorporating rigid, prismatic, and revolute joints, and then use the model to predict the object's motion from a novel vantage point. We evaluate the method's performance and compare it to that of a previously published technique for a variety of household objects.
    Comment: Published in Robotics: Science and Systems X, Berkeley, CA. ISBN: 978-0-9923747-0-
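
    The articulation-learning step can be illustrated in a drastically simplified form: a tracked point on a prismatic component (a drawer) traces a line, while one on a revolute component (a door) traces a circular arc, so comparing line-fit and circle-fit residuals classifies the joint. This is a 2-D sketch under that assumption, not the paper's method; the data and function names are invented for the demo.

```python
import numpy as np

def line_residual(pts):
    # RMS perpendicular distance of points to their best-fit line (via SVD).
    centered = pts - pts.mean(axis=0)
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    return s[-1] / np.sqrt(len(pts))

def circle_residual(pts):
    # Algebraic (Kasa) circle fit, then RMS radial residual.
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(max(c + cx**2 + cy**2, 0.0))
    dist = np.sqrt((x - cx) ** 2 + (y - cy) ** 2)
    return np.sqrt(np.mean((dist - r) ** 2))

def classify_joint(pts):
    """Label a tracked-point trajectory as prismatic (line) or revolute (arc)."""
    return "prismatic" if line_residual(pts) < circle_residual(pts) else "revolute"

# Toy trajectories: a drawer sliding along a line, a door swinging on an arc.
t = np.linspace(0.0, 1.0, 40)
drawer = np.column_stack([t, 0.3 * t])
theta = np.linspace(0.0, np.pi / 2, 40)
door = np.column_stack([np.cos(theta), np.sin(theta)])
```

    The real system works on full 6-DoF component poses recovered from depth imagery, but the same model-selection idea, comparing residuals of competing joint models, carries over.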

    Estimating Body Segment Orientation by Applying Inertial and Magnetic Sensing Near Ferromagnetic Materials

    Inertial and magnetic sensors are well suited to ambulatory monitoring of human posture and movement. However, ferromagnetic materials near the sensor disturb the local magnetic field and, therefore, the orientation estimate. A Kalman-based fusion algorithm was used to obtain dynamic orientations and to minimize the effect of magnetic disturbances. This paper compares the orientation output of the sensor fusion using three-dimensional inertial and magnetic sensors against a laboratory-bound opto-kinetic system (Vicon) in a simulated work environment. With the tested methods, the difference between the optical reference system and the output of the algorithm was 2.6° root mean square (RMS) when no metal was near the sensor module. Near a large metal object, instantaneous errors of up to 50° were measured when no compensation was applied. Using a magnetic disturbance model, the error was reduced significantly, to 3.6° RMS.
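
    A common ingredient of such disturbance models is a gate on the magnetometer: when the measured field magnitude deviates from the expected Earth-field magnitude, the sample is likely corrupted by nearby metal and should not correct the heading. The sketch below shows this gating idea with a trivial 1-D yaw filter; it is a stand-in for the paper's Kalman-based model, and all constants and names are assumptions for the demo.

```python
import numpy as np

EARTH_FIELD = 1.0  # normalized magnitude of the undisturbed local field

def magnetometer_trust(mag, tol=0.1):
    """Accept a magnetometer sample only if its magnitude looks undisturbed.

    Ferromagnetic objects typically change the field magnitude as well as
    its direction, so a magnitude test catches many disturbances.
    """
    return abs(np.linalg.norm(mag) - EARTH_FIELD) <= tol

def fuse_yaw(yaw, gyro_z, mag_yaw, mag, dt, gain=0.02):
    # Integrate the gyro; drift-correct toward the magnetic heading
    # only when the magnetometer sample passes the disturbance gate.
    yaw = yaw + gyro_z * dt
    if magnetometer_trust(mag):
        yaw = yaw + gain * (mag_yaw - yaw)
    return yaw

clean = np.array([0.39, 0.0, 0.92])     # |B| close to 1.0: trusted
disturbed = np.array([1.3, 0.4, 1.1])   # metal nearby: magnitude inflated
```

    With the gate active, orientation briefly relies on the gyro alone near metal, trading slow drift for immunity to the large instantaneous heading errors reported above.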

    Evaluation of Pose Tracking Accuracy in the First and Second Generations of Microsoft Kinect

    The Microsoft Kinect camera and its skeletal tracking capabilities have been embraced by many researchers and commercial developers for various applications of real-time human movement analysis. In this paper, we evaluate the accuracy of the human kinematic motion data in the first and second generations of the Kinect system and compare the results with an optical motion capture system. We collected motion data for 12 exercises, 10 different subjects, and three different viewpoints. We report on the accuracy of the joint localization and bone-length estimation of Kinect skeletons in comparison to the motion capture data. We also analyze the distribution of the joint localization offsets by fitting a mixture of Gaussian and uniform distribution models to determine the outliers in the Kinect motion data. Our analysis shows that, overall, Kinect 2 provides more robust and more accurate tracking of human pose than Kinect 1.
    Comment: 10 pages, IEEE International Conference on Healthcare Informatics 2015 (ICHI 2015)
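
    The Gaussian-plus-uniform mixture used for outlier detection can be fitted with a few lines of EM: inlier offsets follow a Gaussian, tracking failures are modeled as uniform over the observed range, and the per-sample inlier responsibility flags outliers. This is a 1-D sketch on synthetic offsets, not the paper's data or exact model; thresholds and names are assumptions.

```python
import numpy as np

def fit_gauss_uniform(x, lo, hi, iters=100):
    """EM for a 1-D mixture: inlier Gaussian + uniform outliers on [lo, hi].

    Returns (mu, sigma, inlier_weight, inlier_responsibilities).
    """
    u = 1.0 / (hi - lo)                      # constant outlier density
    mu, sigma, w = x.mean(), x.std(), 0.5    # crude initialization
    for _ in range(iters):
        # E-step: responsibility that each sample is an inlier.
        g = w * np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        r = g / (g + (1.0 - w) * u)
        # M-step: responsibility-weighted Gaussian parameters and weight.
        mu = (r * x).sum() / r.sum()
        sigma = np.sqrt((r * (x - mu) ** 2).sum() / r.sum()) + 1e-9
        w = r.mean()
    return mu, sigma, w, r

# Synthetic joint-offset magnitudes (meters): mostly small Gaussian errors,
# plus a minority of large, roughly uniform tracking failures.
rng = np.random.default_rng(1)
offsets = np.concatenate([rng.normal(0.02, 0.01, 300),
                          rng.uniform(0.0, 0.5, 30)])
mu, sigma, w, resp = fit_gauss_uniform(offsets, offsets.min(), offsets.max())
outliers = resp < 0.5   # samples better explained by the uniform component
```

    The recovered inlier weight then estimates what fraction of Kinect joint measurements are trustworthy, and `outliers` marks the frames to exclude before computing accuracy statistics.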