
    Human motion reconstruction using wearable accelerometers

    We address the problem of capturing human motion in scenarios where the use of a traditional optical motion capture system is impractical. Such scenarios are relatively commonplace: in large spaces, outdoors, or at competitive sporting events, where the limitations of such systems are apparent: the small physical area where motion capture can be done and the lack of robustness to lighting changes and occlusions. In this paper, we advocate the use of body-worn wearable wireless accelerometers for reconstructing human motion, and to this end we outline a system that is more portable than traditional optical motion capture systems, whilst producing naturalistic motion. Additionally, if information on the person's root position is available, an extended version of our algorithm can use this information to correct positional drift.
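As an illustration of the drift problem this abstract mentions, the sketch below double-integrates acceleration and, when a reference root trajectory is available, redistributes the accumulated end-point error linearly over the sequence. The linear redistribution and the function name are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def integrate_with_drift_correction(accel, dt, root_positions=None):
    """Double-integrate world-frame acceleration to position.

    accel: (T, 3) acceleration samples with gravity already removed.
    dt: sample interval in seconds.
    root_positions: optional (T, 3) reference trajectory for the body
        root; if given, the integration drift is redistributed so the
        endpoints match the reference. This linear correction is an
        illustrative assumption, not the paper's method.
    """
    vel = np.cumsum(accel, axis=0) * dt
    pos = np.cumsum(vel, axis=0) * dt
    if root_positions is not None:
        pos = pos + (root_positions[0] - pos[0])   # anchor the start
        drift = pos[-1] - root_positions[-1]       # end-point error
        t = np.linspace(0.0, 1.0, len(pos))[:, None]
        pos = pos - t * drift                      # spread it linearly
    return pos
```

Without the reference trajectory, double integration lets even small accelerometer bias grow quadratically in position, which is why some external root-position signal is valuable.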

    Computing motion in the primate's visual system

    Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We will show how one well-known gradient-based computer algorithm for estimating visual motion can be implemented within the primate's visual system. This relaxation algorithm computes the optical flow field by minimizing a variational functional of a form commonly encountered in early vision, and is performed in two steps. In the first stage, local motion is computed, while in the second stage spatial integration occurs. Neurons in the second stage represent the optical flow field via a population-coding scheme, such that the vector sum of all neurons at each location codes for the direction and magnitude of the velocity at that location. The resulting network maps onto the magnocellular pathway of the primate visual system, in particular onto cells in the primary visual cortex (V1) as well as onto cells in the middle temporal area (MT). Our algorithm mimics a number of psychophysical phenomena and illusions (perception of coherent plaids, motion capture, motion coherence) as well as electrophysiological recordings. Thus, a single unifying principle, ‘the final optical flow should be as smooth as possible’ (except at isolated motion discontinuities), explains a large number of phenomena and links single-cell behavior with perception and computational theory.
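The variational principle this abstract describes, a local data term plus a smoothness term minimized by relaxation, is the same family as the classic Horn-Schunck scheme. A minimal NumPy sketch of that family (a textbook stand-in, not the paper's neural network model) is:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=0.1, n_iter=200):
    """Minimal Horn-Schunck-style relaxation: minimize a data term
    plus alpha^2 * smoothness term over the flow field (u, v).

    I1, I2: consecutive grayscale frames as 2-D float arrays.
    Returns the flow components (u, v).
    """
    Ix = np.gradient(I1, axis=1)   # spatial image gradients
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                   # temporal gradient
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def neighbor_avg(f):
        # 4-neighbour average; implements the smoothness coupling.
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                       + np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        u_bar, v_bar = neighbor_avg(u), neighbor_avg(v)
        # Residual of the brightness-constancy constraint.
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```

The smoothness term is exactly the abstract's unifying principle: where the image gradient is zero, the update reduces to averaging over neighbours, so flow propagates from textured regions into blank ones.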

    Automatic Identification of Inertial Sensors on the Human Body Segments

    In the last few years, inertial sensors (accelerometers and gyroscopes) in combination with magnetic sensors have proven to be a suitable ambulatory alternative to traditional human motion tracking systems based on optical position measurements. While accurate full 6-degrees-of-freedom information is available [1], these inertial sensor systems still have some drawbacks, e.g. each sensor has to be attached to a certain predefined body segment. The goal of this project is to develop a ‘Click-On-and-Play’ ambulatory 3D human motion capture system, i.e. a set of (wireless) inertial sensors which can be placed on the human body at arbitrary positions, because they will be identified and localized automatically.

    Measuring Behavior using Motion Capture

    Motion capture systems, using optical, magnetic or mechanical sensors, are now widely used to record human motion. Motion capture provides us with precise measurements of human motion at a very high recording frequency and accuracy, resulting in a massive amount of movement data on several joints of the body or markers of the face. But how do we make sure that we record the right things? And how can we correctly interpret the recorded data? In this multi-disciplinary symposium, speakers from the fields of biomechanics, computer animation, human-computer interaction and behavior science come together to discuss their methods both to record motion and to extract useful properties from the data. In these fields, the construction of human movement models from motion capture data is the focal point, although the application of such models differs per field. Such models can be used to generate and evaluate highly adaptable and believable animation of virtual characters in computer animation, to explore the details of gesture interaction in Human-Computer Interaction applications, to identify patterns related to affective states, or to find biomechanical properties of human movement.

    Deep Motion Features for Visual Tracking

    Robust visual tracking is a challenging computer vision problem with many real-world applications. Most existing approaches employ hand-crafted appearance features, such as HOG or Color Names. Recently, deep RGB features extracted from convolutional neural networks have been successfully applied for tracking. Despite their success, these features only capture appearance information. On the other hand, motion cues provide discriminative and complementary information that can improve tracking performance. In contrast to visual tracking, deep motion features have been successfully applied for action recognition and video classification tasks. Typically, the motion features are learned by training a CNN on optical flow images extracted from large amounts of labeled videos. This paper presents an investigation of the impact of deep motion features in a tracking-by-detection framework. We further show that hand-crafted, deep RGB, and deep motion features contain complementary information. To the best of our knowledge, we are the first to propose fusing appearance information with deep motion features for visual tracking. Comprehensive experiments clearly suggest that our fusion approach with deep motion features outperforms standard methods relying on appearance information alone. Comment: ICPR 2016. Best paper award in the "Computer Vision and Robot Vision" track.

    Evaluation of Pose Tracking Accuracy in the First and Second Generations of Microsoft Kinect

    Microsoft Kinect camera and its skeletal tracking capabilities have been embraced by many researchers and commercial developers in various applications of real-time human movement analysis. In this paper, we evaluate the accuracy of the human kinematic motion data in the first and second generations of the Kinect system, and compare the results with an optical motion capture system. We collected motion data in 12 exercises for 10 different subjects and from three different viewpoints. We report on the accuracy of the joint localization and bone length estimation of Kinect skeletons in comparison to the motion capture. We also analyze the distribution of the joint localization offsets by fitting a mixture of Gaussian and uniform distribution models to determine the outliers in the Kinect motion data. Our analysis shows that overall Kinect 2 has more robust and more accurate tracking of human pose as compared to Kinect 1. Comment: 10 pages, IEEE International Conference on Healthcare Informatics 2015 (ICHI 2015).
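The outlier analysis this abstract describes, fitting a mixture of a Gaussian (inlier offsets) and a uniform distribution (outliers), can be sketched as a simple 1-D EM procedure. The initialization, bounds, and function name below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def fit_gauss_uniform(x, lo, hi, n_iter=50):
    """EM for a 1-D mixture of one Gaussian (inliers) and a uniform
    component on [lo, hi] (outliers).

    Returns (gaussian_weight, mu, sigma, responsibilities), where
    responsibilities[i] is P(inlier | x[i]); low values flag outliers.
    """
    w, mu, sigma = 0.9, x.mean(), x.std() + 1e-9
    u_dens = 1.0 / (hi - lo)  # uniform density is constant
    for _ in range(n_iter):
        # E-step: posterior probability that each sample is an inlier.
        g = w * np.exp(-0.5 * ((x - mu) / sigma) ** 2) \
            / (sigma * np.sqrt(2.0 * np.pi))
        r = g / (g + (1.0 - w) * u_dens)
        # M-step: responsibility-weighted Gaussian parameters.
        w = r.mean()
        mu = (r * x).sum() / r.sum()
        sigma = np.sqrt((r * (x - mu) ** 2).sum() / r.sum()) + 1e-9
    return w, mu, sigma, r
```

Thresholding the returned responsibilities (e.g. at 0.5) separates the tight cluster of plausible joint offsets from the heavy-tailed tracking failures that a single Gaussian would absorb into an inflated variance.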