56 research outputs found

    Human motion modeling and simulation by anatomical approach

    Instantly generating any desired realistic human motion remains a great challenge in virtual human simulation. In this paper, a novel emotion-affected motion classification and an anatomical motion classification are presented, together with motion capture and parameterization methods. A framework for a novel anatomical approach to modelling human motion in the HTR (Hierarchical Translations and Rotations) file format is also described. This anatomical approach to human motion modelling has the potential to generate an unlimited range of desired human motion from a compact motion database. An architecture for the real-time generation of new motions is also proposed.
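    The abstract does not spell out the HTR representation, but the core idea of composing per-joint translations and rotations down a hierarchy can be illustrated with plain forward kinematics. Below is a minimal Python sketch under that reading; the Euler-angle convention, the two-segment chain and all values are hypothetical, not taken from the paper.

```python
import numpy as np

def euler_to_matrix(rx, ry, rz):
    """Rotation matrix from Euler angles in degrees (X rotation applied first)."""
    rx, ry, rz = np.radians([rx, ry, rz])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def global_position(chain):
    """Compose per-segment local translations and rotations from the
    root joint down to the end effector, as in a hierarchical
    translations-and-rotations representation."""
    pos = np.zeros(3)
    rot = np.eye(3)
    for translation, angles in chain:
        pos = pos + rot @ np.asarray(translation, dtype=float)
        rot = rot @ euler_to_matrix(*angles)
    return pos

# Hypothetical two-segment arm (shoulder, then elbow) for one frame.
chain = [((0.0, 1.4, 0.0), (0.0, 0.0, -30.0)),
         ((0.3, 0.0, 0.0), (0.0, 0.0, -45.0))]
print(global_position(chain))
```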

    Unsupervised Segmentation of Action Segments in Egocentric Videos using Gaze

    Unsupervised segmentation of action segments in egocentric videos is a desirable capability for tasks such as activity recognition and content-based video retrieval. Reducing the search space to a finite set of action segments enables faster and less noisy matching. However, there exists a substantial gap in machine understanding of natural temporal cuts during a continuous human activity. This work reports on a novel gaze-based approach for segmenting action segments in videos captured with an egocentric camera. Gaze is used to locate the region-of-interest within a frame. By tracking two simple motion-based parameters inside successive regions-of-interest, we discover a finite set of temporal cuts. We present results for several combinations of the two parameters on the BRISGAZE-ACTIONS dataset, which contains egocentric videos depicting several daily-living activities. The quality of the temporal cuts is further improved by implementing two entropy measures. Comment: To appear in the 2017 IEEE International Conference on Signal and Image Processing Applications.
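    The abstract does not name the two motion-based parameters, so the sketch below substitutes two simple stand-ins (mean and standard deviation of the frame difference inside the gaze region-of-interest) to illustrate how thresholding such parameters across successive ROIs can yield temporal cuts. The ROI size, threshold and synthetic frames are all assumptions.

```python
import numpy as np

def roi(frame, gaze_xy, size=64):
    """Crop a square region-of-interest around the gaze point."""
    x, y = gaze_xy
    half = size // 2
    return frame[max(0, y - half):y + half, max(0, x - half):x + half]

def motion_params(prev_roi, curr_roi):
    """Two illustrative motion parameters (stand-ins for the paper's):
    mean and standard deviation of the absolute frame difference."""
    diff = np.abs(curr_roi.astype(float) - prev_roi.astype(float))
    return diff.mean(), diff.std()

def temporal_cuts(frames, gazes, threshold=12.0):
    """Mark a cut wherever either parameter jumps past the threshold."""
    cuts = []
    prev = roi(frames[0], gazes[0])
    prev_params = np.zeros(2)
    for t in range(1, len(frames)):
        curr = roi(frames[t], gazes[t])
        if curr.shape != prev.shape:  # gaze near the border; skip frame
            prev = curr
            continue
        params = np.array(motion_params(prev, curr))
        if np.any(np.abs(params - prev_params) > threshold):
            cuts.append(t)
        prev, prev_params = curr, params
    return cuts

# Synthetic example: brightness jumps at frame 4, producing one cut.
frames = [np.full((120, 160), i * (5 if i > 3 else 1), dtype=np.uint8)
          for i in range(8)]
gazes = [(80, 60)] * len(frames)
print(temporal_cuts(frames, gazes))  # -> [4]
```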

    Human Motion Capture Data Tailored Transform Coding

    Human motion capture (mocap) is a widely used technique for digitizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data sizes enable efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish them from images and videos. Therefore, directly borrowing image or video compression techniques, such as the discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented as 2D matrices. It then computes a set of data-dependent orthogonal bases to transform the matrices into the frequency domain, where the transform coefficients have significantly less dependency. Finally, compression is achieved by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
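    As a rough illustration of transform coding with data-dependent orthogonal bases, the sketch below takes the basis from a truncated SVD of each clip matrix, keeps the top-k coefficients and uniformly quantizes them. The paper's actual basis construction, clip segmentation and entropy coder are not reproduced; the clip dimensions, k and quantization step are assumptions.

```python
import numpy as np

def compress_clip(clip, k, step=0.02):
    """Transform-code one mocap clip (frames x channels) using an
    orthogonal basis derived from the clip's own SVD."""
    U, s, Vt = np.linalg.svd(clip, full_matrices=False)
    coeffs = U[:, :k] * s[:k]                      # transform coefficients
    q = np.round(coeffs / step).astype(np.int32)   # uniform quantization
    return q, Vt[:k], step                         # entropy-code q + basis

def decompress_clip(q, basis, step):
    """Undo quantization, then invert the orthogonal transform."""
    return (q * step) @ basis

# Hypothetical clip: 120 frames of 60 joint-angle channels.
rng = np.random.default_rng(0)
clip = np.cumsum(rng.normal(size=(120, 60)), axis=0)  # smooth-ish signal
q, basis, step = compress_clip(clip, k=8)
rec = decompress_clip(q, basis, step)
print("RMS error:", np.sqrt(np.mean((clip - rec) ** 2)))
```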

    Enhancing Multimedia Search Using Human Motion

    Over the last few years, there has been an increase in the number of multimedia-enabled devices (e.g. cameras, smartphones, etc.), and that has led to a vast quantity of multimedia content being shared on the Internet. For example, in 2010 thirteen million hours of video were uploaded to YouTube (http://www.youtube.com). To usefully navigate this vast amount of information, users currently rely on search engines, social networks and dedicated multimedia websites (such as YouTube) to find relevant content. Efficient search of large collections of multimedia requires metadata that is human-meaningful, but multimedia sites currently rely on metadata derived from user-entered tags and descriptions. These are often vague, ambiguous or left blank, which makes search for video content unreliable or misleading. Furthermore, a large majority of videos contain people and, consequently, human movement, which is often not described in the user-entered metadata.

    Differences in EMG burst patterns during grasping dexterity tests and activities of daily living

    The aim of this study was to characterize the muscle activation patterns that underlie the performance of two commonly used grasping patterns, and to compare the characteristics of such patterns during dexterity tests and activities of daily living. EMG of the flexor digitorum and extensor digitorum was monitored in six healthy participants as they performed three tasks related to activities of daily living (picking up a coin, drinking from a cup, feeding with a spoon) and three dexterity tests (Variable Dexterity Test-Precision, Variable Dexterity Test-Cylinder, Purdue Pegboard Test). A ten-camera motion capture system was used to simultaneously acquire the kinematics of the index and middle fingers. Spatiotemporal aspects of the EMG signals were analyzed and compared to the metacarpophalangeal joint angles of the index and middle fingers. The work has shown that a common rehabilitation test such as the Purdue Pegboard Test is a poor representation of the muscle activation patterns of activities of daily living. EMG and joint angle patterns from the Variable Dexterity Tests, which were designed to more accurately reflect a range of ADLs, were consistently comparable with tasks requiring precision and cylinder grips, reaffirming the importance of object size and shape when attempting to accurately assess hand function.
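    The processing details are not given in the abstract; a common pipeline for this kind of burst analysis (though not necessarily the authors') is full-wave rectification followed by a low-pass linear envelope and a rest-baseline threshold. A minimal sketch, with the filter order, cutoff and threshold factor chosen for illustration only:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(emg, fs, cutoff=6.0):
    """Linear envelope: full-wave rectification + 4th-order low-pass
    Butterworth filter (zero-phase via filtfilt)."""
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, np.abs(emg))

def burst_onsets(envelope, rest_mean, rest_std, k=3.0):
    """Indices where the envelope first exceeds rest mean + k*SD,
    a standard threshold rule for detecting EMG bursts."""
    active = envelope > rest_mean + k * rest_std
    return np.flatnonzero(np.diff(active.astype(int)) == 1)

# Synthetic example: 1 kHz EMG with one burst in the middle.
fs = 1000
t = np.arange(2 * fs) / fs
emg = 0.02 * np.random.default_rng(1).normal(size=t.size)
emg[800:1200] += 0.5 * np.sin(2 * np.pi * 80 * t[800:1200])
env = emg_envelope(emg, fs)
print(burst_onsets(env, env[:500].mean(), env[:500].std()))
```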

    Automated Motion Synthesis for Virtual Choreography

    In this paper, we present a technique to automatically synthesize dancing moves for arbitrary songs. Our current implementation is for virtual characters, but the same algorithms can easily be applied to entertainer robots, such as robotic dancers, which fits this year's conference theme very well. Our technique is based on analyzing a musical tune (a song or melody) and synthesizing a motion for the virtual character in which the character's movement synchronizes with the musical beats. To analyze the beats of the tune, we developed a fast and novel algorithm. Our motion synthesis algorithm analyzes a library of stock motions and generates new sequences of movements that are not present in the library. We present two algorithms to synchronize dance moves and musical beats: a fast greedy algorithm and a genetic algorithm. Our experimental results show that we can generate new sequences of dance figures in which the dancer reacts to the music and dances in synchronization with it.
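    The abstract does not detail the greedy algorithm, so the sketch below shows one plausible greedy scheme: at each detected beat, pick the stock clip whose duration best matches the gap to the next beat. The clip library, durations and the matching rule are assumptions; transition (connectivity) constraints between clips are omitted.

```python
def greedy_dance(beat_times, clips):
    """Greedy beat synchronization: for each inter-beat gap, choose the
    stock clip whose duration is closest to that gap.

    beat_times -- beat timestamps in seconds, ascending
    clips      -- mapping of clip name to clip duration in seconds
    """
    sequence = []
    for curr, nxt in zip(beat_times, beat_times[1:]):
        gap = nxt - curr
        name = min(clips, key=lambda c: abs(clips[c] - gap))
        sequence.append((curr, name))
    return sequence

# Hypothetical beat grid at 120 BPM and a tiny motion library.
beats = [i * 0.5 for i in range(9)]
library = {"step": 0.5, "spin": 1.0, "wave": 0.45}
print(greedy_dance(beats, library))
```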