    A multi-modal dance corpus for research into real-time interaction between humans in online virtual environments

    We present a new, freely available, multimodal corpus for research into, amongst other areas, real-time realistic interaction between humans in online virtual environments. The corpus scenario focuses on an online dance class application in which students, with avatars driven by whatever 3D capture technology is locally available to them, can learn choreographies with teacher guidance in an online virtual ballet studio. Accordingly, the corpus consists of student/teacher dance choreographies concurrently captured at two different sites using a variety of media modalities, including synchronised audio rigs, multiple cameras, wearable inertial measurement devices and depth sensors. In the corpus, each of the several dancers performs a number of fixed choreographies, which are graded according to specific evaluation criteria. In addition, ground-truth dance choreography annotations are provided. Furthermore, for unsynchronised sensor modalities, the corpus also includes distinctive events for data stream synchronisation. Although the data corpus is tailored specifically to an online dance class application scenario, the data is free to download and use for any research and development purposes.

    A smart tool for the diagnosis of Parkinsonian syndrome using wireless watches

    This work is licensed under a Creative Commons Attribution 3.0 License. Early detection and diagnosis of Parkinson's disease gives patients a good chance to take early action and slow its further development. In this paper, a smart tool for the diagnosis of Parkinsonian syndromes is designed and developed using low-cost Texas Instruments eZ430-Chronos wireless watches. With this smart tool, Parkinsonian bradykinesia is detected from the cycle of the human gait, with the watch worn on the foot, while Parkinsonian tremor is detected in real time, and differentiated by its frequency in the 0 to 8 Hz band, with the watch worn on the arm, using a developed statistical diagnosis chart. Thanks to its low cost and ease of use, the tool can be used in small clinics as well as home environments.
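The abstract above describes differentiating tremor by its frequency content in the 0 to 8 Hz band from a wrist-worn accelerometer. A minimal sketch of that idea, assuming an FFT-based estimate of the dominant frequency (the paper's actual statistical diagnosis chart is not specified here, and the function name is illustrative):

```python
import numpy as np

def dominant_tremor_frequency(accel, fs):
    """Estimate the dominant frequency (Hz) of an acceleration signal,
    restricted to the 0-8 Hz band mentioned in the abstract.

    accel: 1-D array of acceleration samples from the watch
    fs: sampling rate in Hz
    """
    accel = np.asarray(accel, dtype=float)
    accel = accel - accel.mean()                  # remove gravity / DC offset
    spectrum = np.abs(np.fft.rfft(accel))         # magnitude spectrum
    freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs)
    band = freqs <= 8.0                           # tremor band of interest
    return freqs[band][np.argmax(spectrum[band])]

# Example: a synthetic 5 Hz tremor sampled at 50 Hz for 4 seconds
fs = 50
t = np.arange(0, 4, 1.0 / fs)
signal = np.sin(2 * np.pi * 5.0 * t)
print(dominant_tremor_frequency(signal, fs))      # → 5.0
```

Parkinsonian rest tremor typically falls around 4 to 6 Hz, so thresholding the estimated dominant frequency within the 0 to 8 Hz band is one plausible way such a chart could separate tremor types.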

    Concurrence-Aware Long Short-Term Sub-Memories for Person-Person Action Recognition

    Full text link
    Recently, Long Short-Term Memory (LSTM) has become a popular choice for modelling individual dynamics in single-person action recognition, thanks to its ability to model temporal information over various ranges of dynamic context. However, existing RNN models capture the temporal dynamics of person-person interactions only by naively combining the activity dynamics of the individuals or by modelling the interaction as a whole, neglecting the inter-related dynamics of how person-person interactions change over time. To this end, we propose a novel Concurrence-Aware Long Short-Term Sub-Memories (Co-LSTSM) model that captures the long-term inter-related dynamics between two interacting people from the bounding boxes covering them. Specifically, for each frame, two sub-memory units store individual motion information, while a concurrent LSTM unit selectively integrates and stores inter-related motion information between the interacting people from these two sub-memory units via a new co-memory cell. Experimental results on the BIT and UT datasets show the superiority of Co-LSTSM over state-of-the-art methods.
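The per-frame data flow described above (two per-person sub-memories feeding a shared unit through a co-memory gate) can be sketched in plain NumPy. This is a rough sketch under assumed weight shapes and a simple convex-combination gate; the paper defines its own co-memory cell, and all names here are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W):
    """One step of a standard LSTM cell; W maps [x; h] to the
    four stacked gates (input, forget, output, candidate)."""
    z = W @ np.concatenate([x, h])
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def co_lstsm_step(x1, x2, state, Ws):
    """One frame of the Co-LSTSM idea: two sub-memories track each
    person's motion, and a gate selectively integrates both before
    the shared LSTM update that models inter-related dynamics."""
    (h1, c1), (h2, c2), (h, c) = state
    W1, W2, Wco, W = Ws
    h1, c1 = lstm_step(x1, h1, c1, W1)     # sub-memory, person 1
    h2, c2 = lstm_step(x2, h2, c2, W2)     # sub-memory, person 2
    # co-memory gate: how much of each sub-memory feeds the shared cell
    gate = sigmoid(Wco @ np.concatenate([h1, h2]))
    fused = gate * h1 + (1.0 - gate) * h2
    h, c = lstm_step(fused, h, c, W)       # shared inter-related dynamics
    return (h1, c1), (h2, c2), (h, c)

# Example with assumed toy dimensions: 6-D per-person features, 4-D hidden state
rng = np.random.default_rng(0)
d, n = 6, 4
W1 = rng.standard_normal((4 * n, d + n)) * 0.1
W2 = rng.standard_normal((4 * n, d + n)) * 0.1
Wco = rng.standard_normal((n, 2 * n)) * 0.1
W = rng.standard_normal((4 * n, 2 * n)) * 0.1
state = ((np.zeros(n), np.zeros(n)),) * 3
for _ in range(10):                        # ten frames of a toy sequence
    x1, x2 = rng.standard_normal(d), rng.standard_normal(d)
    state = co_lstsm_step(x1, x2, state, (W1, W2, Wco, W))
h_shared = state[2][0]                     # final shared representation
```

The shared hidden state `h_shared` would then feed a classifier over interaction classes; the key design choice is that individual motion and inter-related motion are stored in separate memories rather than concatenated into one.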

    Human motion modeling and simulation by anatomical approach

    Instantly generating an unlimited range of desired, realistic human motion remains a great challenge in virtual human simulation. In this paper, novel emotion-based and anatomical motion classifications are presented, together with motion capture and parameterisation methods. A framework for a novel anatomical approach to modelling human motion in an HTR (Hierarchical Translations and Rotations) file format is also described. This anatomical approach to human motion modelling has the potential to generate unlimited desired human motion from a compact motion database. An architecture for the real-time generation of new motions is also proposed.

    Capturing Hands in Action using Discriminative Salient Points and Physics Simulation

    Full text link
    Hand motion capture is a popular research field that has recently gained more attention due to the ubiquity of RGB-D sensors. However, even the most recent approaches focus on the case of a single isolated hand. In this work, we focus on hands that interact with other hands or with objects, and present a framework that successfully captures motion in such interaction scenarios for both rigid and articulated objects. Our framework combines a generative model with discriminatively trained salient points, to achieve a low tracking error, and with collision detection and physics simulation, to achieve physically plausible estimates even in the case of occlusions and missing visual data. Since all components are unified in a single objective function that is almost everywhere differentiable, it can be optimised with standard optimisation techniques. Our approach works for monocular RGB-D sequences as well as setups with multiple synchronised RGB cameras. For a qualitative and quantitative evaluation, we captured 29 sequences with a large variety of interactions and up to 150 degrees of freedom.
    Comment: Accepted for publication by the International Journal of Computer Vision (IJCV) on 16.02.2016 (submitted on 17.10.14). A combination into a single framework of an ECCV'12 multi-camera RGB hand tracking paper and a monocular RGB-D GCPR'14 hand tracking paper, with several extensions, additional experiments and details.