
    A multi-modal dance corpus for research into real-time interaction between humans in online virtual environments

    We present a new, freely available, multimodal corpus for research into, amongst other areas, real-time realistic interaction between humans in online virtual environments. The specific corpus scenario focuses on an online dance class application in which students, with avatars driven by whatever 3D capture technology is locally available to them, can learn choreographies with teacher guidance in an online virtual ballet studio. As the data corpus is focused on this scenario, it consists of student/teacher dance choreographies concurrently captured at two different sites using a variety of media modalities, including synchronised audio rigs, multiple cameras, wearable inertial measurement devices and depth sensors. In the corpus, each of the several dancers performs a number of fixed choreographies, which are graded according to a number of specific evaluation criteria. In addition, ground-truth dance choreography annotations are provided. Furthermore, for unsynchronised sensor modalities, the corpus also includes distinctive events for data stream synchronisation. Although the data corpus is tailored specifically for an online dance class application scenario, the data is free to download and use for any research and development purposes.
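    The abstract does not specify how the distinctive synchronisation events are to be detected; a minimal sketch of one plausible approach, assuming the event (e.g. a clap or stamp) appears as a sharp spike in an inertial stream, is given below. The names and the detection heuristic are illustrative assumptions, not the corpus authors' method.

    ```python
    # Locate a distinctive synchronisation event in an accelerometer stream so
    # that unsynchronised modalities can be aligned to a common clock.
    # Illustrative sketch only, not the corpus authors' method.
    import numpy as np

    def find_sync_event(acc: np.ndarray, sample_rate: float) -> float:
        """Return the time (seconds) of the largest spike in an (N, 3) stream."""
        magnitude = np.linalg.norm(acc, axis=1)
        # Subtract the median so gravity does not dominate the peak search.
        spike = np.abs(magnitude - np.median(magnitude))
        return float(np.argmax(spike)) / sample_rate

    # Aligning two modalities then reduces to differencing their event times:
    # offset = find_sync_event(acc_a, fs_a) - find_sync_event(acc_b, fs_b)
    ```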

    Enhanced visualisation of dance performance from automatically synchronised multimodal recordings

    The Huawei/3DLife Grand Challenge Dataset provides multimodal recordings of Salsa dancing, consisting of audiovisual streams along with depth maps and inertial measurements. In this paper, we propose a system for augmented reality-based evaluations of Salsa dancer performances. An essential step for such a system is the automatic temporal synchronisation of the multiple modalities captured from different sensors, for which we propose efficient solutions. Furthermore, we contribute modules for the automatic analysis of dance performances and present an original software application, specifically designed for the evaluation scenario considered, which enables an enhanced dance visualisation experience through the augmentation of the original media with the results of our automatic analyses.
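    The abstract does not detail the proposed synchronisation solutions; a generic baseline for temporally aligning two sensors that both capture audio is to cross-correlate their recordings, as sketched below. This is an assumed illustration, not necessarily the paper's method.

    ```python
    # Estimate the lag between two recordings of the same scene by
    # cross-correlating their audio signals (generic baseline, assumed).
    import numpy as np

    def estimate_offset(audio_a: np.ndarray, audio_b: np.ndarray,
                        sample_rate: float) -> float:
        """Delay of audio_a relative to audio_b, in seconds (positive = a lags b)."""
        corr = np.correlate(audio_a, audio_b, mode="full")
        lag = int(np.argmax(corr)) - (len(audio_b) - 1)
        return lag / sample_rate
    ```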

    An advanced virtual dance performance evaluator

    The ever-increasing availability of high-speed Internet access has led to a leap in technologies that support real-time realistic interaction between humans in online virtual environments. In the context of this work, we wish to realise the vision of an online dance studio where a dance class is provided by an expert dance teacher and delivered to online students via the web. In this paper we study some of the technical issues that need to be addressed in this challenging scenario. In particular, we describe an automatic dance analysis tool that would be used to evaluate a student's performance and provide him/her with meaningful feedback to aid improvement.
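    One way such a tool could quantify how closely a student follows the teacher, not stated in the abstract but a common choice for comparing motion sequences of different lengths, is dynamic time warping (DTW) over joint trajectories; a hypothetical sketch:

    ```python
    # DTW cost between teacher and student pose sequences, each an array of
    # shape (frames, joints * 3). Assumed illustration, not the paper's method.
    import numpy as np

    def dtw_distance(teacher: np.ndarray, student: np.ndarray) -> float:
        n, m = len(teacher), len(student)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(teacher[i - 1] - student[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # skip a teacher frame
                                     cost[i, j - 1],      # skip a student frame
                                     cost[i - 1, j - 1])  # match the frames
        return float(cost[n, m])

    # A lower cost means the student's motion tracks the choreography more
    # closely; per-joint costs could drive targeted feedback.
    ```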

    Dealing with messy problems: lessons from water harvesting systems for crop production in Burkina Faso

    Despite the identification of areas exhibiting successful adoption and use of water harvesting technologies (WHTs) by small-scale farmers in sub-Saharan Africa (SSA), on the whole WHT use remains low, and hence impacts on crop production and livelihoods remain marginal. Past research has determined the importance of social factors in the adoption and use of WHTs, but little attempt has been made to fully understand their role. This paper presents qualitative, micro-level research conducted in Botswana and Burkina Faso that has increased understanding of the effect of social factors. The main lesson learnt is that WHTs sit within a highly complex and dynamic system, and the problem of low adoption and use cannot be solved using approaches that attempt to over-simplify it. Ensuring the sustainability of WHTs into the future requires that the complexity and messiness of the system are fully embraced by researchers and practitioners seeking solutions.

    Scaling Experiments on the Dynamics and Acoustics of Travelling Bubble Cavitation

    Ceccio and Brennen (1989, 1991) recently examined the interaction between individual cavitation bubbles and the structure of the boundary layer and flow field in which the bubble is growing and collapsing. They were able to show that individual bubbles are often fissioned by the fluid shear and that this process can significantly affect the acoustic signal produced by the collapse. More recently, Kumar and Brennen (1991-1992) have closely examined further statistical properties of the acoustic signals from individual cavitation bubbles on two different headforms in order to learn more about the bubble/flow interactions. All of these experiments were, however, conducted in the same facility with the same size of headform (5.08 cm in diameter) and over a fairly narrow range of flow velocities (around 9 m/s). Clearly this raises the issue of how the phenomena identified change with speed, scale and facility. The present paper describes experiments conducted in order to try to answer some of these important questions regarding the scaling of cavitation phenomena. The experiments were conducted in the Large Cavitation Channel of the David Taylor Research Center in Memphis, Tennessee, on similar Schiebe headforms 5.08, 25.4 and 50.8 cm in diameter, for speeds ranging up to 15 m/s and for a range of cavitation numbers.
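    For reference, the cavitation number mentioned here is the standard dimensionless group used to characterise such flows (standard definition, not specific to this paper):

    ```latex
    % sigma: cavitation number
    %   p_\infty : reference static pressure,  p_v : vapour pressure
    %   \rho     : liquid density,             U_\infty : free-stream velocity
    \sigma = \frac{p_\infty - p_v}{\tfrac{1}{2}\,\rho\,U_\infty^{2}}
    ```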

    Kinect vs. low-cost inertial sensing for gesture recognition

    In this paper, we investigate efficient recognition of human gestures/movements from multimedia and multimodal data, including the Microsoft Kinect and translational and rotational acceleration and velocity from wearable inertial sensors. We firstly present a system that automatically classifies a large range of activities (17 different gestures) using a random forest decision tree. Our system can achieve near real-time recognition by appropriately selecting the sensors that contribute most to a particular task. Features extracted from multimodal sensor data were used to train and evaluate a customized classifier. This novel technique is capable of successfully classifying various gestures with up to 91% overall accuracy on a publicly available data set. Secondly, we investigate a wide range of different motion capture modalities and compare their results in terms of gesture recognition accuracy using our proposed approach. We conclude that gesture recognition can be effectively performed by considering an approach that overcomes many of the limitations associated with the Kinect and potentially paves the way for low-cost gesture recognition in unconstrained environments.
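    A minimal sketch of the classification stage described above, a random forest trained on features extracted from multimodal sensor windows, follows; the feature dimensions, labels and hyperparameters are placeholder assumptions, not the paper's configuration.

    ```python
    # Random-forest gesture classification on extracted features (sketch).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 40))     # placeholder: one feature row per window
    y = rng.integers(0, 17, size=500)  # placeholder: one of 17 gesture labels

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"mean cross-validated accuracy: {scores.mean():.2f}")
    ```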

    Automatic activity classification and movement assessment during a sports training session using wearable inertial sensors

    Motion analysis technologies have been widely used to monitor the potential for injury and enhance athlete performance. However, most of these technologies are expensive, can only be used in laboratory environments and examine only a few trials of each movement action. In this paper, we present a novel ambulatory motion analysis framework using wearable inertial sensors to accurately assess all of an athlete's activities in an outdoor training environment. We firstly present a system that automatically classifies a large range of training activities using the Discrete Wavelet Transform (DWT) in conjunction with a random forest classifier. The classifier is capable of successfully classifying various activities with up to 98% accuracy. Secondly, a computationally efficient gradient descent algorithm is used to estimate the relative orientations of the wearable inertial sensors mounted on the thigh and shank of a subject, from which the flexion-extension knee angle is calculated. Finally, a curve-shift registration technique is applied both to generate normative data and to determine whether a subject's movement technique differed from the normative data, in order to identify potential injury-related factors. It is envisaged that the proposed framework could be utilized for accurate and automatic sports activity classification and reliable movement technique evaluation in various unconstrained environments.
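    A sketch of the joint-angle step: given orientation estimates for the thigh- and shank-mounted sensors (e.g. from a gradient-descent orientation filter such as Madgwick's), the flexion-extension knee angle follows from their relative rotation. The axis convention below is a simplifying assumption.

    ```python
    # Knee flexion-extension angle from two sensor orientations (sketch).
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def knee_flexion_deg(q_thigh: np.ndarray, q_shank: np.ndarray) -> float:
        """Angle in degrees from two quaternions in (x, y, z, w) order."""
        relative = R.from_quat(q_thigh).inv() * R.from_quat(q_shank)
        # Assume both sensors' z axes lie along the medio-lateral axis, so
        # flexion-extension is the rotation about z.
        return float(relative.as_euler("zyx", degrees=True)[0])
    ```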

    Towards automatic activity classification and movement assessment during a sports training session

    Motion analysis technologies have been widely used to monitor the potential for injury and enhance athlete performance. However, most of these technologies are expensive, can only be used in laboratory environments and examine only a few trials of each movement action. In this paper, we present a novel ambulatory motion analysis framework using wearable inertial sensors to accurately assess all of an athlete's activities in a real training environment. We firstly present a system that automatically classifies a large range of training activities using the Discrete Wavelet Transform (DWT) in conjunction with a random forest classifier. The classifier is capable of successfully classifying various activities with up to 98% accuracy. Secondly, a computationally efficient gradient descent algorithm is used to estimate the relative orientations of the wearable inertial sensors mounted on the shank, thigh and pelvis of a subject, from which the flexion-extension knee and hip angles are calculated. These angles, along with sacrum impact accelerations, are automatically extracted for each stride during jogging. Finally, normative data is generated and used to determine whether a subject's movement technique differed from the normative data, in order to identify potential injury-related factors; for the joint angle data this is achieved using a curve-shift registration technique. It is envisaged that the proposed framework could be utilized for accurate and automatic sports activity classification and reliable movement technique evaluation in various unconstrained environments, for both injury management and performance enhancement.
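    The curve-shift registration step could be realised as below: each stride curve is shifted to best align with the normative mean before deviations are measured. This generic formulation is an assumption, not taken from the paper.

    ```python
    # Shift a stride curve to best match a normative mean curve (sketch).
    import numpy as np

    def register_to_norm(curve: np.ndarray, norm: np.ndarray) -> np.ndarray:
        """Circularly shift `curve` to minimise squared error against `norm`."""
        errors = [np.sum((np.roll(curve, s) - norm) ** 2)
                  for s in range(len(curve))]
        return np.roll(curve, int(np.argmin(errors)))

    # After registration, strides outside a normative band (e.g. mean +/- 2 SD)
    # can be flagged as differing in movement technique.
    ```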