7 research outputs found

    3D feedback and observation for motor learning: Application to the roundoff movement in gymnastics

    In this paper, we assessed the efficacy of different types of visual information for improving the execution of the roundoff movement in gymnastics. Specifically, two types of 3D feedback were compared to a 3D visualization displaying only the movement of the expert (observation), as well as to a more ‘traditional’ video observation. The improvement in movement execution was measured using different methods, namely subjective evaluations performed by official judges and more ‘quantitative’ appraisals based on time series analyses. Video demonstration providing information about the expert, and 3D feedback (i.e., using a 3D representation of the movement in monoscopic vision) combining information about the movement of the expert and the movement of the learner, were the two types of feedback giving rise to the best improvement in movement execution, as subjectively evaluated by judges. Much less conclusive results were obtained when assessing movement execution using quantification methods based on time series analysis. Correlation analyses showed that the subjective evaluation performed by the judges can hardly be predicted or explained by the ‘more objective’ results of time series analyses.
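The correlation analysis mentioned in this abstract can be illustrated with a small sketch: relating subjective judge scores to an ‘objective’ time-series distance from an expert trace. Everything below (the distance measure, the scores, the numbers) is an invented stand-in, not the study's method or data.

```python
import math

def timeseries_distance(learner, expert):
    """Mean absolute deviation between learner and expert joint-angle
    series -- a simple stand-in for the paper's time-series analyses."""
    return sum(abs(l - e) for l, e in zip(learner, expert)) / len(expert)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical judge scores (higher = better) and distances (lower = better).
judge_scores = [8.2, 6.5, 7.1, 5.0, 9.0]
distances = [0.4, 1.1, 0.9, 1.5, 0.2]

# The paper found a weak link between judge scores and time-series
# measures; this toy data is strongly anti-correlated by construction.
r = pearson(judge_scores, distances)
print(round(r, 3))
```

A weak |r| on real data would mirror the paper's finding that subjective evaluations are hard to predict from time-series measures.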

    Realizing a Low-latency Virtual Reality Environment for Motor Learning

    Waltemate T, Hülsmann F, Pfeiffer T, Kopp S, Botsch M. Realizing a Low-latency Virtual Reality Environment for Motor Learning. In: Proceedings of the 21st ACM Symposium on Virtual Reality Software and Technology. VRST '15. New York, NY, USA: ACM; 2015: 139-147. Virtual Reality (VR) has the potential to support motor learning in ways exceeding the possibilities provided by real-world environments. New feedback mechanisms can be implemented that support motor learning during the performance of the trainee and afterwards as a performance review. As a consequence, VR environments excel in controlled evaluations, as has been proven in many other application scenarios. However, in the context of motor learning of complex tasks involving full-body movements, questions regarding the main technical parameters of such a system, in particular the required maximum latency, have not been addressed in depth. To fill this gap, we propose a set of requirements for VR systems for motor learning, with a special focus on motion capturing and rendering. We then assess and evaluate state-of-the-art techniques and technologies for motion capturing and rendering, in order to provide data on latencies for different setups. We focus on the end-to-end latency of the overall system and present an evaluation of an exemplary system that has been developed to meet these requirements.
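As a rough illustration of the kind of end-to-end latency accounting such a requirements analysis involves, the sketch below sums per-stage latency contributions of a VR pipeline and checks them against a budget. The stage names, values, and budget are assumptions for illustration, not the paper's measurements.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    latency_ms: float  # per-frame latency contribution of this stage

def end_to_end_latency(stages):
    """Total motion-to-photon latency, approximated to first order as
    the sum of the per-stage contributions."""
    return sum(s.latency_ms for s in stages)

# Hypothetical pipeline stages for a full-body VR setup.
pipeline = [
    Stage("motion capture", 8.0),     # tracker exposure + transmission
    Stage("pose processing", 4.0),    # filtering / skeleton retargeting
    Stage("rendering", 11.0),         # one frame at ~90 Hz
    Stage("display scan-out", 11.0),  # display refresh
]

total = end_to_end_latency(pipeline)
budget_ms = 75.0  # an assumed budget for full-body feedback, not the paper's
print(f"{total:.1f} ms, within budget: {total <= budget_ms}")
```

Summing stage latencies is a simplification; real end-to-end measurements (as in the paper) capture queuing and synchronization effects that a budget table misses.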

    Virtual Movement from Natural Language Text

    It is a challenging task for machines to follow a textual instruction. Properly understanding and using the meaning of a textual instruction in application areas such as robotics and animation is very difficult for machines. The interpretation of textual instructions for the automatic generation of the corresponding motions (e.g. exercises) and the validation of these movements are difficult tasks. To achieve our initial goal of having machines properly understand textual instructions and generate motions accordingly, we recorded five different exercises in random order with the help of seven amateur performers using a Microsoft Kinect device. During the recording, we found that the same exercise was interpreted differently by each human performer even though they were given identical textual instructions. We performed a quality assessment study based on the derived data using a crowdsourcing approach. Later, we tested the inter-rater agreement for different types of visualization, and found that the RGB-based visualization showed the best agreement among the annotators, with an animation with a virtual character standing in second position. In the next phase we worked with physical exercise instructions. Physical exercise is an everyday activity domain in which textual exercise descriptions are usually focused on body movements. Body movements are considered to be a common element across a broad range of activities that are of interest for robotic automation. Our main goal is to develop a text-to-animation system which we can use in different application areas and which we can also use to develop multi-purpose robots whose operations are based on textual instructions. This system could also be used in different text-to-scene and text-to-animation systems. To generate a text-based animation system for physical exercises, the process requires the robot to have natural language understanding (NLU), including understanding non-declarative sentences.
It also requires the extraction of semantic information from complex syntactic structures with a large number of potential interpretations. Despite a comparatively high density of semantic references to body movements, exercise instructions still contain large amounts of underspecified information. Detecting and bridging and/or filling such underspecified elements is extremely challenging when relying on methods from NLU alone. However, humans can often add such implicit information with ease due to its embodied nature. We present a process that combines a semantic parser and a Bayesian network. With the semantic parser, the system extracts all the information present in the instruction that is needed to generate the animation. The Bayesian network adds inference capability to the system, extracting the information that is implicit in the instruction. This information is essential for correctly generating the animation and is very easy for a human to extract but very difficult for machines. Using crowdsourcing, with the help of human judgments, we updated the Bayesian network. The combination of the semantic parser and the Bayesian network makes explicit the information that is contained in textual movement instructions, so that an animation of the motion sequences performed by a virtual humanoid character can be rendered. To generate the animation from this information we use two different types of markup language: Behaviour Markup Language for 2D animation, and Humanoid Animation, based on the Virtual Reality Modeling Language, for 3D animation.
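The parser-plus-Bayesian-network idea described in this abstract can be illustrated with a toy sketch: a keyword "parser" extracts the explicit slots of an instruction, and a hand-written conditional probability table (standing in for the learned Bayesian network) fills an underspecified slot such as the body side. The vocabulary, slots, and probabilities below are invented for illustration, not the thesis's model.

```python
def parse_instruction(text):
    """A toy 'semantic parser': keyword spotting for action and side."""
    slots = {"action": None, "side": None}
    lowered = text.lower()
    if "raise" in lowered and "arm" in lowered:
        slots["action"] = "arm_raise"
    for side in ("left", "right", "both"):
        if side in lowered:
            slots["side"] = side
    return slots

# Hypothetical CPT P(side | action), e.g. estimated from crowdsourced data.
P_SIDE_GIVEN_ACTION = {
    "arm_raise": {"both": 0.6, "right": 0.25, "left": 0.15},
}

def fill_underspecified(slots):
    """Fill a missing side with the most probable value under the CPT --
    the role the Bayesian network plays in the described system."""
    if slots["side"] is None and slots["action"] in P_SIDE_GIVEN_ACTION:
        dist = P_SIDE_GIVEN_ACTION[slots["action"]]
        slots["side"] = max(dist, key=dist.get)
    return slots

slots = fill_underspecified(parse_instruction("Raise your arms slowly."))
print(slots)
```

"Raise your arms slowly." names no side, so the explicit slots leave it empty; the table supplies the most probable reading ("both"), just as the described system explicates what humans infer with ease.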

    Creating a Virtual Mirror for Motor Learning in Virtual Reality

    Waltemate T. Creating a Virtual Mirror for Motor Learning in Virtual Reality. Bielefeld: Universität Bielefeld; 2018.

    Motor Learning in Virtual Reality: From Motion to Augmented Feedback

    Hülsmann F. Motor Learning in Virtual Reality: From Motion to Augmented Feedback. Bielefeld: Universität Bielefeld; 2019. Sports and fitness exercises are an important factor in health improvement. The acquisition of new movements - motor learning - and the improvement of techniques for already learned ones are a vital part of sports training. Ideally, this part is supervised and supported by coaches. They know how to correctly perform specific exercises and how to prevent typical movement errors. However, coaches are not always available or do not have enough time to fully supervise training sessions. Virtual reality (VR) is an ideal medium to support motor learning in the absence of coaches. VR systems could supervise performed movements, visualize movement patterns, and identify errors made by a trainee. Further, feedback could be provided that even extends the possibilities of coaching in the real world. Still, core concepts that form the basis of effective coaching applications in VR are not yet fully developed. In order to narrow this gap, we focus on the processing of kinematic data as one of the core components for motor learning. Based on the processing of kinematic data in real time, a coaching system can supervise a trainee and provide a variety of multi-modal feedback strategies. For motor learning, this thesis explores the development of core concepts based on the usage of kinematic data in three areas. First, the movement that is performed by a trainee must be observed and visualized in real time. The observation can be achieved by state-of-the-art motion capture techniques. Concerning the visualization, in the real world, trainees can observe their own performance in mirrors. We use a virtual mirror as a paradigm to allow trainees to observe their own movement in a natural way.
A well-established feedback strategy from real-world coaching, namely improvement via observation of a target performance, is transferred into the virtual mirror paradigm. Second, a system that focuses on motor learning should be able to assess the performance that it observes. For instance, typical errors in a trainee's performance must be detected as soon as possible in order to react in an effective way. Third, the motor learning environment should be able to provide suitable feedback strategies based on detected errors. In this thesis, real-time feedback based on error detection is integrated into a coaching cycle that is inspired by real-world coaching. In a final evaluation, all the concepts are brought together in a VR coaching system. We demonstrate that this system is able to help trainees improve their motor performance with respect to specific error patterns. Finally, based on the results throughout the thesis, helpful guidelines for developing effective environments for motor learning in VR are proposed.
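The as-soon-as-possible error detection this abstract describes can be sketched as a per-frame corridor check on a joint-angle stream: bounds derived from correct performances define a corridor, and the first frame that leaves it flags an error pattern. The joint, the corridor bounds, and the trace below are illustrative assumptions, not the thesis's classifier.

```python
def detect_error(angles, lower, upper):
    """Return the first frame index where the angle stream leaves the
    [lower, upper] corridor, or None if the performance stays inside.
    Scanning frame by frame means an error is flagged as soon as it
    occurs, enabling real-time feedback."""
    for i, (a, lo, hi) in enumerate(zip(angles, lower, upper)):
        if not (lo <= a <= hi):
            return i
    return None

# Hypothetical knee-angle trace (degrees) during a squat, plus a
# per-frame corridor derived from correct performances.
trace = [170, 150, 120, 95, 60, 95, 120, 150, 170]
lower = [160, 140, 110, 80, 70, 80, 110, 140, 160]
upper = [180, 160, 130, 110, 95, 110, 130, 160, 180]

frame = detect_error(trace, lower, upper)
print(frame)  # frame 4: the knee flexes past the corridor (60 < 70)
```

A real system would run such checks on multi-dimensional poses with learned bounds, but the principle is the same: detect the deviation at the earliest frame so that feedback can be delivered while the movement is still in progress.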

    Comparing modalities for kinesiatric exercise instruction

    No full text