2 research outputs found

    Using Augmented Reality for real-time feedback to enhance the execution of the squat.

    The importance of exercise and strength training has been emphasised, yet the number of people who do not reach the average recommended hours of exercise has increased (WHO, 2020). A range of current physical fitness products employ technology, but they focus on providing engaging experiences rather than personalised real-time feedback that could improve exercise execution and reduce the risk of injury. Hence, this research explores the effectiveness of AR technology in providing real-time visual feedback for the squat motion, and which type of visual feedback is most effective for reducing errors in squat performance. The prototype includes a large screen that shows a mirror image of participants as they perform squats with four different types of real-time visual feedback. The motion of the participants was captured using the Kinect v2 system. The prototype focuses on giving feedback about knee valgus, an error that commonly occurs during the squat motion. The four visual feedback types implemented are Traffic, Arrow, Avatar, and All-in-One. A user study with twenty participants was conducted to evaluate the feedback methods. Participants performed ten squats for each type of visual feedback, and their performance was measured as the frequency of good, moderate, and poor squats. A User Experience Questionnaire (UEQ) and a post-experiment interview were also conducted to measure their preferences and opinions regarding the visual feedback. The results showed that Arrow outperformed the other conditions in terms of performance, followed by All-in-One, Traffic and Avatar. However, the majority of participants preferred Traffic, Arrow, All-in-One and Avatar, in descending order of preference. Participants could further be categorised into a beginner group and an advanced group.
It was found that the beginner group preferred All-in-One, Arrow, Traffic and Avatar, in descending order. For the advanced group, performance ranked Arrow best, followed by Traffic, All-in-One and Avatar; however, the majority of this group preferred Traffic, followed by Arrow, Avatar and All-in-One. The difference in performance between the two groups can be attributed to beginners needing more information to improve their performance, whereas the advanced group, already possessing sufficient knowledge, benefits from simpler and more intuitive visual feedback. Future work could include a lateral view of the squat motion, which would deliver more information to the user. Lastly, this prototype design can be extended to detect other errors users often make during the squat motion, or applied to other strength training exercises or sports.

    Image-based clothes transfer

    Figure 1: (a) shows the concept of image-based clothes transfer: a user is dressed with the clothing of a previously recorded user. (b) shows an illustration of the virtual dressing room, in which a user is captured and can simultaneously see himself wearing different garments. (c) and (d) show results for a t-shirt over a sweater and jeans. Virtual dressing rooms for the fashion industry and digital entertainment applications aim to create an image or video of a user wearing different garments than in the real world. Such images can be displayed, for example, in a magic-mirror shopping application or in games and movies. Current solutions involve the error-prone task of body pose tracking. We suggest an approach that allows users captured by a set of cameras to be virtually dressed in 3D with previously recorded garments. Using image-based algorithms, we bypass critical components of other systems, especially tracking based on skeleton models; instead, we transfer the appearance of a garment from one user to another by image processing and image-based rendering. Using images of real garments allows for photo-realistic rendering quality with high performance.
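The core of the image-based idea is to copy garment pixels from a recorded frame onto the current user frame rather than deform a tracked 3D model. A minimal compositing sketch is below; the array shapes, the segmentation mask, and the alpha blend are illustrative assumptions and stand in for the paper's full image-based rendering pipeline.

```python
# Hypothetical sketch: mask-based garment compositing between two frames
# assumed to be taken from the same camera viewpoint.
import numpy as np

def transfer_garment(user_frame, garment_frame, garment_mask, alpha=1.0):
    """Composite garment pixels over the user frame where the mask is set.

    user_frame, garment_frame: (H, W, 3) uint8 images.
    garment_mask: (H, W) boolean mask of garment pixels in garment_frame.
    alpha: blend weight for the garment layer (1.0 = fully opaque).
    """
    out = user_frame.astype(np.float32)
    m = garment_mask.astype(np.float32)[..., None] * alpha
    out = out * (1.0 - m) + garment_frame.astype(np.float32) * m
    return out.astype(np.uint8)

# Toy example: a 2x2 "frame" where one pixel belongs to the garment.
user = np.zeros((2, 2, 3), dtype=np.uint8)
garment = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.array([[True, False], [False, False]])
result = transfer_garment(user, garment, mask)
print(result[0, 0])  # garment pixel copied onto the user frame
```

In a real system the mask would come from garment segmentation of the recorded user, and the garment frame would be selected to match the current user's pose, which is what lets the approach avoid skeleton tracking.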