3 research outputs found

    Grand Challenges in SportsHCI

    The field of Sports Human-Computer Interaction (SportsHCI) investigates interaction design to support physically active human beings. Despite growing interest and the dissemination of SportsHCI literature over the past years, many publications still focus on solving specific problems in a given sport. We believe in the benefit of generating fundamental knowledge for SportsHCI more broadly to advance the field as a whole. To achieve this, we aim to identify the grand challenges in SportsHCI, which can help researchers and practitioners develop a future research agenda. Hence, this paper presents a set of grand challenges identified in a five-day workshop with 22 experts who have previously researched, designed, and deployed SportsHCI systems. Addressing these challenges will drive transformative advancements in SportsHCI, fostering better athlete performance, athlete-coach relationships, spectator engagement, and immersive experiences for recreational sports and exercise motivation, and ultimately improving human well-being.

    VR4VRT: Virtual Reality for Virtual Rowing Training

    Learning how to row properly is an intensive and injury-prone process. Ergometers (i.e., rowing machines) are used as a preparatory step to train the basic steps of rowing, as well as to further improve technique and maintain stamina. In this paper we present three design iterations resulting in a new interactive system that builds on a commercially available dynamic ergometer (RP3). Our proposed software-hardware combination pairs the RP3 with the HTC Vive platform extended with three location trackers; the software is available on request. We will showcase how this setup facilitates a range of opportunities.
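
    The abstract describes tracking the rower with Vive location trackers but does not detail the software. As a minimal, hypothetical sketch of the kind of processing such a setup could perform, the snippet below labels drive and recovery phases from the fore-aft position of one tracked point (e.g. a tracker on the seat or handle). Tracker acquisition is assumed to happen elsewhere; `positions`, the velocity threshold, and the sign convention are all assumptions, not the paper's implementation.

```python
# Hypothetical sketch: classify rowing stroke phase from a stream of
# fore-aft positions (metres along the slide) of a tracked point.
# Convention assumed here: increasing position = drive, decreasing = recovery.

def stroke_phases(positions, velocity_eps=0.01):
    """Label each consecutive sample pair as 'drive', 'recovery', or 'pause'
    based on the sign and magnitude of the per-sample displacement."""
    phases = []
    for prev, curr in zip(positions, positions[1:]):
        v = curr - prev
        if v > velocity_eps:
            phases.append("drive")
        elif v < -velocity_eps:
            phases.append("recovery")
        else:
            phases.append("pause")
    return phases

# Example with a short synthetic position trace (metres):
trace = [0.00, 0.05, 0.20, 0.40, 0.55, 0.60, 0.58, 0.40, 0.20, 0.05, 0.00]
print(stroke_phases(trace))
```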

    A super-bagging method for volleyball action recognition using wearable sensors

    Access to performance data during matches and training sessions is important for coaches and players. Although many video tagging systems are available that can provide such access, they require manual effort. Data from Inertial Measurement Units (IMUs) could be used for automatically tagging video recordings in terms of players’ actions. However, the data gathered during volleyball sessions are generally very imbalanced, since for an individual player most time intervals can be classified as “non-actions” rather than “actions”. This makes automatic annotation of video recordings of volleyball matches a challenging machine-learning problem. To address this problem, we evaluated balanced and imbalanced learning methods alongside our newly proposed ‘super-bagging’ method for volleyball action modelling. All methods are evaluated using six classifiers and four sensors (i.e., accelerometer, magnetometer, gyroscope and barometer). We demonstrate that imbalanced learning provides better unweighted average recall (UAR = 83.99%) for the non-dominant hand using a naive Bayes classifier than balanced learning, while balanced learning provides better performance (UAR = 84.18%) for the dominant hand using a tree bagger classifier than imbalanced learning. Our super-bagging method provides the best UAR (84.19%). It is also noted that the super-bagging method provides better averaged UAR than the balanced and imbalanced methods in 8 out of 10 cases, demonstrating its potential for IMU sensor data. One potential application of these models is fatigue and stamina estimation, e.g. by keeping track of how many actions a player is performing and when these are being performed.
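
    To make the evaluation setup concrete, the sketch below shows how unweighted average recall (UAR, i.e. macro-averaged recall) can be computed for a bagged naive Bayes baseline on an imbalanced "action" vs "non-action" problem using scikit-learn. The synthetic features, class ratio, and model settings are assumptions for illustration only; this is not the authors' super-bagging method, whose combination strategy is not reproduced here.

```python
# Sketch: bagged naive Bayes on imbalanced windowed IMU-like features,
# scored with unweighted average recall (macro-averaged recall).
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-window IMU features: ~5% actions, 95% non-actions.
X = rng.normal(size=(2000, 12))            # e.g. accel/gyro summary statistics
y = (rng.random(2000) < 0.05).astype(int)  # 1 = action, 0 = non-action
X[y == 1] += 1.5                           # shift action windows to be separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = BaggingClassifier(GaussianNB(), n_estimators=10, random_state=0)
clf.fit(X_tr, y_tr)

uar = recall_score(y_te, clf.predict(X_te), average="macro")
print(f"UAR: {uar:.2%}")
```

    A balanced variant could be approximated by resampling each bootstrap to equal class counts before fitting, which is one common way to contrast balanced and imbalanced learning on data like this.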