A Survey of Deep Learning in Sports Applications: Perception, Comprehension, and Decision
Deep learning has the potential to revolutionize sports performance, with
applications ranging from perception and comprehension to decision. This paper
presents a comprehensive survey of deep learning in sports performance,
focusing on three main aspects: algorithms, datasets and virtual environments,
and challenges. Firstly, we discuss the hierarchical structure of deep learning
algorithms in sports performance, which includes perception, comprehension and
decision, while comparing their strengths and weaknesses. Secondly, we list
widely used existing datasets in sports and highlight their characteristics and
limitations. Finally, we summarize current challenges and point out future
trends of deep learning in sports. Our survey provides valuable reference
material for researchers interested in deep learning in sports applications.
Deep Learning-Based Action Recognition
The classification of human action and behavior patterns is very important for analyzing situations in the field and for maintaining social safety. This book focuses on recent research findings on recognizing human action patterns. The relevant technologies include processing human behavior data for learning, expressing image feature values, extracting spatiotemporal information from images, recognizing human posture, and recognizing gestures. Recent research on these technologies has been conducted using general deep learning network models from artificial intelligence, and excellent research results are included in this edition.
Automatic learning of 3D pose variability in walking performances for gait analysis
This paper proposes an action-specific model which automatically learns the variability of 3D human postures observed in a set of training sequences. First, a Dynamic Programming synchronization algorithm is presented in order to establish a mapping between postures from different walking cycles, so the whole training set can be synchronized to a common time pattern. The model is then trained on the walking action from the public CMU motion capture dataset, and a mean walking performance is automatically learnt. Additionally, statistics about the observed variability of the postures and motion direction are computed at each time step. As a result, in this work we have extended a similar action model previously used successfully for tracking, by providing facilities for gait analysis and gait recognition applications.
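The dynamic-programming synchronization step described in this abstract can be illustrated with a small alignment routine in the style of dynamic time warping; the joint-angle traces, function name and cost definition below are assumptions for the sketch, not taken from the paper.

```python
# Minimal sketch: dynamic-programming alignment of two walking cycles
# so their frames can be mapped to a common time pattern. Toy data;
# a real system would align full 3D pose vectors, not scalars.

def dtw_align(a, b):
    """Align two 1-D pose sequences; return (total cost, frame mapping)."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = min cumulative cost of aligning a[:i+1] with b[:j+1]
    cost = [[INF] * m for _ in range(n)]
    cost[0][0] = abs(a[0] - b[0])
    for i in range(n):
        for j in range(m):
            if i == j == 0:
                continue
            best = min(
                cost[i - 1][j] if i > 0 else INF,                 # skip in a
                cost[i][j - 1] if j > 0 else INF,                 # skip in b
                cost[i - 1][j - 1] if i > 0 and j > 0 else INF,   # match
            )
            cost[i][j] = abs(a[i] - b[j]) + best
    # backtrack to recover the frame-to-frame mapping
    path, i, j = [], n - 1, m - 1
    while (i, j) != (0, 0):
        path.append((i, j))
        steps = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min((s for s in steps if s[0] >= 0 and s[1] >= 0),
                   key=lambda s: cost[s[0]][s[1]])
    path.append((0, 0))
    return cost[n - 1][m - 1], path[::-1]

# Two knee-angle traces from different walking cycles (invented values)
cycle_a = [0, 10, 30, 55, 30, 10, 0]
cycle_b = [0, 5, 12, 32, 54, 28, 8, 0]
dist, mapping = dtw_align(cycle_a, cycle_b)
```

Once every training cycle is mapped onto a common time pattern this way, per-time-step statistics such as the mean posture and its variability can be computed directly.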
Movement recognition from wearable sensors data: power-aware evolutionary training for template matching and data annotation recovery methods
Human activity recognition finds numerous applications, for example in sport training, patient rehabilitation, gait analysis and surgical skills evaluation. Wearable sensing and Template Matching Methods (TMMs) offer significant advantages over manual assessment methods, more cumbersome camera-based setups, and other machine learning (ML) algorithms.
TMMs require less data for training than other ML methods; they are low-power and therefore suitable for integration on wearable sensors. They compute a sample-by-sample distance between two time series, which, when applied to gesture sensor data, enables a richer and more movement-specific assessment and feedback. However, TMMs lack a standard training procedure.
In this thesis, we introduce an innovative evolutionary training algorithm for TMMs that not only maximizes recognition performance, but can also favour power minimisation by reducing the TMM's computational cost, with a configurable trade-off. We show that such a reduction is possible without sacrificing recognition performance by exploiting the long-established concept of "time warping". We demonstrate that our method is suitable for a wide variety of raw data as well as processed, fused and encoded sensor data.
We present a new multi-modal, multi-user dataset of beach volleyball movements that allowed us to evaluate our training methods on a real case of sports training actions. Moreover, collecting this dataset helped generate a set of guidelines for the collection of movement data in the wild using wearable sensors.
We introduce a 3D human model that can be animated through inertial wearable sensor data for troubleshooting, movement analysis and privacy-safe annotation of human activities. Finally, through a case study on a dataset of drinking actions, we demonstrate how TMMs can improve the quality of a badly annotated but highly valuable dataset.
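The core template-matching idea summarized in this abstract, classifying a movement by its sample-by-sample distance to stored templates, can be sketched as follows; the template values, labels and equal-length assumption are illustrative and not from the thesis.

```python
# Minimal sketch of template-matching recognition: a movement is
# assigned the label of its nearest stored template under a
# sample-by-sample distance. Toy accelerometer magnitudes; equal
# lengths assumed (a real TMM would use time warping).

def sample_distance(signal, template):
    """Mean absolute sample-by-sample distance between two series."""
    return sum(abs(s - t) for s, t in zip(signal, template)) / len(template)

def classify(signal, templates):
    """Return the label of the nearest template."""
    return min(templates, key=lambda lbl: sample_distance(signal, templates[lbl]))

# Invented templates for two beach-volleyball moves
templates = {
    "serve": [0.1, 0.8, 1.5, 0.9, 0.2],
    "dig":   [0.2, 0.4, 0.5, 0.4, 0.2],
}
label = classify([0.1, 0.7, 1.4, 1.0, 0.3], templates)  # nearest to "serve"
```

Because each template is a single short time series rather than a learned weight matrix, this style of classifier needs very little training data and maps naturally onto low-power wearable hardware, which is the advantage the abstract highlights.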
Multi-sensor human action recognition with particular application to tennis event-based indexing
The ability to automatically classify human actions and activities using visual sensors or by analysing body-worn sensor data has been an active research area for many years. Only recently, with advancements in both fields and the ubiquitous nature of low-cost sensors in our everyday lives, has automatic human action recognition become a reality. While traditional sports coaching systems rely on manual indexing of events from a single modality, such as visual or inertial sensors, this thesis investigates the possibility of capturing and automatically indexing events from multimodal sensor streams. In this work, we detail a novel approach to infer human actions by fusing multimodal sensors to improve recognition accuracy. State-of-the-art visual action recognition approaches are also investigated. Firstly, we apply these action recognition detectors to basic human actions in a non-sporting context. We then perform action recognition to infer tennis events in a tennis court instrumented with cameras and inertial sensing infrastructure. The system proposed in this thesis can use either visual or inertial sensors to automatically recognise the main tennis events during play. A complete event retrieval system is also presented to allow coaches to build advanced queries, which existing sports coaching solutions cannot facilitate, without an inordinate amount of manual indexing. The event retrieval interface is evaluated against a leading commercial sports coaching tool in terms of both usability and efficiency.
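One simple way to fuse modalities as this abstract describes is decision-level fusion, averaging the per-event scores of a visual and an inertial classifier; the event names, probabilities and weighting below are assumptions for the sketch, not the thesis's actual fusion scheme.

```python
# Illustrative sketch of decision-level (late) fusion: each modality
# produces per-event probabilities, and a weighted average decides the
# final event. All values here are invented.

EVENTS = ["serve", "forehand", "backhand"]

def fuse(visual_probs, inertial_probs, w_visual=0.6):
    """Weighted average of per-event probabilities from two modalities."""
    fused = {e: w_visual * visual_probs[e] + (1 - w_visual) * inertial_probs[e]
             for e in EVENTS}
    return max(fused, key=fused.get), fused

visual   = {"serve": 0.5, "forehand": 0.3, "backhand": 0.2}
inertial = {"serve": 0.2, "forehand": 0.7, "backhand": 0.1}
event, scores = fuse(visual, inertial)
```

The appeal of late fusion is that either modality can still drive recognition on its own, which matches the system's ability to operate with visual or inertial sensors alone.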
Towards automatic activity classification and movement assessment during a sports training session
Motion analysis technologies have been widely used to monitor the potential for injury and to enhance athlete performance. However, most of these technologies are expensive, can only be used in laboratory environments, and examine only a few trials of each movement action. In this paper, we present a novel ambulatory motion analysis framework using wearable inertial sensors to accurately assess all of an athlete's activities in a real training environment. We first present a system that automatically classifies a large range of training activities using the Discrete Wavelet Transform (DWT) in conjunction with a Random Forest classifier. The classifier successfully distinguishes various activities with up to 98% accuracy. Secondly, a computationally efficient gradient descent algorithm is used to estimate the relative orientations of the wearable inertial sensors mounted on the shank, thigh and pelvis of a subject, from which the flexion-extension knee and hip angles are calculated. These angles, along with sacrum impact accelerations, are automatically extracted for each stride during jogging. Finally, normative data is generated and used to determine whether a subject's movement technique differs from the norm, in order to identify potential injury-related factors. For the joint angle data this is achieved using a curve-shift registration technique. It is envisaged that the proposed framework could be utilized for accurate and automatic sports activity classification and reliable movement technique evaluation in various unconstrained environments, for both injury management and performance enhancement.
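The DWT feature-extraction stage of a pipeline like the one above can be sketched with a hand-rolled Haar transform; the window values, levels and energy features are assumptions for illustration, and a real system would use a library wavelet transform feeding a trained Random Forest.

```python
# Minimal sketch: Haar DWT energy features of the kind that could feed
# a DWT + Random Forest activity classifier. Toy acceleration window.

def haar_dwt(signal):
    """One level of the Haar DWT: (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def dwt_features(window, levels=2):
    """Energy of the detail coefficients at each level, as a feature vector."""
    feats, current = [], window
    for _ in range(levels):
        current, detail = haar_dwt(current)
        feats.append(sum(d * d for d in detail))
    return feats

window = [0.0, 0.2, 1.1, 0.9, -0.4, -0.6, 0.1, 0.0]  # invented accel samples
features = dwt_features(window)
```

Per-level detail energies like these summarize how much of the signal's power lives at each time scale, which is what lets a tree ensemble separate, say, jogging strides from jumps.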
Anomaly Detection, Rule Adaptation and Rule Induction Methodologies in the Context of Automated Sports Video Annotation.
Automated video annotation is a topic of considerable interest in computer vision due to its applications in video search, object-based video encoding and enhanced broadcast content. The domain of sport broadcasting is, in particular, the subject of current research attention due to its fixed, rule-governed content. This research work aims to develop, analyze and demonstrate novel methodologies that can be useful in the context of adaptive and automated video annotation systems. In this thesis, we present methodologies for addressing the problems of anomaly detection, rule adaptation and rule induction for court-based sports such as tennis and badminton. We first introduce an HMM induction strategy for a court-model based method that uses the court structure in the form of a lattice for two related modalities of singles and doubles tennis, to tackle the problems of anomaly detection and rectification. We also introduce another anomaly detection methodology that is based on the disparity between the low-level vision-based classifiers and the high-level contextual classifier. Another approach to the problem of rule adaptation is also proposed that employs convex hulling of the anomalous states. We also investigate a number of novel hierarchical HMM generating methods for stochastic induction of game rules. These methodologies include Cartesian-product Label-based Hierarchical Bottom-up Clustering (CLHBC), which employs prior information within the label structures. A new constrained variant of the classical Chinese Restaurant Process (CRP) is also introduced that is relevant to sports games. We also propose two hybrid methodologies in this context, and a comparative analysis is made against the flat Markov model. We show that these methods are also generalizable to other rule-based environments.
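The HMM-based anomaly detection this abstract mentions can be illustrated with a toy forward-algorithm likelihood check: sequences the model explains poorly are flagged. The states, observation symbols and probabilities below are invented for the sketch and are not the thesis's court-lattice model.

```python
import math

# Toy sketch: flag an observation sequence as anomalous when its
# log-likelihood under a discrete HMM falls below that of normal play.
# All states, symbols and probabilities are invented.

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under a discrete HMM."""
    alpha = {s: start[s] * emit[s][obs[0]] for s in start}
    for o in obs[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in alpha) * emit[s][o]
                 for s in start}
    return math.log(sum(alpha.values()))

start = {"serve": 1.0, "play": 0.0}          # rallies begin with a serve
trans = {"serve": {"serve": 0.2, "play": 0.8},
         "play":  {"serve": 0.1, "play": 0.9}}
emit  = {"serve": {"hit": 0.7, "bounce": 0.3},
         "play":  {"hit": 0.4, "bounce": 0.6}}

normal  = ["hit", "bounce", "hit", "bounce"]
strange = ["bounce", "bounce", "bounce", "bounce"]
# in practice the threshold would be learnt from annotated normal play
is_anomaly = forward_loglik(strange, start, trans, emit) < \
             forward_loglik(normal, start, trans, emit)
```

A rule-governed domain such as court sport makes this viable: legal event sequences concentrate probability mass, so rule-breaking annotations stand out as low-likelihood outliers.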