202 research outputs found

    Vision Based Activity Recognition Using Machine Learning and Deep Learning Architecture

    Human activity recognition, with wide application in fields such as video surveillance, sports, human interaction, and elderly care, has greatly influenced people's standard of life. With the constant development of new architectures and models and the increase in computational capability, the adoption of machine learning and deep learning for activity recognition has shown great improvement and high performance in recent years. My research goal in this thesis is to design and compare machine learning and deep learning models for activity recognition using videos collected from different media in the field of sports. Human activity recognition (HAR) aims to automatically recognize the actions performed by a human from data collected from different sources. Based on the literature review, most data collected for analysis are either time-series data gathered from different sensors or video data captured by cameras. Firstly, our research analyzes and compares different machine learning and deep learning architectures on sensor-based data collected from smartphone accelerometers placed at different positions on the human body. Without any hand-crafted feature extraction, we found that deep learning architectures outperform most machine learning architectures, and that using multiple sensors yields higher accuracy than a dataset collected from a single sensor. Secondly, as collecting sensor data in real time is not feasible in all fields, such as sports, we study activity recognition using video datasets. For this, we applied transfer learning with two state-of-the-art deep learning architectures previously trained on large annotated datasets to recognize activities in three publicly available sports-related datasets. To extend the study to the different activities performed in a single sport, and to avoid the current trend of using special cameras and expensive setups around the court for data collection, we developed our own video dataset from broadcast coverage of basketball games. A detailed analysis and experiments based on different criteria, such as range of shots taken and scoring activities, are presented for eight different activities using state-of-the-art deep learning architectures for video classification.
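
    A minimal sketch of the transfer-learning setup described above, assuming a PyTorch/torchvision workflow: a 3D-CNN video backbone pretrained on a large annotated dataset (Kinetics-400) has its classification head replaced and fine-tuned for a small set of sport-specific activities. The backbone choice (r3d_18), the eight-class head, and the training details are illustrative assumptions, not the thesis's actual configuration.

```python
# Sketch: transfer learning for video-based activity recognition
# (assumed backbone and class count; not the thesis implementation).
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

NUM_ACTIVITIES = 8  # e.g., eight basketball activities, as in the abstract

# Load a 3D-CNN backbone pretrained on Kinetics-400.
model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)

# Freeze the pretrained feature extractor; only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head with one sized for the sport-specific labels.
model.fc = nn.Linear(model.fc.in_features, NUM_ACTIVITIES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Dummy batch of clips: (batch, channels, frames, height, width).
clips = torch.randn(2, 3, 16, 112, 112)
labels = torch.randint(0, NUM_ACTIVITIES, (2,))

logits = model(clips)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```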

    Automatic visual detection of human behavior: a review from 2000 to 2014

    Due to advances in information technology (e.g., digital video cameras, ubiquitous sensors), the automatic detection of human behaviors from video is a very recent research topic. In this paper, we perform a systematic and recent literature review on this topic, from 2000 to 2014, covering a selection of 193 papers searched from six major scientific publishers. The selected papers were classified into three main subjects: detection techniques, datasets and applications. The detection techniques were divided into four categories (initialization, tracking, pose estimation and recognition). The list of datasets includes eight examples (e.g., Hollywood action). Finally, several application areas were identified, including human detection, abnormal activity detection, action recognition, player modeling and pedestrian detection. Our analysis provides a road map to guide future research for designing automatic visual human behavior detection systems. This work is funded by the Portuguese Foundation for Science and Technology (FCT - Fundação para a Ciência e a Tecnologia) under research grant SFRH/BD/84939/2012.

    Enhancing volleyball training: empowering athletes and coaches through advanced sensing and analysis

    Modern sensing technologies and data analysis methods usher in a new era for sports training and practice. Hidden insights can be uncovered and interactive training environments can be created by means of data analysis. We present a system to support volleyball training which makes use of Inertial Measurement Units (IMUs), a pressure-sensitive display floor, and machine learning techniques to automatically detect relevant behaviours and provide the user with the appropriate information. While working with trainers and amateur athletes, we also explore potential applications driven by automatic action recognition, which contribute various requirements to the platform. The first application is an automatic video-tagging protocol that marks key events (captured on video) based on the automatic recognition of volleyball-specific actions from wearable sensors, achieving an unweighted average recall of 78.71% in the 10-fold cross-validation setting with a convolutional neural network and 73.84% in the leave-one-subject-out cross-validation setting with an active data representation method, as an exemplification of how dashboard and retrieval systems would work with the platform. In the context of action recognition, we evaluate statistical functions and their transformation using active data representation, besides the raw IMU signals. The second application is the “bump-set-spike” trainer, which uses automatic action recognition to provide real-time feedback about performance to steer player behaviour in volleyball, as an example of rich learning environments enabled by live action detection. In addition to describing these applications, we detail the system components and architecture and discuss the implications that our system might have for sports in general and for volleyball in particular.
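
    A minimal sketch, under assumed window length, channel count, and action set, of the kind of 1D convolutional classifier that could be applied to windowed IMU signals for volleyball action recognition as described above; it is not the authors' implementation.

```python
# Sketch: 1D CNN over windowed IMU signals (assumed shapes and action set).
import torch
import torch.nn as nn

NUM_ACTIONS = 7    # e.g., serve, underhand serve, pass, set, spike, block, ...
NUM_CHANNELS = 6   # assumed: 3-axis accelerometer + 3-axis gyroscope
WINDOW_LEN = 128   # assumed samples per sliding window

class ImuActionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, NUM_ACTIONS)

    def forward(self, x):          # x: (batch, channels, window_len)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

model = ImuActionCNN()
windows = torch.randn(4, NUM_CHANNELS, WINDOW_LEN)   # dummy sensor windows
logits = model(windows)                              # (4, NUM_ACTIONS)
```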

    Interfaces for human-centered production and use of computer graphics assets

    The abstract is in the attachment.

    Wearable Sensors and Smart Devices to Monitor Rehabilitation Parameters and Sports Performance: An Overview

    A quantitative evaluation of kinetic parameters, the joint's range of motion, heart rate, and breathing rate can be employed in sports performance tracking and rehabilitation monitoring following injuries or surgical operations. However, many of the current detection systems are expensive and designed for clinical use, requiring the presence of a physician and medical staff to assist users in the device's positioning and measurements. The goal of wearable sensors is to overcome the limitations of current devices, enabling the acquisition of a user's vital signs directly from the body in an accurate and non-invasive way. In sports activities, wearable sensors allow athletes to monitor performance and body movements objectively, going beyond the limits of the coach's subjective evaluation. The main goal of this review paper is to provide a comprehensive overview of wearable technologies and sensing systems to detect and monitor the physiological parameters of patients during post-operative rehabilitation and of athletes during training, and to present evidence that supports the efficacy of this technology for healthcare applications. First, a classification is introduced of the human physiological parameters acquired by sensors attached to sensitive skin locations or worn as part of garments, which carry important feedback on the user's health status. Then, a detailed description of the electromechanical transduction mechanisms allows a comparison of the technologies used in wearable applications to monitor sports and rehabilitation activities. This paves the way for an analysis of wearable technologies, providing a comprehensive comparison of the current state of the art of available sensors and systems. Comparative and statistical analyses are provided to point out useful insights for defining the best technologies and solutions for monitoring body movements. Lastly, the presented review is compared with similar ones reported in the literature to highlight its strengths and novelties.

    Towards Automatic Modelling of Volleyball Players' Behavior for Analysis, Feedback and Hybrid Training

    Automatic tagging of video recordings of sports matches and training sessions can be helpful to coaches and players and provide access to structured data at a scale that would be unfeasible if one were to rely on manual tagging. Recognition of different actions forms an essential part of sports video tagging. In this paper, the authors employ machine learning techniques to automatically recognize specific types of volleyball actions (i.e., underhand serve, overhead pass, serve, forearm pass, one-hand pass, smash, and block, which are manually annotated) during matches and training sessions (uncontrolled, in-the-wild data) based on motion data captured by inertial measurement unit sensors strapped on the wrists of eight female volleyball players. Analysis of the results suggests that all sensors in the inertial measurement unit (i.e., magnetometer, accelerometer, barometer, and gyroscope) contribute unique information to the classification of volleyball action types. The authors demonstrate that while the accelerometer feature set provides better results than the other sensors (i.e., gyroscope, magnetometer, and barometer), feature fusion of the accelerometer, magnetometer, and gyroscope provides the best results overall (unweighted average recall = 67.87%, unweighted average precision = 68.68%, and κ = .727), well above the chance level of 14.28%. Interestingly, it is also demonstrated that the dominant hand (unweighted average recall = 61.45%, unweighted average precision = 65.41%, and κ = .652) provides better results than the nondominant hand (unweighted average recall = 45.56%, unweighted average precision = 55.45%, and κ = .553). Apart from machine learning models, this paper also discusses a modular architecture for a system to automatically supplement video recordings by detecting events of interest in volleyball matches and training sessions and to provide tailored and interactive multimodal feedback through an HTML5/JavaScript application. A proof-of-concept prototype developed based on this architecture is also described.
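
    A minimal sketch of the feature-level fusion and evaluation protocol described above, assuming synthetic per-window statistical features and a generic classifier: features from the accelerometer, magnetometer, and gyroscope are concatenated, and unweighted average recall, unweighted average precision, and Cohen's kappa are computed with scikit-learn. The feature dimensions and classifier choice are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: sensor feature fusion and UAR/UAP/kappa evaluation (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows, n_classes = 500, 7

# Assumed per-window statistical features for each sensor (e.g., mean/std per axis).
acc_feats = rng.normal(size=(n_windows, 12))
mag_feats = rng.normal(size=(n_windows, 12))
gyr_feats = rng.normal(size=(n_windows, 12))
labels = rng.integers(0, n_classes, size=n_windows)

# Feature-level fusion: concatenate the three sensor feature sets.
fused = np.hstack([acc_feats, mag_feats, gyr_feats])

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

uar = recall_score(y_te, pred, average="macro")       # unweighted average recall
uap = precision_score(y_te, pred, average="macro", zero_division=0)
kappa = cohen_kappa_score(y_te, pred)
print(f"UAR={uar:.3f}  UAP={uap:.3f}  kappa={kappa:.3f}")
```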

    A Survey of Deep Learning in Sports Applications: Perception, Comprehension, and Decision

    Deep learning has the potential to revolutionize sports performance, with applications ranging from perception and comprehension to decision. This paper presents a comprehensive survey of deep learning in sports performance, focusing on three main aspects: algorithms, datasets and virtual environments, and challenges. Firstly, we discuss the hierarchical structure of deep learning algorithms in sports performance, which includes perception, comprehension, and decision, while comparing their strengths and weaknesses. Secondly, we list widely used existing datasets in sports and highlight their characteristics and limitations. Finally, we summarize current challenges and point out future trends of deep learning in sports. Our survey provides valuable reference material for researchers interested in deep learning in sports applications.