
    Application of Artificial Intelligence in Basketball Sport

    Basketball is among the most popular sports in the world, and its related industries generate enormous economic benefits. In recent years, the application of artificial intelligence (AI) technology in basketball has attracted considerable attention. We conducted a comprehensive review of research on the application of AI in basketball through literature retrieval. Current research focuses on AI analysis of team and player performance, prediction of competition results, analysis and prediction of shooting, AI coaching systems, intelligent training machines and arenas, and sports injury prevention. Most studies show that AI technology can raise the training level of basketball players, help coaches formulate suitable game strategies, prevent sports injuries, and improve the enjoyment of games. At the same time, we found that the number and quality of published papers remain limited. We believe that the application of AI in basketball is still in its infancy. We call on the relevant industries to increase research investment in this area and to promote the improvement of basketball, making the game increasingly exciting as its worldwide popularity continues to grow.

    INTELLIGENT COMPUTER VISION SYSTEM FOR SCORE DETECTION IN BASKETBALL

    The development of an intelligent computer vision system for Smart IoT basketball training and entertainment involves a range of subsystems, among which the score detection subsystem plays a crucial role. This paper proposes the architecture of such a score detection subsystem to improve the reliability and accuracy of the RFID technology used primarily for verification purposes. Challenges encompass hardware-software interdependencies, optimal camera selection, and cost-effectiveness considerations. Leveraging machine learning algorithms, the vision-based subsystem aims not only to detect scores but also to facilitate online video streaming. Although the use of multiple cameras offers expanded field coverage and heightened precision, it also introduces technical intricacies and increased costs due to image fusion and escalated processing requirements. This research navigates the balance between precise score detection and pragmatic system development. Through careful camera-configuration optimization, the proposed system harmonizes its hardware and software components.
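    The abstract does not give the subsystem's detection logic, but the idea of a vision-based score check can be sketched in a few lines. The following is a minimal illustration, not the paper's method: given per-frame ball centroids from some upstream detector, flag a score when the ball passes downward through a hypothetical hoop region (all coordinates and names here are invented for the example).

```python
# Minimal sketch of a vision-based score check (illustrative, not the
# paper's algorithm): flag a score when the tracked ball crosses the rim
# line from above while horizontally inside the hoop region.

def detect_score(centroids, hoop_x_range, rim_y):
    """centroids: list of (x, y) ball positions in image coordinates
    (y grows downward).  hoop_x_range: (x_min, x_max) of the hoop.
    rim_y: image row of the rim.  Returns True on a detected score."""
    for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
        inside = hoop_x_range[0] <= x1 <= hoop_x_range[1]
        crossed_down = y0 < rim_y <= y1
        if inside and crossed_down:
            return True
    return False

# Example trajectory: the ball descends through the hoop on the last step.
track = [(300, 80), (310, 120), (318, 160), (320, 210)]
print(detect_score(track, hoop_x_range=(300, 340), rim_y=200))  # True
```

In a real system this geometric test would sit behind a ball detector and serve as the vision-side complement to the RFID verification the paper describes.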

    Multi-Sensory Deep Learning Architectures for Slam Dunk Scene Classification

    Basketball teams at all levels of the game invest considerable time and effort in collecting, segmenting, and analysing footage from their upcoming opponents' previous games. This analysis helps teams identify and exploit potential weaknesses of their opponents and is commonly cited as one of the key elements of success in the modern game. The growing importance of this type of analysis has prompted research into applying computer vision and audio classification techniques to help teams classify scoring sequences and key events from game footage. However, this research tends to focus on classifying scenes using information from a single sensory source (visual or audio) and fails to exploit the wealth of multi-sensory information available within the footage. This dissertation aims to demonstrate that analysing the full range of audio and visual features contained in broadcast game footage through a multi-sensory deep learning architecture yields a more effective key-scene classification system than a single-sense model. Additionally, this dissertation explores the performance impact of training the audio component of a multi-sensory architecture on different representations of the audio features.
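    The core of a multi-sensory architecture like the one described is fusing per-modality embeddings before classification. The sketch below shows late fusion in its simplest form with NumPy: each modality gets its own linear-plus-ReLU branch, the branch outputs are concatenated, and a softmax head classifies the scene. All dimensions, weights, and class labels here are made-up placeholders, not the dissertation's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    """One modality branch: linear projection followed by ReLU."""
    return np.maximum(x @ W, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical dimensions: 128-d visual features, 64-d audio features,
# 32-d embedding per branch, 3 scene classes (e.g. score / foul / other).
W_vis = rng.normal(size=(128, 32))
W_aud = rng.normal(size=(64, 32))
W_out = rng.normal(size=(64, 3))   # 64 = two concatenated 32-d embeddings

def classify_scene(visual, audio):
    """Late fusion: concatenate branch embeddings, then classify."""
    fused = np.concatenate([embed(visual, W_vis), embed(audio, W_aud)], axis=-1)
    return softmax(fused @ W_out)

probs = classify_scene(rng.normal(size=(1, 128)), rng.normal(size=(1, 64)))
print(probs.shape)  # (1, 3)
```

A single-sense baseline would simply drop one branch; the dissertation's claim is that the fused representation classifies key scenes more effectively than either branch alone.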

    A framework for automatic semantic video annotation

    The rapidly increasing quantity of publicly available videos has driven research into developing automatic tools for indexing, rating, searching and retrieval. Textual semantic representations, such as tagging, labelling and annotation, are often important factors in the process of indexing any video, because of their user-friendly way of representing the semantics appropriate for search and retrieval. Ideally, this annotation should be inspired by the human cognitive way of perceiving and describing videos. The difference between the low-level visual contents and the corresponding human perception is referred to as the 'semantic gap'. Tackling this gap is even harder in the case of unconstrained videos, mainly due to the lack of any prior information about the analysed video on the one hand, and the huge amount of generic knowledge required on the other. This paper introduces a framework for the automatic semantic annotation of unconstrained videos. The proposed framework utilizes two non-domain-specific layers: low-level visual similarity matching, and an annotation analysis that employs commonsense knowledge bases. The commonsense ontology is created by incorporating multiple structured semantic relationships. Experiments and black-box tests are carried out on standard video databases for action recognition and video information retrieval. White-box tests examine the performance of the individual intermediate layers of the framework, and the evaluation of the results and the statistical analysis show that integrating visual similarity matching with commonsense semantic relationships provides an effective approach to automated video annotation.

    SPACE-TIME GRAPH-BASED CONVOLUTIONAL NEURAL NETWORKS: A STUDY ON MOVEMENT RECOGNITION OF FOOTBALL PLAYERS

    Behaviour recognition technology is an interdisciplinary field, integrating research achievements from computer vision, deep learning, pattern recognition and other areas. Skeleton data not only describes the motion posture of the human body in three-dimensional space accurately; its rigid connection structure is also robust to various external interference factors. However, behaviour recognition algorithms are affected by factors such as background, lighting and environment, which easily leads to unstable recognition accuracy and limited application scenarios. To address this problem, this paper proposes a noise-filtering algorithm based on data correlation and skeleton energy model filtering, constructs a football-player dataset, trains on the players' skeleton features with the ST-GCN algorithm, and builds a behaviour recognition system for football players. Finally, in comparison with Deep LSTM and 2s-AGCN, the proposed algorithm achieves top-1 and top-5 accuracies of 39.97% and 66.34%, respectively, significantly higher than the other two algorithms. The system can compile player statistics and analyse the technical and tactical movements of players on the football field.
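    The core operation ST-GCN applies to skeleton data is a spatial graph convolution: each joint's features are aggregated over its skeletal neighbourhood using a normalized adjacency matrix, then projected by learnable weights. The NumPy sketch below shows one such step on a toy 5-joint skeleton; the skeleton, dimensions, and random weights are illustrative assumptions, not the paper's configuration (and it omits the temporal convolution that gives ST-GCN its "space-time" character).

```python
import numpy as np

rng = np.random.default_rng(1)

def spatial_graph_conv(X, A, W):
    """One spatial graph-convolution step in the ST-GCN style:
    X: (joints, in_features) per-frame joint features,
    A: (joints, joints) skeleton adjacency (1 where bones connect),
    W: (in_features, out_features) learnable projection.
    Features are aggregated over each joint's neighbourhood with the
    symmetrically normalized adjacency D^-1/2 (A + I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # degree normalization
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W

# Toy 5-joint chain skeleton (e.g. hip with two legs), 3-d joint inputs.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (0, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0
X = rng.normal(size=(5, 3))   # per-joint coordinates/features
W = rng.normal(size=(3, 8))   # project to 8 output channels
out = spatial_graph_conv(X, A, W)
print(out.shape)  # (5, 8)
```

Because the aggregation follows the fixed bone structure rather than pixel neighbourhoods, the representation inherits the robustness to background and lighting that the abstract attributes to skeleton data.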

    Dense trajectories and motion boundary descriptors for action recognition

    This paper introduces a video representation based on dense trajectories and motion boundary descriptors. Trajectories capture the local motion information of the video. A dense representation guarantees good coverage of foreground motion as well as of the surrounding context. A state-of-the-art optical flow algorithm enables robust and efficient extraction of the dense trajectories. As descriptors we extract features aligned with the trajectories to characterize shape (point coordinates), appearance (histograms of oriented gradients) and motion (histograms of optical flow). Additionally, we introduce a descriptor based on motion boundary histograms (MBH), which rely on differential optical flow. The MBH descriptor is shown to consistently outperform other state-of-the-art descriptors, in particular on real-world videos that contain a significant amount of camera motion. We evaluate our video representation in the context of action classification on eight datasets, namely KTH, YouTube, Hollywood2, UCF sports, IXMAS, UIUC, Olympic Sports and UCF50. On all datasets our approach outperforms current state-of-the-art results.
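    The reason MBH tolerates camera motion is that it histograms the spatial derivatives of the optical flow rather than the flow itself, so locally constant flow (e.g. camera translation) contributes nothing. The following is a simplified NumPy sketch of that idea for a single flow component; it is not the full dense-trajectory pipeline, and the block layout, bin count, and test fields are assumptions for illustration.

```python
import numpy as np

def mbh_histogram(flow_component, n_bins=8):
    """Motion boundary histogram for one optical-flow component (u or v):
    take spatial derivatives of the flow field, then accumulate an
    orientation histogram weighted by gradient magnitude.  Flow that is
    locally constant (pure camera translation) has zero derivative and
    therefore drops out of the descriptor."""
    gy, gx = np.gradient(flow_component)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)  # orientation in [0, 2*pi)
    bins = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())   # magnitude-weighted votes
    s = hist.sum()
    return hist / s if s > 0 else hist           # L1-normalize if non-empty

# Constant flow (camera translation) yields an empty histogram, while a
# motion boundary (moving object edge) produces histogram mass.
flat = np.full((16, 16), 2.0)
step = np.zeros((16, 16)); step[:, 8:] = 2.0
print(mbh_histogram(flat).sum(), mbh_histogram(step).sum())
```

In the actual descriptor this computation is applied to both flow components over space-time cells around each trajectory and the per-cell histograms are concatenated.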