
    Player detection in field sports

    We describe a method for player detection in field sports with a fixed camera set-up, based on a new player feature extraction strategy. The proposed method detects players in static images with a sliding-window technique. First, we compute a binary edge image; the detector window is then shifted over the edge regions. Given a set of binary edges in a sliding window, we introduce and solve a particular diffusion equation to generate a shape information image. This diffusion is the key stage and the main theoretical contribution of our new algorithm: it removes the appearance variations of an object while preserving its shape information, and it enables the use of polar and Fourier transforms in the next stage to achieve scale- and rotation-invariant feature extraction. A Support Vector Machine (SVM) classifier is used to assign either the player or the non-player class to a detector window. We evaluate our approach on three different field hockey datasets. In general, the results show that the proposed feature extraction is effective and achieves results competitive with state-of-the-art methods.
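
    As an illustration of the pipeline sketched in this abstract, the snippet below shows a minimal, hypothetical version: diffuse from a binary edge window to a shape image, take log-polar plus Fourier-magnitude features, and classify with an SVM. The paper's particular diffusion equation is not given here, so an ordinary isotropic heat diffusion stands in for it; the window size, iteration count, OpenCV/scikit-learn calls and the training loop are all assumptions.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def shape_information_image(window_edges, iterations=50):
    """Diffuse outwards from the binary edges to obtain a smooth shape image.
    (Plain isotropic diffusion; the paper's specific equation is not reproduced.)"""
    u = window_edges.astype(np.float32)
    sources = window_edges > 0
    for _ in range(iterations):
        u = cv2.blur(u, (3, 3))   # one explicit diffusion step
        u[sources] = 1.0          # keep edge pixels fixed as sources
    return u

def scale_rotation_invariant_features(shape_img):
    """Log-polar resampling followed by the FFT magnitude spectrum: scale and
    rotation become shifts in log-polar space, and the magnitude spectrum is
    invariant to such shifts."""
    h, w = shape_img.shape
    polar = cv2.warpPolar(shape_img, (64, 64), (w / 2.0, h / 2.0),
                          min(h, w) / 2.0, cv2.INTER_LINEAR + cv2.WARP_POLAR_LOG)
    return np.abs(np.fft.fft2(polar)).flatten()

def train_player_detector(edge_windows, labels):
    """Fit an SVM on features from pre-cut binary edge windows (hypothetical data)."""
    feats = [scale_rotation_invariant_features(shape_information_image(w))
             for w in edge_windows]
    clf = SVC(kernel="rbf")
    clf.fit(np.array(feats), np.array(labels))
    return clf
```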

    Basketball game analyzing based on computer vision

    With tremendous improvements in computer vision technology, various industries have started to apply computer vision to analyze huge volumes of multimedia content. Sport, as one of the most heavily invested industries, is also stepping up to utilize this technology to enhance its intelligent sports products. This thesis follows this development and provides prototype implementations of computer vision algorithms for the sports industry. The main objective is to develop initial algorithms for play-field detection and player tracking in basketball game video. Play-field detection is an important task in sports video content analysis, as it provides the foundation for further operations such as object detection, object tracking, or semantic event highlighting and summarization. Player tracking, on the other hand, highlights player movements during critical events in a basketball game. Developing effective and efficient player tracking in basketball video is also challenging, due to factors such as pose variation, illumination change, occlusion, and motion blur. This thesis proposes reliable and efficient prototype algorithms that address play-field detection and single-player tracking. The SURF algorithm is utilized and modified to provide a precise location of the play-field and to overlay trajectory data, improving the viewer's experience of the sports product. A compressive tracking algorithm is implemented to capture and track a single player during important events and reveal the player's tactics. These prototype implementations meet current needs in the field of basketball video content analysis.
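
    A rough sketch of the keypoint-matching idea behind the play-field detection step is given below. The thesis uses a modified SURF; since SURF requires an OpenCV contrib/non-free build, ORB is used here as a freely available stand-in, and the template and frame paths are placeholders.

```python
import cv2
import numpy as np

def locate_playfield(template_path="court_template.png", frame_path="frame.png"):
    """Estimate where the play-field template appears in a broadcast frame."""
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)

    detector = cv2.ORB_create(nfeatures=2000)       # stand-in for (modified) SURF
    kp_t, des_t = detector.detectAndCompute(template, None)
    kp_f, des_f = detector.detectAndCompute(frame, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_f), key=lambda m: m.distance)[:100]

    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Project the template outline into the frame to obtain the play-field region.
    h, w = template.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(corners, H)
```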

    Improving Object Detection Quality in Football Through Super-Resolution Techniques

    This study explores the potential of super-resolution techniques in enhancing object detection accuracy in football. Given the sport's fast-paced nature and the critical importance of precise object (e.g., ball, player) tracking for both analysis and broadcasting, super-resolution could offer significant improvements. We investigate how advanced image processing through super-resolution impacts the accuracy and reliability of object detection algorithms in processing football match footage. Our methodology involved applying state-of-the-art super-resolution techniques to a diverse set of football match videos from SoccerNet, followed by object detection using Faster R-CNN. The performance of these algorithms, both with and without super-resolution enhancement, was rigorously evaluated in terms of detection accuracy. The results indicate a marked improvement in object detection accuracy when super-resolution preprocessing is applied. The integration of super-resolution techniques yields significant benefits, especially in low-resolution scenarios, with a notable 12% increase in mean Average Precision (mAP) at an IoU (Intersection over Union) range of 0.50:0.95 for 320×240 images when increasing the resolution fourfold using RLFN. As the input dimensions increase, the magnitude of improvement becomes more subdued; however, a discernible improvement in detection quality is consistently evident. Additionally, we discuss the implications of these findings for real-time sports analytics, player tracking, and the overall viewing experience. The study contributes to the growing field of sports technology by demonstrating the practical benefits and limitations of integrating super-resolution techniques in football analytics and broadcasting.
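
    The evaluation idea, upscaling low-resolution frames before running an off-the-shelf detector, can be sketched as follows. Bicubic interpolation stands in for the RLFN super-resolution network used in the study, the torchvision Faster R-CNN is a generic pretrained model rather than the study's exact configuration, and the frame path and score threshold are illustrative.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Generic pretrained detector; the study uses its own Faster R-CNN configuration.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_after_upscaling(image, scale=4, score_threshold=0.5):
    """Upscale the frame fourfold, then run the detector on the larger image."""
    w, h = image.size
    upscaled = image.resize((w * scale, h * scale), Image.BICUBIC)  # stand-in for RLFN
    with torch.no_grad():
        preds = model([to_tensor(upscaled)])[0]
    keep = preds["scores"] > score_threshold
    return preds["boxes"][keep], preds["labels"][keep]

frame = Image.open("frame_320x240.png").convert("RGB")   # hypothetical 320x240 frame
boxes, labels = detect_after_upscaling(frame)
```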

    Spatial movement pattern recognition in soccer based on relative player movements

    Knowledge of spatial movement patterns that occur regularly in soccer can give a soccer coach, analyst, or reporter insights into the playing style or tactics of a group of players or a team. Furthermore, it can support a coach in better preparing for a soccer match by analysing (trained) movement patterns of both his own and the opponent's players. We explore the use of the Qualitative Trajectory Calculus (QTC), a spatiotemporal qualitative calculus describing the relative movement between objects, for spatial movement pattern recognition of player movements in soccer. The proposed method allows for the recognition of spatial movement patterns that occur on different parts of the field and/or at different spatial scales. Furthermore, the Levenshtein distance metric supports the recognition of similar movements that occur at different speeds and enables the comparison of movements of different temporal lengths. We first present the basics of the calculus, and subsequently illustrate its applicability with a real soccer case. To that end, we present a situation where a user chooses the movements of two players during 20 seconds of a real match from a 2016-2017 professional soccer competition as a reference fragment. Following a pattern matching procedure, we describe all other fragments with QTC and calculate their distance to the QTC representation of the reference fragment. The top-k most similar fragments of the same match are presented and validated by means of a duo-trio test. The analyses show the potential of QTC for spatial movement pattern recognition in soccer.
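
    The two ingredients named in this abstract, a qualitative trajectory encoding and an edit-distance comparison, can be sketched as below. Only the simplest QTC variant (QTC-Basic) is shown; the epsilon, the data layout (trajectories as arrays of (x, y) positions) and the omission of the paper's fragment-matching machinery are assumptions.

```python
import numpy as np

def qtc_symbol(p_prev, p_now, q_prev, q_now, eps=1e-6):
    """One QTC-Basic character pair for players p and q between two time steps."""
    def sign(moving_prev, moving_now, other_prev):
        d_prev = np.linalg.norm(moving_prev - other_prev)
        d_now = np.linalg.norm(moving_now - other_prev)
        if d_now < d_prev - eps:
            return "-"          # moving towards the other player
        if d_now > d_prev + eps:
            return "+"          # moving away from the other player
        return "0"              # neither
    return sign(p_prev, p_now, q_prev) + sign(q_prev, q_now, p_prev)

def qtc_string(traj_p, traj_q):
    """Encode a two-player fragment (arrays of (x, y) per frame) as QTC symbols."""
    return [qtc_symbol(traj_p[i - 1], traj_p[i], traj_q[i - 1], traj_q[i])
            for i in range(1, len(traj_p))]

def levenshtein(a, b):
    """Edit distance between two QTC sequences, so fragments of different
    lengths or speeds can still be compared."""
    dp = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    dp[:, 0] = np.arange(len(a) + 1)
    dp[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i, j] = min(dp[i - 1, j] + 1, dp[i, j - 1] + 1,
                           dp[i - 1, j - 1] + cost)
    return dp[len(a), len(b)]
```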

    Team behaviour analysis in sports using the Poisson equation

    We propose a novel physics-based model for analysing team players' positions and movements on a sports playing field. The goal is to detect, for each frame, the region with the highest population of a given team's players and the region towards which the team is moving as it presses for territorial advancement, termed the region of intent. Given the positions of team players from a plan view of the playing field at any given time, we solve a particular Poisson equation to generate a smooth distribution. The proposed distribution provides the likelihood that a point is occupied by players, so that more highly populated regions can be detected by appropriate thresholding. Computing the proposed distribution for each frame provides a sequence of distributions, which we process to detect the region of intent at any time during the game. Our model is evaluated on a field hockey dataset, and the results show that the proposed approach can provide effective features that could be used to generate team statistics useful for performance evaluation or broadcasting purposes.
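
    A minimal sketch of the core computation follows: unit sources are placed at the plan-view player positions, a Poisson equation is solved on a coarse grid of the pitch by Jacobi iteration, and the solution is thresholded to obtain the most densely populated region. The grid size, iteration count, threshold and example positions are illustrative rather than the paper's values.

```python
import numpy as np

def occupancy_distribution(player_xy, grid=(60, 100), iterations=500):
    """Solve -laplace(u) = f with zero boundary values by Jacobi iteration,
    where f is 1 at grid cells containing a player and 0 elsewhere."""
    f = np.zeros(grid)
    for x, y in player_xy:                    # positions already scaled to grid cells
        f[int(y), int(x)] = 1.0
    u = np.zeros(grid)
    for _ in range(iterations):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] + f[1:-1, 1:-1])
    return u

def populated_region(u, rel_threshold=0.6):
    """Boolean mask of the most densely occupied region of the pitch."""
    return u >= rel_threshold * u.max()

# Example: five players of one team on a 100x60 plan-view grid (hypothetical data).
players = [(20, 30), (25, 28), (22, 35), (40, 30), (23, 31)]
mask = populated_region(occupancy_distribution(players))
```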

    Semantic analysis of field sports video using a Petri-net of audio-visual concepts

    The most common approach to automatic summarisation and highlight detection in sports video is to train an automatic classifier to detect semantic highlights based on occurrences of low-level features such as action replays, excited commentators or changes in a scoreboard. We propose an alternative approach based on the detection of perception concepts (PCs) and the construction of Petri-Nets which can be used for both semantic description and event detection within sports videos. Low-level algorithms for the detection of perception concepts using visual, aural and motion characteristics are proposed, and a series of Petri-Nets composed of perception concepts is formally defined to describe video content. We call this a Perception Concept Network-Petri Net (PCN-PN) model. Using PCN-PNs, personalized high-level semantic descriptions of video highlights can be facilitated and queries on high-level semantics can be achieved. A particular strength of this framework is that we can easily build semantic detectors based on PCN-PNs to search within sports videos and locate interesting events. Experimental results based on recorded sports video data across three types of sports games (soccer, basketball and rugby), and each from multiple broadcasters, are used to illustrate the potential of this framework.
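
    The Petri-Net idea can be illustrated with a toy marking/firing sketch: low-level detectors put tokens in places that represent perception concepts, and a transition fires, flagging a candidate highlight, once all of its input places are marked. The concept names and the net topology below are invented for illustration and are not the PCN-PN definitions from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    tokens: dict = field(default_factory=dict)        # place -> token count
    transitions: dict = field(default_factory=dict)    # transition name -> input places

    def mark(self, place):
        """A low-level detector reports a perception concept: add a token."""
        self.tokens[place] = self.tokens.get(place, 0) + 1

    def fire(self, name):
        """Fire a transition if every input place holds a token; consume them."""
        inputs = self.transitions[name]
        if all(self.tokens.get(p, 0) > 0 for p in inputs):
            for p in inputs:
                self.tokens[p] -= 1
            return True                                 # candidate highlight detected
        return False

net = PetriNet()
net.transitions["goal_highlight"] = ["action_replay", "crowd_excitement", "scoreboard_change"]
for concept in ["action_replay", "crowd_excitement", "scoreboard_change"]:
    net.mark(concept)                                   # tokens from low-level detectors
print(net.fire("goal_highlight"))                       # True -> candidate highlight
```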

    Extracting semantic entities and events from sports tweets

    Large volumes of user-generated content on practically every major issue and event are being created on the microblogging site Twitter. This content can be combined and processed to detect events, entities, and popular moods to feed various knowledge-intensive practical applications. On the downside, these content items are very noisy and highly informal, making it difficult to extract sense from the stream. In this paper, we explore various approaches to detect named entities and significant micro-events from users' tweets during a live sports event. We describe how combining linguistic features with background knowledge and the use of Twitter-specific features can achieve high-precision detection results (F-measure = 87%) on different datasets. A study was conducted on tweets from cricket matches in the ICC World Cup in order to augment the event-related non-textual media with collective intelligence.
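
    The feature-combination idea, background knowledge plus Twitter-specific cues plus simple linguistic patterns, can be caricatured as below. The gazetteer entries, regular expressions and example tweet are invented; the paper's actual pipeline is considerably richer.

```python
import re

GAZETTEER = {"sachin tendulkar", "ms dhoni", "kevin pietersen"}   # hypothetical squad list

def candidate_entities(tweet):
    """Combine three cheap cues to propose entity candidates in one tweet."""
    text = tweet.lower()
    hits = {name for name in GAZETTEER if name in text}            # background knowledge
    hits |= set(re.findall(r"#(\w+)", tweet))                       # Twitter-specific: hashtags
    hits |= set(re.findall(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", tweet))  # linguistic cue: capitalised bigrams
    return hits

print(candidate_entities("What a six by Sachin Tendulkar! #CWC11 #India"))
```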

    Automatic camera selection for activity monitoring in a multi-camera system for tennis

    In professional tennis training matches, the coach needs to be able to view play from the most appropriate angle in order to monitor players' activities. In this paper, we describe and evaluate a system for automatic camera selection from a network of synchronised cameras within a tennis sporting arena. This work combines synchronised video streams from multiple cameras into a single summary video suitable for critical review by both tennis players and coaches. Using an overhead camera view, our system automatically determines the 2D tennis-court calibration, resulting in a mapping that relates a player's position in the overhead camera to their position and size in another camera view in the network. This allows the system to determine the appearance of a player in each of the other cameras and thereby choose the best view for each player via a novel technique. The video summaries are evaluated in end-user studies and shown to provide an efficient means of multi-stream visualisation for tennis player activity monitoring.
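
    The geometric core described here, mapping a player's overhead-view position into each side camera and picking the best view, can be sketched as follows. The homographies, image size and the centre-distance scoring rule are placeholders; the paper's camera-selection technique is more involved.

```python
import cv2
import numpy as np

def project(H, point_xy):
    """Map one overhead-view point into a camera image with homography H."""
    p = np.float32([[point_xy]])                 # shape (1, 1, 2) for perspectiveTransform
    return cv2.perspectiveTransform(p, H)[0, 0]

def best_camera(player_xy, homographies, image_size=(1280, 720)):
    """Choose the camera whose projection of the player is in view and
    closest to the image centre (illustrative scoring rule only)."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    scores = {}
    for cam, H in homographies.items():
        x, y = project(H, player_xy)
        in_view = 0 <= x < image_size[0] and 0 <= y < image_size[1]
        scores[cam] = -np.hypot(x - cx, y - cy) if in_view else -np.inf
    return max(scores, key=scores.get)
```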