39 research outputs found

    An automatic analyzer for sports video databases using visual cues and real-world modeling

    With the advent of hard-disk video recording, video databases are gradually emerging for consumer applications. The large capacity of disks creates the need for fast storage and retrieval functions. We propose a semantic analyzer for sports video, which is able to automatically extract and analyze key events, such as player behavior. The analyzer employs several visual cues and a model for real-world coordinates, so that the speed and position of a player can be determined with sufficient accuracy. It consists of four processing steps: (1) playing-event detection, (2) court and player segmentation, together with a 3-D camera model, (3) player tracking, and (4) event-based high-level analysis exploiting visual cues extracted in real-world coordinates. We show attractive experimental results demonstrating the system's efficiency and classification skills.
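The real-world modeling described above can be illustrated with a minimal sketch: a planar homography maps tracked image positions onto court coordinates in metres, from which a player's speed follows. All names and the homography values here are illustrative assumptions, not taken from the paper.

```python
import math

def to_court(H, x, y):
    """Apply a 3x3 homography (row-major nested lists) to an image point,
    returning court-plane coordinates in metres."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def player_speed(H, track, fps):
    """Average speed (m/s) over a track of per-frame image positions."""
    pts = [to_court(H, x, y) for x, y in track]
    dist = sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
    return dist * fps / (len(track) - 1)

# Toy calibration: 100 pixels correspond to 1 metre on the court plane.
H = [[0.01, 0.0, 0.0],
     [0.0, 0.01, 0.0],
     [0.0, 0.0, 1.0]]
speed = player_speed(H, [(0, 0), (100, 0), (200, 0)], fps=25)
```

In practice the homography would be estimated from known court markings rather than assumed, but the speed computation itself reduces to this per-frame displacement in world coordinates.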

    Soccer Event Retrieval Based on Speech Content: A Vietnamese Case Study


    Real-time event classification in field sport videos

    The paper presents a novel approach to real-time event detection in sports broadcasts. We show how the same underlying audio-visual feature-extraction algorithm, based on new global image descriptors, is robust across a range of different sports, alleviating the need to tailor it to a particular sport. In addition, we propose and evaluate three different classifiers to detect events using these features: a feed-forward neural network, an Elman neural network, and a decision tree. Each is investigated and evaluated in terms of its usefulness for real-time event classification. We also propose a ground-truth dataset, together with an annotation technique, for performance evaluation of each classifier, useful to others interested in this problem.
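Of the three classifiers compared, the decision tree is the easiest to sketch. The following toy example, with hypothetical feature names and thresholds (not from the paper), shows the shape of such a rule-based event classifier over global audio-visual features:

```python
def classify_event(features):
    """Toy decision tree mapping hypothetical audio-visual features
    to a coarse event label. Thresholds are illustrative only."""
    crowd = features["crowd_audio"]   # crowd-noise energy, 0..1
    motion = features["motion"]       # global motion magnitude, 0..1
    if crowd > 0.7:
        # loud crowd: either live action (high motion) or a replay
        return "score" if motion > 0.5 else "replay"
    return "play"

label = classify_event({"crowd_audio": 0.9, "motion": 0.8})
```

A learned tree would induce such splits from annotated training data; the appeal for real-time use is that evaluation costs only a handful of comparisons per frame window.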

    Semantic Based Sport Video Browsing


    A Study On Information Retrieval Systems

    Video is a key component of today's multimedia applications, including Video Cassette Recording (VCR), Video-on-Demand (VoD), and virtual walkthroughs, driven by the rapid growth in video technology (Rynson W.H. Lau et al. 2000). Owing to advances in media, digital TV, and information systems, a huge amount of video data is now widely available (Walid G. Aref et al. 2003). The rapid growth in digital video content has made accessing and retrieving information from a large video database a complicated problem (Chih-Wen Su et al. 2005). Therefore, the need for tools and frameworks that can efficiently retrieve the most relevant video content has evoked a great deal of interest among researchers. Sports video has been chosen as the prime application in this thesis since it attracts viewers around the world.

    Event detection in soccer video based on audio/visual keywords

    Master of Science thesis

    TagBook: A Semantic Video Representation without Supervision for Event Detection

    We consider the problem of event detection in video for scenarios where only a few, or even zero, examples are available for training. For this challenging setting, the prevailing solutions in the literature rely on a semantic video representation obtained from thousands of pre-trained concept detectors. Different from existing work, we propose a new semantic video representation that is based on freely available socially tagged videos only, without the need for training any intermediate concept detectors. We introduce a simple algorithm that propagates tags from a video's nearest neighbors, similar in spirit to those used for image retrieval, but redesigned for video event detection by including video source-set refinement and varying the video tag assignment. We call our approach TagBook and study its construction, descriptiveness, and detection performance on the TRECVID 2013 and 2014 multimedia event detection datasets and the Columbia Consumer Video dataset. Despite its simple nature, the proposed TagBook video representation is remarkably effective for few-example and zero-example event detection, even outperforming very recent state-of-the-art alternatives building on supervised representations. (Accepted for publication as a regular paper in the IEEE Transactions on Multimedia.)