
    The Lowlands team at TRECVID 2008

    In this paper we describe our experiments performed for TRECVID 2008. We participated in the High-Level Feature Extraction and Search tasks. For the High-Level Feature Extraction task we mainly set up our detection environment. In the Search task we applied our new PRFUBE ranking model, together with a method for estimating a vital parameter of the model: the probability of a concept occurring in relevant shots. The PRFUBE model has similarities to the well-known probabilistic text information retrieval methodology and follows the Probability Ranking Principle.
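
    The abstract does not give PRFUBE's scoring formula, so the following is only a minimal sketch of probabilistic concept-based shot ranking in the same spirit: per-concept detector probabilities are weighted by the estimated probability of the concept occurring in relevant shots (the vital parameter above). The function names and the log-lift weighting are illustrative assumptions, not the paper's model.

    import math

    def rank_shots(shots, p_concept_given_relevant, p_concept_prior):
        # shots: list of dicts, each mapping concept -> detector probability
        # p_concept_given_relevant: estimated P(concept | relevant shot),
        #   the "vital parameter" estimated in the paper
        # p_concept_prior: P(concept) over the whole collection
        def score(shot):
            s = 0.0
            for concept, p_detected in shot.items():
                # weight detector evidence by how much more likely the concept
                # is in relevant shots than in the collection at large
                lift = p_concept_given_relevant[concept] / p_concept_prior[concept]
                s += p_detected * math.log(lift)
            return s
        return sorted(shots, key=score, reverse=True)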

    Semantic analysis of field sports video using a petri-net of audio-visual concepts

    The most common approach to automatic summarisation and highlight detection in sports video is to train an automatic classifier to detect semantic highlights based on occurrences of low-level features such as action replays, excited commentators or changes in a scoreboard. We propose an alternative approach based on the detection of perception concepts (PCs) and the construction of Petri-Nets which can be used for both semantic description and event detection within sports videos. Low-level algorithms for the detection of perception concepts using visual, aural and motion characteristics are proposed, and a series of Petri-Nets composed of perception concepts is formally defined to describe video content. We call this a Perception Concept Network-Petri Net (PCN-PN) model. Using PCN-PNs, personalised high-level semantic descriptions of video highlights can be produced and queries on high-level semantics can be answered. A particular strength of this framework is that semantic detectors based on PCN-PNs can easily be built to search within sports videos and locate interesting events. Experimental results based on recorded sports video data across three types of sports games (soccer, basketball and rugby), each from multiple broadcasters, illustrate the potential of this framework.
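
    As a minimal sketch of the PCN-PN idea, the toy Petri net below treats detected perception concepts as marked places and fires an event transition once all of its input places are marked. The concept names and the goal-highlight rule are invented for illustration and are not the paper's actual nets.

    class PetriNet:
        def __init__(self):
            self.marking = set()      # currently marked places (detected concepts)
            self.transitions = {}     # event name -> set of required input places

        def add_transition(self, event, required_places):
            self.transitions[event] = set(required_places)

        def mark(self, place):
            # mark a place when its perception-concept detector fires and
            # report any events whose input places are now all marked
            self.marking.add(place)
            return [e for e, req in self.transitions.items() if req <= self.marking]

    net = PetriNet()
    net.add_transition("goal_highlight",
                       {"action_replay", "excited_commentary", "scoreboard_change"})
    net.mark("excited_commentary")
    net.mark("action_replay")
    print(net.mark("scoreboard_change"))   # ['goal_highlight']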

    CLIPS and NII at TRECvid: Shot segmentation and feature extraction

    This paper presents the systems used by the CLIPS-IMAG laboratory. We participated in the shot segmentation and high-level feature extraction tasks. This year we focus on the High-Level Feature Extraction task, based on key-frame classification. We propose an original and promising framework for incorporating contextual information (from image content) into the concept detection process. The proposed method combines local and global classifiers with stacking, using SVMs. We examine the effect of topological and semantic context on concept detection performance and propose solutions to handle the large number of dimensions involved in the classified data.
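
    A minimal sketch of the stacking structure described above, assuming scikit-learn: one SVM is trained on local features and one on global features, and a meta-level SVM combines their out-of-fold probability outputs. Feature extraction and data loading are out of scope; only the stacking arrangement follows the abstract.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_predict

    def train_stacked_detector(X_local, X_global, y):
        # base classifiers over local and global feature spaces
        local_svm = SVC(probability=True)
        global_svm = SVC(probability=True)
        # out-of-fold predictions avoid leaking training labels into the meta level
        p_local = cross_val_predict(local_svm, X_local, y,
                                    cv=5, method="predict_proba")[:, 1]
        p_global = cross_val_predict(global_svm, X_global, y,
                                     cv=5, method="predict_proba")[:, 1]
        # meta-level SVM stacked over the two base outputs
        meta = SVC(probability=True).fit(np.column_stack([p_local, p_global]), y)
        local_svm.fit(X_local, y)
        global_svm.fit(X_global, y)
        return local_svm, global_svm, meta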

    Everyday concept detection in visual lifelogs: validation, relationships and trends

    The Microsoft SenseCam is a small lightweight wearable camera used to passively capture photos and other sensor readings from a user's day-to-day activities. It can capture up to 3,000 images per day, equating to almost 1 million images per year. It is used to aid memory by creating a personal multimedia lifelog, or visual recording of the wearer's life. However, the sheer volume of image data captured within a visual lifelog creates a number of challenges, particularly for locating relevant content. Within this work, we explore the applicability of semantic concept detection, a method often used within video retrieval, to the novel domain of visual lifelogs. A concept detector models the correspondence between low-level visual features and high-level semantic concepts (such as indoors, outdoors, people, buildings, etc.) using supervised machine learning, and thereby determines the probability of a concept's presence. We apply detection of 27 everyday semantic concepts to a lifelog collection composed of 257,518 SenseCam images from 5 users. The results were then evaluated on a subset of 95,907 images to determine the precision of detection for each semantic concept. We conduct further analysis of the temporal consistency, co-occurrence and trends within the detected concepts to more extensively investigate the robustness of the detectors within this novel domain. We additionally present future applications of concept detection within the domain of lifelogging.
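
    As a minimal sketch of the post-hoc analysis described above (an assumed interpretation, not the authors' code): given a time-ordered stream of per-image concept probabilities, the snippet counts pairwise co-occurrence and measures temporal consistency as how often a detected concept is still detected in the next image. The 0.5 detection threshold is an assumption.

    from itertools import combinations
    from collections import Counter

    def analyse(detections, threshold=0.5):
        # detections: time-ordered list of dicts, concept -> probability
        present = [{c for c, p in d.items() if p >= threshold} for d in detections]
        cooccur = Counter()
        persist = Counter()
        for frame in present:
            # count each pair of concepts detected in the same image
            for pair in combinations(sorted(frame), 2):
                cooccur[pair] += 1
        for prev, cur in zip(present, present[1:]):
            # a concept is temporally consistent if it survives to the next image
            persist.update(prev & cur)
        totals = Counter(c for frame in present for c in frame)
        consistency = {c: persist[c] / totals[c] for c in totals}
        return cooccur, consistency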

    QUBIC: The QU Bolometric Interferometer for Cosmology

    One of the major challenges of modern cosmology is the detection of B-mode polarization anisotropies in the CMB. These originate from tensor fluctuations of the metric produced during the inflationary phase. Their detection would therefore constitute a major step towards understanding the primordial Universe. The expected level of these anisotropies is, however, so small that it requires a new generation of instruments with high sensitivity and extremely good control of systematic effects. We propose the QUBIC instrument, based on the novel concept of bolometric interferometry, bringing together the sensitivity advantages of bolometric detectors with the systematic-effects advantages of interferometry. Methods: the instrument will directly observe the sky through an array of entry horns whose signals will be combined using an optical combiner. The whole set-up is located inside a cryostat. Polarization modulation will be achieved using a rotating half-wave plate, and interference fringes will be imaged on two focal planes (separated by a polarizing grid) tiled with bolometers. We show that QUBIC can be considered a synthetic imager, exactly equivalent to a usual imager but with a synthesized beam formed by the array of entry horns. Scanning the sky provides an additional modulation of the signal and improves the shape of the sky coverage. The usual techniques of map-making and power spectrum estimation can then be applied. We show that the sensitivity of such an instrument is comparable with that of an imager with the same number of horns. We anticipate a low level of beam-related systematics thanks to the fact that the synthesized beam is determined by the location of the primary horns. Other systematics should be under good control thanks to an autocalibration technique, specific to our concept, that will permit the accurate determination of most of the systematic parameters. (Comment: 12 pages, 10 figures, submitted to Astronomy and Astrophysics.)
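
    As a minimal numerical sketch of the synthesized-beam idea (not QUBIC's actual design), the beam along one axis can be computed as the squared modulus of the coherent sum of plane-wave phases over the horn positions; the horn count, spacing and wavelength below are assumed values.

    import numpy as np

    wavelength = 2e-3                 # ~150 GHz band, assumed value [m]
    spacing = 14e-3                   # assumed horn spacing [m]
    x = spacing * np.arange(20)       # horn positions along one axis (assumed 20 horns)

    theta = np.linspace(-0.2, 0.2, 1001)           # scan angle [rad]
    # phase picked up by each horn for a plane wave arriving from angle theta
    phase = 2j * np.pi / wavelength * np.outer(x, np.sin(theta))
    beam = np.abs(np.exp(phase).sum(axis=0)) ** 2  # coherent sum over horns
    beam /= beam.max()                             # normalised synthesized-beam cut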

    Validating the detection of everyday concepts in visual lifelogs

    The Microsoft SenseCam is a small lightweight wearable camera used to passively capture photos and other sensor readings from a user's day-to-day activities. It can capture up to 3,000 images per day, equating to almost 1 million images per year. It is used to aid memory by creating a personal multimedia lifelog, or visual recording of the wearer's life. However, the sheer volume of image data captured within a visual lifelog creates a number of challenges, particularly for locating relevant content. Within this work, we explore the applicability of semantic concept detection, a method often used within video retrieval, to the novel domain of visual lifelogs. A concept detector models the correspondence between low-level visual features and high-level semantic concepts (such as indoors, outdoors, people, buildings, etc.) using supervised machine learning, and thereby determines the probability of a concept's presence. We apply detection of 27 everyday semantic concepts to a lifelog collection composed of 257,518 SenseCam images from 5 users. The results were then evaluated on a subset of 95,907 images to determine the precision of detection for each semantic concept and to draw some interesting inferences about the lifestyles of those 5 users. We additionally present future applications of concept detection within the domain of lifelogging. © 2008 Springer Berlin Heidelberg
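
    A minimal sketch of the per-concept precision evaluation mentioned above, assuming per-image concept probabilities and a labelled subset; the data structures and the 0.5 detection threshold are illustrative assumptions.

    def per_concept_precision(predictions, ground_truth, threshold=0.5):
        # predictions: {image_id: {concept: probability}}
        # ground_truth: {image_id: set of concepts truly present}
        precision = {}
        for concept in {c for labels in ground_truth.values() for c in labels}:
            detected = [i for i, probs in predictions.items()
                        if probs.get(concept, 0.0) >= threshold]
            if detected:
                hits = sum(concept in ground_truth.get(i, set()) for i in detected)
                precision[concept] = hits / len(detected)
        return precision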