
    Analyse de trajectoires pour l'indexation sémantique des vidéos à grande échelle

    Automatic semantic indexing of large video collections is a complex problem that cannot be reduced to the analysis of user-supplied keywords. Extracting spatial and temporal descriptors from the content itself has become indispensable for capturing the richness of these collections. However, the computational cost of extracting temporal descriptions is prohibitive, and current systems are often limited to processing spatial information. In this work, we propose a method for analyzing the trajectories of characteristic points. Its computational requirements are low and scale easily to large video collections. Starting from a description of interest-point trajectories based on optical flow analysis, a battery of descriptors based on the Bag of Words model is computed to represent the videos. The approach is evaluated in the difficult context of the TRECVid Semantic Indexing (SIN) challenge.
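
    As an illustration of the kind of low-cost trajectory extraction described above, the sketch below tracks interest points with pyramidal Lucas-Kanade optical flow in OpenCV; function names, parameter values and the track-length cutoff are illustrative assumptions, not the authors' implementation.

        # Track sparse interest points across frames and keep their paths.
        # Illustrative sketch only; parameters are not the paper's settings.
        import cv2
        import numpy as np

        def extract_trajectories(video_path, track_len=15, max_corners=500):
            cap = cv2.VideoCapture(video_path)
            ok, prev = cap.read()
            if not ok:
                return []
            prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
            pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                          qualityLevel=0.01, minDistance=5)
            tracks = [[tuple(p.ravel())] for p in pts] if pts is not None else []
            finished = []
            while True:
                ok, frame = cap.read()
                if not ok or pts is None or len(pts) == 0:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                # Pyramidal Lucas-Kanade optical flow between consecutive frames.
                nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
                new_tracks, new_pts = [], []
                for tr, p, st in zip(tracks, nxt, status.ravel()):
                    if st and len(tr) < track_len:
                        tr.append(tuple(p.ravel()))
                        new_tracks.append(tr)
                        new_pts.append(p)
                    else:
                        finished.append(tr)  # trajectory lost or long enough
                tracks, prev_gray = new_tracks, gray
                pts = (np.array(new_pts, dtype=np.float32).reshape(-1, 1, 2)
                       if new_pts else None)
            cap.release()
            return finished + tracks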

    Retina-Enhanced SURF Descriptors for Semantic Concept Detection in Videos

    This paper proposes to investigate the potential benefit of the use of low-level human vision behaviors in the context of high-level semantic concept detection. A large part of the current approaches relies on the Bag-of-Words (BoW) model, which has proven itself to be a good choice, especially for object recognition in images. Its extension from static images to video sequences raises new problems to cope with, mainly how to use the added temporal dimension for detecting the target concepts (swimming, drinking...). In this study, we propose to apply a human retina model to preprocess video sequences before constructing a state-of-the-art BoW analysis. This preprocessing, designed to enhance the appearance of static image elements in particular, increases performance by introducing robustness to traditional image and video problems, such as luminance variation, shadows, compression artifacts and noise. These approaches are evaluated on the TRECVid 2010 Semantic Indexing task datasets, containing 130 high-level semantic concepts. We consider the well-known SURF descriptor as the entry point of the BoW system, but this work could be extended to any other local gradient-based descriptor.
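
    A minimal sketch of how a retina preprocessing stage could be wired in front of a SURF-based BoW description, assuming opencv-contrib-python (the bioinspired module for the retina model, the non-free xfeatures2d module for SURF); parameter values and the nearest-word assignment are illustrative, not the paper's exact setup.

        import cv2
        import numpy as np

        def retina_surf_descriptors(frames):
            h, w = frames[0].shape[:2]
            retina = cv2.bioinspired.Retina_create((w, h))            # human retina model
            surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # non-free module
            descriptors = []
            for frame in frames:
                retina.run(frame)
                parvo = retina.getParvo()   # detail channel, enhanced static content
                gray = cv2.cvtColor(parvo, cv2.COLOR_BGR2GRAY)
                _, desc = surf.detectAndCompute(gray, None)
                if desc is not None:
                    descriptors.append(desc)
            return np.vstack(descriptors) if descriptors else np.empty((0, 64))

        def bow_histogram(descriptors, vocabulary):
            # Assign each local descriptor to its nearest visual word and
            # accumulate an L1-normalized Bag-of-Words histogram.
            dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
            words = dists.argmin(axis=1)
            hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
            return hist / max(hist.sum(), 1.0)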

    Hierarchical Late Fusion for Concept Detection in Videos

    Oral session 1: WS21 - Workshop on Information Fusion in Computer Vision for Concept Recognition. We deal with the issue of combining dozens of classifiers into a better one, for concept detection in videos. We compare three fusion approaches that share a common structure: they all start with a classifier clustering stage, continue with an intra-cluster fusion and end with an inter-cluster fusion. The main difference between them comes from the first stage. The first approach relies on a priori knowledge about the internals of each classifier (low-level descriptors and classification algorithm) to group the set of available classifiers by similarity. The second and third approaches obtain classifier similarity measures directly from their output and group them using agglomerative clustering for the second approach and community detection for the third one.
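
    A minimal sketch of the shared three-stage structure, assuming classifier similarity is measured on the correlation of their output scores and using scikit-learn's agglomerative clustering for the grouping stage; plain averaging stands in for the intra- and inter-cluster fusion operators studied in the paper.

        import numpy as np
        from sklearn.cluster import AgglomerativeClustering  # scikit-learn >= 1.2

        def hierarchical_late_fusion(scores, n_clusters=5):
            """scores: array (n_classifiers, n_shots) of per-shot concept scores."""
            # Stage 1: cluster classifiers from their outputs (1 - correlation as distance).
            dist = 1.0 - np.corrcoef(scores)
            labels = AgglomerativeClustering(
                n_clusters=n_clusters, metric="precomputed", linkage="average"
            ).fit_predict(dist)
            # Stage 2: intra-cluster fusion of similar classifiers.
            cluster_scores = np.stack(
                [scores[labels == c].mean(axis=0) for c in np.unique(labels)]
            )
            # Stage 3: inter-cluster fusion of the complementary cluster scores.
            return cluster_scores.mean(axis=0)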

    IRIM at TRECVID 2011: Semantic Indexing and Instance Search

    12 pages - TRECVID workshop notebook papers/slides available at http://www-nlpir.nist.gov/projects/tvpubs/tv.pubs.org.html
    The IRIM group is a consortium of French teams working on Multimedia Indexing and Retrieval. This paper describes its participation in the TRECVID 2011 semantic indexing and instance search tasks. For the semantic indexing task, our approach uses a six-stage processing pipeline for computing scores for the likelihood of a video shot to contain a target concept. These scores are then used for producing a ranked list of images or shots that are the most likely to contain the target concept. The pipeline is composed of the following steps: descriptor extraction, descriptor optimization, classification, fusion of descriptor variants, higher-level fusion, and re-ranking. We evaluated a number of different descriptors and tried different fusion strategies. The best IRIM run has a Mean Inferred Average Precision of 0.1387, which ranked us 5th out of 19 participants. For the instance search task, we used both object-based and frame-based queries. We formulated the query in the standard way, as a comparison of visual signatures, either of the object with parts of the database frames or of the query frame with the database frames. To produce the visual signatures we used two approaches: the first is the baseline Bag-Of-Visual-Words (BOVW) model based on the SURF interest point descriptor; the second is a Bag-Of-Regions (BOR) model that extends the traditional notion of the BOVW vocabulary from keypoint-based descriptors to region-based descriptors.
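
    For the instance search side, the sketch below illustrates the general idea of ranking clips by comparing a query BoVW signature against per-frame signatures; the histogram-intersection similarity and best-frame pooling are assumptions made for illustration, not the exact matching scheme of the IRIM runs.

        import numpy as np

        def rank_clips(query_hist, clip_frame_hists):
            """clip_frame_hists: dict mapping clip id -> array (n_frames, vocab_size)."""
            def histogram_intersection(h1, h2):
                return np.minimum(h1, h2).sum()
            clip_scores = {
                clip: max(histogram_intersection(query_hist, f) for f in frames)
                for clip, frames in clip_frame_hists.items()
            }
            # Clips most likely to contain the queried instance come first.
            return sorted(clip_scores, key=clip_scores.get, reverse=True)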

    IRIM at TRECVID 2013: Semantic Indexing and Instance Search

    The IRIM group is a consortium of French teams working on Multimedia Indexing and Retrieval. This paper describes its participation in the TRECVID 2013 semantic indexing and instance search tasks. For the semantic indexing task, our approach uses a six-stage processing pipeline for computing scores for the likelihood of a video shot to contain a target concept. These scores are then used for producing a ranked list of images or shots that are the most likely to contain the target concept. The pipeline is composed of the following steps: descriptor extraction, descriptor optimization, classification, fusion of descriptor variants, higher-level fusion, and re-ranking. We evaluated a number of different descriptors and tried different fusion strategies. The best IRIM run has a Mean Inferred Average Precision of 0.2796, which ranked us 4th out of 26 participants.
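
    To make the data flow of the six-stage pipeline concrete, here is a deliberately schematic sketch; every stage is a stand-in (simple normalization, pre-trained scikit-learn-style classifiers, plain averaging, and a final sort in place of the re-ranking step), not the IRIM implementation.

        import numpy as np

        def sin_pipeline(shots, descriptor_extractors, classifiers_per_descriptor):
            """Return per-shot scores and a ranked shot order for one target concept."""
            per_descriptor_scores = []
            for extract, classifiers in zip(descriptor_extractors, classifiers_per_descriptor):
                feats = np.array([extract(shot) for shot in shots])        # 1. descriptor extraction
                feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-8)    # 2. descriptor optimization
                variant_scores = [clf.predict_proba(feats)[:, 1]           # 3. classification
                                  for clf in classifiers]
                per_descriptor_scores.append(np.mean(variant_scores, 0))   # 4. fusion of descriptor variants
            fused = np.mean(per_descriptor_scores, axis=0)                 # 5. higher-level fusion
            ranking = np.argsort(-fused)                                   # 6. ranked list (re-ranking simplified)
            return fused, ranking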

    IRIM at TRECVID 2012: Semantic Indexing and Instance Search

    The IRIM group is a consortium of French teams working on Multimedia Indexing and Retrieval. This paper describes its participation in the TRECVID 2012 semantic indexing and instance search tasks. For the semantic indexing task, our approach uses a six-stage processing pipeline for computing scores for the likelihood of a video shot to contain a target concept. These scores are then used for producing a ranked list of images or shots that are the most likely to contain the target concept. The pipeline is composed of the following steps: descriptor extraction, descriptor optimization, classification, fusion of descriptor variants, higher-level fusion, and re-ranking. We evaluated a number of different descriptors and tried different fusion strategies. The best IRIM run has a Mean Inferred Average Precision of 0.2378, which ranked us 4th out of 16 participants. For the instance search task, our approach uses two steps. First, the individual methods of the participants are used to compute the similarity between an example image of the instance and the keyframes of a video clip. Then a two-step fusion method is used to combine these individual results and obtain a score for the likelihood of an instance appearing in a video clip. These scores are used to obtain a ranked list of the clips most likely to contain the queried instance. The best IRIM run has a MAP of 0.1192, which ranked us 29th out of 79 fully automatic runs.

    Analysis and Interpretation of Visual Scenes through Collaborative Approaches

    During the last years, we have witnessed a great increase in the size of digital video collections. Efficient searching and browsing through such collections requires an indexing according to various meaningful terms, bringing us to the focus of this thesis, the automatic semantic indexing of videos. Within this topic, the Bag of Words (BoW) model, often employing SIFT or SURF features, has shown good performance, especially on static images. As our first contribution, we propose to improve the results of SIFT/SURF BoW descriptors on videos by pre-processing the videos with a model of the human retina, thereby making these descriptors more robust to video degradations and sensitive to spatio-temporal information. Our second contribution is a set of BoW descriptors based on trajectories. These give additional motion information, leading to a richer description of the video. Our third contribution, motivated by the availability of complementary descriptors, is a late fusion approach that automatically determines how to combine a large set of descriptors, giving a high increase in the average precision of detected concepts. All the proposed approaches are validated on the TRECVid challenge datasets, which focus on visual concept detection in very large and uncontrolled multimedia content.

    Retina enhanced bag of words descriptors for video classification

    This paper addresses the task of detecting diverse semantic concepts in videos. Within this context, the Bag Of Visual Words (BoW) model, inherited from sampled video keyframe analysis, is among the most popular methods. However, in the case of image sequences, this model faces new difficulties such as the added motion information, the extra computational cost and the increased variability of content and concepts to handle. Considering this spatio-temporal context, we propose to extend the BoW model by introducing video preprocessing strategies based on a retina model, before extracting BoW descriptors. This preprocessing increases the robustness of local features to disturbances such as noise and lighting variations. Additionally, the retina model is used to detect potentially salient areas and to construct spatio-temporal descriptors. We experiment with three state-of-the-art local features, SIFT, SURF and FREAK, and we evaluate our results on the TRECVid 2012 Semantic Indexing (SIN) challenge.
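
    The saliency-related use of the retina mentioned above could look roughly like this sketch, which thresholds the transient (magno) channel into a mask restricting local feature extraction to potentially salient areas; the threshold value and the choice of detector are assumptions, and the retina and detector objects are created as in the opencv-contrib-based sketch earlier in this listing.

        import cv2
        import numpy as np

        def salient_keypoints(frame, retina, detector, magno_thresh=30):
            retina.run(frame)
            magno = retina.getMagno()                 # transient / motion channel
            saliency_mask = (magno > magno_thresh).astype(np.uint8) * 255
            gray = cv2.cvtColor(retina.getParvo(), cv2.COLOR_BGR2GRAY)
            # Detect and describe local features only inside the salient regions.
            return detector.detectAndCompute(gray, saliency_mask)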

    Retina enhanced SIFT descriptors for video indexing

    This paper investigates how the detection of diverse high-level semantic concepts (objects, actions, scene types, persons, etc.) in videos can be improved by applying a model of the human retina. A large part of the current approaches for Content-Based Image/Video Retrieval (CBIR/CBVR) relies on the Bag-of-Words (BoW) model, which has been shown to perform well, especially for object recognition in static images. Nevertheless, the current state-of-the-art framework shows its limits when applied to videos because of the added temporal information. In this paper, we enhance a BoW model based on the classical SIFT local spatial descriptor by preprocessing videos with a model of the human retina. This retinal preprocessing allows the SIFT descriptor to become aware of temporal information. Our proposed descriptors extend the SIFT genericity to spatio-temporal content, making them interesting for generic video indexing. They also benefit from the retinal spatio-temporal "stability" in the face of various disturbances such as noise, compression artifacts, luminance variations or shadows. The proposed approaches are evaluated on the TRECVID 2012 Semantic Indexing task dataset.

    Bags of Trajectory Words for video indexing

    A semantic indexing system capable of detecting both spatial appearance and motion-related semantic concepts requires the use of both spatial and motion descriptors. However, extracting motion descriptors on very large video collections requires great computational resources, which has caused most approaches to limit themselves to a spatial description. This paper explores the use of motion descriptors to complement such spatial descriptions and improve the overall performance of a generic semantic indexing system. We propose a framework for extracting and describing trajectories of tracked points that keeps the computational cost manageable, then we construct Bag of Words representations with these trajectories. After supervised classification, a late fusion step combines information from spatial descriptors with that from our proposed Bag of Trajectory Words descriptors to improve overall results. We evaluate our approach in the very difficult context of the TRECVid Semantic Indexing (SIN) dataset.
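
    A compact sketch of how tracked-point trajectories might be quantized into Bag of Trajectory Words signatures and late-fused with a spatial score, assuming fixed-length tracks, a k-means vocabulary and a simple weighted sum; the displacement-based trajectory descriptor, the vocabulary size and the fusion weight are illustrative choices, not the paper's.

        import numpy as np
        from sklearn.cluster import KMeans

        def trajectory_descriptor(track):
            """track: list of (x, y) points of fixed length; described by its motion shape."""
            pts = np.asarray(track, dtype=float)
            disp = np.diff(pts, axis=0).ravel()       # frame-to-frame displacements
            norm = np.linalg.norm(disp)
            return disp / norm if norm > 0 else disp

        def bags_of_trajectory_words(videos_tracks, vocab_size=256):
            """videos_tracks: one list of fixed-length tracks per video."""
            all_desc = np.vstack([trajectory_descriptor(t)
                                  for tracks in videos_tracks for t in tracks])
            vocab = KMeans(n_clusters=vocab_size, n_init=4).fit(all_desc)
            hists = []
            for tracks in videos_tracks:
                words = vocab.predict(np.vstack([trajectory_descriptor(t) for t in tracks]))
                h = np.bincount(words, minlength=vocab_size).astype(float)
                hists.append(h / max(h.sum(), 1.0))
            return np.array(hists)

        def late_fusion(spatial_score, trajectory_score, w=0.5):
            # Weighted late fusion of the spatial and motion classifier outputs.
            return w * spatial_score + (1.0 - w) * trajectory_score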