5 research outputs found

WHU-NERCMS at TRECVID2021: Instance Search Task

    This paper briefly introduces the experimental methods and results of the WHU-NERCMS team at TRECVID 2021. This year we participate in the automatic and interactive tasks of Instance Search (INS). For the automatic task, the retrieval target is divided into two parts: person retrieval and action retrieval. We adopt a two-stage method comprising face detection and face recognition for person retrieval, and two kinds of action detection methods for action retrieval: three frame-based human-object interaction detection methods and two video-based general action detection methods. The person retrieval and action retrieval results are then fused to initialize the ranked result lists. In addition, we explore complementary methods to further improve search performance. For the interactive task, we test two different interaction strategies on the fusion results. We submit 4 runs each for the automatic and interactive tasks; each run is described in Table 1. The official evaluations show that the proposed strategies rank 1st in both the automatic and interactive tracks.
    Comment: 9 pages, 4 figures
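    As an illustrative aside, the late-fusion step described above (combining person and action ranking scores into a single list) can be sketched as follows. This is a minimal sketch, not the team's actual code: the weighted-sum rule and the alpha parameter are assumptions, since the abstract does not specify how the two score lists are combined.

```python
def fuse_scores(person_scores, action_scores, alpha=0.5):
    """Weighted late fusion of per-shot person and action retrieval scores.

    person_scores, action_scores: dicts mapping shot_id -> score in [0, 1]
    (normalization assumed). alpha is a hypothetical fusion weight; the
    abstract does not specify the combination rule.
    """
    shots = set(person_scores) | set(action_scores)
    fused = {
        s: alpha * person_scores.get(s, 0.0)
           + (1 - alpha) * action_scores.get(s, 0.0)
        for s in shots
    }
    # Rank shots by fused score, best first, to initialize the result list.
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```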

    An overview on the evaluated video retrieval tasks at TRECVID 2022

    The TREC Video Retrieval Evaluation (TRECVID) is a TREC-style video analysis and retrieval evaluation with the goal of promoting progress in research and development of content-based exploitation and retrieval of information from digital video via open, tasks-based evaluation supported by metrology. Over the last twenty-one years this effort has yielded a better understanding of how systems can effectively accomplish such processing and how one can reliably benchmark their performance. TRECVID has been funded by NIST (National Institute of Standards and Technology) and other US government agencies. In addition, many organizations and individuals worldwide contribute significant time and effort. TRECVID 2022 planned for the following six tasks: Ad-hoc video search, Video to text captioning, Disaster scene description and indexing, Activity in extended videos, Deep video understanding, and Movie summarization. In total, 35 teams from various research organizations worldwide signed up to join the evaluation campaign this year. This paper introduces the tasks, datasets used, evaluation frameworks and metrics, as well as a high-level results overview.
    Comment: arXiv admin note: substantial text overlap with arXiv:2104.13473, arXiv:2009.0998
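    As an illustrative aside, TRECVID's search-style tasks are scored over ranked result lists, commonly with (mean) average precision or its inferred variants. Below is a minimal sketch of plain average precision for a single query; the official campaigns use their own evaluation tooling and inferred-AP variants, so this is only a toy illustration of the metric's shape.

```python
def average_precision(ranked_ids, relevant_ids):
    """Average precision for one query: the mean of precision@k over
    every rank k at which a relevant item appears."""
    relevant = set(relevant_ids)
    hits, precision_sum = 0, 0.0
    for k, item in enumerate(ranked_ids, start=1):
        if item in relevant:
            hits += 1
            precision_sum += hits / k
    return precision_sum / len(relevant) if relevant else 0.0
```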

    Computer vision beyond the visible: image understanding through language

    In the past decade, deep neural networks have revolutionized computer vision. High-performing deep neural architectures trained for visual recognition tasks have pushed the field towards methods relying on learned image representations instead of hand-crafted ones, seeking to design end-to-end learning methods for challenging tasks ranging from long-standing ones such as image classification to newly emerging tasks like image captioning. As this thesis is framed in the context of the rapid evolution of computer vision, we present contributions aligned with three major changes in paradigm that the field has recently experienced, namely 1) the power of re-utilizing deep features from pre-trained neural networks for different tasks, 2) the advantage of formulating problems with end-to-end solutions given enough training data, and 3) the growing interest in describing visual data with natural language rather than pre-defined categorical label spaces, which can in turn enable visual understanding beyond scene recognition.

    The first part of the thesis is dedicated to the problem of visual instance search, where we particularly focus on obtaining meaningful and discriminative image representations which allow efficient and effective retrieval of similar images given a visual query. Contributions in this part involve the construction of sparse Bag-of-Words image representations from convolutional features of a pre-trained image classification neural network, and an analysis of the advantages of fine-tuning a pre-trained object detection network using query images as training data.

    The second part of the thesis presents contributions to the problem of image-to-set prediction, understood as the task of predicting a variable-sized collection of unordered elements for an input image. We conduct a thorough analysis of current methods for multi-label image classification, which are able to solve the task in an end-to-end manner by simultaneously estimating both the label distribution and the set cardinality. Further, we extend the analysis of set prediction methods to semantic instance segmentation, and present an end-to-end recurrent model that is able to predict sets of objects (binary masks and categorical labels) in a sequential manner.

    Finally, the third part of the dissertation builds on insights from the previous two parts to present deep learning solutions that connect images with natural language in the context of cooking recipes and food images. First, we propose a retrieval-based solution in which the written recipe and the image are encoded into compact representations that allow the retrieval of one given the other. Second, as an alternative to the retrieval approach, we propose a generative model to predict recipes directly from food images, which first predicts ingredients as sets and subsequently generates the rest of the recipe one word at a time by conditioning on both the image and the predicted ingredients.
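    As an illustrative aside, the sparse Bag-of-Words construction from convolutional features mentioned in the first part can be sketched as follows. This is a minimal sketch under assumed shapes, not the thesis's actual pipeline: the feature source, the vocabulary size, and the k-means vocabulary are assumptions.

```python
import numpy as np

def bow_from_conv_features(feature_map, centroids):
    """Build an L2-normalized Bag-of-Words histogram from a conv feature map.

    feature_map: (H, W, D) array of local descriptors taken from one layer
                 of a pre-trained CNN (assumed shape).
    centroids:   (K, D) visual vocabulary, e.g. learned offline with k-means
                 over descriptors from many images (assumption).
    """
    h, w, d = feature_map.shape
    descriptors = feature_map.reshape(-1, d)  # one D-dim descriptor per spatial location
    # Assign each local descriptor to its nearest visual word.
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centroids)).astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

    Because most visual words receive zero counts for any single image, the resulting histograms are sparse and pair naturally with an inverted index for efficient retrieval.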