
    Listen, Look, and Gotcha: Instant Video Search with Mobile Phones by Layered Audio-Video Indexing

    Mobile video is quickly becoming a mass consumer phenomenon. More and more people use their smartphones to search and browse video content while on the move. In this paper, we develop an innovative instant mobile video search system through which users can discover videos by simply pointing their phones at a screen and capturing a few seconds of what they are watching. The system indexes large-scale video data in the cloud using a new layered audio-video indexing approach, and it extracts light-weight joint audio-video signatures in real time to perform progressive search on mobile devices. Unlike most existing mobile video search applications, which simply send the original video query to the cloud, the proposed system is one of the first attempts at instant and progressive video search that leverages the light-weight computing capacity of mobile devices. The system is characterized by four unique properties: 1) a joint audio-video signature to deal with the large aural and visual variances associated with the query video captured by the mobile phone, 2) layered audio-video indexing to holistically exploit the complementary nature of audio and video signals, 3) light-weight fingerprinting to comply with mobile processing capacity, and 4) a progressive query process that significantly reduces computational costs and improves the user experience: the search can stop anytime once a confident result is achieved, so the user does not need to wait for a fixed time lag. We collected 1,400 query videos captured by 25 mobile users against a dataset of 600 hours of video. The experiments show that our system outperforms state-of-the-art methods, achieving 90.79% precision when the query video is less than 10 seconds long and 70.07% even when it is less than 5 seconds long.
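    The progressive query process is the part most easily pictured in code: the captured query grows second by second, and the search halts as soon as one candidate is confident enough. The sketch below is not the authors' implementation; `capture_signature` and `search_index` are hypothetical placeholders standing in for the joint audio-video fingerprinting and the layered cloud index described in the abstract.

```python
def progressive_search(capture_signature, search_index,
                       confidence_threshold=0.9, max_seconds=10):
    """Toy sketch of a progressive query loop (an assumed reading of the paper):
    extend the captured query one second at a time and stop early once a
    candidate's score clears the confidence threshold."""
    best_match, best_score = None, 0.0
    for t in range(1, max_seconds + 1):
        signature = capture_signature(t)        # joint audio-video signature of the first t seconds
        candidates = search_index(signature)    # ranked (video_id, score) pairs from the cloud index
        if candidates:
            top_id, top_score = candidates[0]
            if top_score > best_score:
                best_match, best_score = top_id, top_score
        if best_score >= confidence_threshold:  # confident result: no need to wait for a fixed lag
            break
    return best_match, best_score
```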

    NV-tree: a scalable disk-based high-dimensional index

    This thesis presents the NV-tree (Nearest Vector tree), which addresses the specific problem of efficiently and effectively finding the approximate k-nearest neighbors within large collections of high-dimensional data points. The NV-tree is a very compact index, as only six bytes are kept in the index for each high-dimensional descriptor. It thus scales extremely well when indexing large collections of high-dimensional descriptors. The NV-tree efficiently produces results of good quality, even at such a large scale that the indices can no longer be kept entirely in main memory. We demonstrate this with extensive experiments presenting results from collection sizes ranging from 36 million up to nearly 30 billion SIFT (Scale Invariant Feature Transform) descriptors. We also study the conditions under which a nearest neighbour search provides meaningful results. Following this analysis, we compare the NV-tree to LSH (Locality Sensitive Hashing), the most popular method for ε-distance search, showing that the NV-tree outperforms LSH when it comes to the problem of nearest neighbour retrieval. Beyond this analysis, we also discuss how the NV-tree index can be used in practice in industrial applications and address two frequently overlooked requirements: dynamicity (the ability to cope with on-line insertions of new high-dimensional items into the indexed collection) and durability (the ability to recover from crashes and avoid losing the indexed data if a failure occurs). As far as we know, no other nearest neighbor algorithm published so far is able to cope with all three requirements: scale, dynamicity and durability.
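    The headline number, six bytes of index per descriptor, comes from keeping only a compact identifier per indexed vector inside projection-based partitions. The snippet below is not the NV-tree algorithm; it is a much simpler projection-and-bucket index, included only to illustrate how an approximate-neighbour index can keep its per-descriptor footprint tiny.

```python
import numpy as np

class TinyProjectionIndex:
    """Illustrative stand-in (not the actual NV-tree): descriptors are routed
    to buckets by a few random projections, and each bucket stores only a
    small integer id, so the resident index stays very compact."""

    def __init__(self, dim, n_projections=3, seed=0):
        rng = np.random.default_rng(seed)
        self.lines = rng.standard_normal((n_projections, dim))
        self.buckets = {}                        # bucket key -> list of descriptor ids

    def _key(self, v):
        # Quantise each projection into a coarse bin; the tuple addresses one bucket.
        return tuple(((self.lines @ v) * 4).astype(int))

    def add(self, desc_id, v):
        self.buckets.setdefault(self._key(v), []).append(desc_id)

    def query(self, v):
        # Approximate neighbours: whatever shares the query's bucket.
        return self.buckets.get(self._key(v), [])
```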

    Recent Advances of Local Mechanisms in Computer Vision: A Survey and Outlook of Recent Work

    Inspired by the fact that human brains can emphasize discriminative parts of the input and suppress irrelevant ones, a substantial number of local mechanisms have been designed to advance computer vision. They not only focus on target parts to learn discriminative local representations, but also process information selectively to improve efficiency. Local mechanisms have different characteristics depending on the application scenario and paradigm. In this survey, we provide a systematic review of local mechanisms for various computer vision tasks and approaches, including fine-grained visual recognition, person re-identification, few-/zero-shot learning, multi-modal learning, self-supervised learning, Vision Transformers, and so on. We summarize a categorization of local mechanisms in each field, then analyze the advantages and disadvantages of every category in depth, leaving room for further exploration. Finally, we discuss future research directions for local mechanisms that may benefit subsequent work. To the best of our knowledge, this is the first survey of local mechanisms in computer vision. We hope that it can shed light on future research in the field.
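    Many of the surveyed mechanisms share one computational pattern: score the local parts of an input, normalise the scores, and pool the parts with those weights so that discriminative regions dominate and irrelevant ones are suppressed. The snippet below is only a generic NumPy illustration of that pattern, not a method from the survey; the scoring vector `w` stands in for whatever a real model would learn.

```python
import numpy as np

def spatial_attention_pool(feature_map, w):
    """Generic local-mechanism sketch: weight each spatial location by a
    softmax-normalised relevance score, then pool the weighted features.

    feature_map: (H, W, C) array of local features
    w:           (C,) scoring vector (learned in a real model)
    """
    h, width, c = feature_map.shape
    flat = feature_map.reshape(-1, c)             # (H*W, C) local descriptors
    scores = flat @ w                             # one relevance score per location
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # emphasise some parts, suppress the rest
    pooled = weights @ flat                       # (C,) attention-weighted representation
    return pooled, weights.reshape(h, width)
```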

    PERICLES Deliverable 4.3: Content Semantics and Use Context Analysis Techniques

    The current deliverable summarises the work conducted within task T4.3 of WP4, focusing on the extraction and subsequent analysis of semantic information from digital content, which is imperative for its preservability. More specifically, the deliverable defines content semantic information from a visual and textual perspective, explains how this information can be exploited in long-term digital preservation, and proposes novel approaches for extracting it in a scalable manner. Additionally, the deliverable discusses novel techniques for retrieving and analysing the context of use of digital objects. Although this topic has not been extensively studied in the existing literature, we believe use context is vital for augmenting the semantic information and maintaining the usability and preservability of digital objects, as well as their ability to be accurately interpreted as initially intended.

    An Outlook into the Future of Egocentric Vision

    What will the future be? We wonder! In this survey, we explore the gap between current research in egocentric vision and the ever-anticipated future, where wearable computing, with outward-facing cameras and digital overlays, is expected to be integrated into our everyday lives. To understand this gap, the article starts by envisaging the future through character-based stories, showcasing through examples the limitations of current technology. We then provide a mapping between this future and previously defined research tasks. For each task, we survey its seminal works, current state-of-the-art methodologies and available datasets, then reflect on shortcomings that limit its applicability to future research. Note that this survey focuses on software models for egocentric vision, independent of any specific hardware. The paper concludes with recommendations for areas of immediate exploration to unlock our path to the future of always-on, personalised and life-enhancing egocentric vision. Comment: We invite comments, suggestions and corrections here: https://openreview.net/forum?id=V3974SUk1

    Spatial and temporal representations for multi-modal visual retrieval

    This dissertation studies the problem of finding relevant content within a visual collection according to a specific query by addressing three key modalities: symmetric visual retrieval, asymmetric visual retrieval and cross-modal retrieval, depending on the kind of data to be processed. In symmetric visual retrieval, the query object and the elements in the collection are from the same kind of visual data, i.e. images or videos. Inspired by the human visual perception system, we propose new techniques to estimate visual similarity in image-to-image retrieval datasets based on non-metric functions, improving image retrieval performance on top of state-of-the-art methods. On the other hand, asymmetric visual retrieval is the problem in which queries and elements in the dataset are from different types of visual data. We propose methods to aggregate the temporal information of video segments so that image-to-video comparisons can be computed using similarity functions. When compared in image-to-video retrieval datasets, our algorithms drastically reduce memory storage while maintaining high accuracy rates. Finally, we introduce new solutions for cross-modal retrieval, which is the task in which either the queries or the elements in the collection are non-visual objects. In particular, we study text-image retrieval in the domain of art by introducing new models for semantic art understanding, obtaining results close to human performance. Overall, this thesis advances the state-of-the-art in visual retrieval by presenting novel solutions for some of the key tasks in the field. The contributions derived from this work have potential direct applications in the era of big data, as visual datasets are growing exponentially every day and new techniques for storing, accessing and managing large-scale visual collections are required.
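    For the asymmetric (image-to-video) setting, the simplest way to picture temporal aggregation is to collapse a segment's frame descriptors into one normalised vector and then reuse an ordinary similarity function against image descriptors. The sketch below uses plain mean pooling and cosine similarity; the dissertation's actual aggregation methods are more elaborate, so treat this only as an illustration of the idea.

```python
import numpy as np

def aggregate_video(frame_descriptors):
    """Toy temporal aggregation: (T, D) per-frame descriptors -> one
    L2-normalised (D,) vector representing the whole video segment."""
    v = np.asarray(frame_descriptors, dtype=float).mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-12)

def image_to_video_scores(image_descriptor, video_vectors):
    """Cosine similarity between one image descriptor and pre-aggregated
    video vectors, so image-to-video retrieval stores only (D,) per video."""
    q = np.asarray(image_descriptor, dtype=float)
    q = q / (np.linalg.norm(q) + 1e-12)
    return {vid: float(q @ v) for vid, v in video_vectors.items()}
```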

    Vision-based Person Re-identification in a Queue


    Data-Driven Representation Learning in Multimodal Feature Fusion

    Modern machine learning systems leverage data and features from multiple modalities to gain more predictive power. In most scenarios, the modalities are vastly different and the acquired data are heterogeneous in nature. Consequently, building highly effective fusion algorithms is central to achieving improved model robustness and inferencing performance. This dissertation focuses on representation learning approaches as the fusion strategy. Specifically, the objective is to learn a shared latent representation that jointly exploits the structural information encoded in all modalities, such that a straightforward learning model can be adopted to obtain the prediction. We first consider sensor fusion, a typical multimodal fusion problem critical to building a pervasive computing platform. A systematic fusion technique is described to support both multiple sensors and multiple descriptors for activity recognition. Multiple Kernel Learning (MKL) algorithms, which learn an optimal combination of kernels, have been successfully applied to numerous fusion problems in computer vision and beyond. Utilizing the MKL formulation, we next describe an auto-context algorithm for learning image context via fusion with low-level descriptors. Furthermore, a principled fusion algorithm that uses deep learning to optimize kernel machines is developed. By bridging deep architectures with kernel optimization, this approach leverages the benefits of both paradigms and is applied to a wide variety of fusion problems. In many real-world applications, the modalities exhibit highly specific data structures, such as time sequences and graphs, so a special design of the learning architecture is needed. To improve temporal modeling for multivariate sequences, we developed two architectures centered around attention models. A novel clinical time series analysis model is proposed for several critical problems in healthcare, and another model, coupled with a triplet ranking loss as a metric learning framework, is described to better solve speaker diarization. Compared to state-of-the-art recurrent networks, these attention-based multivariate analysis tools achieve improved performance while having lower computational complexity. Finally, in order to perform community detection on multilayer graphs, a fusion algorithm is described that derives node embeddings from word embedding techniques and also exploits the complementary relational information contained in each layer of the graph.
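    The MKL-flavoured fusion referred to above can be pictured as building one kernel per modality and combining them into a single fused kernel for a standard kernel learner. The sketch below fixes the combination weights by hand and plugs the fused kernel into closed-form kernel ridge regression; real MKL learns the weights jointly with the predictor, so this is only an assumed, simplified illustration.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Per-modality RBF kernel between row-wise feature matrices a (n, d) and b (m, d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fused_kernel(mods_a, mods_b, weights):
    """Convex combination of one kernel per modality (MKL would learn these
    weights; here they are fixed purely for illustration)."""
    return sum(w * rbf_kernel(xa, xb) for w, xa, xb in zip(weights, mods_a, mods_b))

def kernel_ridge_fit_predict(K_train, y, K_test, lam=1e-2):
    """Closed-form kernel ridge regression on the fused training/test kernels."""
    alpha = np.linalg.solve(K_train + lam * np.eye(len(y)), y)
    return K_test @ alpha
```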