4 research outputs found

    Prototyping a Web-Scale Multimedia Retrieval Service Using Spark

    The world has experienced phenomenal growth in data production and storage in recent years, much of which has taken the form of media files. At the same time, computing power has become abundant with multi-core machines, grids, and clouds. Yet it remains a challenge to harness the available power and move toward gracefully searching and retrieving from web-scale media collections. Several researchers have experimented with automatically distributed computing frameworks, notably Hadoop and Spark, for processing multimedia material, but mostly using small collections on small computing clusters. In this article, we describe a prototype of a (near) web-scale, throughput-oriented multimedia retrieval service using the Spark framework running on the AWS cloud service. We present retrieval results using up to 43 billion SIFT feature vectors from the public YFCC 100M collection, making this the largest high-dimensional feature vector collection reported in the literature. We also present a publicly available demonstration retrieval system, running on our own servers, where the implementation of the Spark pipelines can be observed in practice using standard image benchmarks, and downloaded for research purposes. Finally, we describe a method to evaluate the retrieval quality of the ever-growing high-dimensional index of the prototype, without actually indexing a web-scale media collection.
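    As a rough illustration of how such a Spark retrieval pipeline can be organised, the sketch below runs a brute-force top-k scan over a distributed set of SIFT descriptors with PySpark. It is a minimal sketch, not the authors' pipeline; the input path, record layout, and k are assumptions made for illustration.

        # Minimal sketch (assumed, not the authors' pipeline): brute-force
        # top-k retrieval over distributed SIFT descriptors with PySpark.
        import numpy as np
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("sift-knn-sketch").getOrCreate()
        sc = spark.sparkContext

        # Assumed layout: each record is (image_id, 128-d SIFT descriptor).
        descriptors = sc.pickleFile("hdfs:///sift/descriptors")  # hypothetical path

        query = np.random.rand(128).astype(np.float32)  # stand-in query vector
        bq = sc.broadcast(query)  # ship the query to every executor once

        def dist(record):
            image_id, vec = record
            d = np.linalg.norm(np.asarray(vec, dtype=np.float32) - bq.value)
            return (float(d), image_id)

        # Each partition scans its shard; takeOrdered merges the
        # per-partition top-k lists at the driver.
        for d, image_id in descriptors.map(dist).takeOrdered(20):
            print(image_id, d)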

    Data Management for Dynamic Multimedia Analytics and Retrieval

    Multimedia data in its various manifestations poses a unique challenge from a data storage and data management perspective, especially if search, analysis and analytics in large data corpora are considered. The inherently unstructured nature of the data itself and the curse of dimensionality that afflicts the representations we typically work with in its stead are cause for a broad range of issues that require sophisticated solutions at different levels. This has given rise to a huge corpus of research that focuses on techniques for effective and efficient multimedia search and exploration. Many of these contributions have led to an array of purpose-built multimedia search systems. However, recent progress in multimedia analytics and interactive multimedia retrieval has demonstrated that several of the assumptions usually made for such multimedia search workloads do not hold once a session has a human user in the loop. Firstly, many of the required query operations cannot be expressed by mere similarity search, and since the concrete requirement cannot always be anticipated, one needs a flexible and adaptable data management and query framework. Secondly, the widespread assumption that data collections are static does not hold for analytics workloads, whose purpose is to produce and store new insights and information. And finally, it is impossible even for an expert user to specify exactly how a data management system should arrive at the desired outcomes of the potentially many different queries. Guided by these shortcomings, and motivated by the fact that similar questions have once been answered for structured data in classical database research, this thesis presents three contributions that seek to mitigate the aforementioned issues. We present a query model that generalises the notion of proximity-based query operations and formalises the connection between those queries and high-dimensional indexing. We complement this with a cost model that makes the often implicit trade-off between query execution speed and result quality transparent to the system and the user. And we describe a model for the transactional and durable maintenance of high-dimensional index structures. All contributions are implemented in the open-source multimedia database system Cottontail DB, on top of which we present an evaluation that demonstrates the effectiveness of the proposed models. We conclude by discussing avenues for future research in the quest for converging the fields of databases on the one hand and (interactive) multimedia retrieval and analytics on the other.
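    The sketch below illustrates, in plain Python, what such a generalised proximity operator might look like: a Boolean predicate combined with k-NN ranking in a single declarative query object, the kind of hybrid operation the thesis argues plain similarity search cannot express. It is a schematic illustration of the idea only, not Cottontail DB's actual API; all names are made up.

        # Schematic sketch of a generalised proximity operator: a Boolean
        # predicate combined with k-NN ranking in one declarative query.
        # Illustrative only; not Cottontail DB's API.
        from dataclasses import dataclass
        from typing import Callable
        import numpy as np

        @dataclass
        class ProximityQuery:
            column: str                        # vector attribute to rank on
            query: np.ndarray                  # query vector
            k: int                             # number of neighbours to return
            predicate: Callable[[dict], bool]  # Boolean filter on each row

        def execute(rows, q):
            # Filter first, then rank by Euclidean distance: a hybrid plan
            # that pure similarity search cannot express on its own.
            candidates = [r for r in rows if q.predicate(r)]
            candidates.sort(key=lambda r: np.linalg.norm(r[q.column] - q.query))
            return candidates[:q.k]

        rows = [{"id": i, "year": 2000 + i % 20, "feat": np.random.rand(64)}
                for i in range(1000)]
        hits = execute(rows, ProximityQuery(
            column="feat", query=np.random.rand(64), k=10,
            predicate=lambda r: r["year"] >= 2010))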

    NV-tree: a scalable disk-based high-dimensional index

    This thesis presents the NV-tree (Nearest Vector tree), which addresses the specific problem of efficiently and effectively finding the approximate k-nearest neighbors within large collections of high-dimensional data points. The NV-tree is a very compact index, as only six bytes are kept in the index for each high-dimensional descriptor. It thus scales extremely well when indexing large collections of high-dimensional descriptors. The NV-tree efficiently produces results of good quality, even at such a large scale that the indices can no longer be kept entirely in main memory. We demonstrate this with extensive experiments presenting results from various collection sizes, from 36 million up to nearly 30 billion SIFT (Scale Invariant Feature Transform) descriptors. We also study the conditions under which a nearest neighbour search provides meaningful results. Following this analysis we compare the NV-tree to LSH (Locality Sensitive Hashing), the most popular method for ε-distance search, showing that the NV-tree outperforms LSH when it comes to the problem of nearest neighbour retrieval. Beyond this analysis we also discuss how the NV-tree index can be used in practice in industrial applications, and address two frequently overlooked requirements: dynamicity (the ability to cope with on-line insertions of new high-dimensional items into the indexed collection) and durability (the ability to recover from crashes and avoid losing the indexed data if a failure occurs). As far as we know, no other nearest neighbor algorithm published so far is able to cope with all three requirements: scale, dynamicity and durability.
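    The toy sketch below illustrates the general projection-and-partition principle behind such compact indices: descriptors are projected onto random lines and partitioned recursively, so that leaves store only small descriptor identifiers rather than full vectors. It is an illustration of the principle under assumed parameters, not the published NV-tree algorithm (which, among other things, ranks results within leaves and uses overlapping partitions).

        # Toy sketch of the projection-and-partition idea: project points
        # onto random lines, split at quartiles, recurse; leaves keep only
        # descriptor IDs. Not the published NV-tree algorithm.
        import numpy as np

        class Node:
            def __init__(self, points, ids, leaf_size=64, rng=None):
                rng = rng or np.random.default_rng(0)
                if len(ids) <= leaf_size:
                    self.ids, self.children = ids, None  # leaf: IDs only
                    return
                self.line = rng.standard_normal(points.shape[1])
                proj = points @ self.line                # 1-d projections
                self.splits = np.quantile(proj, [0.25, 0.5, 0.75])
                buckets = np.digitize(proj, self.splits)
                self.children = [Node(points[buckets == b], ids[buckets == b],
                                      leaf_size, rng) for b in range(4)]
                self.ids = None

            def search(self, q):
                # Descend to the single leaf the query projects into and
                # return its IDs as the candidate set.
                if self.children is None:
                    return self.ids
                b = int(np.digitize(q @ self.line, self.splits))
                return self.children[b].search(q)

        pts = np.random.rand(10_000, 128).astype(np.float32)
        tree = Node(pts, np.arange(10_000))
        candidates = tree.search(np.random.rand(128).astype(np.float32))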

    A Database Perspective on Large Scale High-Dimensional Indexing

    This manuscript gives an overview of several years of research aiming at inventing and evaluating systems and techniques for building efficient content-based retrieval systems managing extremely large databases of multimedia material. It has focused on the management of still images. Most of the work described in this manuscript has been achieved while keeping in mind a database perspective. The most important notion that sheds a particular light on the high-dimensional indexing techniques we have been developing is the mandatory need for secondary storage. In essence, secondary storage is needed to cope with failures, which is a requirement for systems running in the real world. Secondary storage is also needed to handle truly large scale datasets.