898 research outputs found

    Aggregating Local Features into Bundles for High-Precision Object Retrieval

    Due to the omnipresence of digital cameras and mobile phones, the number of images stored in image databases has grown tremendously in recent years. It has become apparent that new data management and retrieval techniques are needed to deal with increasingly large image databases. This thesis presents new techniques for content-based image retrieval, where the image content itself is used to retrieve visually similar images from databases. We focus on the query-by-example scenario, assuming the image itself is provided as the query to the retrieval engine. In many image databases, images are associated with metadata, which may be exploited to improve retrieval performance. In this work, we present a technique that fuses cues from the visual domain and textual annotations into a single compact representation. This combined multimodal representation performs significantly better than the underlying unimodal representations, which we demonstrate on two large-scale image databases consisting of up to 10 million images. The main focus of this work is on feature bundling for object retrieval and logo recognition. We present two novel feature bundling techniques that aggregate multiple local features into a single visual description. In contrast to many other works, both approaches encode geometric information about the spatial layout of local features into the visual description itself. These descriptions are therefore highly distinctive and suitable for high-precision object retrieval. We demonstrate the use of both bundling techniques for logo recognition. Here, recognition is performed by retrieving visually similar images from a database of reference images, making the recognition systems easily scalable to a large number of classes. The results show that our retrieval-based methods can successfully identify small objects such as logos with an extremely low false positive rate. In particular, our feature bundling techniques are beneficial because false positives are effectively avoided upfront thanks to the highly distinctive descriptions. We further demonstrate and thoroughly evaluate the use of our bundling technique based on min-Hashing for image and object retrieval. Compared to approaches based on conventional bag-of-words retrieval, it is much more efficient: the retrieved result lists are shorter and cleaner while recall remains at the same level. The results suggest that this bundling scheme may act as a pre-filtering step in a wide range of scenarios and underline the high effectiveness of this approach. Finally, we present a new variant for extremely fast re-ranking of retrieval results, which ranks the retrieved images according to the spatial consistency of their local features with those of the query image. The method is robust to outliers, performs better than existing methods, and can process several hundred to several thousand images per second on a single thread.
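    As an illustration of the min-Hashing idea underlying such bundling (a minimal sketch, not the thesis' exact algorithm), each bundle can be represented as a set of visual word IDs and mapped to a short sketch by several independent min-hash functions; two bundles produce the same sketch value only if they share visual words, which is what keeps the descriptions distinctive. All names and parameters below are illustrative.

        import random

        PRIME = 2**61 - 1

        def make_minhash_funcs(num_hashes, seed=42):
            # Each hash is a random affine map h(x) = (a*x + b) mod PRIME.
            rng = random.Random(seed)
            return [(rng.randrange(1, PRIME), rng.randrange(PRIME)) for _ in range(num_hashes)]

        def minhash_sketch(visual_words, hash_funcs):
            # visual_words: set of integer visual word IDs belonging to one feature bundle.
            return tuple(min((a * w + b) % PRIME for w in visual_words) for a, b in hash_funcs)

        funcs = make_minhash_funcs(num_hashes=4)
        bundle_a = {101, 205, 4711, 9000}   # visual words of one bundle
        bundle_b = {101, 205, 4711, 8123}   # a similar bundle sharing most words
        print(minhash_sketch(bundle_a, funcs))
        print(minhash_sketch(bundle_b, funcs))

    Identical sketch values between a query bundle and a database bundle can then serve as high-precision candidate matches before any re-ranking.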

    NLSC: Unrestricted Natural Language-based Service Composition through Sentence Embeddings

    Current approaches for service composition (assemblies of atomic services) require developers to use (a) domain-specific semantics to formalize services, which restrict the vocabulary for their descriptions, and (b) translation mechanisms for service retrieval that convert unstructured user requests into strongly-typed semantic representations. In our work, we argue that the effort of developing service descriptions, request translations, and matching mechanisms can be reduced by using unrestricted natural language, allowing both (1) end-users to intuitively express their needs in natural language, and (2) service developers to develop services without relying on syntactic/semantic description languages. Although some natural language-based service composition approaches exist, they restrict service retrieval to syntactic/semantic matching. Building on recent developments in machine learning and natural language processing, we motivate the use of sentence embeddings, leveraging richer semantic representations of sentences for service description, matching, and retrieval. Experimental results show that service composition development effort may be reduced by more than 44% while keeping high precision/recall when matching high-level user requests with low-level service method invocations. Comment: This paper will appear at SCC'19 (IEEE International Conference on Services Computing) on July 1.
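    A minimal sketch of the matching step, assuming free-form requests and service method descriptions are compared by cosine similarity of their sentence embeddings; the embed function below is a stand-in for any sentence-embedding model, and all names are illustrative rather than the paper's API.

        import numpy as np

        def embed(sentence):
            # Placeholder for a real sentence-embedding model; a hashed bag-of-words
            # vector keeps the sketch self-contained and runnable.
            vec = np.zeros(256)
            for token in sentence.lower().split():
                vec[hash(token) % 256] += 1.0
            return vec

        def cosine(a, b):
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            return float(a @ b / denom) if denom else 0.0

        def best_service(request, services):
            # services: mapping from a method signature to its natural-language description.
            q = embed(request)
            return max(services, key=lambda name: cosine(q, embed(services[name])))

        services = {
            "set_thermostat(temperature)": "change the room temperature to a given value",
            "play_music(genre)": "start playing music of a chosen genre",
        }
        print(best_service("please change the temperature of the room", services))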

    Convolutional Patch Representations for Image Retrieval: an Unsupervised Approach

    Convolutional neural networks (CNNs) have recently received a lot of attention due to their ability to model local stationary structures in natural images in a multi-scale fashion when all model parameters are learned with supervision. While excellent performance has been achieved for image classification when large amounts of labeled visual data are available, their success on unsupervised tasks such as image retrieval has been moderate so far. Our paper focuses on this latter setting and explores several methods for learning patch descriptors without supervision, with application to matching and instance-level retrieval. To that effect, we propose a new family of convolutional descriptors for patch representation, based on the recently introduced convolutional kernel networks. We show that our descriptor, named Patch-CKN, performs better than SIFT as well as other convolutional networks learned by artificially introducing supervision, and is significantly faster to train. To demonstrate its effectiveness, we perform an extensive evaluation on standard benchmarks for patch and image retrieval, where we obtain state-of-the-art results. We also introduce a new dataset called RomePatches, which allows descriptor performance to be studied simultaneously for patch and image retrieval.
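    The following is a generic sketch of a convolutional patch descriptor (random weights, not the authors' Patch-CKN), only to illustrate how a fixed-size patch is mapped to an L2-normalized vector that can then be matched by nearest neighbour; all layer sizes are assumptions.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class PatchDescriptor(nn.Module):
            # Small convolutional stack mapping a 32x32 grayscale patch to a descriptor.
            def __init__(self, dim=128):
                super().__init__()
                self.conv1 = nn.Conv2d(1, 32, kernel_size=5, padding=2)
                self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
                self.proj = nn.Linear(64, dim)

            def forward(self, patches):                      # patches: (N, 1, 32, 32)
                x = F.avg_pool2d(F.relu(self.conv1(patches)), 2)
                x = F.avg_pool2d(F.relu(self.conv2(x)), 2)
                x = x.mean(dim=(2, 3))                       # global average pooling -> (N, 64)
                return F.normalize(self.proj(x), dim=1)      # L2-normalized descriptors

        net = PatchDescriptor()
        desc = net(torch.randn(4, 1, 32, 32))
        # Patches are then matched by (approximate) nearest neighbour in descriptor space.
        print(desc.shape, desc.norm(dim=1))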

    Complex query learning in semantic video search

    Ph.D. (Doctor of Philosophy)

    MementoMap: A Web Archive Profiling Framework for Efficient Memento Routing

    With the proliferation of public web archives, it is becoming more important to better profile their contents, both to understand their immense holdings and to support routing of requests in Memento aggregators. A memento is a past version of a web page, and a Memento aggregator is a tool or service that aggregates mementos from many different web archives. To save resources, the Memento aggregator should only poll the archives that are likely to have a copy of the requested Uniform Resource Identifier (URI). Using the Crawler Index (CDX), we generate profiles of the archives that summarize their holdings and use them to inform routing of the Memento aggregator's URI requests. Additionally, we use full-text search (when available) or sample URI lookups to build an understanding of an archive's holdings. Previous work in profiling ranged from using full URIs (no false positives, but with large profiles) to using only top-level domains (TLDs) (smaller profiles, but with many false positives). This work explores strategies between these two extremes. For evaluation we used CDX files from Archive-It, UK Web Archive, Stanford Web Archive Portal, and Arquivo.pt. Moreover, we used web server access log files from the Internet Archive's Wayback Machine, UK Web Archive, Arquivo.pt, LANL's Memento Proxy, and ODU's MemGator Server. In addition, we utilized a historical dataset of URIs from DMOZ. In early experiments with various URI-based static profiling policies we successfully identified about 78% of the URIs that were not present in the archive with less than 1% relative cost as compared to the complete knowledge profile, and 94% of the URIs with less than 10% relative cost, without any false negatives. In another experiment we found that we can correctly route 80% of the requests while maintaining about 0.9 recall by discovering only 10% of the archive holdings and generating a profile that costs less than 1% of the complete knowledge profile. We created MementoMap, a framework that allows web archives and third parties to express holdings and/or voids of an archive of any size with varying levels of detail to fulfil various application needs. Our archive profiling framework enables tools and services to predict and rank archives where mementos of a requested URI are likely to be present. In static profiling policies we predefined, for each policy, the maximum depth of host and path segments of URIs used as URI keys. This gave us a good baseline for evaluation, but was not suitable for merging profiles with different policies. Later, we introduced a more flexible means to represent URI keys that uses wildcard characters to indicate whether a URI key was truncated. Moreover, we developed an algorithm to roll up URI keys dynamically at arbitrary depths when sufficient archiving activity is detected under certain URI prefixes. In an experiment with dynamic profiling of archival holdings we found that a MementoMap of less than 1.5% relative cost can correctly identify the presence or absence of 60% of the lookup URIs in the corresponding archive without any false negatives (i.e., 100% recall). In addition, we separately evaluated archival voids based on the most frequently accessed resources in the access log and found that we could have avoided more than 8% of the false positives without introducing any false negatives. We defined a routing score that can be used for Memento routing. Using a cut-off threshold on our routing score we achieved over 96% accuracy if we accept about 89% recall, and for a recall of 99% we achieved about 68% accuracy, which translates to about a 72% reduction in wasted lookup requests in our Memento aggregator. Moreover, when routing to the top-k archives based on our routing score, choosing only the topmost archive missed only about 8% of the sample URIs that are present in at least one archive, while selecting the top-2 archives missed less than 2% of these URIs. We also evaluated a machine learning-based routing approach, which resulted in an overall better accuracy, but poorer recall due to the low prevalence of the sample lookup URI dataset in different web archives. We contributed various algorithms, such as a space- and time-efficient approach to ingest large lists of URIs to generate MementoMaps and a Random Searcher Model to discover samples of holdings of web archives. We contributed numerous tools to support various aspects of web archiving and replay, such as MemGator (a Memento aggregator), InterPlanetary Wayback (a novel archival replay system), Reconstructive (a client-side request rerouting ServiceWorker), and AccessLog Parser. Moreover, this work yielded a file format specification draft called Unified Key Value Store (UKVS) that we use for serialization and dissemination of MementoMaps. It is a flexible and extensible file format that allows easy interactions with Unix text processing tools. UKVS can be used in many applications beyond MementoMaps.
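    A minimal sketch of the kind of URI key such profiles are built from, assuming SURT-style keys (reversed host) that may be truncated at a configured host and path depth, with a trailing wildcard marking truncation; the exact key syntax and policy names are illustrative.

        from urllib.parse import urlsplit

        def uri_key(uri, host_depth=None, path_depth=0):
            # Reverse the host (SURT-style) and optionally truncate host/path segments.
            parts = urlsplit(uri)
            host = parts.hostname or ""
            segments = list(reversed(host.split(".")))   # www.example.com -> com,example,www
            truncated = False
            if host_depth is not None and len(segments) > host_depth:
                segments, truncated = segments[:host_depth], True
            key = ",".join(segments) + ")"
            path_segments = [s for s in parts.path.split("/") if s]
            if path_depth and path_segments:
                key += "/" + "/".join(path_segments[:path_depth])
                truncated = truncated or len(path_segments) > path_depth
            else:
                truncated = truncated or bool(path_segments)
            return key + ("/*" if truncated else "")

        # An "H3P1"-style policy: keep up to 3 host segments and 1 path segment.
        print(uri_key("https://www.example.com/blog/2019/post.html", host_depth=3, path_depth=1))
        # -> com,example,www)/blog/*

    An aggregator can then look up an incoming URI against each archive's set of keys and only poll archives whose MementoMap suggests a likely match.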

    Data and knowledge engineering for medical image and sensor data


    Trade-off among timeliness, messages and accuracy for large-scale information management

    The increasing amount of data and the number of nodes in large-scale environments require new techniques for information management. Examples of such environments are the decentralized infrastructures of Computational Grid and Computational Cloud applications. These large-scale applications need different kinds of aggregated information, such as resource monitoring, resource discovery, or economic information. The challenge of providing timely and accurate information in large-scale environments arises from the distribution of the information. Reasons for delays in a distributed information system include long information transmission times due to distribution, churn, and failures. A problem of large applications such as peer-to-peer (P2P) systems is the increasing retrieval time of the information due to the decentralization of the data and the proneness to failures. However, many applications need timely information provision; moreover, some users and applications accept inaccurate results if the information is delivered on time. Another problem is increasing network consumption when the application scales to millions of users and data items. Using approximation techniques allows the retrieval time and the network consumption to be reduced. However, approximation techniques decrease the accuracy of the results. Thus, the remaining problem is to offer a trade-off that resolves the conflicting requirements of fast information retrieval, accurate results, and low messaging cost. Our goal is a self-adaptive decision mechanism that offers a trade-off among the retrieval time, the network consumption, and the accuracy of the result. Self-adaptation enables distributed software to modify its behavior based on changes in the operating environment. In large-scale information systems that use hierarchical data aggregation, we apply self-adaptation to control the approximation used for information retrieval, reducing the network consumption and the retrieval time. The hypothesis of the thesis is that approximation techniques can reduce the retrieval time and the network consumption while guaranteeing the accuracy of the results and respecting user-defined priorities. First, this research addresses the trade-off among timely information retrieval, accurate results, and low messaging cost by proposing a summarization algorithm for resource discovery in P2P content networks. After identifying how summarization can improve the discovery process, we propose an algorithm which uses a precision-recall metric to compare the accuracy and to offer a user-driven trade-off. Second, we propose an algorithm that applies self-adaptive decision making on each node. The decision is whether to prune the query and return the result instead of continuing the query. Pruning reduces the retrieval time and the network consumption at the cost of lower accuracy compared to continuing the query. The algorithm uses an analytic hierarchy process to assess the user's priorities and to propose a trade-off that satisfies the accuracy requirements with a low messaging cost and a short delay. A quantitative analysis evaluates the presented algorithms with a simulator, which is fed with real data of a network topology and the nodes' attributes. Using a simulator instead of a prototype allows evaluation at a large scale of several thousand nodes. The content summarization algorithm is evaluated with half a million resources and with different query types. The self-adaptive algorithm is evaluated with a simulator of several thousand nodes created from real data. A qualitative analysis addresses the integration of the simulator's components into existing market frameworks for Computational Grid and Cloud applications; the implemented simulator functionality (such as the aggregation process and the query language) is thereby verified through prototype integration. The proposed content summarization algorithm reduces the information retrieval time from a logarithmic increase to a constant factor. Furthermore, the message size is reduced significantly by applying the summarization technique. For the user, a precision-recall metric allows defining the relation between the retrieval time and the accuracy. The self-adaptive algorithm reduces the number of messages needed from an exponential increase to a constant factor. At the same time, the retrieval time is reduced to a constant factor under an increasing number of nodes. Finally, the algorithm delivers the data with the required accuracy, adjusting the depth of the query according to the network conditions.
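    As a rough illustration of the analytic hierarchy process step (a sketch under assumed comparison values, not the thesis' exact algorithm), a pairwise comparison matrix over retrieval time, network consumption, and accuracy can be turned into priority weights that each node then uses in its pruning decision.

        import numpy as np

        def ahp_weights(pairwise):
            # Approximate the AHP priority vector by the normalized geometric mean of each row.
            m = np.asarray(pairwise, dtype=float)
            geo = m.prod(axis=1) ** (1.0 / m.shape[0])
            return geo / geo.sum()

        # Pairwise comparisons over (retrieval time, network consumption, accuracy):
        # here the user rates accuracy 3x more important than time and 5x more than messages.
        pairwise = [
            [1,   2,   1/3],
            [1/2, 1,   1/5],
            [3,   5,   1  ],
        ]
        w_time, w_net, w_acc = ahp_weights(pairwise)
        print(w_time, w_net, w_acc)

        # A node could then score "prune now" against "continue the query" with these weights,
        # using normalized estimates of saved time, saved messages, and lost accuracy.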

    Concept discovery and argument bundles in the web of experiences

    Millions of people interact and share interesting information every day on the Social Web. From daily conversations to comments about products on e-commerce sites, the content generated by people on these sites is huge and diverse. Among the wide diversity of user-contributed content on the web, there is a particular kind that has the potential of being put to good use by intelligent systems: human experiences. People very often draw on other people's experiences before making decisions, and when such experiences are expressed and recorded on the web, they can be shared with a large number of people. Nevertheless, this content is not always easily accessible: a person trying to book a hotel may read a few reviews of a few hotels, but cannot possibly read them all. There is a clear need for an in-depth analysis of this kind of information, based on textual expressions of particular human experiences. Our approach, in the framework of the Web of Experiences, aims at acquiring practical knowledge from individual experiences with entities in the real world expressed in textual form. Moreover, this knowledge has to be represented in a way that facilitates the reuse of the experiential knowledge by other individuals with different preferences. Our approach has three stages: First, we extract the most salient set of aspects used by individuals to describe their experiences with the entities in a domain. Second, using the set of extracted aspects, we group them into concepts to create a concept vocabulary that models the set of issues addressed in the reviews. Third, using the vocabulary of concepts, we create a bundle of arguments for each entity. An argument bundle characterizes the pros and cons of an entity, aggregating practical knowledge from judgments written by individuals with different biases and preferences. Moreover, we show how argument bundles allow us to define the notions of a user query and the satisfaction degree of a bundle by a user query, proving that argument bundles are not only capable of representing practical knowledge but are also useful for performing inference given a set of user preferences specified in a query. We evaluate the argument bundles of our approach against the Amazon score ratings and the camera characterizations of Dpreview. We show that the pro and con arguments are very close to those listed in Dpreview. Evaluating entity rankings, we show that Dpreview and our approach give congruent rankings, while Amazon's is congruent with neither Dpreview's nor ours.
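    A minimal sketch of an argument bundle as a data structure and of one possible satisfaction degree for a user query; the aggregation and scoring rules below are illustrative placeholders, not the thesis' exact definitions.

        from dataclasses import dataclass, field

        @dataclass
        class ArgumentBundle:
            entity: str
            pros: dict = field(default_factory=dict)   # concept -> number of supporting judgments
            cons: dict = field(default_factory=dict)   # concept -> number of opposing judgments

            def stance(self, concept):
                # Net support for a concept in [-1, 1]; 0 if the concept was never discussed.
                p, c = self.pros.get(concept, 0), self.cons.get(concept, 0)
                return (p - c) / (p + c) if (p + c) else 0.0

        def satisfaction_degree(bundle, query):
            # query: list of concepts the user cares about; average the bundle's stance over them.
            return sum(bundle.stance(c) for c in query) / len(query)

        camera = ArgumentBundle("camera-x",
                                pros={"image quality": 40, "battery": 10},
                                cons={"battery": 25, "weight": 8})
        print(satisfaction_degree(camera, ["image quality", "battery"]))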
