
    Online List Labeling with Predictions

    A growing line of work shows how learned predictions can be used to break through worst-case barriers and improve the running time of an algorithm. However, incorporating predictions into data structures with strong theoretical guarantees remains underdeveloped. This paper takes a step in this direction by showing that predictions can be leveraged in the fundamental online list labeling problem. In the problem, n items arrive over time and must be stored in sorted order in an array of size Θ(n). The array slot of an element is its label, and the goal is to maintain sorted order while minimizing the total number of elements moved (i.e., relabeled). We design a new list labeling data structure and bound its performance in two models. In the worst-case learning-augmented model, we give guarantees in terms of the error in the predictions. Our data structure provides strong guarantees: it is optimal for any prediction error and guarantees the best-known worst-case bound even when the predictions are entirely erroneous. We also consider a stochastic error model and bound the performance in terms of the expectation and variance of the error. Finally, the theoretical results are demonstrated empirically. In particular, we show that our data structure has strong performance on real temporal data sets where predictions are constructed from elements that arrived in the past, as is typically done in practical use cases.
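    To make the cost model concrete, here is a minimal sketch of a naive labeler, not the paper's learning-augmented data structure: items live in sorted order in an array of Θ(n) slots, an item's label is its slot index, and each insertion is charged one unit per item whose label changes. This baseline simply re-spreads all items evenly after every insertion, so it can relabel essentially every item per insert; the class name and interface are illustrative only, and distinct keys are assumed.

    import bisect

    class NaiveLabeler:
        def __init__(self, capacity):
            self.capacity = capacity   # array size, assumed to be Theta(n)
            self.items = []            # items kept in sorted order (distinct keys)
            self.labels = {}           # item -> current array slot (its label)
            self.total_moves = 0       # total number of relabels so far

        def insert(self, x):
            bisect.insort(self.items, x)
            n = len(self.items)
            assert n <= self.capacity
            # Spread the n items evenly over the capacity slots and count how
            # many items received a new label (i.e., had to be moved).
            new_labels = {item: (i * self.capacity) // n
                          for i, item in enumerate(self.items)}
            self.total_moves += sum(1 for item, lab in new_labels.items()
                                    if self.labels.get(item) != lab)
            self.labels = new_labels

    labeler = NaiveLabeler(capacity=16)
    for key in [5, 1, 9, 3, 7]:
        labeler.insert(key)
    print(labeler.labels, labeler.total_moves)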

    Anti-Persistence on Persistent Storage: History-Independent Sparse Tables and Dictionaries

    We present history-independent alternatives to a B-tree, the primary indexing data structure used in databases. A data structure is history independent (HI) if it is impossible to deduce any information by examining the bit representation of the data structure that is not already available through the API. We show how to build a history-independent cache-oblivious B-tree and a history-independent external-memory skip list. One of the main contributions is a data structure we build along the way: a history-independent packed-memory array (PMA). The PMA supports efficient range queries, one of the most important operations for answering database queries. Our HI PMA matches the asymptotic bounds of prior non-HI packed-memory arrays and sparse tables. Specifically, a PMA maintains a dynamic set of elements in sorted order in a linear-sized array. Inserts and deletes take amortized O(log^2 N) element moves with high probability. Simple experiments with our implementation of HI PMAs corroborate our theoretical analysis. Comparisons to regular PMAs give preliminary indications that the practical cost of adding history independence is not too large. Our HI cache-oblivious B-tree bounds match those of prior non-HI cache-oblivious B-trees. Searches take O(log_B N) I/Os; inserts and deletes take O((log^2 N)/B + log_B N) amortized I/Os with high probability; and range queries returning k elements take O(log_B N + k/B) I/Os. Our HI external-memory skip list achieves optimal bounds with high probability, analogous to in-memory skip lists: O(log_B N) I/Os for point queries and amortized O(log_B N) I/Os for inserts/deletes. Range queries returning k elements run in O(log_B N + k/B) I/Os. In contrast, the best possible high-probability bound for inserting into the folklore B-skip list, which promotes elements with probability 1/B, is just Θ(log N) I/Os. This is no better than the bound one gets from running an in-memory skip list in external memory.
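    For readers unfamiliar with packed-memory arrays, the following is a deliberately simplified sketch of the mechanism the abstract refers to: keys sit in sorted order in an array with gaps, and an insertion into a crowded region grows a window around the insertion point until its density drops below a threshold, then spreads that window's keys out evenly. This sketch is not history independent, and its density threshold and window-growth rule are arbitrary choices made for brevity; it only illustrates the behavior the paper's randomized HI construction must preserve.

    import bisect

    class SimplePMA:
        def __init__(self, capacity=16, max_density=0.75):
            self.slots = [None] * capacity   # keys in sorted order, with gaps
            self.max_density = max_density

        def _redistribute(self, lo, hi, new_key):
            # Spread the keys of slots[lo:hi], plus new_key, evenly over the window.
            keys = [k for k in self.slots[lo:hi] if k is not None]
            bisect.insort(keys, new_key)
            width = hi - lo
            self.slots[lo:hi] = [None] * width
            for i, k in enumerate(keys):
                self.slots[lo + (i * width) // len(keys)] = k

        def insert(self, key):
            n = len(self.slots)
            filled_total = sum(k is not None for k in self.slots)
            assert filled_total + 1 <= self.max_density * n, "array too full for this demo"
            # Slot holding key's predecessor (largest stored key smaller than key).
            pred = max((i for i, k in enumerate(self.slots)
                        if k is not None and k < key), default=-1)
            lo, hi = max(0, pred), max(0, pred) + 1
            # Grow the window until it can absorb one more key under the density cap.
            while True:
                filled = sum(k is not None for k in self.slots[lo:hi])
                if (filled + 1) / (hi - lo) <= self.max_density:
                    break
                width = hi - lo
                lo, hi = max(0, lo - width), min(n, hi + width)
            self._redistribute(lo, hi, key)

    pma = SimplePMA()
    for k in [8, 3, 5, 13, 1, 21, 2]:
        pma.insert(k)
    print([k for k in pma.slots if k is not None])   # keys come out in sorted order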

    Parallelization of a quantization-based multimedia data indexing algorithm

    Undergraduate thesis, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2019. Similarity search in high-dimensional spaces is a core operation in many multimedia retrieval applications, yet it is typically one of the most computationally expensive. Approximate search methods mitigate this problem by trading off computational cost against search precision. One such method is Product Quantization for Approximate Nearest Neighbor Search (PQANNS), which decomposes the search space into a Cartesian product of low-dimensional subspaces and quantizes each of them separately. An inverted file structure is used to index the data, which allows non-exhaustive searches. The reduced data dimensionality, combined with non-exhaustive search, lets PQANNS answer queries efficiently and with low memory requirements; however, its sequential execution is still limited to databases that fit in the RAM of a single machine. Our goal is to propose a distributed-memory parallelization of PQANNS that can handle large databases. We also propose a parallelization for multicore machines, in order to reduce query response time and use all available processing capacity. Our distributed-memory parallelization was evaluated using 128 nodes/3584 CPU cores, obtaining an efficiency of 0.97, and was able to index and search a database containing 256 billion Scale Invariant Feature Transform (SIFT) vectors. In addition, our multicore parallelization scaled well up to 28 cores, obtaining an average speedup of 26.36x when using all cores.
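    To make the product-quantization idea concrete, the sketch below splits each vector into m subvectors, quantizes every subvector against its own small codebook, and answers queries with per-subspace distance lookup tables (asymmetric distance computation). The codebooks here are random for brevity, whereas real systems such as PQANNS learn them with k-means, and the inverted-file layer that enables non-exhaustive and distributed search is omitted.

    import numpy as np

    rng = np.random.default_rng(0)
    d, m, k = 128, 8, 256          # dimension, subspaces, centroids per subspace
    ds = d // m                    # dimension of each subspace
    codebooks = rng.normal(size=(m, k, ds))   # one codebook per subspace

    def encode(x):
        # Quantize each subvector to the id of its nearest centroid (one byte each).
        subs = x.reshape(m, ds)
        return np.array([np.argmin(np.linalg.norm(codebooks[j] - subs[j], axis=1))
                         for j in range(m)], dtype=np.uint8)

    def adc_distances(query, codes):
        # Precompute, per subspace, the squared distance from the query subvector
        # to every centroid, then sum table lookups for each database code.
        subs = query.reshape(m, ds)
        tables = np.stack([((codebooks[j] - subs[j]) ** 2).sum(axis=1)
                           for j in range(m)])          # shape (m, k)
        return np.array([tables[np.arange(m), c].sum() for c in codes])

    database = rng.normal(size=(1000, d))
    codes = np.array([encode(v) for v in database])     # 8 bytes per stored vector
    query = rng.normal(size=d)
    nearest = np.argsort(adc_distances(query, codes))[:5]
    print(nearest)   # ids of the 5 approximate nearest neighbors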

    Incremental Recomputation of Pig Latin’s Nest and Unnest Operators

    This master's thesis addresses the maintenance of pre-computed structures, which store a frequent or expensive query, for the nested bag data type in the high-level workflow language Pig Latin. The thesis first defines a model suitable for expressing incremental computations over nested bags in Pig Latin. The partitioned normal form for sets is then extended with further restrictions in order to accommodate the nested bag model, to allow the Pig Latin nest and unnest operators to revert each other, and to create a suitable setting for incremental computation. Subsequently, the extended operators, extended union and extended difference, are defined for the nested bag data model under the partitioned normal form for bags (PNF Bag) restriction, and their semantics are given. Finally, incremental data propagation expressions are proposed for the nest and unnest operators on the proposed data model with the PNF Bag restriction, together with a proof of correctness.
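    The nest/unnest pair at the heart of the thesis can be illustrated over plain Python structures. The sketch below is only an informal reading of the operators, mirroring Pig Latin's GROUP and FLATTEN, and of the inversion property that a PNF-style restriction is meant to guarantee; it is not the thesis's formal model or its incremental expressions.

    from collections import defaultdict

    def nest(tuples, key_index=0):
        # Group flat tuples into (key, bag) pairs, like Pig Latin's GROUP.
        groups = defaultdict(list)
        for t in tuples:
            groups[t[key_index]].append(t)
        return [(k, bag) for k, bag in groups.items()]

    def unnest(nested):
        # Flatten every bag back into flat tuples, like Pig Latin's FLATTEN.
        return [t for _, bag in nested for t in bag]

    flat = [("a", 1), ("b", 2), ("a", 3)]
    nested = nest(flat)
    print(nested)                                   # [('a', [('a', 1), ('a', 3)]), ('b', [('b', 2)])]
    assert sorted(unnest(nested)) == sorted(flat)   # unnest reverts nest on this input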