qwLSH: Cache-conscious Indexing for Processing Similarity Search Query Workloads in High-Dimensional Spaces
Similarity search queries in high-dimensional spaces are an important class of
queries in many domains, such as image processing and machine learning. Since
exact similarity search indexing techniques suffer from the well-known curse of
dimensionality in high-dimensional spaces, approximate search techniques are
often used instead. Locality Sensitive Hashing (LSH) has been shown to be an
effective approximate search method for solving similarity search queries in
high-dimensional spaces. Queries in real-world settings often arrive as part
of a query workload. LSH and its variants are designed to solve single queries
effectively, but they suffer from one major drawback when executing query
workloads: their index structures are not designed with the data
characteristics that matter for effective cache utilization. In this paper, we
present qwLSH, an index structure for efficiently processing similarity search
query workloads in high-dimensional spaces. qwLSH intelligently divides a
given cache during the processing of a query workload using novel cost models.
Experimental results show that, given a query workload, qwLSH performs faster
than existing techniques due to its unique cost models and strategies.
Comment: Extended version of the published work
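The LSH machinery this abstract builds on can be sketched in a few lines. The following is a generic p-stable hash family for Euclidean distance, not qwLSH's cache-partitioning index; the class name, bucket width w, and number of hash functions are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of a p-stable LSH family for Euclidean distance:
# h(v) = floor((a . v + b) / w) per hash function; the tuple of all
# hash values keys a bucket. Parameters here are illustrative, not
# qwLSH's actual index configuration.

class EuclideanLSH:
    def __init__(self, dim, num_hashes=8, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.standard_normal((num_hashes, dim))  # random projections
        self.b = rng.uniform(0, w, size=num_hashes)      # random offsets
        self.w = w

    def hash(self, v):
        # Each coordinate is floor((a . v + b) / w); the tuple keys a bucket.
        return tuple(np.floor((self.a @ v + self.b) / self.w).astype(int))

lsh = EuclideanLSH(dim=16)
p = np.ones(16)
q = p + 0.01  # a very close neighbour: likely to land in the same bucket
print(lsh.hash(p))
```

Points close in Euclidean distance collide in a bucket with high probability, so a query only inspects the candidates in its own bucket rather than scanning the whole dataset.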
Efficient Compression Technique for Sparse Sets
Recent technological advancements have led to the generation of huge amounts
of data over the web, such as text, images, audio and video. Most of this data
is high-dimensional and sparse, e.g., the bag-of-words representation used for
text. In many applications, such as clustering, nearest neighbour search,
ranking and indexing, an efficient search for similar data points needs to be
performed. Even though there have been significant increases in computational
power, a simple brute-force similarity search on such datasets is inefficient
and at times impossible. Thus, it is desirable to obtain a compressed
representation which preserves the similarity between data points. In this
work, we consider the data points as sets and use Jaccard similarity as the
similarity measure. Compression techniques are generally evaluated on the
following parameters: 1) the randomness required for compression, 2) the time
required for compression, 3) the dimension of the data after compression, and
4) the space required to store the compressed data. Ideally, the compressed
representation of the data should be such that the similarity between each
pair of data points is preserved, while keeping the time and the randomness
required for compression as low as possible.
We show that the compression technique suggested by Pratap and Kulkarni also
works well for Jaccard similarity. We present a theoretical proof of this
and complement it with rigorous experiments on synthetic as well as
real-world datasets. We also compare our results with the state-of-the-art
"min-wise independent permutation" scheme, and show that our compression
algorithm achieves almost equal accuracy while significantly reducing both
the compression time and the randomness required.
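For context, the min-wise independent permutation baseline mentioned above can be sketched as follows: k random permutations of the universe yield a signature whose per-coordinate collision rate estimates Jaccard similarity. The universe size and number of permutations are illustrative assumptions:

```python
import random

# Sketch of min-wise hashing: for each of k random permutations of the
# universe, record the minimum rank of any element of the set. The
# fraction of matching signature coordinates is an unbiased estimate of
# the Jaccard similarity |A & B| / |A | B|.

def minhash_signature(s, perms):
    return [min(p[x] for x in s) for p in perms]

def estimate_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

universe = list(range(100))
random.seed(0)
perms = []
for _ in range(200):  # 200 permutations -> 200-dimensional signature
    order = universe[:]
    random.shuffle(order)
    perms.append({x: rank for rank, x in enumerate(order)})

A = set(range(0, 50))
B = set(range(25, 75))
true_j = len(A & B) / len(A | B)  # 25/75, about 0.333
est = estimate_jaccard(minhash_signature(A, perms), minhash_signature(B, perms))
print(round(true_j, 3), round(est, 3))
```

The cost that the abstract targets is visible here: each permutation of a universe of size n requires O(n log n) random bits to generate and O(n) space to store, which is what cheaper compression schemes try to avoid.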
High-dimensional indexing methods utilizing clustering and dimensionality reduction
The emergence of novel database applications has resulted in the prevalence of a new paradigm for similarity search. These applications include multimedia databases, medical imaging databases, time series databases, DNA and protein sequence databases, and many others. Features of data objects are extracted and transformed into high-dimensional data points. Searching for objects becomes a search on points in the high-dimensional feature space. The dissimilarity between two objects is determined by the distance between two feature vectors. Similarity search is usually implemented as nearest neighbor search in feature vector spaces. The cost of processing k-nearest neighbor (k-NN) queries via a sequential scan increases as the number of objects and the number of features increase. A variety of multi-dimensional index structures have been proposed to improve the efficiency of k-NN query processing; these work well in low-dimensional space but lose their efficiency in high-dimensional space due to the curse of dimensionality. This inefficiency is dealt with in this study by Clustering and Singular Value Decomposition - CSVD - with indexing, the Persistent Main Memory - PMM - index, and the Stepwise Dimensionality Increasing - SDI-tree - index.
CSVD is an approximate nearest neighbor search method. The performance of CSVD with indexing is studied and the quality of its approximation to the distance in the original space is investigated. For a given Normalized Mean Square Error - NMSE, the higher the degree of clustering, the higher the recall. However, more clusters require more disk page accesses. A suitable number of clusters can therefore be chosen to achieve a higher recall while maintaining a relatively low query processing cost.
The Clustering and Indexing using Persistent Main Memory - CIPMM - framework is motivated by the following considerations: (a) a significant fraction of index pages are accessed randomly, incurring a high positioning time for each access; (b) the disk transfer rate is improving by 40% annually, while the improvement in positioning time is only 8%; (c) query processing incurs less CPU time for main-memory-resident than for disk-resident indices. CIPMM aims at reducing the elapsed time for query processing by utilizing sequential, rather than random, disk accesses. A specific instance of the CIPMM framework, CIPOP - indexing using the Persistent Ordered Partition (OP-tree) - is elaborated and compared with CISR, clustering and indexing using the SR-tree. The results show that CIPOP outperforms CISR, and the higher the dimensionality, the higher the performance gains.
The SDI-tree index is motivated by two observations: fanout decreases as dimensionality increases, and shorter vectors reduce cache misses. The index is built over feature vectors transformed via principal component analysis, resulting in a structure with fewer dimensions at higher levels and an increasing number of dimensions from one level to the next. Dimensions are retained in nonincreasing order of their variance according to a parameter p, which specifies the incremental fraction of variance at each level of the index. Experiments on three datasets have shown that SDI-trees with carefully tuned parameters incur fewer disk accesses than SR-trees and VAMSR-trees and, in addition, less CPU time than VA-Files.
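The variance-ordered layering behind the SDI-tree can be illustrated with a small sketch: project the data with PCA so variance concentrates in the leading dimensions, then assign dimensions to index levels until each level has accumulated a fraction p of the total variance. The level-assignment loop, the parameter values, and the synthetic data are our assumptions, not the index's exact construction:

```python
import numpy as np

# Sketch of PCA-based, variance-ordered dimension layering. Synthetic
# data with deliberately unequal per-dimension scales stands in for
# real feature vectors; p is the incremental variance fraction per level.

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8)) * np.array([8, 4, 2, 1, 1, 1, 1, 1])

# PCA via SVD of the centred data; rows of Vt are the principal axes.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Y = Xc @ Vt.T                 # transformed vectors
var = Y.var(axis=0)           # nonincreasing by construction of the SVD

# Assign dimensions to levels: each level accumulates >= p of total variance.
p = 0.25
levels, current, acc = [], [], 0.0
for d, v in enumerate(var):
    current.append(d)
    acc += v
    if acc >= p * var.sum():
        levels.append(current)
        current, acc = [], 0.0
if current:
    levels.append(current)

print([len(level) for level in levels])
```

Because variance is concentrated up front, the first levels of the structure need only a few dimensions per vector, which is exactly what keeps fanout high and cache misses low near the root.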
Improving k-nn search and subspace clustering based on local intrinsic dimensionality
In several novel applications, such as multimedia and recommender systems, data is often represented as object feature vectors in high-dimensional spaces. High-dimensional data is always a challenge for state-of-the-art algorithms because of the so-called curse of dimensionality. As the dimensionality increases, the discriminative ability of similarity measures diminishes to the point where many data analysis algorithms that depend on them, such as similarity search and clustering, lose their effectiveness. One way to handle this challenge is by selecting the most important features, which is essential for providing compact object representations as well as improving the overall search and clustering performance. Having compact feature vectors can further reduce the storage space and the computational complexity of search and learning tasks.
Support-Weighted Intrinsic Dimensionality (support-weighted ID) is a new promising feature selection criterion that estimates the contribution of each feature to the overall intrinsic dimensionality. Support-weighted ID identifies relevant features locally for each object, and penalizes those features that have locally lower discriminative power as well as higher density. In fact, support-weighted ID measures the ability of each feature to locally discriminate between objects in the dataset.
Based on support-weighted ID, this dissertation introduces three main research contributions. First, it proposes NNWID-Descent, a similarity graph construction method that utilizes the support-weighted ID criterion to identify and retain relevant features locally for each object and thereby enhance the overall graph quality. Second, with the aim of improving the accuracy and performance of cluster analysis, it introduces k-LIDoids, a subspace clustering algorithm that extends the utility of support-weighted ID within a clustering framework in order to gradually select the subset of informative and important features per cluster. k-LIDoids is able to construct clusters while also finding a low-dimensional subspace for each cluster. Finally, using the compact object and cluster representations from NNWID-Descent and k-LIDoids, this dissertation defines LID-Fingerprint, a new binary fingerprinting and multi-level indexing framework for high-dimensional data. LID-Fingerprint can be used for hiding information as a way of thwarting passive adversaries, as well as for providing efficient and secure similarity search and retrieval for data stored on the cloud. When compared to other state-of-the-art algorithms, the good practical performance of the proposed algorithms provides evidence of their effectiveness for data in high-dimensional spaces.
A Learned Index for Exact Similarity Search in Metric Spaces
Indexing is an effective way to support efficient query processing in large
databases. Recently the concept of learned index has been explored actively to
replace or supplement traditional index structures with machine learning models
to reduce storage and search costs. However, accurate and efficient similarity
query processing in high-dimensional metric spaces remains an open
challenge. In this paper, a novel indexing approach called LIMS is proposed,
which uses data clustering and pivot-based data transformation techniques to build
learned indexes for efficient similarity query processing in metric spaces. The
underlying data is partitioned into clusters such that each cluster follows a
relatively uniform data distribution. Data redistribution is achieved by
utilizing a small number of pivots for each cluster. Similar data are mapped
into compact regions and the mapped values are totally ordered. Machine
learning models are developed to approximate the position of each data record
on the disk. Efficient algorithms are designed for processing range queries and
nearest neighbor queries based on LIMS, and for index maintenance with dynamic
updates. Extensive experiments on real-world and synthetic datasets demonstrate
the superiority of LIMS compared with traditional indexes and state-of-the-art
learned indexes.
Comment: 14 pages, 14 figures, submitted to Transactions on Knowledge and Data Engineering
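The learned-index idea that LIMS builds on is easy to sketch in one dimension: fit a model that maps a sorted key to its position in the array, then correct the prediction with a bounded local search. LIMS's pivot-based transformation is what reduces metric-space data to such totally ordered values; the linear model and error bound below are illustrative assumptions:

```python
import numpy as np

# Sketch of a one-dimensional learned index: a linear model predicts the
# position of a key in a sorted array, and a bounded search around the
# prediction finds the exact slot. The error bound max_err is an
# illustrative assumption, not a derived guarantee.

keys = np.sort(np.random.default_rng(0).uniform(0, 1000, 10_000))
positions = np.arange(len(keys))

# Fit position ~ a * key + b by least squares.
a, b = np.polyfit(keys, positions, deg=1)

def lookup(key, max_err=200):
    guess = int(np.clip(round(a * key + b), 0, len(keys) - 1))
    lo = max(0, guess - max_err)
    hi = min(len(keys), guess + max_err + 1)
    # Local search in the window around the model's guess.
    return lo + int(np.searchsorted(keys[lo:hi], key))

target = keys[1234]
print(lookup(target))
```

Instead of traversing a tree of pointer-chasing nodes, a lookup costs one model evaluation plus a small bounded search, which is where the storage and search savings of learned indexes come from.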
Bilinear Random Projections for Locality-Sensitive Binary Codes
Locality-sensitive hashing (LSH) is a popular data-independent indexing
method for approximate similarity search, where random projections followed by
quantization hash the points from the database so as to ensure that the
probability of collision is much higher for objects that are close to each
other than for those that are far apart. Most of high-dimensional visual
descriptors for images exhibit a natural matrix structure. When visual
descriptors are represented by high-dimensional feature vectors and long binary
codes are assigned, a random projection matrix incurs high costs
in both space and time. In this paper, we analyze a bilinear random projection
method where feature matrices are transformed to binary codes by two smaller
random projection matrices. We base our theoretical analysis on extending
Raginsky and Lazebnik's result where random Fourier features are composed with
random binary quantizers to form locality sensitive binary codes. To this end,
we answer the following two questions: (1) whether a bilinear random projection
also yields similarity-preserving binary codes; (2) whether a bilinear random
projection yields performance gain or loss, compared to a large linear
projection. Regarding the first question, we present upper and lower bounds on
the expected Hamming distance between binary codes produced by bilinear random
projections. In regards to the second question, we analyze the upper and lower
bounds on covariance between two bits of binary codes, showing that the
correlation between two bits is small. Numerical experiments on MNIST and
Flickr45K datasets confirm the validity of our method.
Comment: 11 pages, 23 figures, CVPR-201
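The space saving the abstract argues for can be sketched directly: code a d1 x d2 feature matrix X as sign(W1^T X W2) using two small projection matrices, instead of one large projection of the flattened vector. The shapes below are illustrative assumptions:

```python
import numpy as np

# Sketch of a bilinear random projection for binary codes: two small
# Gaussian matrices W1 (d1 x k1) and W2 (d2 x k2) replace one large
# (d1*d2) x (k1*k2) projection of the flattened feature matrix.

rng = np.random.default_rng(0)
d1, d2, k1, k2 = 32, 32, 8, 8
W1 = rng.standard_normal((d1, k1))
W2 = rng.standard_normal((d2, k2))

def bilinear_code(X):
    # sign(W1^T X W2), flattened into a k1*k2-bit binary code.
    return (W1.T @ X @ W2 > 0).ravel().astype(np.uint8)

X = rng.standard_normal((d1, d2))
code = bilinear_code(X)

# Parameter counts: bilinear stores d1*k1 + d2*k2 values; an equivalent
# full linear projection would store (d1*d2) * (k1*k2).
bilinear_params = d1 * k1 + d2 * k2
linear_params = (d1 * d2) * (k1 * k2)
print(len(code), bilinear_params, linear_params)
```

For these shapes the bilinear form stores 512 parameters against 65,536 for the flattened projection, and the gap widens quadratically as descriptors and code lengths grow; the paper's analysis is about what this factorization costs in similarity preservation.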
Adaptive Hash Retrieval with Kernel Based Similarity
Indexing methods have been widely used for fast data retrieval on large-scale datasets. When the data are represented by high-dimensional vectors, hashing is often used as an efficient solution for approximate similarity search. When a retrieval task does not involve supervised training data, most hashing methods aim at preserving data similarity defined by a distance metric on the feature vectors. Hash codes generated by these approaches normally maintain the Hamming distance of the data in accordance with the similarity function, but ignore the local details of the distribution of the data. This objective is not suitable for k-nearest neighbor search, since the similarity to the nearest neighbors can vary significantly for different data samples. In this paper, we present a novel adaptive similarity measure which is consistent with k-nearest neighbor search, and prove that it leads to a valid kernel if the original similarity function is a kernel function. Next, we propose a method which calculates hash codes using the kernel function. With a low-rank approximation, our hashing framework is more effective than existing methods that preserve similarity over an arbitrary kernel. The proposed similarity function, hashing framework, and their combination demonstrate significant improvements when compared with several alternative state-of-the-art methods.
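The fixed-metric baseline that adaptive kernel hashing improves on is random hyperplane hashing, where the Hamming distance between codes tracks the angle between feature vectors. This is a sketch of that baseline, not the paper's kernelized method; dimensions and code length are illustrative:

```python
import numpy as np

# Sketch of random hyperplane hashing (SimHash): each bit is the sign of
# a random projection, so the expected Hamming distance between two codes
# is proportional to the angle between the original vectors.

rng = np.random.default_rng(0)
dim, bits = 64, 256
W = rng.standard_normal((bits, dim))

def hash_code(v):
    return (W @ v > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

u = rng.standard_normal(dim)
near = u + 0.05 * rng.standard_normal(dim)  # small perturbation of u
far = rng.standard_normal(dim)              # unrelated vector

print(hamming(hash_code(u), hash_code(near)),
      hamming(hash_code(u), hash_code(far)))
```

The limitation the abstract points out is visible in this scheme: the code depends only on the global metric, so two queries whose k-nearest neighbours lie at very different distances are still compared on the same fixed Hamming scale.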