2,830 research outputs found
FLASH: Randomized Algorithms Accelerated over CPU-GPU for Ultra-High Dimensional Similarity Search
We present FLASH (\textbf{F}ast \textbf{L}SH \textbf{A}lgorithm for
\textbf{S}imilarity search accelerated with \textbf{H}PC), a similarity search
system for ultra-high dimensional datasets on a single machine that requires
no similarity computations and is tailored for high-performance computing
platforms. By leveraging an LSH-style randomized indexing procedure and
combining it with several principled techniques, such as reservoir sampling,
recent advances in one-pass minwise hashing, and count-based estimation, we
reduce the computational and parallelization costs of similarity search while
retaining sound theoretical guarantees.
We evaluate FLASH on several real, high-dimensional datasets from different
domains, including text, malicious URLs, click-through prediction, and social
networks. Our experiments shed new light on the difficulties associated with
datasets having several million dimensions. Current state-of-the-art
implementations either fail at the presented scale or are orders of magnitude
slower than FLASH. FLASH is capable of computing an approximate k-NN graph,
from scratch, over the full webspam dataset (1.3 billion nonzeros) in less
than 10 seconds. Computing a full k-NN graph on the webspam dataset in less
than 10 seconds by brute force would require at least 20 teraflops. We
provide CPU and GPU implementations of FLASH for replicability of our results.
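To make the indexing recipe concrete, here is a minimal Python sketch of the general idea the abstract describes: an LSH index built from MinHash signatures over sparse binary vectors, with reservoir sampling to cap bucket sizes and collision counting to rank candidates without any similarity computations. This is an illustration under those assumptions, not the authors' implementation (FLASH uses more refined one-pass minwise hashing); all names and parameters are invented for the example.

```python
import random
from collections import defaultdict

# Illustrative MinHash-LSH index with reservoir-sampled buckets.
# Items are sparse binary vectors given as lists of nonzero indices.

class MinHashLSH:
    def __init__(self, num_tables=32, reservoir_size=64, seed=0):
        rng = random.Random(seed)
        self.num_tables = num_tables
        self.reservoir_size = reservoir_size
        # One (a, b) universal-hash parameterization per table.
        self.params = [(rng.randrange(1, 2**31 - 1), rng.randrange(2**31 - 1))
                       for _ in range(num_tables)]
        self.tables = [defaultdict(list) for _ in range(num_tables)]
        self.counts = [defaultdict(int) for _ in range(num_tables)]

    def _minhash(self, nonzeros, a, b):
        prime = 2**31 - 1
        return min((a * x + b) % prime for x in nonzeros)

    def insert(self, item_id, nonzeros):
        for t, (a, b) in enumerate(self.params):
            key = self._minhash(nonzeros, a, b)
            bucket = self.tables[t][key]
            self.counts[t][key] += 1
            n = self.counts[t][key]
            if len(bucket) < self.reservoir_size:
                bucket.append(item_id)
            else:
                # Classic reservoir sampling: keep the n-th arrival
                # with probability reservoir_size / n.
                j = random.randrange(n)
                if j < self.reservoir_size:
                    bucket[j] = item_id

    def query(self, nonzeros, k=10):
        # Count-based estimation: more table collisions imply
        # higher estimated Jaccard similarity.
        votes = defaultdict(int)
        for t, (a, b) in enumerate(self.params):
            key = self._minhash(nonzeros, a, b)
            for item_id in self.tables[t].get(key, []):
                votes[item_id] += 1
        return sorted(votes, key=votes.get, reverse=True)[:k]
```

Capping every bucket with a fixed-size reservoir bounds memory and per-query work regardless of how skewed the buckets are, which is the kind of property that makes this style of search easy to parallelize.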
Digital Image Access & Retrieval
The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.
Online Product Quantization
Approximate nearest neighbor (ANN) search has achieved great success in many
tasks. However, existing popular methods for ANN search, such as hashing and
quantization methods, are designed for static databases only. They handle
poorly databases whose data distribution evolves dynamically, owing to the
high computational cost of retraining the model on the new database. In this
paper, we address the problem by developing an online product quantization
(online PQ) model that incrementally updates the quantization codebook to
accommodate the incoming streaming data. Moreover, to further reduce the
large-scale computation of the online PQ update, we design two budget
constraints under which the model updates only part of the PQ codebook
instead of all of it. We derive a loss bound which guarantees the performance
of our online PQ model. Furthermore, we develop an online PQ model over a
sliding window, with both data insertion and deletion supported, to reflect
the real-time behaviour of the data. The experiments demonstrate that our
online PQ model is both time-efficient and effective for ANN search in
dynamic large-scale databases compared with baseline methods, and that the
idea of partial PQ codebook updates further reduces the update cost.
Comment: To appear in IEEE Transactions on Knowledge and Data Engineering
(DOI: 10.1109/TKDE.2018.2817526)
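As a rough illustration of the incremental update, here is a minimal sketch assuming the usual PQ setup of per-subspace codebooks, with a streaming k-means style step pulling the nearest codeword toward each arriving vector. The budget mechanism shown (updating only the subspaces with the largest quantization error) is a simplified stand-in for the paper's budget constraints, and all names and parameters are illustrative.

```python
import numpy as np

# Illustrative online PQ-style codebook update (not the paper's exact
# algorithm): each subspace keeps its own codebook, and arriving vectors
# nudge their nearest codeword with a decaying step size.

class OnlinePQ:
    def __init__(self, dim, num_subspaces=4, codebook_size=256, seed=0):
        assert dim % num_subspaces == 0
        self.m = num_subspaces
        self.d_sub = dim // num_subspaces
        rng = np.random.default_rng(seed)
        # One codebook per subspace: shape (codebook_size, d_sub).
        self.codebooks = rng.standard_normal((self.m, codebook_size, self.d_sub))
        self.hits = np.ones((self.m, codebook_size))  # assignment counts

    def encode(self, x):
        codes = np.empty(self.m, dtype=np.int64)
        for i in range(self.m):
            sub = x[i * self.d_sub:(i + 1) * self.d_sub]
            dists = np.linalg.norm(self.codebooks[i] - sub, axis=1)
            codes[i] = int(np.argmin(dists))
        return codes

    def update(self, x, budget=None):
        # With a budget, update only the subspaces with the largest
        # quantization error -- a simple stand-in for a budget constraint.
        codes, errors = self.encode(x), []
        for i in range(self.m):
            sub = x[i * self.d_sub:(i + 1) * self.d_sub]
            errors.append(np.linalg.norm(self.codebooks[i, codes[i]] - sub))
        order = np.argsort(errors)[::-1]
        chosen = order[:budget] if budget is not None else order
        for i in chosen:
            sub = x[i * self.d_sub:(i + 1) * self.d_sub]
            c = codes[i]
            self.hits[i, c] += 1
            lr = 1.0 / self.hits[i, c]  # decaying step size
            self.codebooks[i, c] += lr * (sub - self.codebooks[i, c])
        return codes
```

A sliding-window variant would additionally subtract the contribution of expired vectors; the decaying 1/n step is what gives the update its streaming k-means flavor.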
Weightless: Lossy Weight Encoding For Deep Neural Network Compression
The large memory requirements of deep neural networks limit their deployment
and adoption on many devices. Model compression methods effectively reduce the
memory requirements of these models, usually through applying transformations
such as weight pruning or quantization. In this paper, we present a novel
scheme for lossy weight encoding which complements conventional compression
techniques. The encoding is based on the Bloomier filter, a probabilistic data
structure that can save space at the cost of introducing random errors.
By leveraging the ability of neural networks to tolerate these imperfections
and by re-training around the errors, the proposed technique, Weightless, can
compress DNN weights by up to 496x at the same model accuracy. This results
in up to a 1.51x improvement over the state-of-the-art.
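Since the Bloomier filter is the core of the encoding, a simplified sketch may help. The Python below implements a small Bloomier-style XOR table: stored keys (say, indices of retained weights) decode exactly to their quantized values, while unstored keys decode to arbitrary bits, which is the kind of random error the network is re-trained around. This is an illustrative reconstruction under standard Bloomier-filter assumptions, not the exact Weightless encoder; the value width, load factor, and hash scheme are invented for the example.

```python
import random

# Simplified Bloomier-style XOR table. Stored keys decode exactly;
# unstored keys decode to arbitrary `bits`-bit values.

class BloomierTable:
    NUM_HASHES = 3

    def __init__(self, key_values, bits=4, load=1.3, seed=0):
        assert all(0 <= v < (1 << bits) for v in key_values.values())
        self.bits, self.seed = bits, seed
        self.m = max(8, int(load * len(key_values)) + 1)
        self.table = [0] * self.m
        order = self._peel(list(key_values))
        if order is None:
            raise RuntimeError("peeling failed; rebuild with a new seed")
        # Assign slots in reverse peeling order: each key's private slot is
        # untouched by keys assigned before it, keeping their XORs intact.
        for key, slot in reversed(order):
            acc = key_values[key]
            for pos in self._hashes(key):
                if pos != slot:
                    acc ^= self.table[pos]
            self.table[slot] = acc

    def _hashes(self, key):
        # Three distinct pseudo-random slots derived from the key.
        rng = random.Random(key * 0x9E3779B1 + self.seed)
        slots = []
        while len(slots) < self.NUM_HASHES:
            p = rng.randrange(self.m)
            if p not in slots:
                slots.append(p)
        return slots

    def _peel(self, keys):
        slot_keys = [set() for _ in range(self.m)]
        for k in keys:
            for pos in self._hashes(k):
                slot_keys[pos].add(k)
        singles = [i for i, s in enumerate(slot_keys) if len(s) == 1]
        order, peeled = [], set()
        while singles:
            slot = singles.pop()
            if len(slot_keys[slot]) != 1:
                continue  # stale queue entry
            (key,) = slot_keys[slot]
            order.append((key, slot))
            peeled.add(key)
            for pos in self._hashes(key):
                slot_keys[pos].discard(key)
                if len(slot_keys[pos]) == 1:
                    singles.append(pos)
        return order if len(peeled) == len(keys) else None

    def get(self, key):
        v = 0
        for pos in self._hashes(key):
            v ^= self.table[pos]
        return v & ((1 << self.bits) - 1)

# Example: t = BloomierTable({7: 3, 42: 12, 99: 5}, bits=4)
# t.get(42) returns 12 exactly; t.get(13) returns an arbitrary 4-bit value.
```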
StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning
Pre-trained encoders are general-purpose feature extractors that can be used
for many downstream tasks. Recent progress in self-supervised learning can
pre-train highly effective encoders using a large volume of unlabeled data,
leading to the emerging paradigm of encoder as a service (EaaS). A pre-trained
encoder may be deemed confidential because its training requires large amounts
of data and computation resources, and because its public release may
facilitate misuse of AI, e.g., for deepfake generation. In this paper, we
propose the first attack, called StolenEncoder, to steal pre-trained image
encoders. We evaluate
StolenEncoder on multiple target encoders pre-trained by ourselves and three
real-world target encoders including the ImageNet encoder pre-trained by
Google, CLIP encoder pre-trained by OpenAI, and Clarifai's General Embedding
encoder deployed as a paid EaaS. Our results show that our stolen encoders
have functionality similar to that of the target encoders. In particular, the
downstream classifiers built upon a target encoder and a stolen one have
similar accuracy. Moreover, stealing a target encoder using StolenEncoder
requires much less data and computation than pre-training it from scratch. We
also explore three defenses that perturb the feature vectors produced by a
target encoder. Our
results show that these defenses are not sufficient to mitigate StolenEncoder.
Comment: To appear in ACM Conference on Computer and Communications Security
(CCS), 2022
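The core of such an attack can be sketched in a few lines: query the black-box target encoder for the feature vectors of unlabeled images and distill them into a surrogate. The PyTorch sketch below uses a cosine-similarity matching loss as one plausible objective; the function names, loss choice, and hyperparameters are assumptions for illustration, not the paper's exact formulation, and `target_encoder`, `surrogate`, and `surrogate_loader` are assumed to exist.

```python
import torch
import torch.nn.functional as F

# Illustrative encoder-stealing loop: train a surrogate encoder to
# reproduce the target's feature vectors on unlabeled surrogate data.

def steal_encoder(target_encoder, surrogate, surrogate_loader,
                  epochs=10, lr=1e-3, device="cuda"):
    target_encoder.eval().to(device)
    surrogate.train().to(device)
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        for images, _ in surrogate_loader:   # labels unused: unsupervised attack
            images = images.to(device)
            with torch.no_grad():            # victim is a black box: outputs only
                targets = target_encoder(images)
            feats = surrogate(images)
            # Match feature directions: 1 - cosine similarity per sample.
            loss = (1 - F.cosine_similarity(feats, targets, dim=1)).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return surrogate
```

The query budget here is just the size of the surrogate dataset times the number of epochs, which is why such an attack can cost far less than pre-training from scratch.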
A Data Science Course for Undergraduates: Thinking with Data
Data science is an emerging interdisciplinary field that combines elements of
mathematics, statistics, computer science, and knowledge in a particular
application domain for the purpose of extracting meaningful information from
the increasingly sophisticated array of data available in many settings. These
data tend to be non-traditional, in the sense that they are often live, large,
complex, and/or messy. A first course in statistics at the undergraduate level
typically introduces students to a variety of techniques for analyzing small,
neat, and clean data sets. However, whether they pursue more formal training in
statistics or not, many of these students will end up working with data that is
considerably more complex, and will need facility with statistical computing
techniques. More importantly, these students require a framework for thinking
structurally about data. We describe an undergraduate course in a liberal arts
environment that provides students with the tools necessary to apply data
science. The course emphasizes modern, practical, and useful skills that cover
the full data analysis spectrum, from asking an interesting question to
acquiring, managing, manipulating, processing, querying, analyzing, and
visualizing data, as well as communicating findings in written, graphical, and
oral forms.
Comment: 21 pages total, including supplementary material
- …