HardIDX: Practical and Secure Index with SGX
Software-based approaches to search over encrypted data still suffer either
from a lack of proper, low-leakage encryption or from slow performance.
Existing hardware-based approaches do not scale well due to hardware
limitations and software designs that are not specifically tailored to the
hardware architecture, and they are rarely well analyzed for their security
(e.g., the impact of side channels). Additionally, existing hardware-based
solutions often have a large code footprint in the trusted environment,
leaving them susceptible to software compromises. In this paper we present
HardIDX: a hardware-based approach, leveraging Intel's SGX, for search over
encrypted data. It implements only the security-critical core, i.e., the
search functionality, in the trusted environment and resorts to untrusted
software for the remainder. HardIDX is deployable as a highly performant
encrypted database index: search is logarithmic in the size of the index and
completes within a few milliseconds rather than seconds. We formally model
and prove the security of our scheme, showing that its leakage is equivalent
to that of the best known searchable encryption schemes. Our implementation
has a very small code and memory footprint yet scales to virtually unlimited
search index sizes, i.e., size is limited only by the general (non-secure)
hardware resources.
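A minimal sketch of this partitioning, assuming for illustration a B+-tree
index (this is not the authors' code): only the search routine is trusted,
while encrypted nodes live in untrusted storage. SGX enclaves, sealing, and
authenticated encryption are stubbed with a toy XOR cipher.

```python
# Hedged sketch: the "enclave boundary" is just a function here; a real
# deployment would run enclave_search inside SGX with a sealed key and
# AES-GCM instead of the toy cipher below.
import bisect, json

KEY = 0x5A  # stand-in for a key sealed inside the enclave

def toy_encrypt(obj):            # untrusted side stores only ciphertext
    return bytes(b ^ KEY for b in json.dumps(obj).encode())

def toy_decrypt(blob):           # would happen only inside the enclave
    return json.loads(bytes(b ^ KEY for b in blob).decode())

# Untrusted store: encrypted B+-tree nodes addressed by id.
store = {
    0: toy_encrypt({"leaf": False, "keys": [10], "children": [1, 2]}),
    1: toy_encrypt({"leaf": True, "keys": [3, 7], "vals": ["a", "b"]}),
    2: toy_encrypt({"leaf": True, "keys": [10, 42], "vals": ["c", "d"]}),
}

def enclave_search(key, root=0):
    """Trusted core: walks the tree, decrypting one node at a time.
    Only node ids (the access pattern) leak to the untrusted host."""
    node = toy_decrypt(store[root])
    while not node["leaf"]:
        child = node["children"][bisect.bisect_right(node["keys"], key)]
        node = toy_decrypt(store[child])
    i = bisect.bisect_left(node["keys"], key)
    return node["vals"][i] if i < len(node["keys"]) and node["keys"][i] == key else None

print(enclave_search(42))  # -> "d"
```

The point of the split is that the trusted computing base stays tiny:
everything outside enclave_search, including the node store, can be
compromised without exposing plaintext keys or values.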
Evolutionary improvement of programs
Most applications of genetic programming (GP) involve the creation of an entirely new function, program or expression to solve a specific problem. In this paper, we propose a new approach that applies GP to improve existing software by optimizing its non-functional properties such as execution time, memory usage, or power consumption. In general, satisfying non-functional requirements is a difficult task, often achieved in part by optimizing compilers. However, modern compilers are not always able to produce semantically equivalent alternatives that optimize non-functional properties, even if such alternatives are known to exist: this is usually due to the limited local nature of such optimizations. In this paper, we discuss how best to combine and extend the existing evolutionary methods of GP, multiobjective optimization, and coevolution in order to improve existing software. Given as input the implementation of a function, we attempt to evolve a semantically equivalent version optimized to reduce execution time subject to a given probability distribution of inputs. On eight example functions, we demonstrate that our framework is able to produce non-obvious optimizations that compilers are not yet able to generate. We employ a coevolved population of test cases to encourage the preservation of the function's semantics. We exploit the original program both through seeding of the population in order to focus the search, and as an oracle for testing purposes. As well as discussing the issues that arise when attempting to improve software, we employ a rigorous experimental method to provide interesting and practical insights that suggest how to address these issues.
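A toy of the seeding-and-oracle idea described above; the candidate pool and
mutation scheme are my own stand-ins, not the paper's GP operators. The
original function seeds the search and doubles as the correctness oracle,
and fitness ranks semantic agreement before runtime.

```python
# Hedged sketch: "mutation" is reduced to resampling from a fixed pool for
# brevity; a real GP system would recombine and mutate expression trees.
import random, timeit

def original(x):                     # the program we try to improve
    return (x * 2) + (x * 0)

POOL = ["(x * 2) + (x * 0)", "x * 2", "x + x", "x << 1", "x * 3"]

def fitness(expr, tests):
    f = eval(f"lambda x: {expr}")
    errors = sum(f(t) != original(t) for t in tests)      # oracle check
    runtime = timeit.timeit(lambda: [f(t) for t in tests], number=500)
    return (errors, runtime)         # lexicographic: semantics before speed

random.seed(1)
tests = [random.randrange(10**6) for _ in range(64)]      # coevolved in the paper
best = POOL[0]                       # population seeded from the original
for _ in range(40):
    cand = random.choice(POOL)
    if fitness(cand, tests) < fitness(best, tests):
        best = cand
print(best)  # e.g. "x + x" or "x << 1": same semantics, cheaper to run
```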
POPE: Partial Order Preserving Encoding
Recently there has been much interest in performing search queries over
encrypted data to enable functionality while protecting sensitive data. One
particularly efficient mechanism for executing such queries is order-preserving
encryption/encoding (OPE) which results in ciphertexts that preserve the
relative order of the underlying plaintexts thus allowing range and comparison
queries to be performed directly on ciphertexts. In this paper, we propose an
alternative approach to range queries over encrypted data that is optimized to
support insert-heavy workloads as are common in "big data" applications while
still maintaining search functionality and achieving stronger security.
Specifically, we propose a new primitive called partial order preserving
encoding (POPE) that achieves ideal OPE security with frequency hiding and also
leaves a sizable fraction of the data pairwise incomparable. Using only O(1)
persistent and O(n^ε) non-persistent client storage for 0 < ε < 1, our POPE
scheme provides extremely fast batch insertion consisting of a single round,
and efficient search with O(1) amortized cost for up to O(n^(1-ε)) search
queries. This improved security and performance makes our scheme better
suited for today's insert-heavy databases.
Comment: Appears in ACM CCS 2016 Proceedings
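A deliberately flattened toy of the lazy-comparison idea, my own reduction
rather than the paper's buffer-tree construction: inserts are blind
ciphertext appends that need no comparisons, and only a range search forces
client-assisted comparisons, and only against the query endpoints.

```python
# Hedged sketch, not the POPE data structure: inserts are O(1) blind appends;
# a range search makes the (hypothetical) client compare buffered items with
# the two endpoints only, so items never compared stay pairwise incomparable.
class PopeToy:
    def __init__(self, client_compare):
        self.buffer = []                     # unsorted ciphertexts
        self.cmp = client_compare            # interactive round with the client

    def insert(self, ct):
        self.buffer.append(ct)               # single round, leaks no order

    def range_search(self, lo_ct, hi_ct):
        # The real scheme amortizes this work in a buffer tree to reach the
        # O(1) amortized cost quoted above; this toy pays O(n) per search.
        return [ct for ct in self.buffer
                if self.cmp(lo_ct, ct) <= 0 and self.cmp(ct, hi_ct) <= 0]

# Stand-in "encryption": identity, with the client doing plaintext compares.
toy = PopeToy(client_compare=lambda a, b: (a > b) - (a < b))
for v in (17, 3, 99, 42):
    toy.insert(v)
print(toy.range_search(10, 50))              # -> [17, 42]
```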
Prefix Codes for Power Laws with Countable Support
In prefix coding over an infinite alphabet, methods that consider specific
distributions generally consider those that decline more quickly than a power
law (e.g., Golomb coding). Particular power-law distributions, however, model
many random variables encountered in practice. For such random variables,
compression performance is judged via estimates of expected bits per input
symbol. This correspondence introduces a family of prefix codes with an eye
towards near-optimal coding of known distributions. Compression performance is
precisely estimated for well-known probability distributions using these codes
and using previously known prefix codes. One application of these near-optimal
codes is an improved representation of rational numbers.
Comment: 5 pages, 2 tables, submitted to Transactions on Information Theory
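The paper's code family is its own; for context, here is Elias gamma coding,
a classic prefix code over the countably infinite alphabet of positive
integers, whose roughly 2·log2(n) + 1 bit codewords make it a reasonable
baseline for power-law-like distributions.

```python
# Elias gamma: a unary length prefix followed by the binary expansion.
def elias_gamma_encode(n: int) -> str:
    assert n >= 1
    b = bin(n)[2:]                   # binary expansion, MSB first
    return "0" * (len(b) - 1) + b    # len(b)-1 zeros, then the value itself

def elias_gamma_decode(bits: str) -> int:
    zeros = len(bits) - len(bits.lstrip("0"))
    return int(bits[zeros:2 * zeros + 1], 2)

for n in (1, 2, 5, 42):
    cw = elias_gamma_encode(n)
    assert elias_gamma_decode(cw) == n
    print(n, cw)   # e.g. 5 -> "00101"
```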
Postcondition-preserving fusion of postorder tree transformations
Tree transformations are commonly used in applications such as program rewriting in compilers. Using a series of simple transformations to build a more complex system can make the resulting software easier to understand, maintain, and reason about. Fusion strategies for combining such successive tree transformations promote this modularity, whilst mitigating the performance impact from the increased number of tree traversals. However, it is important to ensure that fused transformations still perform their intended tasks. Existing approaches to fusing tree transformations tend to take an informal approach to soundness, or are too restrictive to handle the kinds of transformations needed in a compiler. We use postconditions to define a more useful formal notion of successful fusion, namely postcondition-preserving fusion. We also present criteria that are sufficient to ensure postcondition preservation and facilitate modular reasoning about the success of fusion.
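A small illustration in my own notation, not the paper's formal criteria:
two postorder rewrites, constant folding and x + 0 elimination, fused into a
single traversal, with the second rewrite's postcondition checked to confirm
the fusion preserved it.

```python
# Trees are ints/strings (leaves) or ('+', left, right) tuples.
def fold(node):                      # rewrite 1: constant folding
    if isinstance(node, tuple):
        op, l, r = fold(node[1]), fold(node[2]), None
        op, l, r = node[0], fold(node[1]), fold(node[2])
        if isinstance(l, int) and isinstance(r, int):
            return l + r
        return (op, l, r)
    return node

def drop_add0(node):                 # rewrite 2: eliminate x + 0
    if isinstance(node, tuple):
        op, l, r = node[0], drop_add0(node[1]), drop_add0(node[2])
        return l if r == 0 else (op, l, r)
    return node

def fused(node):                     # one postorder pass applying both
    if isinstance(node, tuple):
        op, l, r = node[0], fused(node[1]), fused(node[2])
        if isinstance(l, int) and isinstance(r, int):
            return l + r             # fold's local rewrite
        if r == 0:
            return l                 # drop_add0's local rewrite
        return (op, l, r)
    return node

def no_add_zero(node):               # postcondition of drop_add0
    if isinstance(node, tuple):
        return node[2] != 0 and no_add_zero(node[1]) and no_add_zero(node[2])
    return True

t = ('+', ('+', 'x', ('+', 1, -1)), 0)
assert fused(t) == drop_add0(fold(t)) == 'x'   # fusion agrees with the pipeline
assert no_add_zero(fused(t))                   # ...and preserves the postcondition
```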
Privacy-Preserving Secret Shared Computations using MapReduce
Data outsourcing allows data owners to keep their data at untrusted clouds
that do not ensure the privacy of data and/or computations. One useful
framework for fault-tolerant data processing in a distributed fashion is
MapReduce, which was developed for trusted private clouds. This paper
presents algorithms for data outsourcing based on Shamir's secret-sharing
scheme and for executing privacy-preserving SQL queries such as count,
selection (including range selection), projection, and join while using
MapReduce as the underlying programming model. Our proposed algorithms
prevent an adversary from learning the database or the query while also
preventing output-size and access-pattern attacks. Interestingly, our
algorithms do not involve the database owner in answering any query; the
owner only creates and distributes secret-shares once, and hence also cannot
learn the query. We evaluate the efficiency of the algorithms analytically
and experimentally on the following parameters: (i) the number of
communication rounds (between a user and a server), (ii) the total amount of
bit flow (between a user and a server), and (iii) the computational load at
the user and the server.
Comment: IEEE Transactions on Dependable and Secure Computing. Accepted 01
Aug. 201
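A sketch of the secret-sharing substrate, my own toy rather than the paper's
protocol: Shamir shares are additively homomorphic, so each server can sum
its shares of per-row match flags locally (a reduce-style step) and the user
reconstructs only the final count.

```python
# Hedged sketch: flags would really be produced by a private match protocol;
# here they are shared directly to show the homomorphic count.
import random

P = 2**31 - 1                        # prime field

def share(secret, k=2, n=3):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(points):             # Lagrange interpolation at x = 0
    total = 0
    for xi, yi in points:
        li = 1
        for xj, _ in points:
            if xj != xi:
                li = li * xj * pow(xj - xi, -1, P) % P
        total = (total + yi * li) % P
    return total

# Rows hold secret-shared 0/1 flags: "does this row match the query?"
flags = [1, 0, 1, 1]
per_server = list(zip(*[share(f) for f in flags]))   # server -> its shares
partial = [(srv[0][0], sum(y for _, y in srv) % P) for srv in per_server]
print(reconstruct(partial[:2]))      # -> 3, with no server seeing any flag
```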
Hashing for Similarity Search: A Survey
Similarity search (nearest neighbor search) is the problem of finding, in a
large database, the data items whose distances to a query item are smallest.
Various methods have been developed to address this problem, and recently
much effort has been devoted to approximate search. In this paper, we present
a survey of one of the main solutions, hashing, which has been widely studied
since the pioneering work on locality-sensitive hashing. We divide the
hashing algorithms into two main categories: locality-sensitive hashing,
which designs hash functions without exploring the data distribution, and
learning to hash, which learns hash functions according to the data
distribution. We review them from various aspects, including hash function
design, distance measure, and search scheme in the hash coding space.
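As a concrete instance of the first, data-independent category, a minimal
random-hyperplane (SimHash) sketch for cosine similarity; dimensions, bit
count, and data are illustrative only.

```python
# Each random hyperplane contributes one bit: which side the vector lies on.
# Vectors at a small angle agree on most bits, so they share hash buckets.
import random

def hyperplanes(dim, bits, seed=0):
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]

def simhash(vec, planes):
    return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                 for plane in planes)

planes = hyperplanes(dim=4, bits=8)
a, b = [1.0, 2.0, 3.0, 4.0], [1.1, 2.0, 2.9, 4.0]   # near-duplicates
c = [-4.0, 3.0, -2.0, 1.0]                           # unrelated direction
print(simhash(a, planes) == simhash(b, planes))      # collides with high probability
print(simhash(a, planes) == simhash(c, planes))      # rarely collides
```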