Zero-Knowledge Accumulators and Set Operations
Accumulators provide a way to succinctly represent a set with elements drawn from a given domain, using an \emph{accumulation value}. Subsequently, short proofs for the set-\emph{membership} (or \emph{non-membership}) of any element from the domain can be constructed and efficiently verified with respect to this accumulation value. Accumulators have been widely studied in the literature, primarily, as an \emph{authentication} primitive:
a malicious prover (e.g., an untrusted server) should not be able to provide convincing proofs of false statements (e.g., successfully prove membership for a value not in the set) to a verifier that issues membership queries (having, of course, no access to the set itself).
In essence, in existing constructions the accumulation value acts as an (honestly generated) ``commitment'' to the set that allows selective ``opening'' as specified by membership queries---but with no ``hiding'' properties.
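The commitment-like behavior described above can be sketched with a toy RSA accumulator. This is a different construction from the paper's bilinear-pairing scheme, and the modulus, base, and elements below are illustrative assumptions only:

```python
# Toy RSA accumulator (illustrative parameters; insecure for real use).
# Real schemes hash arbitrary elements to primes and use an RSA modulus
# whose factorization nobody knows; here elements are small primes.

N = 3233 * 4294967311  # assumed toy composite modulus
g = 65537              # assumed toy base

def accumulate(elements):
    """Accumulation value: g raised to the product of all elements, mod N."""
    acc = g
    for e in elements:
        acc = pow(acc, e, N)
    return acc

def witness(elements, x):
    """Membership witness for x: the accumulation of everything except x."""
    return accumulate([e for e in elements if e != x])

def verify(acc, x, wit):
    """A verifier checks wit^x == acc without seeing the set itself."""
    return pow(wit, x, N) == acc

S = [3, 5, 11]              # the accumulated set (prime-encoded elements)
acc = accumulate(S)
w5 = witness(S, 5)
assert verify(acc, 5, w5)       # valid membership proof accepted
assert not verify(acc, 7, w5)   # forged membership claim rejected
```

Note that this toy scheme, like the classical constructions the paper revisits, has no hiding: anyone who can guess the set can recompute the accumulation value and confirm the guess.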
In this paper we revisit this primitive and propose a privacy-preserving enhancement. We define the notion of a \emph{zero-knowledge accumulator} that provides the following very strong privacy notion: Accumulation values and proofs constructed during the protocol execution leak nothing about the set itself, or any subsequent updates to it (i.e., via element insertions/deletions). We formalize this property by a standard real/ideal execution game. An adversarial party that is allowed to choose the set and is given access to query and update oracles, cannot distinguish whether this interaction takes place with respect to the honestly executed algorithms of the scheme or with a simulator that is not given access to the set itself (and for updates, it does not even learn the type of update that occurred---let alone the inserted/deleted element). We compare our new privacy definition with other recently proposed similar notions showing that it is strictly stronger: We give a concrete example of the update-related information that can be leaked by previous definitions.
We provide a mapping of the relations between zero-knowledge accumulators and primitives that
are either set in the same security model or solve the same problem.
We formally show and discuss a number of implications among primitives, some of which are not immediately evident.
We believe this contribution is interesting on its own, as the area has received considerable attention recently (e.g., with the works of [Naor et al., TCC~2015] and [Derler et al., CT-RSA~2015]).
We then construct the first dynamic universal zero-knowledge accumulator. Our scheme achieves perfect zero-knowledge and is secure under the $q$-Strong Bilinear Diffie-Hellman assumption.
Finally, building on our dynamic universal zero-knowledge accumulator, we define a \emph{zero-knowledge authenticated set collection} to handle more elaborate set operations (beyond set-membership). In particular, this primitive allows one to outsource a collection of sets to an untrusted server that is subsequently responsible for answering union, intersection and set difference queries over these sets issued by multiple clients. Our scheme provides proofs that are succinct and efficiently verifiable and, at the same time, leak nothing beyond the query result. In particular, it offers verification time that is asymptotically optimal (namely, the same as simply reading the answer), and proof construction that is asymptotically as efficient as existing state-of-the-art constructions---which, however, do not offer privacy.
An Alternative Paradigm for Developing and Pricing Storage on Smart Contract Platforms
Smart contract platforms facilitate the development of important and diverse
distributed applications in a simple manner. This simplicity stems from the
inherent utility of employing the state of smart contracts to store, query and
verify the validity of application data. In Ethereum, data storage incurs an
underpriced, non-recurring, predefined fee. Furthermore, as there is no
incentive for freeing or minimizing the state of smart contracts, Ethereum is
faced with a tragedy of the commons problem with regard to its monotonically
increasing state. This issue, if left unchecked, may lead to centralization and
directly impact Ethereum's security and longevity. In this work, we introduce
an alternative paradigm for developing smart contracts in which their state is
of constant size and facilitates the verification of application data that are
stored to and queried from an external, potentially unreliable, storage
network. This approach is relevant for a wide range of applications, such as
any key-value store. We evaluate our approach by adapting the most widely
deployed standard for fungible tokens, i.e., the ERC20 token standard. We show
that Ethereum's current cost model penalizes our approach, even though it
minimizes the overhead to Ethereum's state and aligns well with Ethereum's
future. We address Ethereum's monotonically increasing state in a two-fold
manner. First, we introduce recurring fees that are proportional to the state
of smart contracts and adjustable by the miners that maintain the network.
Second, we propose a scheme where the cost of storage-related operations
reflects the effort that miners have to expend to execute them. Lastly, we show
that under such a pricing scheme that encourages economy in the state consumed
by smart contracts, our ERC20 token adaptation reduces the incurred transaction
fees by up to an order of magnitude.
Comment: 6 pages, 2 figures, DAPPCON 201
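The constant-size-state pattern the abstract describes can be sketched in miniature (hypothetically; this is not the paper's contract code, and all names are ours): the "contract" stores only a Merkle root, the full key-value data lives off-chain, and clients submit a value together with a Merkle proof that the contract verifies against the root:

```python
# Minimal Merkle-tree sketch of constant on-chain state (assumed design,
# not the paper's implementation). The contract keeps one 32-byte root;
# storage and proof construction happen off-chain.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf `index` up to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], index % 2 == 0))  # is sibling on the right?
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    """What the constant-state contract would execute on-chain."""
    node = h(leaf)
    for sibling, sib_is_right in proof:
        node = h(node + sibling) if sib_is_right else h(sibling + node)
    return node == root

entries = [b"alice:100", b"bob:42", b"carol:7"]   # off-chain key-value data
root = merkle_root(entries)                       # the only on-chain state
assert verify(root, b"bob:42", merkle_proof(entries, 1))
assert not verify(root, b"bob:9999", merkle_proof(entries, 1))
```

However the set of entries grows, the on-chain footprint stays one hash, which is the property that makes recurring, state-proportional fees cheap for this style of contract.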
XNOR Neural Engine: a Hardware Accelerator IP for 21.6 fJ/op Binary Neural Network Inference
Binary Neural Networks (BNNs) are promising to deliver accuracy comparable to
conventional deep neural networks at a fraction of the cost in terms of memory
and energy. In this paper, we introduce the XNOR Neural Engine (XNE), a fully
digital configurable hardware accelerator IP for BNNs, integrated within a
microcontroller unit (MCU) equipped with an autonomous I/O subsystem and hybrid
SRAM / standard cell memory. The XNE can compute convolutional and dense
layers fully autonomously, or in cooperation with the MCU core to realize
more complex behaviors. We show post-synthesis results in 65 nm and 22 nm
technology for the XNE IP and post-layout results in 22 nm for the full MCU,
indicating that this system can drop the energy cost per binary operation to
21.6 fJ at 0.4 V while remaining flexible and performant enough to execute
state-of-the-art BNN topologies such as ResNet-34 in less than 2.2 mJ per
frame at 8.9 fps.
Comment: 11 pages, 8 figures, 2 tables, 3 listings. Accepted for presentation
at CODES'18 and for publication in IEEE Transactions on Computer-Aided Design
of Integrated Circuits and Systems (TCAD) as part of the ESWEEK-TCAD special issue
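The core operation such an accelerator implements can be modeled in a few lines (an illustrative software model, not the XNE's RTL): a binarized dot product computed with XNOR and popcount instead of multiply-accumulate.

```python
# Software model of BNN arithmetic: weights and activations live in {-1,+1},
# packed one per bit (bit set <=> +1). XNOR marks positions where the signs
# agree; each agreement contributes +1 and each disagreement -1 to the dot
# product, hence dot = 2*popcount(xnor) - n.

def binarize(x):
    """Pack the signs of a real-valued vector into an integer bitmask."""
    bits = 0
    for i, v in enumerate(x):
        if v >= 0:
            bits |= 1 << i
    return bits

def xnor_popcount_dot(a_bits, w_bits, n):
    """Binary dot product of two n-element {-1,+1} vectors."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)
    return 2 * bin(xnor).count("1") - n

a = [0.5, -1.2, 3.0, -0.1]   # activations, signs: +, -, +, -
w = [1.0, 1.0, -2.0, -0.5]   # weights,     signs: +, +, -, -
# Signs agree at positions 0 and 3 and disagree at 1 and 2, so the dot is 0.
assert xnor_popcount_dot(binarize(a), binarize(w), 4) == 0
# Cross-check against the direct +/-1 arithmetic:
signs = lambda x: [1 if v >= 0 else -1 for v in x]
assert sum(sa * sw for sa, sw in zip(signs(a), signs(w))) == 0
```

Replacing each multiply-accumulate with a 1-bit XNOR plus a shared popcount is what lets hardware like the XNE push the energy per binary operation into the femtojoule range.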
An Overview of Cryptographic Accumulators
This paper is a primer on cryptographic accumulators and how to apply them
practically. A cryptographic accumulator is a space- and time-efficient data
structure used for set-membership tests. Since it is possible to represent any
computational problem where the answer is yes or no as a set-membership
problem, cryptographic accumulators are invaluable data structures in computer
science and engineering. But, to the best of our knowledge, there is neither a
concise survey comparing and contrasting various types of accumulators nor a
guide for how to apply the most appropriate one for a given application.
Therefore, we address that gap by describing cryptographic accumulators while
presenting their fundamental and so-called optional properties. We discuss the
effects of each property on the given accumulator's performance in terms of
space and time complexity, as well as communication overhead.
Comment: Note: This is an extended version of a paper published in Proceedings
of the 7th International Conference on Information Systems Security and
Privacy (ICISSP 2021), pages 661-66
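The survey's observation that any yes/no question can be recast as a set-membership test can be made concrete with a small sketch (primality as the example problem; the set below is exactly what an accumulator would commit to):

```python
# "Is n prime (below some bound)?" rephrased as "is n a member of the set
# of primes below that bound?" -- the query shape an accumulator answers
# with a short proof against its accumulation value.

def primes_below(bound):
    """Sieve of Eratosthenes: the set of primes smaller than `bound`."""
    sieve = [True] * bound
    sieve[0:2] = [False, False]
    for i in range(2, int(bound ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return {i for i, is_p in enumerate(sieve) if is_p}

S = primes_below(100)          # the set an accumulator would commit to
assert (97 in S) is True       # "is 97 prime?" as a membership query
assert (91 in S) is False      # 91 = 7 * 13, so a non-membership answer
```

An accumulator replaces the in-memory set `S` with a short accumulation value plus per-query proofs, which is why the survey calls the primitive broadly applicable.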
ANS hard X-ray experiment development program
The hard X-ray (HXX) experiment is one of three experiments included in the Dutch Astronomical Netherlands Satellite, which was launched into orbit on 30 August 1974. The overall objective of the HXX experiment is the detailed study of the emission from known X-ray sources over the energy range 1.5-30 keV. The instrument is capable of the following measurements: (1) spectral content over the full energy range with an energy resolution of approximately 20% and time resolution down to 4 seconds; (2) source time variability down to 4 milliseconds; (3) silicon emission lines at 1.86 and 2.00 keV; (4) source location to a limit of one arc minute in ecliptic latitude; and (5) spatial structure with angular resolution on the order of arc minutes. Scientific aspects of the experiment, its engineering design and implementation, and the program history are included.
Regular and almost universal hashing: an efficient implementation
Random hashing can provide guarantees regarding the performance of data
structures such as hash tables---even in an adversarial setting. Many existing
families of hash functions are universal: for any two distinct data objects,
the probability that they have the same hash value is low when the hash
function is picked at random. However, universality fails to ensure that all hash
functions are well behaved. We further require regularity: when picking data
objects at random they should have a low probability of having the same hash
value, for any fixed hash function. We present an efficient implementation of
a family of non-cryptographic hash functions (PM+) offering good running times,
good memory usage as well as distinguishing theoretical guarantees: almost
universality and component-wise regularity. On a variety of platforms, our
implementations are comparable to the state of the art in performance. On
recent Intel processors, PM+ achieves a speed of 4.7 bytes per cycle for 32-bit
outputs and 3.3 bytes per cycle for 64-bit outputs. We review vectorization
through SIMD instructions (e.g., AVX2) and optimizations for superscalar
execution.
Comment: Accepted for publication in Software: Practice and Experience in
September 201
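PM+ itself is not reproduced here, but the universality property the abstract refers to can be illustrated with a textbook Carter-Wegman family (the modulus and parameters below are our assumptions, not PM+'s):

```python
# Carter-Wegman style family h(x) = ((a*x + b) mod P) mod m. Picking (a, b)
# at random makes any fixed pair x != y collide with probability ~1/m --
# the universality guarantee discussed above.
import random

P = (1 << 61) - 1   # a Mersenne prime, a common modulus choice

def make_hash(m, rng):
    """Draw one hash function uniformly from the family."""
    a = rng.randrange(1, P)
    b = rng.randrange(P)
    return lambda x: ((a * x + b) % P) % m

rng = random.Random(42)
x, y = 12345, 67890
collisions = 0
for _ in range(10_000):
    h = make_hash(1024, rng)     # fresh random function each trial
    if h(x) == h(y):
        collisions += 1
# Universality bounds Pr[h(x) == h(y)] by roughly 1/1024, so only on the
# order of ten of the 10,000 trials should collide.
assert collisions < 50
```

Regularity is the complementary guarantee: for one fixed function from the family, randomly chosen inputs should rarely collide. PM+ is notable precisely because it offers both properties with competitive speed.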