Minimal Complete Primitives for Secure Multi-Party Computation
The study of minimal cryptographic primitives needed to implement secure computation among two or more players is a fundamental question in cryptography. The issue of complete primitives for the case of two players has been thoroughly studied. However, in the multi-party setting, when there are n > 2 players and t of them are corrupted, the question of which complete primitives are the simplest remained open for t ≥ n/3. (A primitive is called complete if any computation can be carried out by the players having access only to the primitive and local computation.) In this paper we consider this question and introduce complete primitives of minimal cardinality for secure multi-party computation. The cardinality issue (the number of players accessing the primitive) is essential in settings where primitives are implemented by some other means, since the simpler the primitive the easier it is to realize. We show that our primitives are complete and of the minimal cardinality possible in most cases.
Bayesian additive regression trees for probabilistic programming
Bayesian additive regression trees (BART) is a non-parametric method to
approximate functions. It is a black-box method based on the sum of many trees
where priors are used to regularize inference, mainly by restricting each tree's
learning capacity so that no individual tree is able to explain the data on its
own; rather, the data is explained by the sum of the trees. We discuss BART in the context of probabilistic
programming languages (PPL), i.e., we present BART as a primitive that can be
used as a component of a probabilistic model rather than as a standalone model.
Specifically, we introduce the Python library PyMC-BART, which works by
extending PyMC, a library for probabilistic programming. We showcase a few
examples of models that can be built using PyMC-BART, discuss recommendations
for the selection of hyperparameters, and finally, we close with limitations of
our implementation and future directions for improvement.
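To make the PPL-primitive framing concrete, here is a minimal sketch of a BART regression in PyMC following the pattern documented for PyMC-BART; the synthetic data, variable names, and hyperparameter choices (e.g., m=50 trees) are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import pymc as pm
import pymc_bart as pmb

# Synthetic 1-D regression data (hypothetical example).
rng = np.random.default_rng(0)
X = np.linspace(0.0, 10.0, 200)[:, None]
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.2, size=200)

with pm.Model() as model:
    # BART prior over the unknown mean function: a sum of m = 50 weak trees.
    mu = pmb.BART("mu", X, y, m=50)
    # BART is just one component; the noise model is an ordinary PyMC prior.
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)
    # PyMC assigns PyMC-BART's specialized tree sampler to `mu` automatically.
    idata = pm.sample()
```

The point of the sketch is the composition: the BART term is treated like any other random variable, so it can be combined with arbitrary likelihoods and priors rather than used only as a standalone model.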
How Does Nakamoto Set His Clock? Full Analysis of Nakamoto Consensus in Bounded-Delay Networks
Nakamoto consensus, arguably the most exciting development in distributed computing in the last few years, is in a sense a recasting of the traditional state-machine-replication problem in an unauthenticated setting, where furthermore parties come and go without warning. The protocol relies on a cryptographic primitive known as proof of work (PoW) which is used to throttle message passing. Importantly, the PoW difficulty level is appropriately adjusted throughout the course of the protocol execution relying on the blockchain’s timekeeping ability.
While the original formulation was accompanied by only rudimentary analysis, significant and steady progress has been made in abstracting the protocol’s properties and providing a formal analysis under various restrictions and protocol simplifications. Still, a full analysis of the protocol has remained open: one that includes its target recalculation and, notably, its timestamp adjustment mechanism (the protocol accepts incoming block timestamps in the near future, as determined by a protocol parameter, and rejects blocks whose timestamp precedes the median time of a specific number of blocks on-chain, namely 11), the mechanisms that equip it to operate in its intended setting of bounded communication delays, imperfect clocks, and dynamic participation.
The gap is that Nakamoto’s protocol fundamentally depends on the blockchain itself to be a consistent timekeeper that advances roughly on par with real time. In order to tackle this question we introduce a new analytical tool that we call hot-hand executions, which capture the regular occurrence of high concentrations of honestly generated blocks, and correspondingly put forth and prove a new blockchain property called concentrated chain quality, which may be of independent interest. Utilizing these tools and techniques we demonstrate that Nakamoto’s protocol achieves, under suitable conditions, safety, liveness, and (consistent) timekeeping.
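As a concrete rendering of the two timestamp rules described above, the following sketch checks an incoming block’s timestamp against a future-drift bound and the median of the last 11 on-chain timestamps; the two-hour bound is Bitcoin’s conventional value and, like the function’s shape, is an illustrative assumption rather than the paper’s formalization.

```python
from statistics import median

MAX_FUTURE_SECS = 2 * 60 * 60   # protocol parameter bounding future timestamps (assumed)
MEDIAN_WINDOW = 11              # number of on-chain blocks in the median (per the abstract)

def timestamp_acceptable(block_ts: int, chain_ts: list[int], local_time: int) -> bool:
    # Rule 1: accept timestamps only in the near future of the local clock.
    if block_ts > local_time + MAX_FUTURE_SECS:
        return False
    # Rule 2: reject timestamps in the past of the median time of the last 11 blocks.
    window = chain_ts[-MEDIAN_WINDOW:]
    if window and block_ts <= median(window):
        return False
    return True
```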
MAC Precomputation with Applications to Secure Memory
We present ShMAC (Shallow MAC), a fixed-input-length message authentication code that performs most of its computation prior to the availability of the message. Specifically, ShMAC's message-dependent computation is much faster and smaller in hardware than the evaluation of a pseudorandom permutation (PRP), and can be implemented by a small shallow circuit, while its precomputation consists of one PRP evaluation. A main building block for ShMAC is the notion of strong differential uniformity (SDU), which we introduce and which may be of independent interest. We present an efficient SDU construction built from previously considered differentially uniform functions.
Our motivating application is a system architecture where a hardware-secured processor uses memory controlled by an adversary. We present in technical detail a novel, more efficient approach to encrypting and authenticating memory and discuss the associated trade-offs, while paying special attention to minimizing hardware costs and reducing DRAM latency.
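The abstract does not spell out ShMAC's construction, so the following is only a toy illustration of the precompute/online split it describes, using a classic Carter-Wegman-style MAC: the mask (one PRF/PRP call) is computed offline before the message exists, and the online phase is a cheap polynomial universal hash. The SHA-256 stand-in for the PRP, the Poly1305-style modulus, and the block size are all assumptions for the sketch.

```python
import hashlib
import secrets

P = 2**130 - 5  # Poly1305 prime, used here for a toy polynomial universal hash

def prf(key: bytes, nonce: bytes) -> int:
    # Stand-in PRF (a real instantiation would use a PRP such as AES).
    return int.from_bytes(hashlib.sha256(key + nonce).digest(), "big") % P

def poly_hash(r: int, msg: bytes) -> int:
    # Polynomial hash of 16-byte blocks evaluated at secret point r;
    # adding 2**128 pads each block and encodes its length.
    acc = 0
    for i in range(0, len(msg), 16):
        block = int.from_bytes(msg[i:i + 16], "big") + 2**128
        acc = (acc + block) * r % P
    return acc

def mac_precompute(mask_key: bytes, nonce: bytes) -> int:
    # Offline phase: the single expensive call, done before the message is known.
    return prf(mask_key, nonce)

def mac_online(r: int, mask: int, msg: bytes) -> int:
    # Online phase: shallow, message-dependent computation only.
    return (poly_hash(r, msg) + mask) % P

key = secrets.token_bytes(32)
r = int.from_bytes(key[:16], "big") % P   # hash key (clamping omitted in this toy)
nonce = secrets.token_bytes(12)
mask = mac_precompute(key[16:], nonce)    # computed before the message arrives
tag = mac_online(r, mask, b"data fetched from untrusted DRAM")
```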
The Bitcoin Backbone Protocol with Chains of Variable Difficulty
Bitcoin’s innovative and distributedly maintained blockchain data structure hinges on the adequate degree of difficulty of so-called “proofs of work,” which miners have to produce in order for transactions to be inserted. Importantly, these proofs of work have to be hard enough so that miners have an opportunity to unify their views in the presence of an adversary who interferes but has bounded computational power, but easy enough to be solvable regularly and enable the miners to make progress. As such, as the miners’ population evolves over time, so should the difficulty of these proofs. Bitcoin provides this adjustment mechanism, with empirical evidence of a constant block generation rate against such population changes.
In this paper we provide the first (to our knowledge) formal analysis of Bitcoin’s target (re)calculation function in the cryptographic setting, i.e., against all possible adversaries aiming to subvert the protocol’s properties. We extend the q-bounded synchronous model of the Bitcoin backbone protocol [Eurocrypt 2015], which formulated the basic properties of Bitcoin’s underlying blockchain data structure and showed how a robust public transaction ledger can be built on top of them, to environments that may introduce or suspend parties in each round. We provide a set of necessary conditions with respect to the way the population evolves under which the “Bitcoin backbone with chains of variable difficulty” provides a robust transaction ledger in the presence of an actively malicious adversary controlling a fraction of the miners strictly below 50% in each instant of the execution. Our work introduces new analysis techniques and tools to the area of blockchain systems that may prove useful in analyzing other blockchain protocols.
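For readers unfamiliar with the mechanism under analysis, a simplified sketch of Bitcoin-style target recalculation follows; the constants are Bitcoin’s well-known parameters, and the real client’s off-by-one details are deliberately ignored.

```python
EPOCH_LENGTH = 2016                       # blocks per difficulty epoch
TARGET_SPACING = 10 * 60                  # intended seconds between blocks
EXPECTED_TIMESPAN = EPOCH_LENGTH * TARGET_SPACING

def retarget(old_target: int, first_ts: int, last_ts: int) -> int:
    """Rescale the PoW target after an epoch (higher target = easier PoW)."""
    actual = last_ts - first_ts
    # Dampening: the adjustment is clamped to a factor of 4 per epoch,
    # limiting how fast the difficulty can be dragged in either direction.
    actual = max(EXPECTED_TIMESPAN // 4, min(actual, EXPECTED_TIMESPAN * 4))
    return old_target * actual // EXPECTED_TIMESPAN
```

If blocks arrived faster than intended (say, because the mining population grew), the observed timespan shrinks and so does the target, making proofs of work harder; the paper’s analysis asks under which population dynamics this feedback loop remains safe against adversarial interference.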
Somewhat Non-Committing Encryption and Efficient Adaptively Secure Oblivious Transfer
Designing efficient cryptographic protocols tolerating adaptive
adversaries, who are able to corrupt parties on the fly as the
computation proceeds, has been an elusive task. Indeed, thus far no
\emph{efficient} protocols achieve adaptive security for general
multi-party computation, or even for many specific two-party tasks
such as oblivious transfer (OT). In fact, it is difficult and
expensive to achieve adaptive security even for the task of
\emph{secure communication}, which is arguably the most basic task
in cryptography.
In this paper we make progress in this area. First, we introduce a
new notion called \emph{semi-adaptive} security which is slightly
stronger than static security but \emph{significantly weaker than
fully adaptive security}. The main difference between adaptive and
semi-adaptive security is that, for semi-adaptive security, the
simulator is not required to handle the case where \emph{both}
parties start out honest and one becomes corrupted later on during
the protocol execution. As such, semi-adaptive security is much
easier to achieve than fully adaptive security. We then give a
simple, generic protocol compiler which transforms any
semi-adaptively secure protocol into a fully adaptively secure one.
The compilation effectively decomposes the problem of adaptive
security into two (simpler) problems which can be tackled
separately: the problem of semi-adaptive security and the problem of
realizing a weaker variant of secure channels.
We solve the latter problem by means of a new primitive that we call
{\em somewhat non-committing encryption}, resulting in significant
efficiency improvements over the standard method for realizing
(fully) secure channels using (fully) non-committing encryption.
Somewhat non-committing encryption has two parameters: an
equivocality parameter (measuring the number of ways that a
ciphertext can be ``opened'') and the message size. Our
implementation is very efficient for small values of the equivocality
parameter, \emph{even} when the message size is large. This translates
into a very efficient compilation of many semi-adaptively secure
protocols (in particular, for tasks with small input/output domains
such as bit-OT) into fully adaptively secure ones.
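To see what ``opening'' a ciphertext means, note that a one-time pad is perfectly non-committing: a simulator can output a random ciphertext first and choose the key afterwards so that it decrypts to any desired message. The sketch below is this folklore observation, not the paper's construction; somewhat non-committing encryption instead bounds the number of possible openings (the equivocality parameter) in exchange for efficiency.

```python
import secrets

def simulate_ciphertext(length: int) -> bytes:
    # The simulator commits to nothing: the ciphertext is uniformly random.
    return secrets.token_bytes(length)

def open_to(ciphertext: bytes, message: bytes) -> bytes:
    # Choose the key after the fact so that key XOR ciphertext = message.
    return bytes(c ^ m for c, m in zip(ciphertext, message))

c = simulate_ciphertext(16)
k0 = open_to(c, b"msg zero".ljust(16))   # one opening of c
k1 = open_to(c, b"msg one!".ljust(16))   # a different opening of the same c
```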
Finally, we showcase
our methodology by applying it to the recent Oblivious Transfer
protocol by Peikert et al. [Crypto 2008], which is only secure
against static corruptions, to obtain the first efficient, adaptively secure and composable OT protocol.
In particular, to transfer a message we use a constant number of rounds and public-key operations.
A Framework for the Sound Specification of Cryptographic Tasks
Nowadays it is widely accepted to formulate the security of a protocol
carrying out a given task via the ``trusted-party paradigm,'' where
the protocol execution is compared with an ideal process where the
outputs are computed by a trusted party that sees all the inputs. A
protocol is said to securely carry out a given task if running the
protocol with a realistic adversary amounts to ``emulating'' the ideal
process with the appropriate trusted party. In the Universal
Composability (UC) framework the program run by the trusted party is
called an {\em ideal functionality}. While this simulation-based
security formulation provides strong security guarantees, its
usefulness is contingent on the properties and correct specification
of the ideal functionality, which, as demonstrated in recent years by
the coexistence of complex, multiple functionalities for the same task
as well as by their ``unstable'' nature, does not seem to be an easy
task.
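As a toy illustration of the trusted-party paradigm, an ideal functionality can be pictured as a program run by an incorruptible party that receives all inputs and computes each party's output; the minimal sketch below does this for 1-out-of-2 oblivious transfer. The interface is illustrative only and elides the UC machinery (session identifiers, adversarial delivery, corruption handling).

```python
def ideal_ot(sender_msgs: tuple[bytes, bytes], receiver_choice: int) -> bytes:
    # The trusted party sees both parties' inputs, hands m_b to the receiver,
    # and gives the sender no output at all (so the bit b stays hidden from it).
    assert receiver_choice in (0, 1)
    return sender_msgs[receiver_choice]
```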
In this paper we address this problem, by introducing a general methodology for the sound specification of ideal functionalities.
First, we introduce the class of {\em canonical} ideal functionalities
for a cryptographic task, which unifies the syntactic specification of a large class of cryptographic tasks under the same basic template functionality.
Furthermore, this representation enables the isolation of the
individual properties of a cryptographic task as separate members of
the corresponding class. By endowing the class of canonical
functionalities with an algebraic structure we are able to combine
basic functionalities into a single final canonical functionality for a
given task. Effectively, this puts forth a bottom-up
approach for the specification of ideal functionalities: first one
defines a set of basic constituent functionalities for the task at
hand, and then combines them into a single
ideal functionality taking advantage of the algebraic structure.
In our framework, the constituent functionalities of a task can be
derived either directly or, following a translation strategy we
introduce, from existing game-based definitions; such definitions have
in many cases captured desired individual properties of cryptographic
tasks, albeit in less adversarial settings.
Our translation methodology entails a sequence of steps
that systematically derive a corresponding canonical functionality given a game-based
definition, effectively ``lifting'' the game-based definition to its composition-safe
version.
We showcase our methodology by applying it to a variety of basic cryptographic tasks, including commitments,
digital signatures, zero-knowledge proofs, and oblivious transfer.
While in some cases our derived canonical functionalities are
equivalent to existing formulations, thus attesting to the validity
of our approach, in others they differ, enabling us to ``debug''
previous definitions and pinpoint their shortcomings.
Efficient, Constant-Round and Actively Secure MPC: Beyond the Three-Party Case
While the feasibility of constant-round and actively secure MPC has been known for over two decades, the last few years have witnessed a flurry of designs and implementations that make its deployment a palpable reality. To our knowledge, however, existing concretely efficient MPC constructions are only for up to three parties.
In this paper we design and implement a new actively secure 5PC protocol tolerating two corruptions that requires a constant number of rounds of interaction, uses only fast symmetric-key operations, and incurs 60\% less communication than the passively secure state-of-the-art solution from the work of Ben-Efraim, Lindell, and Omri [CCS 2016]. For example, securely evaluating the AES circuit when the parties are in different regions of the U.S. and Europe is faster with our protocol than with the passively secure 5PC in the same environment.
Instrumental for our efficiency gains (less interaction, only symmetric-key primitives) is a new 4-party primitive we call \emph{Attested OT}, which in addition to Sender and Receiver involves two additional ``assistant parties'' who attest to the respective inputs of both parties, and which might be of broader applicability in practically relevant MPC scenarios. Finally, we also show how to generalize our construction to a larger number of parties with similar efficiency properties and a corresponding corruption threshold, and propose a combinatorial problem which, if solved optimally, can yield even better corruption thresholds for the same cost.
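The abstract only names the primitive, so the following is a speculative toy sketch of the four-party shape of Attested OT under the stated description: besides Sender and Receiver, two assistant parties hold copies of the respective inputs, and the functionality delivers output only when the attested copies are consistent. The interface, the abort behavior, and all names are assumptions for illustration, not the paper's definition.

```python
def attested_ot(sender_msgs: tuple[bytes, bytes], receiver_choice: int,
                attested_msgs: tuple[bytes, bytes], attested_choice: int) -> bytes:
    # Each assistant attests to one party's input; any mismatch aborts,
    # so a cheating Sender or Receiver is caught by its attesting party.
    if sender_msgs != attested_msgs or receiver_choice != attested_choice:
        raise ValueError("attestation mismatch: abort")
    return sender_msgs[receiver_choice]
```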
Bootstrapping the Blockchain, with Applications to Consensus and Fast PKI Setup
The Bitcoin backbone protocol [Eurocrypt 2015] extracts
basic properties of Bitcoin's underlying {\em blockchain} data structure, such as ``common prefix'' and ``chain quality,'' and shows how fundamental applications including consensus and a robust public transaction ledger can be built on top of them. The underlying assumptions are ``proofs of work'' (POWs), adversarial hashing power strictly less than 50\% {\em and} no adversarial pre-computation---or, alternatively, the existence of an unpredictable ``genesis'' block.
In this paper we first show how to remove the latter assumption, presenting a ``bootstrapped'' Bitcoin-like blockchain protocol relying on POWs that builds genesis blocks ``from scratch'' in the presence of adversarial pre-computation. Importantly, the round complexity of the genesis block generation process is
\emph{independent} of the number of
participants.
Next, we consider applications of our construction, including a PKI generation protocol and a consensus protocol without trusted setup assuming an honest majority (in terms of computational power).
Previous results in the same setting (unauthenticated parties, no trusted setup, POWs)
required a round complexity linear in the number of participants.
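As a minimal illustration of the POW assumption underlying both the backbone protocol and the bootstrapping construction, a block is valid when its hash falls below the current target; the double SHA-256 and the nonce encoding below follow Bitcoin's convention and are illustrative assumptions.

```python
import hashlib

def pow_valid(header: bytes, nonce: int, target: int) -> bool:
    # A proof of work is a nonce driving the block's hash below the target.
    h = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(hashlib.sha256(h).digest(), "big") < target

def mine(header: bytes, target: int) -> int:
    # Brute-force search; expected work is inversely proportional to the target.
    nonce = 0
    while not pow_valid(header, nonce, target):
        nonce += 1
    return nonce

nonce = mine(b"genesis-candidate", 1 << 240)  # ~2^16 expected attempts
```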
DIGITAL DIVIDE IN PERUVIAN HIGHER EDUCATION: A POST-PANDEMIC REVIEW
In early 2020 the COVID-19 pandemic, which originated in the Chinese city of Wuhan, paralyzed the world; on March 11, 2020, the World Health Organization (WHO) declared COVID-19 a pandemic. The dire consequences of the disease were reflected in a historic global recession, and in the health field SARS-CoV-2 wreaked havoc, especially among the elderly, who suffered worse manifestations and higher mortality rates. In response, most countries sought to curb the spread of the virus by imposing mandatory measures such as stay-at-home orders and strict curfews. Among these measures was the closure of educational institutions, including universities. This forced a paradigm shift in teaching, as educators moved their classes to various online platforms. Online, distance, and continuing education became the remedy of choice for this unprecedented global pandemic, for educators and students alike. This review traverses the different scenarios brought about by the COVID-19 pandemic, addressing several topics: COVID-19 and the case of universities; the educational transition from face-to-face instruction to virtuality; challenges and opportunities after the pandemic; and, last but not least, how the Peruvian State coped with the COVID-19 pandemic and the challenges that should continue to be considered after it.