Best Possible Information-Theoretic MPC
We reconsider the security guarantee that can be achieved by general protocols for secure multiparty computation in the most basic of settings: information-theoretic security against a semi-honest adversary.
Since the 1980s, we have had elegant solutions to this problem that offer full security as long as the adversary controls a minority of the parties, but fail completely when that threshold is crossed. In this work, we revisit this problem, questioning the optimality of the standard notion of security. We put forward a new notion of information-theoretic security which is strictly stronger than the standard one, and which we argue to be ``best possible.'' Our new notion still requires full security against a dishonest minority in the usual sense, but also requires a meaningful notion of information-theoretic security against a dishonest majority.
We present protocols for useful classes of functions that satisfy this new notion of security. Our protocols have the unique feature of combining the efficiency benefits of protocols for an honest majority with (most of) the security benefits of protocols for a dishonest majority. We further extend some of the solutions to the malicious setting.
ARPA Whitepaper
We propose a secure computation solution for blockchain networks. The correctness of computation is verifiable even under a malicious-majority condition using information-theoretic Message Authentication Codes (MACs), and privacy is preserved using secret sharing. With a state-of-the-art multiparty computation protocol and a layer-2 solution, our privacy-preserving computation guarantees data security on the blockchain, cryptographically, while offloading the heavy computation to a few nodes. This breakthrough has several implications for the future of decentralized networks. First, secure computation can be used to support Private Smart Contracts, where consensus is reached without exposing the information in the public contract. Second, it enables data to be shared and used in a trustless network without disclosing the raw data while it is in use, so that data ownership and data usage are safely separated. Last but not least, the computation and verification processes are separated, which can be perceived as computational sharding; this effectively makes the transaction processing speed linear in the number of participating nodes. Our objective is to deploy our secure computation network as a layer-2 solution to any blockchain system. Smart Contracts\cite{smartcontract} will be used as a bridge to link the blockchain and computation networks. Additionally, they will be used as verifiers to ensure that outsourced computation is completed correctly. In order to achieve this, we first develop a general MPC network with advanced features, such as: 1) secure computation, 2) off-chain computation, 3) verifiable computation, and 4) support for dApps' needs such as privacy-preserving data exchange.
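The abstract does not spell out the MAC construction; the following is a minimal sketch of the standard SPDZ-style information-theoretic MAC over additive secret sharing (our assumption about the intended mechanism, with an illustrative field and a toy opening step):

```python
import secrets

P = 2**61 - 1  # Mersenne prime used as the field modulus (illustrative choice)

def additive_share(value, n):
    """Split value into n additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

n = 5                                  # number of computation nodes
alpha = secrets.randbelow(P - 1) + 1   # global MAC key (nonzero)
x = 12345                              # a secret value held in shared form

x_shares = additive_share(x, n)                # shares of x
mac_shares = additive_share(alpha * x % P, n)  # shares of gamma = alpha * x

# Honest opening: nodes broadcast their x-shares, then check the MAC relation
# sum(gamma_i) == alpha * x  (mod P).
opened_x = sum(x_shares) % P
assert (sum(mac_shares) - alpha * opened_x) % P == 0

# A node that lies about its share is caught, except with probability 1/P.
x_shares[0] = (x_shares[0] + 1) % P
cheated_x = sum(x_shares) % P
assert (sum(mac_shares) - alpha * cheated_x) % P != 0
```

In a real protocol the key alpha is itself secret-shared and never reconstructed in the clear; the relation is checked on shares via a commit-and-open step. The sketch only illustrates why a forged opening fails the MAC check.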
Comparing cosmic web classifiers using information theory
We introduce a decision scheme for optimally choosing a classifier, which
segments the cosmic web into different structure types (voids, sheets,
filaments, and clusters). Our framework, based on information theory, accounts
for the design aims of different classes of possible applications: (i)
parameter inference, (ii) model selection, and (iii) prediction of new
observations. As an illustration, we use cosmographic maps of web-types in the
Sloan Digital Sky Survey to assess the relative performance of the classifiers
T-web, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web,
(ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our
study substantiates a data-supported connection between cosmic web analysis and
information theory, and paves the path towards principled design of analysis
procedures for the next generation of galaxy surveys. We have made the cosmic
web maps, galaxy catalog, and analysis scripts used in this work publicly
available.
Comment: 20 pages, 8 figures, 6 tables. Matches JCAP published version. Public data available from the first author's website (currently http://icg.port.ac.uk/~leclercq/).
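As a toy illustration of the kind of information-theoretic quantity involved (not the paper's actual utility functions), one can measure how much one classifier's web-type map tells us about another's via the mutual information of their label assignments:

```python
import math
from collections import Counter

def mutual_information(labels_a, labels_b):
    """Mutual information (in bits) between two discrete label sequences."""
    n = len(labels_a)
    pa = Counter(labels_a)
    pb = Counter(labels_b)
    pab = Counter(zip(labels_a, labels_b))
    mi = 0.0
    for (a, b), count in pab.items():
        # p(a,b) * log2( p(a,b) / (p(a) p(b)) ), with counts kept exact
        mi += (count / n) * math.log2(count * n / (pa[a] * pb[b]))
    return mi

# Hypothetical web-type labels (0=void, 1=sheet, 2=filament, 3=cluster)
# for the same grid cells under two different classifiers:
map_a = [0, 0, 1, 2, 3, 3, 1, 2, 0, 1]
map_b = [0, 0, 1, 2, 3, 2, 1, 2, 0, 1]
print(f"agreement (bits): {mutual_information(map_a, map_b):.3f}")
```

Two identical maps attain the full label entropy; independent maps score near zero. The paper's framework goes further, weighting such quantities by application-specific utilities.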
Partial Information Decomposition as a Unified Approach to the Specification of Neural Goal Functions
In many neural systems anatomical motifs are present repeatedly, but despite
their structural similarity they can serve very different tasks. A prime
example for such a motif is the canonical microcircuit of six-layered
neo-cortex, which is repeated across cortical areas, and is involved in a
number of different tasks (e.g. sensory, cognitive, or motor tasks). This
observation has spawned interest in finding a common underlying principle, a
'goal function', of information processing implemented in this structure. By
definition such a goal function, if universal, cannot be cast in
processing-domain specific language (e.g. 'edge filtering', 'working memory').
Thus, to formulate such a principle, we have to use a domain-independent
framework. Information theory offers such a framework. However, while the
classical framework of information theory focuses on the relation between one
input and one output (Shannon's mutual information), we argue that neural
information processing crucially depends on the combination of
\textit{multiple} inputs to create the output of a processor. To account for
this, we use a very recent extension of Shannon information theory, called partial information decomposition (PID). PID allows one to quantify the information
that several inputs provide individually (unique information), redundantly
(shared information) or only jointly (synergistic information) about the
output. First, we review the framework of PID. Then we apply it to reevaluate
and analyze several earlier proposals of information theoretic neural goal
functions (predictive coding, infomax, coherent infomax, efficient coding). We
find that PID allows us to compare these goal functions in a common framework, and
also provides a versatile approach to design new goal functions from first
principles. Building on this, we design and analyze a novel goal function,
called 'coding with synergy'. [...]
Comment: 21 pages, 4 figures, appendix
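As a concrete toy example (ours, not the paper's), the PID terms can be computed by hand for a binary XOR gate, where the output carries one bit about the joint inputs but zero bits about either input alone, so all information is synergistic. A minimal sketch using the Williams-Beer I_min redundancy measure:

```python
from itertools import product
from math import log2

# Joint distribution of (x1, x2, y) for y = XOR(x1, x2), uniform inputs.
p = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}

def marginal(keep):
    """Marginalize p onto the given tuple positions."""
    out = {}
    for xs, pr in p.items():
        key = tuple(xs[i] for i in keep)
        out[key] = out.get(key, 0.0) + pr
    return out

py = marginal((2,))

def specific_info(i, y):
    """Specific information I(Xi; Y=y), as in Williams & Beer (2010)."""
    pxi = marginal((i,))
    pxiy = marginal((i, 2))
    total = 0.0
    for (xi,), pxi_v in pxi.items():
        p_joint = pxiy.get((xi, y), 0.0)
        if p_joint == 0.0:
            continue
        total += (p_joint / py[(y,)]) * log2((p_joint / pxi_v) / py[(y,)])
    return total

# Redundancy: I_min = sum_y p(y) * min_i I(Xi; Y=y)
redundancy = sum(py[(y,)] * min(specific_info(0, y), specific_info(1, y))
                 for (y,) in py)

def mi_single(i):
    """Shannon mutual information I(Xi; Y)."""
    pxi, pxiy = marginal((i,)), marginal((i, 2))
    return sum(pr * log2(pr / (pxi[(xi,)] * py[(y,)]))
               for (xi, y), pr in pxiy.items())

def mi_joint():
    """Shannon mutual information I(X1, X2; Y)."""
    px12 = marginal((0, 1))
    return sum(pr * log2(pr / (px12[(x1, x2)] * py[(y,)]))
               for (x1, x2, y), pr in p.items())

unique = [mi_single(i) - redundancy for i in (0, 1)]
synergy = mi_joint() - redundancy - unique[0] - unique[1]
print(redundancy, unique, synergy)  # for XOR: 0.0 [0.0, 0.0] 1.0
```

For an AND gate the same decomposition instead yields nonzero redundancy and synergy, which is why PID distinguishes goal functions that plain mutual information cannot.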
Perfectly-Secure Synchronous MPC with Asynchronous Fallback Guarantees
Secure multi-party computation (MPC) is a fundamental problem in secure distributed computing. An MPC protocol allows a set of mutually distrusting parties to carry out any joint computation of their private inputs, without disclosing any additional information about their inputs. MPC with information-theoretic security (also called unconditional security) provides the strongest security guarantees and remains secure even against computationally unbounded adversaries. Perfectly-secure MPC protocols are a class of information-theoretically secure MPC protocols which provide all the security guarantees in an error-free fashion. The focus of this work is perfectly-secure MPC. Known protocols are designed assuming either a synchronous or an asynchronous communication network. It is well known that a perfectly-secure synchronous MPC protocol is possible as long as the adversary can corrupt any t_s < n/3 parties. On the other hand, a perfectly-secure asynchronous MPC protocol can tolerate up to t_a < n/4 corrupt parties. A natural question is whether there exists a single MPC protocol for the setting where the parties are not aware of the exact network type, which can tolerate up to t_s corruptions in a synchronous network and up to t_a corruptions in an asynchronous network. We design such a best-of-both-worlds perfectly-secure MPC protocol, provided 3t_s + t_a < n holds.
For designing our protocol, we design two important building blocks, which are of independent interest. The first building block is a best-of-both-worlds Byzantine agreement (BA) protocol tolerating t_s corruptions, which remains secure both in a synchronous and in an asynchronous network. The second building block is a polynomial-based best-of-both-worlds verifiable secret-sharing (VSS) protocol, which can tolerate up to t_s and t_a corruptions in a synchronous and in an asynchronous network, respectively.
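The polynomial sharing underlying such VSS protocols is Shamir's scheme: the dealer embeds the secret as the constant term of a random degree-t polynomial and hands each party one evaluation. A minimal sketch of plain Shamir sharing (without the verifiability machinery, over an illustrative prime field):

```python
import random

P = 2**31 - 1  # prime field modulus (illustrative)

def share(secret, t, n):
    """Shamir-share `secret` with threshold t: any t+1 shares reconstruct."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]  # party i gets point (i, f(i))

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for k, (xk, _) in enumerate(shares):
            if k != j:
                num = num * (-xk) % P
                den = den * (xj - xk) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat's little theorem)
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

n, t = 7, 2  # e.g. up to t < n/3 corruptions in the synchronous setting
shares = share(42, t, n)
assert reconstruct(shares[:t + 1]) == 42  # any t+1 shares suffice
```

VSS adds to this a dealing phase in which the parties verify, without learning the secret, that their points really lie on one polynomial of the correct degree; the best-of-both-worlds VSS above must make that phase work under either network assumption.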
Comparing Infrared Dirac-Born-Infeld Brane Inflation to Observations
We compare the Infrared Dirac-Born-Infeld (IR DBI) brane inflation model to
observations using a Bayesian analysis. The current data cannot distinguish it
from the \LambdaCDM model, but is able to give interesting constraints on
various microscopic parameters including the mass of the brane moduli
potential, the fundamental string scale, the charge or warp factor of throats,
and the number of the mobile branes. We quantify some distinctive testable
predictions with stringy signatures, such as the large non-Gaussianity, and the
large, but regional, running of the spectral index. These results illustrate
how we may be able to probe aspects of string theory using cosmological
observations.
Comment: 54 pages, 13 figures. v2: non-Gaussianity constraint has been applied to the model; parameter constraints have tightened significantly, conclusions unchanged. References added; v3: minor revision, PRD version.