An information-theoretic view of network management
We present an information-theoretic framework for network management for recovery from nonergodic link failures. Building on recent work in the field of network coding, we describe the input-output relations of network nodes in terms of network codes. This very general concept of network behavior as a code provides a way to quantify essential management information as that needed to switch among different codes (behaviors) for different failure scenarios. We compare two types of recovery schemes, receiver-based and network-wide, and consider two formulations for quantifying network management. The first is a centralized formulation where network behavior is described by an overall code determining the behavior of every node, and the management requirement is taken as the logarithm of the number of such codes that the network may switch among. For this formulation, we give bounds, many of which are tight, on management requirements for various network connection problems in terms of basic parameters such as the number of source processes and the number of links in a minimum source-receiver cut. Our results include a lower bound for arbitrary connections and an upper bound for multi-transmitter multicast connections, for linear receiver-based and network-wide recovery from all single link failures. The second is a node-based formulation where the management requirement is taken as the sum over all nodes of the logarithm of the number of different behaviors for each node. We show that the minimum node-based requirement for failures of links adjacent to a single receiver is achieved with receiver-based schemes.
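The two management-quantification formulations in the abstract can be summarized as follows; the symbols (\(\mathcal{C}\) for the set of overall codes, \(\mathcal{B}_v\) for the set of behaviors of node \(v\)) are our own notation, not the paper's:

```latex
% Centralized formulation: the network switches among a set \mathcal{C} of
% overall codes, one per failure scenario, and the management requirement is
M_{\mathrm{cent}} = \log \lvert \mathcal{C} \rvert .
% Node-based formulation: each node v has a set \mathcal{B}_v of possible
% behaviors, and the requirement is summed over all nodes:
M_{\mathrm{node}} = \sum_{v \in V} \log \lvert \mathcal{B}_v \rvert .
```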
Security in Locally Repairable Storage
In this paper we extend the notion of {\em locally repairable} codes to {\em
secret sharing} schemes. The main problem that we consider is to find optimal
ways to distribute shares of a secret among a set of storage nodes
(participants) such that the content of each node (share) can be recovered
using the contents of only a few other nodes, and at the same time the secret
can be reconstructed only by certain allowable subsets of nodes. As a special
case, an eavesdropper observing some specified sets of nodes (for example, any
set of fewer than a certain number of nodes) does not gain any information. In
other words, we propose to study a locally repairable distributed storage
system that is secure against a {\em passive eavesdropper} that can observe
some subsets of nodes.
We provide a number of results related to such systems, including upper bounds
and achievability results on the number of bits that can be securely stored
under these constraints.
Comment: This paper has been accepted for publication in IEEE Transactions on
Information Theory.
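A toy sketch of the two properties the abstract combines, local repairability and security against a passive eavesdropper. This four-node one-time-pad scheme is purely illustrative and is not the paper's construction; all names are ours:

```python
# Toy store: a1 = r, a2 = r (replica), b1 = r ^ s, b2 = r ^ s (replica),
# where r is a uniformly random pad and s is a one-byte secret.
import secrets

def share(s: int) -> dict:
    """Split a one-byte secret s into four shares."""
    r = secrets.randbelow(256)  # uniform one-time pad
    return {"a1": r, "a2": r, "b1": r ^ s, "b2": r ^ s}

def repair(shares: dict, lost: str) -> int:
    """Locality: any lost node is rebuilt from its replica alone."""
    replica = {"a1": "a2", "a2": "a1", "b1": "b2", "b2": "b1"}
    return shares[replica[lost]]

def reconstruct(shares: dict) -> int:
    """Reconstruction needs one a-node and one b-node."""
    return shares["a1"] ^ shares["b1"]

s = 0x5A
sh = share(s)
assert reconstruct(sh) == s          # allowable subset recovers the secret
assert repair(sh, "b2") == sh["b2"]  # lost share rebuilt locally
# An eavesdropper seeing only {a1, a2} (or only {b1, b2}) observes a single
# uniform value r (or r ^ s), which is statistically independent of s.
```

The point of the sketch is only the interplay of constraints: replication buys repair locality, while the pad keeps any single replica group uninformative about the secret.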
When is a Function Securely Computable?
A subset of a set of terminals observing correlated signals seeks to compute
a given function of the signals using public communication. It is required
that the value of the function be kept secret from an eavesdropper with
access to the communication. We show that the function is securely computable
if and only if its entropy is less than the "aided secret key" capacity of an
associated secrecy generation model, for which a single-letter
characterization is provided.
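The abstract's characterization can be stated compactly; the notation (\(g\) for the function, \(C_{\mathrm{aided}}\) for the aided secret key capacity) is ours:

```latex
% Terminals observe correlated signals X_1,\dots,X_m; a subset of them must
% compute g = g(X_1,\dots,X_m) over a public channel. The result reads:
g \text{ is securely computable} \iff H(g) < C_{\mathrm{aided}},
% where C_{\mathrm{aided}} is the ``aided secret key'' capacity of the
% associated secrecy generation model.
```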
The Limitations of Optimization from Samples
In this paper we consider the following question: can we optimize objective
functions from the training data we use to learn them? We formalize this
question through a novel framework we call optimization from samples (OPS). In
OPS, we are given sampled values of a function drawn from some distribution and
the objective is to optimize the function under some constraint.
While there are interesting classes of functions that can be optimized from
samples, our main result is an impossibility. We show that there are classes of
functions which are statistically learnable and optimizable, but for which no
reasonable approximation for optimization from samples is achievable. In
particular, our main result shows that there is no constant-factor
approximation for maximizing coverage functions under a cardinality constraint
using polynomially many samples drawn from any distribution.
We also show tight approximation guarantees for maximization under a
cardinality constraint for several interesting classes of functions, including
unit-demand, additive, and general monotone submodular functions, as well as a
constant-factor approximation for monotone submodular functions with bounded
curvature.
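A minimal sketch of the OPS setup described above: a coverage function, samples of its values drawn from a distribution, and an algorithm that sees only the samples. The tiny universe, the inclusion-probability sampling distribution, and the trivial "best observed feasible set" algorithm are all our illustrative assumptions, not the paper's construction:

```python
import random

# A coverage function: each element i covers a subset of a ground universe,
# and f(S) is the size of the union of the subsets covered by S.
universe_cover = {
    0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {0, 3}, 4: {4},
}

def coverage(S):
    covered = set()
    for i in S:
        covered |= universe_cover[i]
    return len(covered)

def draw_samples(num_samples, p=0.5, rng=random):
    """Samples (S, f(S)), with S including each element independently w.p. p."""
    items = list(universe_cover)
    samples = []
    for _ in range(num_samples):
        S = frozenset(i for i in items if rng.random() < p)
        samples.append((S, coverage(S)))
    return samples

def best_sampled_set(samples, k):
    """An OPS algorithm sees only the samples: here it simply returns the
    best observed feasible set (|S| <= k), with no further queries to f."""
    feasible = [(v, S) for S, v in samples if len(S) <= k]
    return max(feasible, key=lambda t: t[0])[1] if feasible else frozenset()

random.seed(0)
samples = draw_samples(200)
S_hat = best_sampled_set(samples, k=2)
assert len(S_hat) <= 2
```

The impossibility result says that for coverage functions no such sample-only algorithm, however clever, can guarantee a constant-factor approximation from polynomially many samples, regardless of the sampling distribution.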
Recoverable Information and Emergent Conservation Laws in Fracton Stabilizer Codes
We introduce a new quantity, which we term recoverable information, defined
for stabilizer Hamiltonians. For such models, the recoverable information
provides a measure of the topological information, as well as a physical
interpretation, which is complementary to topological entanglement entropy. We
discuss three different ways to calculate the recoverable information and
prove their equivalence. To demonstrate its utility, we compute the
recoverable information for fracton models using all three methods where
appropriate. From the recoverable information, we deduce the existence of
emergent Gauss-law-type constraints, which in turn imply emergent conservation
laws for point-like quasiparticle excitations of an underlying topologically
ordered phase.
Comment: Added an additional cluster model calculation (SPT example) and a new
section discussing the general benefits of recoverable information.