Tight Bounds for Set Disjointness in the Message Passing Model
In a multiparty message-passing model of communication, there are k players.
Each player has a private input, and they communicate by sending
messages to one another over private channels. While this model has been used
extensively in distributed computing and in multiparty computation, lower
bounds on communication complexity in this model and related models have been
somewhat scarce. In recent work \cite{phillips12,woodruff12,woodruff13}, strong
lower bounds of the form Ω(nk) were obtained for several
functions in the message-passing model; however, a lower bound on the classical
Set Disjointness problem remained elusive.
In this paper, we prove tight lower bounds of the form Ω(nk)
for the Set Disjointness problem in the message-passing model. Our bounds are
obtained by developing information complexity tools in the message-passing
model, and then proving an information complexity lower bound for Set
Disjointness. As a corollary, we show a tight lower bound for the task
allocation problem \cite{DruckerKuhnOshman} via a reduction from Set
Disjointness.
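For concreteness, the function whose communication complexity is bounded (not any protocol for it) can be sketched directly; here each of the k players holds a subset of a size-n universe, and the names are illustrative:

```python
# k-party Set Disjointness over a universe {0, ..., n-1}:
# DISJ(S_1, ..., S_k) = 1 iff no element belongs to every player's set.
def set_disjointness(sets):
    """Return 1 if the intersection of all players' sets is empty, else 0."""
    common = set.intersection(*map(set, sets))
    return 1 if not common else 0

# Three players over a universe of size 4:
print(set_disjointness([{0, 1}, {1, 2}, {2, 3}]))  # no common element -> 1
print(set_disjointness([{0, 1}, {1, 2}, {1, 3}]))  # element 1 is in every set -> 0
```

The lower bound concerns how many bits the players must exchange to evaluate this function over private channels, not the trivial centralized computation above.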
Entanglement, randomness and chaos
Entanglement is not only the most intriguing feature of quantum mechanics,
but also a key resource in quantum information science. The entanglement
content of random pure quantum states is almost maximal; such states find
applications in various quantum information protocols. The preparation of a
random state or, equivalently, the implementation of a random unitary operator,
requires a number of elementary one- and two-qubit gates that is exponential in
the number n_q of qubits, thus becoming rapidly unfeasible when increasing n_q.
On the other hand, pseudo-random states approximating to the desired accuracy
the entanglement properties of true random states may be generated efficiently,
that is, polynomially in n_q. In particular, quantum chaotic maps are efficient
generators of multipartite entanglement among the qubits, close to that
expected for random states. This review discusses several aspects of the
relationship between entanglement, randomness and chaos. In particular, I will
focus on the following items: (i) the robustness of the entanglement generated
by quantum chaotic maps when taking into account the unavoidable noise sources
affecting a quantum computer; (ii) the detection of the entanglement of
high-dimensional (mixtures of) random states, an issue also related to the
question of the emergence of classicality in coarse grained quantum chaotic
dynamics; (iii) the decoherence induced by the coupling of a system to a
chaotic environment, that is, by the entanglement established between the
system and the environment.
Comment: Review paper, 40 pages, 7 figures, added reference
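The statement that the entanglement content of random pure states is almost maximal can be checked numerically. A minimal sketch (the half/half bipartition, sample count, and function names are my choices, not the review's): Haar-random states on n_q qubits have entanglement entropy across a balanced cut close to, but slightly below, the maximum of n_q/2 bits (the Page value).

```python
import numpy as np

def random_pure_state(n_qubits, rng):
    """Haar-random pure state: normalized complex Gaussian vector."""
    dim = 2 ** n_qubits
    psi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return psi / np.linalg.norm(psi)

def entanglement_entropy(psi, n_a, n_qubits):
    """Von Neumann entropy (in bits) of the reduced state of the first n_a qubits."""
    m = psi.reshape(2 ** n_a, 2 ** (n_qubits - n_a))
    s = np.linalg.svd(m, compute_uv=False)   # Schmidt coefficients
    p = s ** 2
    p = p[p > 1e-12]                         # drop numerical zeros
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
n_q = 10
ent = np.mean([entanglement_entropy(random_pure_state(n_q, rng), n_q // 2, n_q)
               for _ in range(20)])
print(ent)  # close to the Page value of about 4.28 bits, just below the maximum of 5
```

Preparing such states exactly on a quantum computer is what requires exponentially many gates; the classical sampling above is only an illustration of their entanglement statistics.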
Error Bounds for Piecewise Smooth and Switching Regression
The paper deals with regression problems, in which the nonsmooth target is
assumed to switch between different operating modes. Specifically, piecewise
smooth (PWS) regression considers target functions switching deterministically
via a partition of the input space, while switching regression considers
arbitrary switching laws. The paper derives generalization error bounds in
these two settings by following the approach based on Rademacher complexities.
For PWS regression, our derivation involves a chaining argument and a
decomposition of the covering numbers of PWS classes in terms of the ones of
their component functions and the capacity of the classifier partitioning the
input space. This yields error bounds with a radical dependency on the number
of modes. For switching regression, the decomposition can be performed directly
at the level of the Rademacher complexities, which yields bounds with a linear
dependency on the number of modes. By using once more chaining and a
decomposition at the level of covering numbers, we show how to recover a
radical dependency. Examples of applications are given in particular for PWS
and switching regression with linear and kernel-based component functions.
Comment: This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible.
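The difference between the two settings lies entirely in the switching law. A minimal sketch of the two data-generating mechanisms (the component functions and the partition are invented for illustration): in PWS regression the active mode is a deterministic function of the input via a partition of the input space, while in switching regression the mode sequence may be arbitrary, e.g. random.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=200)

# Two smooth component functions (the "modes"):
f = [lambda x: 2.0 * x + 1.0, lambda x: np.sin(3.0 * x)]

# Piecewise smooth (PWS) regression: the active mode is determined
# by a partition of the input space, here the split at x = 0.
y_pws = np.where(X < 0.0, f[0](X), f[1](X))

# Switching regression: arbitrary switching law, here an independent
# random mode for every sample.
modes = rng.integers(0, 2, size=X.shape)
y_switch = np.where(modes == 0, f[0](X), f[1](X))

print(y_pws.shape, y_switch.shape)  # (200,) (200,)
```

The error bounds in the paper quantify how the capacity of the partitioning classifier (PWS case) or the number of modes (switching case) enters the generalization guarantee for data of this kind.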
Syntactic Structures and Code Parameters
We assign binary and ternary error-correcting codes to the data of syntactic
structures of world languages and we study the distribution of code points in
the space of code parameters. We show that, while most codes populate the lower
region approximating a superposition of Thomae functions, there is a
substantial presence of codes above the Gilbert-Varshamov bound and even above
the asymptotic bound and the Plotkin bound. We investigate the dynamics induced
on the space of code parameters by spin glass models of language change, and
show that, in the presence of entailment relations between syntactic parameters
the dynamics can sometimes improve the code. For large sets of languages and
syntactic data, one can gain information on the spin glass dynamics from the
induced dynamics in the space of code parameters.
Comment: 14 pages, LaTeX, 12 png figures
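The "space of code parameters" is the set of points (delta, R) = (d/n, k/n) for an [n, k, d] code. A minimal sketch for the binary case (the paper also uses ternary codes; function names are my own), checking whether a code point lies above the asymptotic Gilbert-Varshamov curve R = 1 - H_2(delta):

```python
import math

def h2(x):
    """Binary entropy function H_2(x) in bits."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def code_point(n, k, d):
    """Map a binary [n, k, d] code to its point (delta, R) in parameter space."""
    return d / n, k / n

def above_gv(n, k, d):
    """True if the code point lies above the asymptotic GV curve R = 1 - H_2(delta)."""
    delta, rate = code_point(n, k, d)
    return rate > 1.0 - h2(delta)

# The [7, 4, 3] Hamming code as an example point:
delta, rate = code_point(7, 4, 3)
print(round(delta, 3), round(rate, 3), above_gv(7, 4, 3))
```

Individual finite-length codes can sit above the asymptotic curve, which is why the distribution of code points relative to the GV, Plotkin, and asymptotic bounds carries information.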
An entropy stable discontinuous Galerkin method for the shallow water equations on curvilinear meshes with wet/dry fronts accelerated by GPUs
We extend the entropy stable high order nodal discontinuous Galerkin spectral
element approximation for the non-linear two dimensional shallow water
equations presented by Wintermeyer et al. [N. Wintermeyer, A. R. Winters, G. J.
Gassner, and D. A. Kopriva. An entropy stable nodal discontinuous Galerkin
method for the two dimensional shallow water equations on unstructured
curvilinear meshes with discontinuous bathymetry. Journal of Computational
Physics, 340:200-242, 2017] with a shock capturing technique and a positivity
preservation capability to handle dry areas. The scheme preserves the entropy
inequality, is well-balanced and works on unstructured, possibly curved,
quadrilateral meshes. For the shock capturing, we introduce an artificial
viscosity to the equations and prove that the numerical scheme remains entropy
stable. We add a positivity preserving limiter to guarantee non-negative water
heights as long as the mean water height is non-negative. We prove that
non-negative mean water heights are guaranteed under a certain additional time
step restriction for the entropy stable numerical interface flux. We implement
the method on GPU architectures using the abstract language OCCA, a unified
approach to multi-threading languages. We show that the entropy stable scheme
is well suited to GPUs as the necessary extra calculations do not negatively
impact the runtime up to reasonably high polynomial degrees. We
provide numerical examples that challenge the shock capturing and positivity
properties of our scheme to verify our theoretical findings.
Non-classical computing: feasible versus infeasible
Physics sets certain limits on what is and is not computable. These limits are very far from having been reached by current technologies. Whilst proposals for hypercomputation are almost certainly infeasible, there are a number of non-classical approaches that do hold considerable promise. There is a range of possible architectures that could be implemented on silicon that are distinctly different from the von Neumann model. Beyond this, quantum simulators, which are the quantum equivalent of analogue computers, may be constructible in the near future.
Concave Switching in Single and Multihop Networks
Switched queueing networks model wireless networks, input queued switches and
numerous other networked communications systems. For single-hop networks, we
consider an (α, g)-switch policy, which combines the MaxWeight policies
with bandwidth sharing networks -- a further well studied model of Internet
congestion. We prove the maximum stability property for this class of
randomized policies. Thus these policies have the same first order behavior as
the MaxWeight policies. However, for multihop networks, some of these
generalized policies address a number of critical weaknesses of the
MaxWeight/BackPressure policies.
For multihop networks with fixed routing, we consider the Proportional
Scheduler (or (1,log)-policy). In this setting, the BackPressure policy is
maximum stable, but must maintain a queue for every route-destination, which
typically grows rapidly with a network's size. However, this proportionally
fair policy only needs to maintain a queue for each outgoing link, which is
typically bounded in number. As is common with Internet routing, by maintaining
per-link queueing each node only needs to know the next hop for each packet and
not its entire route. Further, in contrast to BackPressure, the Proportional
Scheduler does not compare downstream queue lengths to determine weights, only
local link information is required. This leads to greater potential for
decomposed implementations of the policy. Through a reduction argument and an
entropy argument, we demonstrate that, whilst maintaining substantially less
queueing overhead, the Proportional Scheduler achieves maximum throughput
stability.
Comment: 28 pages
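The policies above share a common template: serve the feasible schedule with the largest total weight, where the weight of a queue is a function g of its length, linear for MaxWeight and logarithmic for the Proportional Scheduler. A minimal deterministic sketch of that template (the randomization and the bandwidth-sharing component of the actual policies are omitted, and the toy switch is my own example):

```python
import math

def best_schedule(queues, schedules, g):
    """Pick the schedule s maximizing sum_j g(q_j) * s_j for a weight function g."""
    return max(schedules, key=lambda s: sum(g(q) * sj for q, sj in zip(queues, s)))

# Toy 2x2 input-queued switch: one queue per (input, output) pair,
# and the two perfect matchings as the feasible schedules.
queues = [10, 1, 2, 7]            # q_11, q_12, q_21, q_22
schedules = [(1, 0, 0, 1),        # serve 1->1 and 2->2
             (0, 1, 1, 0)]        # serve 1->2 and 2->1

maxweight = best_schedule(queues, schedules, g=lambda q: q)              # linear weights
proportional = best_schedule(queues, schedules, g=lambda q: math.log(1.0 + q))
print(maxweight, proportional)
```

The multihop advantage described in the abstract comes from where the queues live: with per-link rather than per-route queues, the argument to g is local information only, so no downstream queue lengths need to be compared.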
Using quantum key distribution for cryptographic purposes: a survey
The appealing feature of quantum key distribution (QKD), from a cryptographic
viewpoint, is the ability to prove the information-theoretic security (ITS) of
the established keys. As a key establishment primitive, QKD however does not
provide a standalone security service on its own: the secret keys established
by QKD are in general then used by subsequent cryptographic applications for
which the requirements, the context of use and the security properties can
vary. It is therefore important, in the perspective of integrating QKD in
security infrastructures, to analyze how QKD can be combined with other
cryptographic primitives. The purpose of this survey article, which is mostly
centered on European research results, is to contribute to such an analysis. We
first review and compare the properties of the existing key establishment
techniques, QKD being one of them. We then study more specifically two generic
scenarios related to the practical use of QKD in cryptographic infrastructures:
1) using QKD as a key renewal technique for a symmetric cipher over a
point-to-point link; 2) using QKD in a network containing many users with the
objective of offering any-to-any key establishment service. We discuss the
constraints as well as the potential interest of using QKD in these contexts.
We finally give an overview of challenges relative to the development of QKD
technology that also constitute potential avenues for cryptographic research.
Comment: Revised version of the SECOQC White Paper. Published in the special
issue on QKD of TCS, Theoretical Computer Science (2014), pp. 62-8
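Scenario 1 above, QKD as a key-renewal technique for a symmetric cipher over a point-to-point link, can be sketched as pure key-management logic. In the sketch below the "cipher" is a repeating-XOR stand-in chosen only to keep the code self-contained; it is NOT a secure cipher, the fresh-key source stands in for the QKD key stream, and all names are illustrative:

```python
import secrets

def xor_bytes(a, b):
    """XOR two byte strings (truncated to the shorter length)."""
    return bytes(x ^ y for x, y in zip(a, b))

class RenewedCipherLink:
    """Toy point-to-point link whose symmetric key is periodically renewed
    from a fresh-key source playing the role of the QKD key stream.
    The repeating-XOR 'cipher' below is a stand-in, not real cryptography."""

    def __init__(self, key_source, renewal_period):
        self.key_source = key_source        # callable returning fresh key bytes
        self.renewal_period = renewal_period
        self.sent = 0
        self.key = self.key_source()

    def encrypt(self, plaintext):
        if self.sent >= self.renewal_period:
            self.key = self.key_source()    # renew the key from fresh QKD material
            self.sent = 0
        self.sent += 1
        pad = (self.key * (len(plaintext) // len(self.key) + 1))[:len(plaintext)]
        return xor_bytes(plaintext, pad)

link = RenewedCipherLink(key_source=lambda: secrets.token_bytes(16),
                         renewal_period=100)
ct = link.encrypt(b"hello")
print(len(ct))  # 5
```

Because XOR is an involution, a peer holding the same key sequence decrypts by applying the same operation; with a real cipher (e.g. AES) only the renewal logic would remain, and the security analysis in the survey concerns exactly how such composition affects the keys' information-theoretic guarantees.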