Insecurity of position-based quantum cryptography protocols against entanglement attacks
Recently, position-based quantum cryptography has been claimed to be
unconditionally secure. On the contrary, we show here that the existing proposals
for position-based quantum cryptography are, in fact, insecure if entanglement
is shared among two adversaries. Specifically, we demonstrate how the
adversaries can incorporate ideas of quantum teleportation and quantum secret
sharing to compromise the security with certainty. The common flaw to all
current protocols is that the Pauli operators always map a codeword to a
codeword (up to an irrelevant overall phase). We propose a modified scheme
lacking this property, against which the same cheating strategy used to
undermine the previous protocols succeeds with probability at most 85%. We conjecture that the
modified protocol is unconditionally secure and prove this to be true when the
quantum resource shared between the adversaries is a two- or three-level
system.
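For intuition, the attack exploits the standard teleportation identity together with the codeword-preserving property named above; a brief sketch in LaTeX (the notation is ours, not the authors'):

```latex
% Teleportation leaves the receiver with the input state up to a Pauli
% correction fixed by the Bell-measurement outcome (a,b):
\[
  \lvert\psi\rangle \;\longmapsto\; X^{b} Z^{a}\,\lvert\psi\rangle ,
  \qquad (a,b)\in\{0,1\}^{2}.
\]
% If every Pauli string maps codewords to codewords up to a phase,
\[
  P\,\lvert c\rangle \;=\; e^{i\theta}\,\lvert c'\rangle ,
  \qquad P\in\{I,X,Y,Z\}^{\otimes n},
\]
% then colluding adversaries can teleport the challenge state without yet
% applying the correction: the uncorrected output is still a valid
% codeword, so their responses pass verification with certainty.
```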
On-Line Portfolio Selection with Moving Average Reversion
On-line portfolio selection has recently attracted increasing interest in the
machine learning and AI communities. Empirical evidence shows that a stock's
high and low prices are temporary and stock price relatives are likely to
follow the mean reversion phenomenon. While the existing mean reversion
strategies are shown to achieve good empirical performance on many real
datasets, they often make the single-period mean reversion assumption, which
does not always hold on real data, leading to poor performance when the
assumption is violated. To overcome this limitation, this article proposes
a multiple-period mean reversion, or so-called Moving Average Reversion (MAR),
and a new on-line portfolio selection strategy named "On-Line Moving Average
Reversion" (OLMAR), which exploits MAR by applying powerful online learning
techniques. Our empirical results show that OLMAR overcomes the drawback of
existing mean reversion algorithms and achieves significantly better
results, especially on the datasets where the existing mean reversion
algorithms failed. In addition to superior trading performance, OLMAR also runs
extremely fast, further supporting its practical applicability to a wide range
of applications.
Comment: ICML201
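As a rough illustration of the OLMAR update described above, here is a minimal Python sketch under our own assumptions; the window size `w`, the reversion threshold `eps`, and the choice of the Duchi et al. simplex projection are illustrative, not taken verbatim from the paper:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]                       # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def olmar_update(prices, b, w=5, eps=10.0):
    """One OLMAR step: predict next price relatives from a w-day moving
    average, then take a passive-aggressive step toward portfolios whose
    predicted return exceeds eps.

    prices : (T, n) array of prices up to today, T >= w
    b      : current portfolio (nonnegative, sums to 1)
    """
    ma = prices[-w:].mean(axis=0)              # w-window moving average
    x_pred = ma / prices[-1]                   # predicted price relatives
    x_bar = x_pred.mean()
    denom = np.linalg.norm(x_pred - x_bar) ** 2
    lam = 0.0 if denom == 0 else max(0.0, (eps - b @ x_pred) / denom)
    return project_to_simplex(b + lam * (x_pred - x_bar))
```

Starting from a uniform portfolio, one would call olmar_update once per trading day on the price history observed so far.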
Are the Kepler Near-Resonance Planet Pairs due to Tidal Dissipation?
The multiple-planet systems discovered by the Kepler mission show an excess
of planet pairs with period ratios just wide of exact commensurability for
first-order resonances like 2:1 and 3:2. In principle, these planet pairs could
have both resonance angles associated with the resonance librating if the
orbital eccentricities are sufficiently small, because the width of first-order
resonances diverges in the limit of vanishingly small eccentricity. We consider
a widely held scenario in which pairs of planets were captured into first-order
resonances by migration due to planet-disk interactions and subsequently
became detached from the resonances due to tidal dissipation in the planets.
In the context of this scenario, we find a constraint on the ratio of the
planet's tidal dissipation function and Love number that implies that some of
the Kepler planets are likely solid. However, tides are not strong enough to
move many of the planet pairs to the observed separations, suggesting that
additional dissipative processes are at play.
Comment: 20 pages, including 7 figures; accepted for publication in Ap
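The quoted constraint involves the ratio Q/k_2 through the equilibrium-tide circularization timescale; for orientation, the textbook Goldreich-Soter scaling is sketched below (this is our reading, not necessarily the exact expression used in the paper):

```latex
% Eccentricity damping by tides raised in the planet: tau_e scales
% linearly with Q/k_2, so the time available for resonance detachment
% bounds Q/k_2 directly.
\[
  \tau_e \;\equiv\; -\frac{e}{\dot e}
  \;=\; \frac{2}{21\,n}\,\frac{Q}{k_2}\,\frac{m_p}{M_\ast}
        \left(\frac{a}{R_p}\right)^{5},
\]
% where n is the orbital mean motion, m_p and R_p the planet's mass and
% radius, M_* the stellar mass, and a the semimajor axis.
```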
Prototypical Contrastive Learning of Unsupervised Representations
This paper presents Prototypical Contrastive Learning (PCL), an unsupervised
representation learning method that addresses the fundamental limitations of
instance-wise contrastive learning. PCL not only learns low-level features for
the task of instance discrimination, but more importantly, it implicitly
encodes semantic structures of the data into the learned embedding space.
Specifically, we introduce prototypes as latent variables to help find the
maximum-likelihood estimation of the network parameters in an
Expectation-Maximization framework: the E-step estimates the distribution of
prototypes via clustering, and the M-step optimizes the network via
contrastive learning. We propose the ProtoNCE loss, a generalized
version of the InfoNCE loss for contrastive learning, which encourages
representations to be closer to their assigned prototypes. PCL outperforms
state-of-the-art instance-wise contrastive learning methods on multiple
benchmarks with substantial improvement in low-resource transfer learning. Code
and pretrained models are available at https://github.com/salesforce/PCL
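To make the ProtoNCE idea concrete, here is a minimal PyTorch sketch of the prototype term for a single clustering; the tensor shapes and the name proto_nce are our assumptions, and the official implementation at the linked repository differs in details (for instance, it averages over several clustering granularities and adds the instance-wise InfoNCE term):

```python
import torch
import torch.nn.functional as F

def proto_nce(embeddings, prototypes, assignments, phi):
    """Prototype term of ProtoNCE for one clustering.

    embeddings : (B, D) L2-normalized features
    prototypes : (K, D) L2-normalized cluster centroids
    assignments: (B,)   long tensor, each sample's cluster index
    phi        : (K,)   per-prototype concentration, replacing the single
                        InfoNCE temperature
    """
    # Similarity of every embedding to every prototype, scaled per cluster.
    logits = embeddings @ prototypes.t() / phi        # (B, K)
    # Cross-entropy against the assigned prototype is the -log softmax
    # form of the InfoNCE-style objective, pulling each embedding
    # toward its prototype.
    return F.cross_entropy(logits, assignments)
```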