Phase-Remapping Attack in Practical Quantum Key Distribution Systems
Quantum key distribution (QKD) can be used to generate secret keys between
two distant parties. Even though QKD has been proven unconditionally secure
against eavesdroppers with unlimited computational power, practical
implementations of QKD may contain loopholes that allow the generated
secret keys to be compromised. In this paper, we propose a phase-remapping
attack targeting two practical bidirectional QKD systems (the "plug & play"
system and the Sagnac system). We show that if the users of these systems are
unaware of our attack, the final key shared between them can be compromised in
some situations. Specifically, we show that, in the case of the
Bennett-Brassard 1984 (BB84) protocol with ideal single-photon sources, when
the quantum bit error rate (QBER) is between 14.6% and 20%, our attack renders
the final key insecure, whereas the same range of QBER values had previously
been proved secure in analyses that do not take our attack into account. We
also demonstrate three situations with realistic devices in which, when
Trojan-horse attacks are ignored, positive key rates are obtained even though
in fact no key can be distilled.
We remark that our attack is feasible with current technology alone. It is
therefore important to be aware of this attack in order to guarantee security
in practice. In finding our attack, we minimize the QBER over individual
measurements described by a general POVM, which has some similarity with the
standard quantum state discrimination problem.
Comment: 13 pages, 8 figures
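The QBER figures above are estimated from the sifted key in BB84. As a minimal illustration of that estimation step (not of the phase-remapping attack itself), the sketch below simulates BB84 basis sifting over a hypothetical binary-symmetric channel; `flip_prob` and all other names are illustrative assumptions, not parameters from the paper.

```python
import random

def bb84_qber(n_pulses=20000, flip_prob=0.15, seed=1):
    """Toy BB84 sifting: estimate the quantum bit error rate (QBER) from
    the sifted key. Channel noise is modeled as independent bit flips
    with probability `flip_prob` (an illustrative assumption)."""
    rng = random.Random(seed)
    errors, sifted = 0, 0
    for _ in range(n_pulses):
        bit = rng.randint(0, 1)          # Alice's raw key bit
        alice_basis = rng.randint(0, 1)  # 0: rectilinear, 1: diagonal
        bob_basis = rng.randint(0, 1)
        if alice_basis != bob_basis:
            continue                     # discarded during sifting
        sifted += 1
        received = bit ^ int(rng.random() < flip_prob)
        errors += int(received != bit)
    return errors / sifted

# An estimate near 0.15 falls inside the 14.6%-20% window the paper
# shows to be insecure under the attack yet previously proved secure.
qber = bb84_qber()
```

On average about half the pulses survive sifting, so the estimate here is based on roughly 10,000 sifted bits.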
Discriminated Belief Propagation
Near optimal decoding of good error control codes is generally a difficult
task. However, for a certain type of (sufficiently) good codes an efficient
decoding algorithm with near optimal performance exists. These codes are
defined via a combination of constituent codes with low complexity trellis
representations. Their decoding algorithm is an instance of (loopy) belief
propagation and is based on an iterative transfer of constituent beliefs. The
beliefs are thereby given by the symbol probabilities computed in the
constituent trellises. Even though weak constituent codes are employed,
close-to-optimal performance is obtained, i.e., the encoder/decoder pair (almost)
achieves the information theoretic capacity. However, (loopy) belief
propagation only performs well for a rather specific set of codes, which limits
its applicability.
In this paper a generalisation of iterative decoding is presented. It is
proposed to transfer more values than just the constituent beliefs. This is
achieved by the transfer of beliefs obtained by independently investigating
parts of the code space. This leads to the concept of discriminators, which are
used to improve the decoder resolution within certain areas and which define
discriminated symbol beliefs. It is shown that these beliefs approximate the
overall symbol probabilities. This leads to an iteration rule that (below
channel capacity) typically only admits the solution of the overall decoding
problem. Via a Gaussian approximation, a low-complexity version of this
algorithm is derived. Moreover, the approach may then be applied to a wide
range of channel maps without a significant increase in complexity.
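As a concrete reference point for the message passing being generalised here, the following is a minimal sum-product (loopy belief propagation) decoder for a binary linear code given its parity-check matrix. This is the standard algorithm, not the discriminated variant the paper proposes, and all identifiers are illustrative.

```python
import math

def sum_product_decode(H, llr, iters=20):
    """Standard loopy belief propagation (sum-product) decoding for a
    binary code with parity-check matrix H. `llr` holds channel
    log-likelihood ratios; positive values favour bit 0."""
    m, n = len(H), len(llr)
    # Variable-to-check messages, initialised with the channel beliefs.
    v2c = {(i, j): llr[j] for i in range(m) for j in range(n) if H[i][j]}
    for _ in range(iters):
        # Check-to-variable update: tanh rule over the other neighbours.
        c2v = {}
        for i in range(m):
            cols = [j for j in range(n) if H[i][j]]
            for j in cols:
                prod = 1.0
                for k in cols:
                    if k != j:
                        prod *= math.tanh(v2c[(i, k)] / 2)
                prod = max(min(prod, 0.999999), -0.999999)  # numerical guard
                c2v[(i, j)] = 2 * math.atanh(prod)
        # Total symbol beliefs, then variable-to-check update.
        belief = list(llr)
        for (i, j), msg in c2v.items():
            belief[j] += msg
        v2c = {(i, j): belief[j] - c2v[(i, j)] for (i, j) in c2v}
        hard = [int(b < 0) for b in belief]
        if all(sum(H[i][j] * hard[j] for j in range(n)) % 2 == 0
               for i in range(m)):
            break  # all parity checks satisfied
    return hard

# (7,4) Hamming code: all-zero codeword with the first bit flipped.
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]
decoded = sum_product_decode(H, [-2.0] + [2.0] * 6)
```

The beliefs accumulated in `belief` are exactly the constituent symbol probabilities (in log-likelihood form) that the abstract's iterative transfer is built on.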
Fairness Testing: Testing Software for Discrimination
This paper defines software fairness and discrimination and develops a
testing-based method for measuring if and how much software discriminates,
focusing on causality in discriminatory behavior. Evidence of software
discrimination has been found in modern software systems that recommend
criminal sentences, grant access to financial products, and determine who is
allowed to participate in promotions. Our approach, Themis, generates efficient
test suites to measure discrimination. Given a schema describing valid system
inputs, Themis generates discrimination tests automatically and does not
require an oracle. We evaluate Themis on 20 software systems, 12 of which come
from prior work with explicit focus on avoiding discrimination. We find that
(1) Themis is effective at discovering software discrimination, (2)
state-of-the-art techniques for removing discrimination from algorithms fail in
many situations, at times discriminating against as much as 98% of an input
subdomain, (3) Themis optimizations are effective at producing efficient test
suites for measuring discrimination, and (4) Themis is more efficient on
systems that exhibit more discrimination. We thus demonstrate that fairness
testing is a critical aspect of the software development cycle in domains with
possible discrimination and provide initial tools for measuring software
discrimination.
Comment: Sainyam Galhotra, Yuriy Brun, and Alexandra Meliou. 2017. Fairness
Testing: Testing Software for Discrimination. In Proceedings of the 2017 11th
Joint Meeting of the European Software Engineering Conference and the ACM
SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE'17),
Paderborn, Germany, September 4-8, 2017. https://doi.org/10.1145/3106237.3106277
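The schema-driven, oracle-free measurement described above can be sketched as follows. This is a simplified illustration of causal discrimination testing in the spirit of Themis, not its actual implementation; the loan-approval example and every identifier are hypothetical.

```python
import random

def causal_discrimination(software, schema, sensitive, trials=1000, seed=0):
    """Estimate causal discrimination: the fraction of randomly generated
    valid inputs whose output changes when only the sensitive attribute
    is altered. `schema` maps attribute names to lists of valid values.
    A simplified sketch, not the actual Themis algorithm."""
    rng = random.Random(seed)
    changed = 0
    for _ in range(trials):
        inp = {a: rng.choice(vals) for a, vals in schema.items()}
        base = software(inp)
        for alt in schema[sensitive]:
            if alt != inp[sensitive] and software({**inp, sensitive: alt}) != base:
                changed += 1
                break
    return changed / trials

# Hypothetical loan-approval software that discriminates on `gender`.
def biased_loan(inp):
    return inp["income"] > 40 and inp["gender"] == "M"

schema = {"income": list(range(0, 101, 10)), "gender": ["M", "F"]}
score = causal_discrimination(biased_loan, schema, "gender")
```

No oracle is needed: the test only compares the system's output on an input against its output on the same input with the sensitive attribute flipped.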
Programmable quantum state discriminator by Nuclear Magnetic Resonance
In this paper a programmable quantum state discriminator is implemented by
using nuclear magnetic resonance. We use a two qubit spin-1/2 system, one for
the data qubit and one for the ancilla (programme) qubit. The device performs
unambiguous (error-free) discrimination of a pair of data-qubit states that are
located symmetrically about a fixed state. It is used to discriminate both
linearly and elliptically polarized states. The maximum probability of
successful discrimination is achieved by suitably preparing the ancilla qubit.
It is also shown that the probability of discrimination depends on the angle of
the protocol's unitary operator and on the ellipticity of the data-qubit state.
Comment: 22 pages, 9 figures
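For two pure states located symmetrically about a fixed state, the optimal success probability of unambiguous discrimination is one minus the magnitude of their overlap (the Ivanovic-Dieks-Peres bound, assuming equal priors). The sketch below uses an illustrative parametrisation of such a symmetric pair; it computes the bound, not the NMR pulse sequence itself.

```python
import math

def usd_success_probability(theta):
    """Optimal unambiguous-discrimination success probability for the
    pair cos(theta)|0> +/- sin(theta)|1>, symmetric about |0>:
    P = 1 - |<psi+|psi->| = 1 - |cos(2*theta)| (IDP bound, equal priors)."""
    psi_plus = (math.cos(theta), math.sin(theta))
    psi_minus = (math.cos(theta), -math.sin(theta))
    overlap = sum(a * b for a, b in zip(psi_plus, psi_minus))
    return 1.0 - abs(overlap)

# At theta = pi/4 the states are orthogonal and discrimination is certain;
# as theta -> 0 they coincide and the success probability vanishes.
p = usd_success_probability(math.pi / 8)
```

Preparing the ancilla qubit appropriately is what lets the NMR device attain this optimum.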
Learning to Discriminate Through Long-Term Changes of Dynamical Synaptic Transmission
Short-term synaptic plasticity is modulated by long-term synaptic
changes. There is, however, no general agreement on the computational
role of this interaction. Here, we derive a learning rule for the release
probability and the maximal synaptic conductance in a circuit model
with combined recurrent and feedforward connections that allows learning
to discriminate among natural inputs. Short-term synaptic plasticity
thereby provides a nonlinear expansion of the input space of a linear
classifier, whereas the random recurrent network serves to decorrelate
the expanded input space. Computer simulations reveal that the twofold
increase in the number of input dimensions through short-term synaptic
plasticity improves the performance of a standard perceptron up to 100%.
The distributions of release probabilities and maximal synaptic conductances
at the capacity limit strongly depend on the balance between excitation
and inhibition. The model also suggests a new computational
interpretation of spikes evoked by stimuli outside the classical receptive
field. These neuronal activities may reflect decorrelation of the expanded
stimulus space by intracortical synaptic connections.
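The twofold input expansion can be illustrated with a toy stand-in: append one saturating feature per input channel (a crude abstraction of synaptic depression, not the paper's learning rule for release probability and maximal conductance) and feed the doubled input to a plain perceptron. All functions and parameters here are illustrative assumptions.

```python
def stp_expand(x, tau=1.0):
    """Append a saturating feature per input channel, a crude stand-in
    for short-term synaptic depression (tau is an illustrative
    time-constant-like parameter). Doubles the input dimension."""
    return x + [xi / (1.0 + tau * xi) for xi in x]

def train_perceptron(samples, labels, epochs=40000, lr=0.1):
    """Plain perceptron; returns the number of training errors left."""
    w = [0.0] * (len(samples[0]) + 1)  # last weight is the bias
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = int(sum(wi * xi for wi, xi in zip(w, x + [1.0])) > 0)
            if pred != y:
                for i, xi in enumerate(x + [1.0]):
                    w[i] += lr * (y - pred) * xi
    return sum(int(sum(wi * xi for wi, xi in zip(w, x + [1.0])) > 0) != y
               for x, y in zip(samples, labels))

# A 1-D task with non-monotone labels: unsolvable for a linear unit on
# the raw input, but separable after the twofold expansion because the
# saturating feature traces a strictly concave curve.
xs, ys = [[1.0], [2.0], [3.0], [4.0]], [0, 1, 1, 0]
raw_errors = train_perceptron(xs, ys)
exp_errors = train_perceptron([stp_expand(x) for x in xs], ys)
```

The expanded points lie on a concave curve, so the middle (class-1) points sit strictly above the chord joining the outer points, and a linear separator exists in the doubled space.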
Setting-up early computer programs: D. H. Lehmer's ENIAC computation
A complete reconstruction of Lehmer's ENIAC set-up for computing the exponent of 2 modulo a prime p is given. This program served as an early test program for the ENIAC (1946). The reconstruction illustrates the difficulties early programmers faced in finding a way between a man-operated and a machine-operated computation. These difficulties concern both the content level (the algorithm) and the formal level (the logic of sequencing operations).
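The number-theoretic quantity behind Lehmer's computation, the exponent of 2 modulo an odd prime p (the smallest e with 2^e ≡ 1 mod p), is easy to state in modern code; this direct sketch is of course nothing like the ENIAC set-up the paper reconstructs.

```python
def exponent_of_two(p):
    """Smallest e > 0 with 2**e congruent to 1 (mod p), for odd p > 1 --
    the quantity Lehmer's 1946 ENIAC test program tabulated."""
    e, r = 1, 2 % p
    while r != 1:
        r = (2 * r) % p
        e += 1
    return e

# By Fermat's little theorem the exponent divides p - 1 for prime p.
print(exponent_of_two(7))  # -> 3, since 2**3 = 8 = 7 + 1
```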