Measurement dependent locality
The demonstration and use of Bell nonlocality, a concept that is both
fundamentally striking and at the core of applications in device-independent
quantum information processing, relies heavily on the assumption of measurement
independence, also called the assumption of free choice. The latter cannot be
verified or guaranteed. In this paper, we consider a relaxation of the
measurement independence assumption. We briefly review the results of Phys.
Rev. Lett. 113, 190402 (2014), which show that with our relaxation, the set of
so-called measurement dependent local (MDL) correlations is a polytope, i.e. it
can be fully described using a finite set of linear inequalities. Here we
analyze this polytope, first in the simplest case of 2 parties with binary
inputs and outputs, for which we give a full characterization. We show that
partially entangled states are preferable to the maximally entangled state when
dealing with measurement dependence in this scenario. We further present a
method that transforms any Bell inequality into an MDL inequality, and give
valid inequalities for an arbitrary number of parties as well as one for an
arbitrary number of inputs. We introduce the assumption of independent
sources in the measurement dependence scenario and give a full analysis for the
bipartite scenario with binary inputs and outputs. Finally, we establish a link
between measurement dependence and another strong hindrance in certifying
nonlocal correlations: nondetection events.
Comment: 16+7 pages, 2 figures
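As a concrete point of reference for the Bell-inequality machinery discussed above, the CHSH value of a bipartite binary-input/output behaviour can be computed directly from its conditional distribution p(ab|xy). The sketch below is a generic illustration, not this paper's MDL construction; the PR-box example is the standard maximally nonlocal no-signalling behaviour:

```python
import itertools

def chsh_value(p):
    """CHSH value S = E00 + E01 + E10 - E11 for a behaviour given as
    p[(a, b, x, y)] = p(ab|xy) with a, b, x, y in {0, 1}."""
    def E(x, y):
        # Correlator E_xy = sum_{a,b} (-1)^(a XOR b) p(ab|xy)
        return sum(((-1) ** (a ^ b)) * p[(a, b, x, y)]
                   for a, b in itertools.product((0, 1), repeat=2))
    return E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)

# PR box: outputs satisfy a XOR b = x AND y with uniform marginals.
pr = {(a, b, x, y): (0.5 if (a ^ b) == (x & y) else 0.0)
      for a, b, x, y in itertools.product((0, 1), repeat=4)}
```

Local behaviours satisfy S <= 2, quantum ones S <= 2*sqrt(2), and the PR box reaches S = 4; MDL inequalities of the kind studied here shift such bounds as a function of the allowed measurement dependence.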
Upper Bounds on the Rate of Low Density Stabilizer Codes for the Quantum Erasure Channel
Using combinatorial arguments, we determine an upper bound on achievable
rates of stabilizer codes used over the quantum erasure channel. This allows us
to recover the no-cloning bound on the capacity of the quantum erasure channel,
R ≤ 1 - 2p, for stabilizer codes; we also derive an improved upper bound of the
form R ≤ 1 - 2p - D(p), with a function D(p) that stays positive for
0 < p < 1/2, for any family of stabilizer codes whose generators have
weights bounded from above by a constant (low-density stabilizer codes).
We obtain an application to percolation theory for a family of self-dual
tilings of the hyperbolic plane. We associate a family of low density
stabilizer codes with appropriate finite quotients of these tilings. We then
relate the probability of percolation to the probability of a decoding error
for these codes on the quantum erasure channel. The application of our upper
bound on achievable rates of low density stabilizer codes gives rise to an
upper bound on the critical probability for these tilings.
Comment: 32 pages
Asymptotic Preserving numerical schemes for multiscale parabolic problems
We consider a class of multiscale parabolic problems with diffusion
coefficients oscillating in space at a possibly small scale. Numerical
homogenization methods are popular for such problems because they efficiently
capture the asymptotic behaviour as the oscillation scale tends to zero,
without using a dramatically fine spatial discretization at the scale of the
fast oscillations. However, such known homogenization schemes are in general
not accurate in both the highly oscillatory regime and the non-oscillatory
regime. In this paper, we introduce an Asymptotic Preserving method based on an
exact micro-macro decomposition of the solution, which remains consistent in
both regimes.
Comment: 7 pages, to appear in C. R. Acad. Sci. Paris; Ser.
A Construction of Quantum LDPC Codes from Cayley Graphs
We study a construction of Quantum LDPC codes proposed by MacKay, Mitchison
and Shokrollahi. It is based on the Cayley graph of F_2^n together with a set
of generators regarded as the columns of the parity-check matrix of a classical
code.
We give a general lower bound on the minimum distance of the quantum code
in terms of the minimum distance d of the classical code. When the classical
code is the repetition code, we are able to compute the exact parameters of the
associated quantum code.
Comment: The material in this paper was presented in part at ISIT 2011. This
article is published in IEEE Transactions on Information Theory. We point out
that the second step of the proof of Proposition VI.2 in the published
version (Proposition 25 in the present version and Proposition 18 in the ISIT
extended abstract) is not strictly correct. This issue is addressed in the
present version.
Intensity Correlation between Observations at Different Wavelengths for Mkn 501 in 1997
The CAT imaging telescope on the site of the former solar plant Thémis in
southern France observed gamma-rays from the BL Lac object Mkn 501 above 250 GeV
for more than 60 usable hours on-source from March to October 1997. This source
was in a state of high activity throughout this period. By studying the
correlation between the photons of different energies detected by the CAT
imaging telescope and by the ASM/RXTE experiment (1.3-12.0 keV) on board the
Rossi X-Ray Timing Explorer, we may constrain the mechanisms which could lead
to the emission of these photons.
Comment: Proceedings of the 19th Texas Symposium. 8 pages, 7 figures
Cache policies for cloud-based systems: To keep or not to keep
In this paper, we study cache policies for cloud-based caching. Cloud-based
caching uses cloud storage services such as Amazon S3 as a cache for data items
that would have been recomputed otherwise. Cloud-based caching departs from
classical caching: cloud resources are potentially infinite and only paid when
used, while classical caching relies on a fixed storage capacity and its main
monetary cost comes from the initial investment. To deal with this new context,
we design and evaluate a new caching policy that minimizes the overall cost of
a cloud-based system. The policy takes into account the frequency of
consumption of an item and the cloud cost model. We show that this policy is
easier to operate, that it scales with the demand and that it outperforms
classical policies managing a fixed capacity.
Comment: Proceedings of IEEE International Conference on Cloud Computing 2014 (CLOUD 14)
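A keep-or-evict rule of the kind the abstract describes can be sketched as a simple cost comparison. This is an illustrative assumption, not the paper's actual policy: the function name, the per-GB-month pricing model, and the access-frequency estimate are all invented for the example, and the real policy also accounts for cloud request pricing and online frequency estimation:

```python
def keep_in_cloud_cache(storage_cost_per_gb_month, size_gb,
                        expected_accesses_per_month, recompute_cost):
    """Keep an item cached iff its monthly storage bill is lower than
    the expected monthly cost of recomputing it on every miss.
    (Illustrative cost model only.)"""
    storage_cost = storage_cost_per_gb_month * size_gb
    expected_recompute_cost = expected_accesses_per_month * recompute_cost
    return storage_cost < expected_recompute_cost
```

Under this rule a small, frequently read item is kept, while a large, rarely read item is dropped and recomputed on demand; unlike a fixed-capacity policy, the decision never depends on what else is in the cache.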
Compressive Spectral Clustering
Spectral clustering has become a popular technique due to its high
performance in many contexts. It comprises three main steps: create a
similarity graph between N objects to cluster, compute the first k eigenvectors
of its Laplacian matrix to define a feature vector for each object, and run
k-means on these features to separate objects into k classes. Each of these
three steps becomes computationally intensive for large N and/or k. We propose
to speed up the last two steps based on recent results in the emerging field of
graph signal processing: graph filtering of random signals, and random sampling
of bandlimited graph signals. We prove that our method, with a gain in
computation time that can reach several orders of magnitude, is in fact an
approximation of spectral clustering, for which we are able to control the
error. We test the performance of our method on artificial and real-world
network data.
Comment: 12 pages, 2 figures
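The three steps the abstract lists (similarity graph, Laplacian eigenvectors, k-means) can be sketched for k = 2 on a toy graph. This is plain spectral clustering, not the paper's compressive variant; the two-clique graph is an invented example, and thresholding the Fiedler vector stands in for the k-means step:

```python
import numpy as np

# Step 1: toy similarity graph, two 5-node cliques joined by one weak edge.
N = 10
A = np.zeros((N, N))
A[:5, :5] = 1.0
A[5:, 5:] = 1.0
A[4, 5] = A[5, 4] = 0.1
np.fill_diagonal(A, 0.0)

# Step 2: first k eigenvectors of the combinatorial Laplacian L = D - A.
L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)

# Step 3 (for k = 2): the sign pattern of the Fiedler vector (second
# eigenvector) separates the two clusters.
labels = (eigvecs[:, 1] > 0).astype(int)
```

For large N and k one would instead filter random signals and subsample, as the abstract describes, since computing eigenvectors exactly costs far more than the O(k log k)-sample approximations.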
High numerical aperture holographic microscopy reconstruction with extended z range
A holographic microscopy reconstruction method compatible with a high
numerical aperture microscope objective (MO) up to NA=1.4 is proposed. After
off-axis and reference field curvature corrections, and after selection of the
+1 grating order holographic image, a phase mask that transforms the optical
elements of the holographic setup into an afocal device is applied in the
camera plane. The reconstruction is then made by the angular spectrum method.
The field is first propagated in the image half space from the camera to the
afocal image of the MO optimal plane (plane for which MO has been designed) by
using a quadratic kernel. The field is then propagated from the MO optimal
plane to the object with the exact kernel. Calibration of the reconstruction is
made by imaging a calibrated object, such as a USAF resolution target, at
different positions along the z axis. Once the calibration is done, the
reconstruction can be made for an object located in any plane. The
reconstruction method has been validated experimentally with a USAF target
imaged with a NA=1.4 microscope objective. Near-optimal resolution is obtained
over an extended range of z locations.
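The angular spectrum propagation step the abstract refers to can be sketched in isolation. This minimal version propagates a scalar field over a distance z in free space with the exact (non-paraxial) kernel; the grid size, pixel pitch, and wavelength are illustrative assumptions, and the setup-specific phase mask and quadratic-kernel stage are omitted:

```python
import numpy as np

def angular_spectrum_propagate(field, dx, wavelength, z):
    """Propagate a 2-D complex field by distance z (same units as dx)
    with the angular spectrum method and the exact kernel."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)              # spatial frequencies (cycles/unit)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))      # evanescent components dropped
    H = np.exp(1j * kz * z) * (kz_sq > 0)     # exact free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the transfer function is a pure phase on propagating components, propagating forward by z and back by -z returns the propagating part of the field, which is a convenient sanity check for an implementation.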
Random sampling of bandlimited signals on graphs
We study the problem of sampling k-bandlimited signals on graphs. We propose
two sampling strategies that consist in selecting a small subset of nodes at
random. The first strategy is non-adaptive, i.e., independent of the graph
structure, and its performance depends on a parameter called the graph
coherence. In contrast, the second strategy is adaptive but yields optimal
results. Indeed, no more than O(k log(k)) measurements are sufficient to ensure
an accurate and stable recovery of all k-bandlimited signals. This second
strategy is based on a careful choice of the sampling distribution, which can
be estimated quickly. Then, we propose a computationally efficient decoder to
reconstruct k-bandlimited signals from their samples. We prove that it yields
accurate reconstructions and that it is also stable to noise. Finally, we
conduct several experiments to test these techniques.
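The sampling-and-recovery pipeline the abstract describes can be sketched on a toy graph. This is an illustrative assumption-laden version: the path graph, the uniform (non-adaptive) sampling, and the plain least-squares decoder are stand-ins, not the paper's coherence-optimized distribution or its computationally efficient decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: path graph on 20 nodes; its Laplacian eigenvectors play
# the role of the graph Fourier basis.
n, k, m = 20, 3, 8
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
_, U = np.linalg.eigh(L)
Uk = U[:, :k]                        # span of the k-bandlimited signals

x = Uk @ rng.standard_normal(k)      # a random k-bandlimited signal

# Non-adaptive strategy: sample m nodes uniformly at random.
nodes = rng.choice(n, size=m, replace=False)
y = x[nodes]

# Decoder: find bandlimited coefficients matching the samples
# (least squares here; the paper's decoder avoids forming Uk).
alpha, *_ = np.linalg.lstsq(Uk[nodes, :], y, rcond=None)
x_hat = Uk @ alpha
```

With m ≥ k samples in general position the restricted basis has full column rank, so the noiseless recovery is exact; the paper's analysis quantifies how many random samples suffice for this to hold stably, with and without adapting the sampling distribution to the graph.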