1,105 research outputs found
Proposal for optical parity state re-encoder
We propose a re-encoder that generates a refreshed parity-encoded state from an
existing parity-encoded state. This is the simplest case of the scheme by
Gilchrist et al. (Phys. Rev. A 75, 052328). We show that parity-encoded quantum
gates and teleportation can be demonstrated with existing technology.
Comment: 8 pages, 4 figures
Probabilistic Quantum Logic Operations Using Polarizing Beam Splitters
It has previously been shown that probabilistic quantum logic operations can
be performed using linear optical elements, additional photons (ancilla), and
post-selection based on the output of single-photon detectors. Here we describe
several elementary quantum logic operations, including a quantum parity check
and a quantum encoder, and we show how they
can be combined to implement a controlled-NOT (CNOT) gate. All of these gates
can be constructed using polarizing beam splitters that completely transmit one
state of polarization and totally reflect the orthogonal state of polarization,
which allows a simple explanation of each operation. We also describe a
polarizing beam splitter implementation of a CNOT gate that is closely
analogous to the quantum teleportation technique previously suggested by
Gottesman and Chuang [Nature 402, p.390 (1999)]. Finally, our approach has the
interesting feature that it makes practical use of a quantum-eraser technique.
Comment: 9 pages, RevTeX; submitted to Phys. Rev. A; additional references included
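As an illustration of the post-selection logic described above, here is a minimal sketch (not the authors' construction, with names chosen for this example) of a polarization parity check, assuming an idealised PBS that transmits horizontal (H) and reflects vertical (V) polarization; keeping only events with exactly one photon in each output port post-selects the even-parity subspace spanned by |HH> and |VV>.

import itertools

H, V = 0, 1  # polarization labels

def pbs_output_port(in_port, pol):
    """Idealised PBS: transmit H (keep the port index), reflect V (swap it)."""
    return in_port if pol == H else 1 - in_port

def parity_check(amplitudes):
    """amplitudes: {(pol_a, pol_b): amplitude} for one photon in each of the
    input ports a (=0) and b (=1). Returns the unnormalised amplitudes that
    survive post-selection on one photon per output port."""
    kept = {}
    for (pol_a, pol_b), amp in amplitudes.items():
        out_a = pbs_output_port(0, pol_a)
        out_b = pbs_output_port(1, pol_b)
        if out_a != out_b:              # coincidence: one photon in each output
            kept[(pol_a, pol_b)] = amp
    return kept

# Example: |+>_a |+>_b = (|HH> + |HV> + |VH> + |VV>) / 2.
# Post-selection keeps only the even-parity terms |HH> and |VV>.
psi_in = {pols: 0.5 for pols in itertools.product((H, V), repeat=2)}
print(parity_check(psi_in))             # {(0, 0): 0.5, (1, 1): 0.5}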
Loss Tolerant Optical Qubits
We present a linear optics quantum computation scheme built on a new encoding
approach that incrementally adds qubits and is tolerant to photon loss errors.
The scheme employs a circuit model but uses techniques from cluster-state
computation and achieves comparable resource usage. To illustrate our
techniques, we describe a quantum memory that is fault tolerant to photon loss.
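One property of the parity code (in the convention sketched above) that underlies such loss-tolerant constructions, including the re-encoder proposal above, is that a computational-basis measurement of any single physical qubit maps an n-qubit encoded state to the same logical state on the remaining n-1 qubits, up to a logical bit flip heralded by the measurement outcome:

\[
\alpha|0\rangle_L^{(n)} + \beta|1\rangle_L^{(n)}
\;\longrightarrow\;
\begin{cases}
\alpha|0\rangle_L^{(n-1)} + \beta|1\rangle_L^{(n-1)}, & \text{outcome } 0,\\
\alpha|1\rangle_L^{(n-1)} + \beta|0\rangle_L^{(n-1)}, & \text{outcome } 1.
\end{cases}
\]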
Loss-tolerant operations in parity-code linear optics quantum computing
A central focus for optical quantum computing is the introduction of error
correction and the minimisation of resource requirements. We detail a complete
encoding and manipulation scheme designed for linear optics quantum computing,
incorporating scalable operations and a loss-tolerant architecture.
Comment: 8 pages, 6 figures
Protecting Quantum Information with Entanglement and Noisy Optical Modes
We incorporate active and passive quantum error-correcting techniques to
protect a set of optical information modes of a continuous-variable quantum
information system. Our method uses ancilla modes, entangled modes, and gauge
modes (modes in a mixed state) to help correct errors on a set of information
modes. A linear-optical encoding circuit consisting of offline squeezers,
passive optical devices, feedforward control, conditional modulation, and
homodyne measurements performs the encoding. The result is that we extend the
entanglement-assisted operator stabilizer formalism for discrete variables to
continuous-variable quantum information processing.
Comment: 7 pages, 1 figure
High-Fidelity Z-Measurement Error Correction of Optical Qubits
We demonstrate a quantum error correction scheme that protects against
accidental measurement, in which the logical state of a single qubit is encoded
into two physical qubits using a non-deterministic photonic CNOT gate. For the
single-qubit input states |0>, |1>, |0>+|1>, |0>-|1>, |0>+i|1>, and |0>-i|1>,
our encoder produces the appropriate two-qubit encoded state with an average
fidelity of 0.88(3), and the single-qubit decoded states
have an average fidelity of 0.93(5) with the original state. We are able to
decode the two-qubit state (up to a bit flip) by performing a measurement on
one of the qubits in the logical basis; we find that the 64 single-qubit
decoded states arising from 16 real and imaginary single-qubit superposition
inputs have an average fidelity of 0.96(3).
Comment: 4 pages, 4 figures, comments welcome
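Concretely, in the two-qubit parity-code convention sketched earlier (the experiment's exact basis choices may differ), the encoder implements

\[
\alpha|0\rangle + \beta|1\rangle
\;\longrightarrow\;
\alpha\,\tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)
+ \beta\,\tfrac{1}{\sqrt{2}}\bigl(|01\rangle + |10\rangle\bigr),
\]

and a computational-basis measurement of either physical qubit returns the other qubit to α|0> + β|1> for outcome 0, or to the bit-flipped state α|1> + β|0> for outcome 1, matching the decode-up-to-a-bit-flip behaviour reported above.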
Geometry meets semantics for semi-supervised monocular depth estimation
Depth estimation from a single image is a challenging problem in computer
vision. While other image-based depth sensing techniques leverage the geometry
between different viewpoints (e.g., stereo or structure from motion), the lack
of these cues within a single image renders the monocular depth estimation task
ill-posed. At inference time, state-of-the-art encoder-decoder architectures
for monocular depth estimation rely on effective feature representations
learned at training time. For unsupervised training of these models, geometry
has been exploited effectively through image-warping losses computed from views
acquired by a stereo rig or a moving camera. In this paper, we take a further
step forward and show that learning semantic information from images also
improves monocular depth estimation. In particular, by leveraging semantically
labeled images together with the unsupervised signal provided by geometry
through an image-warping loss, we propose a deep learning approach aimed at
joint semantic segmentation and depth estimation. Our overall learning
framework is semi-supervised, as we deploy ground-truth data only in the
semantic domain. At training time, our network learns a common feature
representation for both tasks, and we propose a novel cross-task loss function.
The experimental findings show that jointly tackling depth prediction and
semantic segmentation improves depth estimation accuracy. In particular, on the
KITTI dataset our network outperforms state-of-the-art methods for monocular
depth estimation.
Comment: 16 pages, accepted to ACCV 2018
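To make the training signal concrete, the following is a minimal PyTorch-style sketch of the generic semi-supervised objective described above: a self-supervised photometric image-warping term plus a supervised semantic cross-entropy term. It is a schematic illustration rather than the authors' implementation; the warp_with_disparity helper, the loss weights, and the sign conventions are assumptions made for this example, and the paper's proposed cross-task loss is omitted because its definition is not given here.

# Schematic semi-supervised loss: photometric warping term (self-supervised)
# plus semantic cross-entropy term (supervised). Not the authors' code.
import torch
import torch.nn.functional as F

def warp_with_disparity(img, disp):
    """Warp `img` (B, C, H, W) horizontally by a normalised disparity map
    `disp` (B, 1, H, W) expressed in [-1, 1] grid coordinates. Illustrative
    helper; sign conventions depend on the stereo rig."""
    B, C, H, W = img.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=img.device),
        torch.linspace(-1, 1, W, device=img.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, -1, -1, -1).clone()
    grid[..., 0] = grid[..., 0] - disp.squeeze(1)   # shift x-coordinates
    return F.grid_sample(img, grid, align_corners=True)

def joint_loss(left, right, disp_pred, sem_logits, sem_labels,
               w_photo=1.0, w_sem=0.1):
    """left/right: stereo images (B, 3, H, W); disp_pred: predicted disparity;
    sem_logits: (B, num_classes, H, W); sem_labels: (B, H, W) class indices."""
    # Self-supervised term: reconstruct the left view from the right one.
    right_warped = warp_with_disparity(right, disp_pred)
    photo = torch.mean(torch.abs(right_warped - left))   # L1 photometric loss
    # Supervised term: pixel-wise semantic segmentation.
    sem = F.cross_entropy(sem_logits, sem_labels)
    return w_photo * photo + w_sem * sem                  # cross-task term omitted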