Control of electron spin decoherence caused by electron-nuclear spin dynamics in a quantum dot
Control of electron spin decoherence in contact with a mesoscopic bath of
many interacting nuclear spins in an InAs quantum dot is studied by solving the
coupled quantum dynamics. The nuclear spin bath, because of its bifurcated
evolution predicated on the electron spin up or down state, measures the
which-state information of the electron spin and hence diminishes its
coherence. The many-body dynamics of the nuclear spin bath is solved with a
pair-correlation approximation. On the relevant timescale, nuclear pairwise
flip-flops, as elementary excitations in the mesoscopic bath, can be mapped
into the precession of non-interacting pseudo-spins. Such mapping provides a
geometrical picture for understanding the decoherence and for devising control
schemes. A close examination of nuclear bath dynamics reveals a wealth of
phenomena and new possibilities of controlling the electron spin decoherence.
For example, when the electron spin is flipped by a π-pulse at time τ, its
coherence will partially recover at time 2τ as a consequence of quantum
disentanglement from the mesoscopic bath. In contrast to the re-focusing of
inhomogeneously broadened phases by conventional spin-echoes, the
disentanglement is realized through shepherding quantum evolution of the bath
state via control of the quantum object. A concatenated construction of pulse
sequences can eliminate the decoherence with arbitrary accuracy, with the
nuclear-nuclear spin interaction strength acting as the controlling small
parameter.
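The pseudo-spin picture above can be made concrete with a small numerical toy. This is not the paper's full pair-correlation treatment of an InAs dot: the number of pseudo-spins, their frequencies, and the electron-state-dependent axis tilts below are all illustrative choices. Each pseudo-spin precesses about an axis conditioned on the electron being up or down, and the electron coherence is the overlap of the two conditional bath evolutions; flipping the electron at τ swaps the conditional axes and partially re-converges the two bath trajectories at 2τ.

```python
import numpy as np

# Toy pseudo-spin model of bath-induced decoherence (all parameters illustrative).
rng = np.random.default_rng(0)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

n = 40                                # number of pseudo-spins in the toy bath
omega = rng.uniform(0.5, 1.5, n)      # pair flip-flop (transverse) frequencies
tilt = rng.uniform(0.1, 0.5, n)       # electron-state-dependent axis tilt

def bath_state(k, segments):
    """Evolve pseudo-spin k from |0> through (electron_state, duration) segments."""
    psi = np.array([1.0, 0.0], dtype=complex)
    for s_el, dt in segments:
        axis = np.array([omega[k], 0.0, s_el * tilt[k]])
        Om = np.linalg.norm(axis)
        H = (axis[0] * sx + axis[2] * sz) / Om      # unit precession axis
        U = np.cos(Om * dt / 2) * np.eye(2) - 1j * np.sin(Om * dt / 2) * H
        psi = U @ psi
    return psi

def coherence(up_path, down_path):
    """Electron coherence = product of overlaps of the conditional bath states."""
    W = 1.0 + 0j
    for k in range(n):
        W *= np.vdot(bath_state(k, up_path), bath_state(k, down_path))
    return abs(W)

tau = 1.0
free = coherence([(+1, 2 * tau)], [(-1, 2 * tau)])               # no pulse, evolve to 2*tau
echo = coherence([(+1, tau), (-1, tau)], [(-1, tau), (+1, tau)])  # electron flipped at tau
print(f"coherence at 2*tau: free {free:.3f}, after pi-pulse {echo:.3f}")
```

In this sketch the recovery at 2τ is partial rather than complete, because the two conditional precession axes do not commute — the geometric point the pseudo-spin mapping is meant to expose.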
Learning disentangled scene representations from images
Artificial intelligence is at the forefront of a technological revolution, in particular as a key component of autonomous agents. However, not only does training such agents come at a great computational cost, but they also end up lacking basic human abilities such as generalization, information extrapolation, knowledge transfer between contexts, and improvisation. To overcome current limitations, agents need a deeper understanding of their environment, learned more efficiently from data. Very recent works propose novel approaches to learning representations of the world: instead of learning invariant object encodings, they learn to isolate, or disentangle, the different variable properties that form an object. This would enable agents not only to understand object changes as modifications of one of their properties, but also to transfer such knowledge about properties between different categories. This Master's Thesis aims to develop a new machine learning model for disentangling object properties in monocular images of scenes. Our model is based on a state-of-the-art architecture for disentangled representation learning, and our goal is to reduce the computational complexity of the base model while also improving its performance. To achieve this, we replace a recursive unsupervised segmentation network with an encoder-decoder segmentation network. Furthermore, before training such an overparametrized neural model without supervision, we profit from transfer learning of pre-trained weights from a supervised segmentation task. After developing a first vanilla model, we tuned it to improve its performance and generalization capability. An experimental validation was then performed on two commonly used synthetic datasets, evaluating both disentanglement performance and computational efficiency, and on a more realistic dataset to analyze the model's capability on real data.
The results show that our model outperforms the state of the art while reducing its computational footprint. Nevertheless, further research is needed to bridge the gap with real-world applications.
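The architectural change described above — swapping a recursive unsupervised segmentation module for a feed-forward encoder-decoder, with the encoder initialized from a supervised segmentation checkpoint — can be sketched in a few lines of PyTorch. The layer sizes, slot count, and checkpoint path below are hypothetical placeholders, not the thesis's actual configuration:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Downsampling convolutional encoder (hypothetical sizes)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Upsampling decoder emitting one mask logit map per object slot."""
    def __init__(self, n_slots=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, n_slots, 4, stride=2, padding=1))

    def forward(self, z):
        return self.net(z)

encoder, decoder = Encoder(), Decoder()
# Transfer learning step: initialize the encoder from a supervised
# segmentation checkpoint before unsupervised training, e.g.
# encoder.load_state_dict(torch.load("seg_pretrained.pt"))  # hypothetical path
x = torch.randn(1, 3, 64, 64)
masks = decoder(encoder(x)).softmax(dim=1)   # soft assignment of pixels to slots
print(masks.shape)
```

A single encoder-decoder pass like this replaces the per-object recursion of the base model, which is where the claimed reduction in computational footprint comes from.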
A Survey of Methods, Challenges and Perspectives in Causality
Deep Learning models have shown success in a large variety of tasks by
extracting correlation patterns from high-dimensional data but still struggle
when generalizing out of their initial distribution. As causal engines aim to
learn mechanisms independent from a data distribution, combining Deep Learning
with Causality can have a great impact on the two fields. In this paper, we
further motivate this assumption. We perform an extensive overview of the
theories and methods for Causality from different perspectives, with an
emphasis on Deep Learning and the challenges met by the two domains. We show
early attempts to bring the fields together and the possible perspectives for
the future. We finish by providing a large variety of applications for
techniques from Causality.
Comment: 40 pages (37 for the main paper, 3 for the supplement), 8 figures; submitted to ACM Computing Surveys
Nonlinear Independent Component Analysis for Principled Disentanglement in Unsupervised Deep Learning
A central problem in unsupervised deep learning is how to find useful
representations of high-dimensional data, sometimes called "disentanglement".
Most approaches are heuristic and lack a proper theoretical foundation. In
linear representation learning, independent component analysis (ICA) has been
successful in many application areas, and it is principled, i.e. based on a
well-defined probabilistic model. However, extension of ICA to the nonlinear
case has been problematic due to the lack of identifiability, i.e. uniqueness
of the representation. Recently, nonlinear extensions that utilize temporal
structure or some auxiliary information have been proposed. Such models are in
fact identifiable, and consequently, an increasing number of algorithms have
been developed. In particular, some self-supervised algorithms can be shown to
estimate nonlinear ICA, even though they have initially been proposed from
heuristic perspectives. This paper reviews the state-of-the-art of nonlinear
ICA theory and algorithms.
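The contrast the abstract draws — linear ICA is identifiable and principled, while the nonlinear case needs extra structure — can be illustrated on the linear side with an off-the-shelf example. The sources and mixing matrix below are arbitrary choices for illustration; the nonlinear, auxiliary-variable setting the paper reviews requires a trained network and is not shown:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
# Two independent non-Gaussian sources: a sinusoid and uniform noise.
S = np.c_[np.sin(3 * t), rng.uniform(-1, 1, t.size)]
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # arbitrary invertible mixing matrix
X = S @ A.T                               # observed linear mixtures

est = FastICA(n_components=2, random_state=0).fit_transform(X)

# Identifiability holds only up to permutation and scaling, so check that each
# estimated component correlates strongly with exactly one true source.
C = np.abs(np.corrcoef(S.T, est.T)[:2, 2:])
print(C.round(2))
```

In the nonlinear case, running the analogous experiment with an arbitrary invertible nonlinearity in place of `A` would generally fail to recover the sources — that non-identifiability is exactly what the temporal-structure and auxiliary-variable extensions reviewed in the paper repair.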
Testing the Dark Matter Interpretation of the PAMELA Excess through Measurements of the Galactic Diffuse Emission
We propose to test the dark matter (DM) interpretation of the positron excess
observed by the PAMELA cosmic-ray (CR) detector through the identification of a
Galactic diffuse gamma-ray component associated with DM-induced prompt and
radiative emission. The goal is to present an analysis based on minimal sets of
assumptions and extrapolations with respect to locally testable or measurable
quantities. We discuss the differences between the spatial and spectral
features for the DM-induced components (with an extended, possibly spherical,
source function) and those for the standard CR contribution (with sources
confined within the stellar disc), and propose to focus on intermediate and
large latitudes. We address the dependence of the signal to background ratio on
the model adopted to describe the propagation of charged CRs in the Galaxy, and
find that, in general, the DM-induced signal can be detected by the Fermi
Gamma-ray Space Telescope at energies above 100 GeV. An observational result in
agreement with the prediction from standard CR components only would imply
very strong constraints on the DM interpretation of the PAMELA excess. On the
other hand, if an excess in the diffuse emission above 100 GeV is identified,
the angular profile for such emission would allow for a clean disentanglement
between the DM interpretation and astrophysical explanations proposed for the
PAMELA excess. We also compare to the radiative diffuse emission at lower
frequencies, sketching in particular the detection prospects at infrared
frequencies with the Planck satellite.
Comment: new benchmark models for dark matter and cosmic rays introduced, a few comments and references added, conclusions unchanged
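The geometric argument for intermediate and large latitudes — a roughly spherical DM source versus disc-confined CR sources — can be sketched with a toy line-of-sight integration. The density profiles, scale lengths, and the anticentre viewing direction below are illustrative stand-ins, not the propagation models of the paper:

```python
import numpy as np

# Toy latitude comparison: spherical DM rho^2 source vs. thin-disc CR
# emissivity, lines of sight toward the anticentre (l = 180 deg).
R0, rc, zh, Rd = 8.5, 5.0, 0.3, 3.0      # kpc: sun-GC distance, core, disc scales
s = np.linspace(0.0, 40.0, 4000)         # path length along the line of sight
ds = s[1] - s[0]

def intensities(b_deg):
    b = np.radians(b_deg)
    z = s * np.sin(b)                    # height above the plane
    R = R0 + s * np.cos(b)               # in-plane galactocentric distance
    r = np.sqrt(R**2 + z**2)             # spherical radius
    dm = ((1.0 / (1.0 + (r / rc) ** 2)) ** 2).sum() * ds   # rho^2 (annihilation)
    disc = (np.exp(-R / Rd) * np.exp(-np.abs(z) / zh)).sum() * ds
    return dm, disc

dm10, disc10 = intensities(10.0)
dm60, disc60 = intensities(60.0)
print(f"signal/background at b=10 deg: {dm10 / disc10:.2f}")
print(f"signal/background at b=60 deg: {dm60 / disc60:.2f}")
```

Because the disc emissivity dies off within a few hundred parsecs of the plane while the spherical rho^2 source does not, the signal-to-background ratio in this toy grows substantially from b = 10 deg to b = 60 deg, illustrating why the angular profile can separate the two hypotheses.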