Anderson Localization of Polar Eigenmodes in Random Planar Composites
Anderson localization of classical waves in disordered media is a fundamental
physical phenomenon that has attracted attention in the past three decades.
More recently, localization of polar excitations in nanostructured
metal-dielectric films (also known as random planar composites) has been the
subject of intense study. Potential applications of planar composites include
local near-field microscopy and spectroscopy. A number of previous studies have
relied on the quasistatic approximation and a direct analogy with localization
of electrons in disordered solids. Here I consider the localization problem
without the quasistatic approximation. I show that localization of polar
excitations is characterized by algebraic rather than by exponential spatial
confinement. This result is also valid in two and three dimensions. I also show
that the previously used localization criterion based on the gyration radius of
eigenmodes is inconsistent with both exponential and algebraic localization. An
alternative criterion based on the dipole participation number is proposed.
Numerical demonstration of a localization-delocalization transition is given.
Finally, it is shown that, contrary to the previous belief, localized modes can
be effectively coupled to running waves.
Comment: 22 pages, 7 figures. The paper was revised and a more precise
definition of the participation number is given; the data for the figures were
recalculated accordingly. Accepted to J. Phys.: Condens. Matter
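The proposed localization criterion can be illustrated with a short sketch. The normalization used below is the standard inverse-participation-ratio form and is an assumption on my part; the paper gives its own precise definition of the dipole participation number.

```python
import numpy as np

def participation_number(dipoles):
    """Participation number of a set of dipole amplitudes d_i.

    Assumed form (standard inverse-participation-ratio convention):
    P = (sum |d_i|^2)^2 / sum |d_i|^4.
    P ranges from 1 (excitation concentrated on one dipole, i.e. a
    localized mode) up to N (excitation spread uniformly over N sites).
    """
    w = np.abs(np.asarray(dipoles, dtype=complex)) ** 2
    return float(w.sum() ** 2 / (w ** 2).sum())
```

Unlike a gyration-radius criterion, this measure does not presuppose exponential confinement: it counts how many dipoles effectively participate in the mode, which remains meaningful for algebraically confined modes.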
Lattice-switch Monte Carlo
We present a Monte Carlo method for the direct evaluation of the difference
between the free energies of two crystal structures. The method is built on a
lattice-switch transformation that maps a configuration of one structure onto a
candidate configuration of the other by `switching' one set of lattice vectors
for the other, while keeping the displacements with respect to the lattice
sites constant. The sampling of the displacement configurations is biased,
multicanonically, to favor paths leading to `gateway' arrangements for which
the Monte Carlo switch to the candidate configuration will be accepted. The
configurations of both structures can then be efficiently sampled in a single
process, and the difference between their free energies evaluated from their
measured probabilities. We explore and exploit the method in the context of
extensive studies of systems of hard spheres. We show that the efficiency of
the method is controlled by the extent to which the switch conserves correlated
microstructure. We also show how, microscopically, the procedure works: the
system finds gateway arrangements which fulfill the sampling bias
intelligently. We establish, with high precision, the differences between the
free energies of the two close packed structures (fcc and hcp) in both the
constant density and the constant pressure ensembles.
Comment: 34 pages, 9 figures, RevTeX. To appear in Phys. Rev.
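The two move types described above can be sketched as follows. This is a hedged toy: a one-dimensional chain with a soft repulsive pair energy stands in for the paper's hard-sphere fcc/hcp system, and the lattices, potential, and parameters are all hypothetical. The essential structure is real, though: displacements relative to the lattice sites are shared by both structures, and a switch move swaps the site set while keeping the displacements fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8
lattice_a = np.arange(N) * 1.00   # structure A site positions (toy)
lattice_b = np.arange(N) * 1.05   # structure B site positions (toy)

def energy(sites, u):
    """Soft nearest-neighbor repulsion; u are displacements from sites."""
    d = np.diff(sites + u)
    return np.sum(np.exp(-4.0 * (d - 1.0)))

u = np.zeros(N)      # displacement configuration, shared by both structures
current = 0          # 0 = lattice A, 1 = lattice B
beta = 2.0
counts = [0, 0]      # visits to each structure

for step in range(5000):
    if rng.random() < 0.9:
        # ordinary displacement move within the current structure
        i = rng.integers(N)
        trial = u.copy()
        trial[i] += rng.normal(scale=0.05)
        sites = lattice_a if current == 0 else lattice_b
        if rng.random() < np.exp(-beta * (energy(sites, trial) - energy(sites, u))):
            u = trial
    else:
        # lattice-switch move: swap the site set, keep displacements u fixed
        old = lattice_a if current == 0 else lattice_b
        new = lattice_b if current == 0 else lattice_a
        if rng.random() < np.exp(-beta * (energy(new, u) - energy(old, u))):
            current = 1 - current
    counts[current] += 1

# Free-energy difference from the measured visit probabilities:
# delta_F = -(1/beta) * np.log(counts[1] / counts[0])
```

In the paper the sampling is additionally biased multicanonically toward the `gateway' arrangements for which the switch is accepted; that machinery is omitted here for brevity.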
Scalable Neural Network Decoders for Higher Dimensional Quantum Codes
Machine learning has the potential to become an important tool in quantum
error correction as it allows the decoder to adapt to the error distribution of
a quantum chip. An additional motivation for using neural networks is the fact
that they can be evaluated by dedicated hardware which is very fast and
consumes little power. Machine learning has been previously applied to decode
the surface code. However, these approaches are not scalable, as the training
has to be redone for every system size, which becomes increasingly difficult.
In this work, the existence of local decoders for higher-dimensional codes
leads us
to use a low-depth convolutional neural network to locally assign a likelihood
of error on each qubit. For noiseless syndrome measurements, numerical
simulations show that the decoder has a threshold of around when
applied to the 4D toric code. When the syndrome measurements are noisy, the
decoder performs better for larger code sizes when the error probability is
low. We also give theoretical and numerical analysis to show how a
convolutional neural network is different from the 1-nearest neighbor
algorithm, which is a baseline machine learning method.
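The local, per-qubit structure of such a decoder can be sketched as follows. This is a hedged illustration, not the paper's trained network: a single hand-picked 3x3 kernel (untrained, hypothetical) is slid over a 2D syndrome array to assign each qubit a likelihood of error. The point it demonstrates is why a convolutional decoder is scalable: the same small kernel applies unchanged to any lattice size.

```python
import numpy as np

def conv2d_same(x, k):
    """2D convolution with zero padding to 'same' output size.
    (Periodic padding would match a toric code; zero padding keeps it short.)"""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical kernel: a qubit surrounded by violated checks gets a high
# score. The real decoder stacks a few trained low-depth layers instead.
kernel = np.array([[0., 1., 0.],
                   [1., 0., 1.],
                   [0., 1., 0.]])

syndrome = np.zeros((8, 8))
syndrome[3, 3] = syndrome[3, 5] = 1.0   # two violated checks

# Per-qubit error likelihood; the same kernel works on any lattice size.
likelihood = sigmoid(2.0 * conv2d_same(syndrome, kernel) - 3.0)
# The qubit at (3, 4), between the two violated checks, scores highest.
```

Because every output value depends only on a local neighborhood of the syndrome, training at one size transfers to larger lattices, unlike the earlier non-scalable surface-code decoders mentioned above.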
k-MLE: A fast algorithm for learning statistical mixture models
We describe k-MLE, a fast and efficient local search algorithm for learning
finite statistical mixtures of exponential families such as Gaussian mixture
models. Mixture models are traditionally learned using the
expectation-maximization (EM) soft clustering technique that monotonically
increases the incomplete (expected complete) likelihood. Given prescribed
mixture weights, the hard clustering k-MLE algorithm iteratively assigns data
to the most likely weighted component and updates the component models using
Maximum Likelihood Estimators (MLEs). Using the duality between exponential
families and Bregman divergences, we prove that the local convergence of the
complete likelihood of k-MLE follows directly from the convergence of a dual
additively weighted Bregman hard clustering. The inner loop of k-MLE can be
implemented using any k-means heuristic, such as the celebrated Lloyd's batched
or Hartigan's greedy swap updates. We then show how to update the mixture
weights by minimizing a cross-entropy criterion, which amounts to setting each
weight to the relative proportion of points in its cluster, and to reiterate
the mixture parameter and mixture weight updates until convergence. Hard EM
is interpreted as a special case of k-MLE in which the component update and
the weight update are performed successively in the inner loop. To initialize
k-MLE, we propose k-MLE++, a careful initialization of k-MLE guaranteeing
probabilistically a global bound on the best possible complete likelihood.
Comment: 31 pages. Extends a preliminary paper presented at IEEE ICASSP 201
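The hard-clustering loop described above can be sketched for the simplest case, a 1D Gaussian mixture. This is a hedged toy illustration of the k-MLE iteration (Lloyd-style batch updates), with made-up data and initial parameters; the paper's general exponential-family / Bregman-duality treatment is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two well-separated 1D Gaussian clusters of 200 points each.
x = np.concatenate([rng.normal(-3, 0.5, 200), rng.normal(3, 0.5, 200)])

K = 2
mu = np.array([-1.0, 1.0])     # initial means (hypothetical init)
sigma = np.array([1.0, 1.0])   # initial standard deviations
w = np.array([0.5, 0.5])       # mixture weights

def log_weighted_density(x, mu, sigma, w):
    """log( w_j * N(x | mu_j, sigma_j^2) ) for each point and component j."""
    return (np.log(w) - np.log(sigma) - 0.5 * np.log(2 * np.pi)
            - 0.5 * ((x[:, None] - mu) / sigma) ** 2)

for _ in range(20):
    # Hard assignment: each point goes to its most likely weighted component.
    z = np.argmax(log_weighted_density(x, mu, sigma, w), axis=1)
    # Component update: per-cluster maximum likelihood estimators.
    for j in range(K):
        pts = x[z == j]
        if len(pts):
            mu[j], sigma[j] = pts.mean(), max(pts.std(), 1e-6)
    # Weight update: relative proportion of points in each cluster.
    w = np.bincount(z, minlength=K) / len(x)
```

Unlike EM's soft responsibilities, each point contributes to exactly one component per iteration, which is what makes the inner loop implementable with any k-means-style heuristic.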