Asymptotic behavior of the finite-size magnetization as a function of the speed of approach to criticality
The main focus of this paper is to determine whether the thermodynamic
magnetization is a physically relevant estimator of the finite-size
magnetization. This is done by comparing the asymptotic behaviors of these two
quantities along parameter sequences converging to either a second-order point
or the tricritical point in the mean-field Blume--Capel model. We show that the
thermodynamic magnetization and the finite-size magnetization are asymptotic
when the parameter governing the speed at which the sequence approaches
criticality lies below a certain threshold. However, when this parameter
exceeds the threshold, the thermodynamic magnetization converges to 0 much
faster than the finite-size magnetization. The asymptotic behavior of the
finite-size magnetization is proved via a moderate deviation principle when
the parameter lies below the threshold.
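In display form, this dichotomy can be summarized as follows (a paraphrase of
the statements above, not a formula quoted from the paper; here m(beta_n,K_n)
denotes the thermodynamic magnetization and m_n(beta_n,K_n) the finite-size
magnetization):

\[
\frac{m(\beta_n,K_n)}{m_n(\beta_n,K_n)} \longrightarrow 1
  \quad \text{(speed parameter below the threshold)},
\qquad
\frac{m(\beta_n,K_n)}{m_n(\beta_n,K_n)} \longrightarrow 0
  \quad \text{(speed parameter above the threshold)}.
\]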
To the best of our knowledge, our results are the first rigorous confirmation
of the statistical mechanical theory of finite-size scaling for a mean-field
model.
Comment: Published at http://dx.doi.org/10.1214/10-AAP679 in the Annals of
Applied Probability (http://www.imstat.org/aap/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
Ginzburg-Landau Polynomials and the Asymptotic Behavior of the Magnetization Near Critical and Tricritical Points
For the mean-field version of an important lattice-spin model due to Blume
and Capel, we prove unexpected connections among the asymptotic behavior of the
magnetization, the structure of the phase transitions, and a class of
polynomials that we call the Ginzburg-Landau polynomials. The model depends on
the parameters n, beta, and K, which represent, respectively, the number of
spins, the inverse temperature, and the interaction strength. Our main focus is
on the asymptotic behavior of the magnetization m(beta_n,K_n) for appropriate
sequences (beta_n,K_n) that converge to a second-order point or to the
tricritical point of the model and that lie inside various subsets of the
phase-coexistence region. The main result states that as (beta_n,K_n) converges
to one of these points (beta,K), m(beta_n,K_n) ~ c |beta - beta_n|^gamma --> 0.
In this formula, gamma is a positive constant, and c is the unique positive,
global minimum point of a certain polynomial g that we call the Ginzburg-Landau
polynomial. This polynomial arises as a limit of appropriately scaled
free-energy functionals, the global minimum points of which define the
phase-transition structure of the model. For each sequence (beta_n,K_n) under
study, the structure of the global minimum points of the associated
Ginzburg-Landau polynomial mirrors the structure of the global minimum points
of the free-energy functional in the region through which (beta_n,K_n) passes
and thus reflects the phase-transition structure of the model in that region.
The properties of the Ginzburg-Landau polynomials make rigorous the predictions
of the Ginzburg-Landau phenomenology of critical phenomena, and the asymptotic
formula for m(beta_n,K_n) makes rigorous the heuristic scaling theory of the
tricritical point.
Comment: 70 pages, 8 figures
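Restated in display form (the same content as the asymptotic formula above;
the precise form of the Ginzburg-Landau polynomial g depends on the sequence
(beta_n,K_n) and is given in the paper):

\[
m(\beta_n,K_n) \;\sim\; c\,|\beta-\beta_n|^{\gamma} \;\longrightarrow\; 0,
\qquad
c \;=\; \arg\min_{x \in \mathbb{R}} g(x) \;>\; 0,
\]

where gamma > 0 and g is the Ginzburg-Landau polynomial associated with the
sequence.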
Zolotarev Quadrature Rules and Load Balancing for the FEAST Eigensolver
The FEAST method for solving large sparse eigenproblems is equivalent to
subspace iteration with an approximate spectral projector and implicit
orthogonalization. This relation makes it possible to characterize the
convergence of the method in terms of the error of a certain rational
approximant to an indicator function. We propose improved rational
approximants leading to FEAST variants
with faster convergence, in particular, when using rational approximants based
on the work of Zolotarev. Numerical experiments demonstrate the possible
computational savings, especially for pencils whose eigenvalues are not well
separated and when the dimension of the search space is only slightly larger
than the number of wanted eigenvalues. The new approach improves both
convergence robustness and load balancing when FEAST runs on multiple search
intervals in parallel.
Comment: 22 pages, 8 figures
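To make the mechanism concrete, here is a minimal numpy sketch (illustrative
only: it uses the standard circular-contour trapezoidal rule rather than the
Zolotarev-based approximants proposed in the paper, and the helper name
rational_projector is made up). It builds an approximate spectral projector
for a symmetric matrix by applying a rational approximant of the indicator
function of the search interval to the matrix.

import numpy as np

# Approximate spectral projector onto the eigenvalues of symmetric A inside
# [lo, hi], obtained by discretizing the contour integral
#   P = (1 / (2*pi*i)) * oint (z I - A)^{-1} dz
# with an n_quad-point trapezoidal rule on a circle enclosing the interval.
def rational_projector(A, lo, hi, n_quad=16):
    n = A.shape[0]
    c, r = 0.5 * (lo + hi), 0.5 * (hi - lo)
    P = np.zeros((n, n), dtype=complex)
    for k in range(n_quad):
        theta = 2.0 * np.pi * (k + 0.5) / n_quad
        z = c + r * np.exp(1j * theta)           # quadrature node on the contour
        w = r * np.exp(1j * theta) / n_quad      # weight (includes dz and 1/(2*pi*i))
        P += w * np.linalg.inv(z * np.eye(n) - A)
    return P.real                                # imaginary parts cancel for real symmetric A

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = Q @ np.diag([-2.0, -1.0, 0.1, 0.2, 1.5, 3.0]) @ Q.T
P = rational_projector(A, 0.0, 1.0)
print(np.round(np.linalg.eigvalsh(P), 3))        # near 1 for the two eigenvalues in [0, 1], near 0 otherwise

FEAST applies such a rational filter to a block of vectors inside subspace
iteration; the paper's contribution is to replace the quadrature nodes and
weights with Zolotarev-based ones that approximate the indicator function more
accurately, which is what drives the faster convergence and the improved load
balancing across search intervals.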
Favour: FAst Variance Operator for Uncertainty Rating
Bayesian Neural Networks (BNNs) have emerged as a crucial approach for
interpreting ML predictions. By sampling from the posterior distribution, data
scientists may estimate the uncertainty of an inference. Unfortunately, many
inference samples are often needed, and the resulting overhead greatly hinders
the wide adoption of BNNs. To mitigate this, previous work proposed propagating
the first and second moments of the posterior directly through the network.
However, on its own this method is even slower than sampling, so the propagated
variance needs to be approximated, for example by assuming independence between
neural nodes. The resulting trade-off between quality and inference time did
not even match that of plain Monte Carlo sampling.
Our contribution is a more principled variance propagation framework based on
"spiked covariance matrices", which smoothly interpolates between quality and
inference time. This is made possible by a new fast algorithm for updating a
diagonal-plus-low-rank matrix approximation under various operations. We tested
our algorithm against sampling-based MC Dropout and Variational Inference on a
number of downstream uncertainty-themed tasks, such as calibration and
out-of-distribution testing. We find that Favour is as fast as performing 2-3
inference samples, while matching the performance of 10-100 samples.
In summary, this work enables the use of BNNs in the realm of
performance-critical tasks where they have previously been out of reach.
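The underlying data structure can be illustrated with a short numpy sketch.
The names DiagPlusLowRank and propagate_linear below are hypothetical, and the
update shown is a simplified stand-in for the paper's operator (which also
handles nonlinearities and rank control); it only shows how a
diagonal-plus-low-rank ("spiked") covariance is pushed through one linear
layer, and how the rank acts as the quality/speed knob mentioned above.

import numpy as np

# Covariance stored as diag(d) + V V^T.  The low-rank part propagates exactly
# through a linear layer y = W x; for the diagonal part we keep only the output
# diagonal (the independence approximation).  Rank 0 recovers plain independent
# variance propagation; larger ranks retain more of the correlations.
class DiagPlusLowRank:
    def __init__(self, d, V):
        self.d = np.asarray(d, dtype=float)   # per-neuron variances, shape (n,)
        self.V = np.asarray(V, dtype=float)   # low-rank factor, shape (n, k)

    def dense(self):
        return np.diag(self.d) + self.V @ self.V.T

def propagate_linear(cov, W):
    # diag(W diag(d) W^T)_i = sum_j W_ij^2 d_j  -- no dense covariance is formed
    d_out = (W ** 2) @ cov.d
    V_out = W @ cov.V                          # exact propagation of the spiked part
    return DiagPlusLowRank(d_out, V_out)

# Tiny usage example.
rng = np.random.default_rng(0)
cov_in = DiagPlusLowRank(d=np.full(4, 0.1), V=0.2 * rng.standard_normal((4, 2)))
W = rng.standard_normal((3, 4))
cov_out = propagate_linear(cov_in, W)
print(np.diag(cov_out.dense()))                # approximate per-neuron output variances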