The Activation-Relaxation Technique: ART nouveau and kinetic ART
The evolution of many systems is dominated by rare activated events that occur on timescales ranging from nanoseconds to hours or longer. For such systems, simulations must set aside the full thermal description and focus specifically on the mechanisms that generate a configurational change. We present here the activation-relaxation technique (ART), an open-ended saddle point search algorithm, together with a series of recent improvements to ART nouveau and to kinetic ART, an ART-based on-the-fly, off-lattice, self-learning kinetic Monte Carlo method.
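Kinetic ART couples ART-found barriers to a kinetic Monte Carlo engine. The generic rejection-free KMC selection step it builds on can be sketched as follows; this is a minimal illustration, not the kinetic ART implementation, and the barrier heights and attempt frequency in the example are assumed values:

```python
import math
import random

def kmc_step(rates, rng=random):
    """One rejection-free kinetic Monte Carlo step (BKL algorithm):
    pick an event with probability proportional to its rate, then
    advance the clock by an exponentially distributed waiting time."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    chosen = len(rates) - 1
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            chosen = i
            break
    dt = -math.log(rng.random()) / total
    return chosen, dt

# Example: two activated events with assumed barriers 0.5 eV and 0.6 eV at 300 K,
# with rates from an Arrhenius law (assumed prefactor 1e12 / s).
kT = 8.617e-5 * 300.0
nu0 = 1e12
rates = [nu0 * math.exp(-0.5 / kT), nu0 * math.exp(-0.6 / kT)]
event, dt = kmc_step(rates)
```

In kinetic ART the event list itself is built and refined on the fly from ART nouveau saddle point searches, rather than fixed in advance as here.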
On composite systems of dilute and dense couplings
Composite systems, in which couplings of two types (strong dilute and weak dense couplings of Ising spins) are combined, are examined through the replica method. The dilute and dense parts are taken to have independent canonical disordered or uniform bond distributions; mixing the models by varying a parameter alongside the inverse temperature, we analyse the respective thermodynamic solutions. We describe how the high-temperature transitions vary as mixing occurs, and in the vicinity of these transitions we analyse exactly the competing effects of the dense and sparse models. Using the replica symmetric ansatz and population dynamics, we describe the low-temperature behaviour of mixed systems.
Comment: 35 pages, 9 figures, submitted to JPhys
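The population dynamics used for the sparse part can be sketched in its simplest setting, a ±J Ising spin glass on a random c-regular graph. This is a generic cavity-method illustration with assumed parameters (a fully parallel update variant), not the paper's composite dilute-plus-dense model:

```python
import numpy as np

def population_dynamics(beta, c=3, pop_size=2000, sweeps=100, seed=0):
    """Replica-symmetric cavity iteration for a +-J Ising spin glass on
    a random c-regular graph: a population of cavity fields h is updated
    via u = atanh(tanh(beta*J) * tanh(beta*h)) / beta, with each new
    field the sum of c-1 incoming messages."""
    rng = np.random.default_rng(seed)
    h = rng.normal(0.0, 1.0, pop_size)
    for _ in range(sweeps):
        idx = rng.integers(0, pop_size, size=(pop_size, c - 1))
        J = rng.choice([-1.0, 1.0], size=(pop_size, c - 1))
        u = np.arctanh(np.tanh(beta * J) * np.tanh(beta * h[idx])) / beta
        h = u.sum(axis=1)
    return h

# Above the spin-glass transition the field distribution collapses to zero;
# below it, it stays broad (Edwards-Anderson overlap q > 0).
q_para = np.mean(np.tanh(0.5 * population_dynamics(0.5)) ** 2)
q_sg = np.mean(np.tanh(2.0 * population_dynamics(2.0)) ** 2)
```

The composite analysis in the paper additionally folds the weak dense couplings into the field update; the sketch above keeps only the sparse cavity structure.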
Asymptotic Level Density of the Elastic Net Self-Organizing Feature Map
Whereas the Kohonen Self-Organizing Map shows an asymptotic level density following a power law with a magnification exponent of 2/3, an exponent of 1 would be desirable in order to provide an optimal mapping in the sense of information theory. In this paper, we study analytically and numerically the magnification behaviour of the Elastic Net algorithm as a model for self-organizing feature maps. In contrast to the Kohonen map, the Elastic Net shows no power law; for one-dimensional maps, however, the density follows a universal magnification law, i.e. it depends only on the local stimulus density, is independent of position, and decouples from the stimulus density at other positions.
Comment: 8 pages, 10 figures. Link to publisher under http://link.springer.de/link/service/series/0558/bibs/2415/24150939.ht
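The Elastic Net dynamics studied here can be sketched for a one-dimensional stimulus space. This is a minimal Durbin-Willshaw-style implementation with assumed parameter values (node count, width kappa, learning rates), not the exact setup analysed in the paper:

```python
import numpy as np

def elastic_net_1d(stimuli, n_nodes=20, kappa=0.1, alpha=0.2, beta=2.0, iters=200):
    """One-dimensional elastic net: nodes y_j are pulled toward stimuli
    via a soft (Gaussian, width kappa) assignment, and kept smooth by an
    elastic coupling between chain neighbours."""
    y = np.linspace(stimuli.min(), stimuli.max(), n_nodes)
    for _ in range(iters):
        d2 = (stimuli[:, None] - y[None, :]) ** 2
        w = np.exp(-d2 / (2.0 * kappa ** 2))
        w /= w.sum(axis=1, keepdims=True)            # soft assignment per stimulus
        attract = (w * (stimuli[:, None] - y[None, :])).sum(axis=0) / len(stimuli)
        smooth = np.zeros_like(y)
        smooth[1:-1] = y[2:] - 2.0 * y[1:-1] + y[:-2]  # discrete elastic term
        y = y + alpha * attract + beta * kappa * smooth
    return y

rng = np.random.default_rng(0)
stimuli = rng.uniform(0.0, 1.0, 500)
nodes = elastic_net_1d(stimuli)
```

The magnification question of the paper is about how the asymptotic node density relates to a non-uniform stimulus density; the sketch above only sets up the dynamics on which such measurements can be made.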
Composite CDMA - A statistical mechanics analysis
Code Division Multiple Access (CDMA) in which the spreading code assignment to users contains a random element has recently become a cornerstone of CDMA research. The random element in the construction is particularly attractive, as it provides robustness and flexibility in utilising multi-access channels whilst not making significant sacrifices in terms of transmission power. Random codes are generated from some ensemble; here we consider the possibility of combining two standard paradigms, sparsely and densely spread codes, in a single composite code ensemble. The composite code analysis includes a replica symmetric calculation of performance in the large system limit, and an investigation of finite systems through a composite belief propagation algorithm. A variety of codes are examined, with a focus on the high multi-access interference regime. In both the large system limit and finite systems we demonstrate scenarios in which the composite code has typical performance exceeding that of sparse and dense codes at equivalent signal-to-noise ratio.
Comment: 23 pages, 11 figures, Sigma Phi 2008 conference submission; submitted to J. Stat. Mech.
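The dense random spreading model underlying one of the two paradigms can be illustrated with a small simulation. This uses a simple matched-filter detector rather than the paper's composite belief propagation, and all parameter values (users K, chips N, SNR) are assumed:

```python
import numpy as np

def cdma_matched_filter(K=8, N=64, snr_db=8.0, trials=200, seed=1):
    """Bit error rate of densely spread random CDMA: K users share an
    N-chip channel via random +-1/sqrt(N) spreading sequences; each
    user is detected with its own matched filter."""
    rng = np.random.default_rng(seed)
    sigma = 10.0 ** (-snr_db / 20.0)        # noise std for unit-energy bits
    errors = 0
    for _ in range(trials):
        S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)
        b = rng.choice([-1.0, 1.0], size=K)          # BPSK symbols
        y = S @ b + sigma * rng.normal(size=N)       # received chips
        b_hat = np.sign(S.T @ y)                     # matched filter per user
        errors += int(np.sum(b_hat != b))
    return errors / (trials * K)

ber = cdma_matched_filter()
```

The residual errors here come from multi-access interference between the random sequences plus channel noise; the paper's point is that composite (sparse plus dense) ensembles with message-passing detection can outperform either pure ensemble in this interference-dominated regime.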
Simultaneous solution approaches for large optimization problems
In this paper, efficient simultaneous strategies are presented for the optimization of practical problems involving PDE models. In particular, reduced sequential quadratic programming methods for problems with only a few influence variables, and simultaneous quadratic programming iterations, are discussed. As a result, we obtain algorithms whose overall computational complexity is reduced considerably in comparison to a black-box approach.
Shape Optimization by Constrained First-Order Least Mean Approximation
In this work, the problem of shape optimization, subject to PDE constraints,
is reformulated as a best approximation problem under divergence
constraints to the shape tensor introduced in Laurain and Sturm: ESAIM Math.
Model. Numer. Anal. 50 (2016). More precisely, the main result of this paper
states that the distance of the above approximation problem is equal to
the dual norm of the shape derivative considered as a functional on
(where ). This implies that for any given
shape, one can evaluate its distance from being a stationary one with respect
to the shape derivative by simply solving the associated -type least mean
approximation problem. Moreover, the Lagrange multiplier for the divergence
constraint turns out to be the shape deformation of steepest descent. This
provides a way, as an alternative to the approach by Deckelnick, Herbert and
Hinze: ESAIM Control Optim. Calc. Var. 28 (2022), for computing shape gradients
in for . The discretization of the
least mean approximation problem is done with (lowest-order) matrix-valued
Raviart-Thomas finite element spaces leading to piecewise constant
approximations of the shape deformation acting as Lagrange multiplier.
Admissible deformations in to be used in a shape gradient
iteration are reconstructed locally. Our computational results confirm that the
distance of the best approximation does indeed measure the distance of
the considered shape to optimality. Also confirmed by our computational tests
are the observations that choosing (much) larger than 2 (which means
that must be close to 1 in our best approximation problem) decreases the
chance of encountering mesh degeneracy during the shape gradient iteration.
Comment: 20 pages, 8 figures
Discussion of "Geodesic Monte Carlo on Embedded Manifolds"
Contributed discussion and rejoinder to "Geodesic Monte Carlo on Embedded
Manifolds" (arXiv:1301.6064).
Comment: Discussion of arXiv:1301.6064. To appear in the Scandinavian Journal of Statistics. 18 pages
Calculations of Excited Electronic States by Converging on Saddle Points Using Generalized Mode Following
Variational calculations of excited electronic states are carried out by
finding saddle points on the surface that describes how the energy of the
system varies as a function of the electronic degrees of freedom. This approach
has several advantages over commonly used methods especially in the context of
density functional calculations, as collapse to the ground state is avoided and
yet, the orbitals are variationally optimized for the excited state. This
optimization makes it possible to describe excitations with large charge
transfer where calculations based on ground state orbitals are problematic, as
in linear response time-dependent density functional theory. A generalized mode
following method is presented where an -order saddle point is
found by inverting the components of the gradient in the direction of the
eigenvectors of the lowest eigenvalues of the electronic Hessian matrix.
This approach has the distinct advantage of following a chosen excited state
through atomic configurations where the symmetry of the single determinant wave
function is broken, as demonstrated in calculations of potential energy curves
for nuclear motion in the ethylene and dihydrogen molecules. The method is
implemented using a generalized Davidson algorithm and an exponential
transformation for updating the orbitals within a generalized gradient
approximation of the energy functional. Convergence is found to be more robust
than for a direct optimization approach previously shown to outperform standard
self-consistent field approaches, as illustrated here for charge transfer
excitations in nitrobenzene and N-phenylpyrrole, involving calculations of
- and -order saddle points, respectively.
Finally, calculations of a diplatinum and silver complex are presented,
illustrating the applicability of the method to excited state energy curves of
large molecules.
Comment: 57 pages, 12 figures, submitted to the Journal of Chemical Theory and Computation
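The core idea, inverting the gradient component along the lowest eigenvector(s) of the Hessian so that a minimization converges to a saddle point, can be sketched on a toy two-dimensional surface. This sketch follows the single lowest mode (a first-order saddle search) with analytic gradients and Hessian for an assumed double-well test function; the paper works in the electronic degrees of freedom with a generalized Davidson solver and exponential orbital updates:

```python
import numpy as np

def f(p):
    """Double-well test surface: minima at (+-1, 0), a first-order
    saddle at (0, 0)."""
    x, y = p
    return (x**2 - 1.0)**2 + y**2

def grad(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

def hess(p):
    x, y = p
    return np.array([[12.0 * x**2 - 4.0, 0.0], [0.0, 2.0]])

def mode_follow(p0, step=0.05, tol=1e-8, max_iter=10000):
    """Mode following: reflect the gradient along the eigenvector of the
    lowest Hessian eigenvalue, so that descent on the modified force
    ascends that one mode while descending all others."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        g = grad(p)
        if np.linalg.norm(g) < tol:
            break
        _, V = np.linalg.eigh(hess(p))   # eigenvalues in ascending order
        v = V[:, 0]                      # lowest mode
        force = -g + 2.0 * np.dot(g, v) * v
        p = p + step * force
    return p

saddle = mode_follow([0.5, 0.3])
```

For an n-th-order saddle the same reflection is applied along the n lowest eigenvectors; the generalized method in the paper additionally keeps track of a chosen excited state through configurations where the single-determinant symmetry is broken.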
The topography of multivariate normal mixtures
Multivariate normal mixtures provide a flexible method of fitting
high-dimensional data. It is shown that their topography, in the sense of their
key features as a density, can be analyzed rigorously in lower dimensions by
use of a ridgeline manifold that contains all critical points, as well as the
ridges of the density. A plot of the elevations on the ridgeline shows the key
features of the mixed density. In addition, by use of the ridgeline, we uncover
a function that determines the number of modes of the mixed density when there
are two components being mixed. A follow-up analysis then gives a curvature
function that can be used to prove a set of modality theorems.
Comment: Published at http://dx.doi.org/10.1214/009053605000000417 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
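For two components the ridgeline has the closed form x*(alpha) = [(1-alpha) * Sigma1^{-1} + alpha * Sigma2^{-1}]^{-1} [(1-alpha) * Sigma1^{-1} mu1 + alpha * Sigma2^{-1} mu2], and since all critical points lie on it, mode counting reduces to a one-dimensional scan of the mixture density along the curve. A minimal sketch (the grid resolution and the test parameters are assumed):

```python
import numpy as np

def ridgeline(alpha, mu1, mu2, S1, S2):
    """Ridgeline point x*(alpha) of a two-component normal mixture;
    every critical point of the mixed density lies on this curve."""
    A1, A2 = np.linalg.inv(S1), np.linalg.inv(S2)
    M = (1.0 - alpha) * A1 + alpha * A2
    return np.linalg.solve(M, (1.0 - alpha) * A1 @ mu1 + alpha * A2 @ mu2)

def mixture_pdf(x, w, mu1, mu2, S1, S2):
    def gauss(x, mu, S):
        d = x - mu
        k = len(mu)
        norm = np.sqrt((2.0 * np.pi) ** k * np.linalg.det(S))
        return np.exp(-0.5 * d @ np.linalg.solve(S, d)) / norm
    return w * gauss(x, mu1, S1) + (1.0 - w) * gauss(x, mu2, S2)

def count_modes(w, mu1, mu2, S1, S2, n=2001):
    """Count modes by scanning the mixture density along the ridgeline."""
    vals = np.array([
        mixture_pdf(ridgeline(a, mu1, mu2, S1, S2), w, mu1, mu2, S1, S2)
        for a in np.linspace(0.0, 1.0, n)
    ])
    interior = (vals[1:-1] > vals[:-2]) & (vals[1:-1] > vals[2:])
    return int(interior.sum()) + int(vals[0] > vals[1]) + int(vals[-1] > vals[-2])

# Two well-separated equal spherical components give a bimodal density:
n_modes = count_modes(0.5, np.zeros(2), np.array([3.0, 0.0]), np.eye(2), np.eye(2))
```

The grid scan stands in for the paper's exact analysis: the plot of elevations along the ridgeline is precisely what the scanned `vals` array discretizes.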