Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
In this paper, we present a new stochastic algorithm, namely the stochastic
block mirror descent (SBMD) method for solving large-scale nonsmooth and
stochastic optimization problems. The basic idea of this algorithm is to
incorporate the block-coordinate decomposition and an incremental block
averaging scheme into the classic (stochastic) mirror-descent method, in order
to significantly reduce the cost per iteration of the latter algorithm. We
establish the rate of convergence of the SBMD method along with its associated
large-deviation results for solving general nonsmooth and stochastic
optimization problems. We also introduce different variants of this method and
establish their rate of convergence for solving strongly convex, smooth, and
composite optimization problems, as well as certain nonconvex optimization
problems. To the best of our knowledge, all these developments related to the
SBMD methods are new in the stochastic optimization literature. Moreover, some
of our results also seem to be new for block coordinate descent methods for
deterministic optimization.
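The abstract does not spell out the update rule, so the following is only a minimal sketch of a stochastic block mirror-descent iteration under illustrative assumptions: a Euclidean distance-generating function (so the prox step reduces to a block gradient step), a toy stochastic least-squares objective, and simple choices for block sizes, step size, and iterate averaging that are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stochastic objective: f(x) = E_i[ 0.5 * (a_i^T x - b_i)^2 ],
# sampled one row at a time (assumed problem, not from the paper).
n = 50
A = rng.normal(size=(200, n))
x_true = rng.normal(size=n)
b = A @ x_true + 0.1 * rng.normal(size=200)

def stochastic_grad(x):
    """Unbiased stochastic gradient from one randomly drawn data row."""
    i = rng.integers(A.shape[0])
    a = A[i]
    return (a @ x - b[i]) * a

def sbmd_euclidean(x0, blocks, steps, gamma):
    """One plausible reading of a block mirror-descent step with a
    Euclidean distance-generating function: sample a stochastic gradient,
    pick a coordinate block uniformly at random, and update only that block."""
    x = x0.copy()
    x_avg = np.zeros_like(x)          # running average of the iterates
    for k in range(1, steps + 1):
        g = stochastic_grad(x)
        blk = blocks[rng.integers(len(blocks))]
        x[blk] -= gamma * g[blk]      # prox step reduces to a gradient step here
        x_avg += (x - x_avg) / k
    return x, x_avg

blocks = np.array_split(np.arange(n), 5)   # 5 coordinate blocks of size 10
x_hat, x_bar = sbmd_euclidean(np.zeros(n), blocks, steps=20000, gamma=1e-3)
print("relative error:", np.linalg.norm(x_bar - x_true) / np.linalg.norm(x_true))
```

The point of the block update is that each iteration touches only one block of coordinates, so the per-iteration cost scales with the block size rather than with the full dimension, which is the cost reduction the abstract refers to.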
Is demagnetization an efficient optimization method?
Demagnetization, commonly employed to study ferromagnets, has been proposed
as the basis for an optimization tool, a method to find the ground state of a
disordered system. Here we present a detailed comparison between the ground
state and the demagnetized state in the random field Ising model, combining exact
results with numerical solutions. We show that there are
important differences between the two states that persist in the thermodynamic
limit and thus conclude that AC demagnetization is not an efficient
optimization method.
Comment: 2 pages, 1 figure
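For illustration of the protocol being compared against the ground state, here is a minimal sketch of AC demagnetization in a one-dimensional random-field Ising chain with zero-temperature single-spin-flip dynamics. The chain length, field distribution, and amplitude schedule are assumptions chosen so that the exact ground state can be found by exhaustive search; they are not the settings studied in the paper.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
N, J = 14, 1.0
h = rng.normal(scale=1.5, size=N)          # quenched random fields (assumed width)

def energy(s, H=0.0):
    """Periodic 1D RFIM energy at external field H."""
    return -J * np.sum(s * np.roll(s, 1)) - np.sum((h + H) * s)

def relax(s, H):
    """Zero-temperature single-spin-flip dynamics: flip any spin that is
    misaligned with its local field until the configuration is stable."""
    s = s.copy()
    while True:
        loc = J * (np.roll(s, 1) + np.roll(s, -1)) + h + H
        unstable = np.where(s * loc < 0)[0]
        if unstable.size == 0:
            return s
        s[rng.choice(unstable)] *= -1      # flip one unstable spin at a time

def demagnetize(H0=3.0, dH=0.05):
    """AC demagnetization: relax under a field of alternating sign and
    slowly decreasing amplitude, ending at H = 0."""
    s = np.ones(N)
    sign, amp = 1.0, H0
    while amp > 0:
        s = relax(s, sign * amp)
        sign, amp = -sign, amp - dH
    return relax(s, 0.0)

# Exact ground state by exhaustive search (feasible only for small N).
gs = min((np.array(c) for c in product([-1, 1], repeat=N)), key=energy)

dm = demagnetize()
print("demagnetized energy:", energy(dm), " ground-state energy:", energy(gs))
```

By definition the demagnetized energy can never lie below the ground-state energy; the question addressed in the paper is how large that gap remains as the system size grows.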
Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets
The Frank-Wolfe method (a.k.a. conditional gradient algorithm) for smooth
optimization has regained much interest in recent years in the context of large
scale optimization and machine learning. A key advantage of the method is that
it avoids projections - the computational bottleneck in many applications -
replacing them with a linear optimization step. Despite this advantage, the known
convergence rates of the FW method fall behind standard first order methods for
most settings of interest. It is an active line of research to derive faster
linear optimization-based algorithms for various settings of convex
optimization.
In this paper we consider the special case of optimization over strongly
convex sets, for which we prove that the vanilla FW method converges at a rate
of $O(1/t^2)$. This gives a quadratic improvement in convergence rate
compared to the general case, in which convergence is of the order
$O(1/t)$, and known to be tight. We show that various balls induced by
$\ell_p$ norms, Schatten norms and group norms are strongly convex on one hand
and on the other hand, linear optimization over these sets is straightforward
and admits a closed-form solution. We further show how several previous
fast-rate results for the FW method follow easily from our analysis.
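A minimal sketch of the vanilla FW method over one such strongly convex set, the Euclidean ball, where the linear optimization step has the closed-form solution -r g/||g||. The quadratic objective and the standard 2/(t+2) step size below are assumptions for illustration; the faster-rate analyses typically rely on a line search or an adaptive step size rather than this fixed schedule.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy smooth objective: f(x) = 0.5 * ||A x - b||^2 (assumed, not from the paper).
A = rng.normal(size=(80, 30))
b = rng.normal(size=80)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)

def frank_wolfe_ball(radius, iters):
    """Vanilla Frank-Wolfe over the Euclidean ball {x : ||x||_2 <= radius}.
    The linear minimization step has a closed form: the minimizer of
    <g, v> over the ball is v = -radius * g / ||g||_2, so no projection
    is ever needed."""
    x = np.zeros(A.shape[1])
    for t in range(1, iters + 1):
        g = grad(x)
        v = -radius * g / np.linalg.norm(g)   # linear optimization step
        gamma = 2.0 / (t + 2)                 # standard open-loop step size (assumed)
        x = (1 - gamma) * x + gamma * v
    return x

x_fw = frank_wolfe_ball(radius=1.0, iters=500)
print("objective after 500 FW steps:", f(x_fw))
```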
Path optimization method for the sign problem
We propose a path optimization method (POM) to evade the sign problem in
Monte Carlo calculations for complex actions. Among many approaches to the sign
problem, the Lefschetz-thimble path-integral method and the complex Langevin
method are promising and extensively discussed. In these methods, real field
variables are complexified and the integration manifold is determined by the
flow equations or stochastically sampled. When there are singular points of the
action or multiple critical points near the original integration surface, however,
we risk encountering the residual and global sign problems or the
singular drift term problem. One of the ways to avoid the singular points is to
optimize the integration path so that it does not hit the singular points
of the Boltzmann weight. By parametrizing the one-dimensional integration path
in the complex plane and by optimizing it to enhance the average
phase factor, we demonstrate that we can avoid the sign problem in a
one-variable toy model for which the complex Langevin method is found to fail.
In these proceedings, we propose the POM and discuss how we can avoid the sign
problem in a toy model. We also discuss the possibility of utilizing a neural
network to optimize the path.
Comment: Talk given at the 35th International Symposium on Lattice Field
Theory, 18-24 June 2017, Granada, Spain. 8 pages, 4 figures (references are
updated in v2)
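A minimal one-variable sketch of the idea: deform the real integration path to z(x) = x + i f(x) with f a small even polynomial, and tune its coefficients to maximize the average phase factor of the reweighted Boltzmann weight, including the Jacobian dz/dx. The Gaussian-plus-linear toy action, the polynomial ansatz, and the Nelder-Mead optimizer are assumptions; they are not the toy model or the optimization scheme used in the proceedings.

```python
import numpy as np
from scipy.optimize import minimize

# Toy complex action (an assumption, not the toy model of the proceedings):
# S(z) = 0.5 * z^2 + i * h * z with a real coupling h.
h = 2.0
S = lambda z: 0.5 * z**2 + 1j * h * z

x = np.linspace(-8.0, 8.0, 4001)            # uniform grid along the real direction

def average_phase_factor(params):
    """Deform the path to z(x) = x + i f(x), with f an even polynomial
    (assumed ansatz), and return |int dz e^{-S}| / int |dz e^{-S}|,
    where dz = (1 + i f'(x)) dx accounts for the Jacobian of the deformation."""
    c0, c2 = params
    f_x = c0 + c2 * x**2
    df_x = 2.0 * c2 * x
    z = x + 1j * f_x
    w = np.exp(-S(z)) * (1.0 + 1j * df_x)     # reweighted Boltzmann weight on the path
    return np.abs(w.sum()) / np.abs(w).sum()  # grid spacing cancels in the ratio

# Maximize the average phase factor over the two path parameters.
res = minimize(lambda p: -average_phase_factor(p), x0=[0.0, 0.0], method="Nelder-Mead")

print("phase factor on the real axis :", average_phase_factor([0.0, 0.0]))
print("phase factor on optimized path:", average_phase_factor(res.x), "params:", res.x)
```

For this particular toy action the best deformation within the ansatz is a constant imaginary shift: on the real axis the average phase factor is exp(-h^2/2), while on the optimized path it approaches 1, so the sign problem of the toy model disappears.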
Leitmann's direct method for fractional optimization problems
Based on a method introduced by Leitmann [Internat. J. Non-Linear Mech. 2
(1967), 55--59], we exhibit exact solutions for some fractional optimization
problems of the calculus of variations and optimal control.
Comment: Submitted June 16, 2009 and accepted March 15, 2010 for publication
in Applied Mathematics and Computation.
The q-gradient method for global optimization
The q-gradient is an extension of the classical gradient vector based on the
concept of Jackson's derivative. Here we introduce a preliminary version of the
q-gradient method for unconstrained global optimization. The main idea behind
our approach is the use of the negative of the q-gradient of the objective
function as the search direction. In this sense, the method proposed here is a
generalization of the well-known steepest descent method. The use of Jackson's
derivative has proved to be an effective mechanism for escaping from local
minima. The q-gradient method is complemented with strategies to generate the
parameter q and to compute the step length in such a way that the search process
gradually shifts from global in the beginning to almost local search in the
end. For testing this new approach, we considered six commonly used test
functions and compared our results with three Genetic Algorithms (GAs)
considered effective in optimizing multidimensional unimodal and multimodal
functions. For the multimodal test functions, the q-gradient method
outperformed the GAs, reaching the minimum with better accuracy and fewer
function evaluations.
Comment: 12 pages, 1 figure
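A minimal sketch of a q-gradient step built from Jackson's difference quotient, with q drawn near 1 with a spread that decays over the run so that the search shifts from global-ish to local. The Gaussian q schedule, the normalized fixed step, and the Rastrigin test function are assumptions standing in for the paper's specific strategies for generating q and the step length.

```python
import numpy as np

rng = np.random.default_rng(3)

def q_partial(f, x, i, q):
    """Jackson-derivative estimate of the i-th partial derivative:
    (f(..., q*x_i, ...) - f(x)) / ((q - 1) * x_i), which tends to the
    classical partial derivative as q -> 1."""
    if np.isclose(q, 1.0) or np.isclose(x[i], 0.0):
        xp = x.copy(); xp[i] += 1e-8          # fall back to a finite difference
        return (f(xp) - f(x)) / 1e-8
    xq = x.copy(); xq[i] *= q
    return (f(xq) - f(x)) / ((q - 1.0) * x[i])

def q_gradient_descent(f, x0, iters=500, step=0.05, sigma0=0.5):
    """Minimal sketch of a q-gradient method: move along the negative
    q-gradient, drawing q near 1 with a spread that shrinks over the run
    (assumed schedule), so the search goes from global to almost local."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        sigma = sigma0 * (1.0 - k / iters)    # spread of q decays to zero
        q = 1.0 + sigma * rng.normal()
        g = np.array([q_partial(f, x, i, q) for i in range(x.size)])
        norm = np.linalg.norm(g)
        if norm > 0:
            x = x - step * g / norm           # normalized fixed step (assumed choice)
    return x

# Example on the multimodal Rastrigin function (a stand-in test problem).
rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
x_best = q_gradient_descent(rastrigin, x0=rng.uniform(-3, 3, size=2))
print("final point:", x_best, " value:", rastrigin(x_best))
```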
