
    A Framework for Population-Based Stochastic Optimization on Abstract Riemannian Manifolds

    We present Extended Riemannian Stochastic Derivative-Free Optimization (Extended RSDFO), a novel population-based stochastic optimization algorithm on Riemannian manifolds that addresses the locality and implicit assumptions of manifold optimization in the literature. We begin by investigating the information-geometrical structure of statistical models over Riemannian manifolds. This establishes a geometrical framework for Extended RSDFO using both the statistical geometry of the decision space and the Riemannian geometry of the search space. We construct locally inherited probability distributions via an orientation-preserving diffeomorphic bundle morphism, and then extend the information-geometrical structure to mixture densities over totally bounded subsets of manifolds. The former relates the information geometry of the decision space to the local point estimations on the search space manifold. The latter overcomes the locality of parametric probability distributions on Riemannian manifolds. We then construct Extended RSDFO and study its structure and properties from a geometrical perspective. We show that Extended RSDFO's expected fitness improves monotonically and that it eventually converges globally in finitely many steps on connected compact Riemannian manifolds. Extended RSDFO is compared to state-of-the-art manifold optimization algorithms on multi-modal optimization problems over a variety of manifolds. In particular, we perform a novel synthetic experiment on Jacob's ladder to motivate and necessitate manifold optimization. Jacob's ladder is a non-compact manifold of countably infinite genus, which cannot be expressed as polynomial constraints and does not have a global representation in an ambient Euclidean space. Optimization problems on Jacob's ladder thus cannot be addressed by traditional (constrained) optimization methods on Euclidean spaces.
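
    To make the general setting concrete, the sketch below shows a generic population-based, derivative-free search on the unit sphere (a simple Riemannian manifold), using tangent-space sampling and a retraction. It is not the Extended RSDFO algorithm described above; the objective, retraction, population size, and step-size schedule are illustrative assumptions.

```python
# Minimal sketch: generic population-based derivative-free search on the
# unit sphere S^{d-1}. Candidates are sampled in the tangent space at the
# current center and mapped back to the manifold with a retraction.
# This is NOT the paper's Extended RSDFO; all names/parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def retract(x, v):
    """Retraction on the sphere: step along v in the ambient space, renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

def tangent_project(x, v):
    """Orthogonal projection of an ambient vector v onto the tangent space at x."""
    return v - np.dot(x, v) * x

def fitness(x):
    """An illustrative multi-modal objective restricted to the sphere."""
    return np.sum(x**2 - 0.5 * np.cos(8 * np.pi * x))

def population_search(dim=3, pop_size=30, sigma=0.3, iters=200):
    center = rng.normal(size=dim)
    center /= np.linalg.norm(center)            # random starting point on the sphere
    best_x, best_f = center, fitness(center)
    for _ in range(iters):
        # Sample a population of tangent perturbations, retract to the manifold.
        candidates = [retract(center, sigma * tangent_project(center, rng.normal(size=dim)))
                      for _ in range(pop_size)]
        values = [fitness(c) for c in candidates]
        i = int(np.argmin(values))
        if values[i] < best_f:
            best_x, best_f = candidates[i], values[i]
            center = best_x                     # greedy move of the search center
        sigma *= 0.99                           # slowly shrink the sampling radius
    return best_x, best_f

if __name__ == "__main__":
    x, f = population_search()
    print("best point on the sphere:", x, "fitness:", f)
```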

    Manifold Optimization Over the Set of Doubly Stochastic Matrices: A Second-Order Geometry

    Convex optimization is a well-established research area with applications in almost all fields. Over the decades, multiple approaches have been proposed to solve convex programs. The development of interior-point methods allowed solving a more general set of convex programs known as semi-definite programs and second-order cone programs. However, it has been established that these methods are excessively slow in high dimensions, i.e., they suffer from the curse of dimensionality. On the other hand, optimization algorithms on manifolds have shown great ability to find solutions to nonconvex problems in reasonable time. This paper addresses a subset of convex optimization problems with a different approach. The main idea behind Riemannian optimization is to view the constrained optimization problem as an unconstrained one over a restricted search space. The paper introduces three manifolds for solving convex programs under particular box constraints. The manifolds, called the doubly stochastic, the symmetric, and the definite multinomial manifolds, generalize the simplex, also known as the multinomial manifold. The proposed manifolds and algorithms are well-adapted to solving convex programs in which the variable of interest is a multidimensional probability distribution function. Theoretical analysis and simulation results attest to the efficiency of the proposed method over state-of-the-art methods. In particular, they reveal that the proposed framework outperforms conventional generic and specialized solvers, especially in high dimensions.
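
    As an illustration of the "constrained problem viewed as an unconstrained manifold problem" idea, the sketch below runs Riemannian gradient descent on the multinomial manifold (the interior of the probability simplex) with the Fisher information metric. The doubly stochastic, symmetric, and definite multinomial manifolds of the paper generalize this setting; the quadratic objective, step size, and stopping rule here are illustrative assumptions.

```python
# Minimal sketch: Riemannian gradient descent on the multinomial manifold
# (probability simplex) with the Fisher information metric. Illustrative only;
# the paper's manifolds and second-order methods are more general.
import numpy as np

def simplex_riemannian_descent(A, b, steps=500, lr=0.05):
    """Minimize f(x) = 0.5 x^T A x - b^T x over the probability simplex."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)                     # start at the simplex center
    for _ in range(steps):
        egrad = A @ x - b                       # Euclidean gradient of f
        c = np.dot(x, egrad)
        rgrad = x * egrad - c * x               # Riemannian gradient (Fisher metric)
        if np.linalg.norm(rgrad) < 1e-10:
            break
        # Retraction on the multinomial manifold: multiplicative update,
        # then renormalize back onto the simplex.
        x = x * np.exp(-lr * (egrad - c))       # equals x * exp(-lr * rgrad / x)
        x = x / x.sum()
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = rng.normal(size=(5, 5))
    A = M @ M.T + np.eye(5)                     # positive definite, so f is convex
    b = rng.normal(size=5)
    x = simplex_riemannian_descent(A, b)
    print("minimizer on the simplex:", x, " sum =", x.sum())
```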

    Linear PDEs and eigenvalue problems corresponding to ergodic stochastic optimization problems on compact manifolds

    We consider long-term average, or 'ergodic', optimal control problems with a special structure: control is exerted in all directions, and the control costs are proportional to the square of the norm of the control field with respect to the metric induced by the noise. The long-term stochastic dynamics on the manifold are completely characterized by the long-term density ρ and the long-term current density J. As such, control problems may be reformulated as variational problems over ρ and J. We discuss several optimization problems: the problem in which both ρ and J are varied freely, the problem in which ρ is fixed, and the one in which J is fixed. These problems lead to different kinds of operator problems: linear PDEs in the first two cases and a nonlinear PDE in the latter case. These results are obtained through a variational principle using infinite-dimensional Lagrange multipliers. In the case where the initial dynamics are reversible, we obtain the result that the optimally controlled diffusion is also symmetrizable. The particular case of constraining the dynamics of the optimally controlled process to be reversible leads to a linear eigenvalue problem for the square root of the density process.
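
    To give a feel for the kind of linear eigenvalue problem mentioned at the end of the abstract, the sketch below discretizes a Schrödinger-type operator on the circle S^1 with periodic finite differences, computes the principal eigenpair, and squares the eigenfunction to obtain a candidate stationary density. The operator, the potential, and the discretization are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: a principal-eigenvalue problem on the circle S^1,
#     (-0.5 * d^2/dx^2 + V(x)) phi = lambda * phi   (periodic boundary),
# with the density read off as rho = phi^2 (normalized). Illustrative only.
import numpy as np

def principal_eigenpair(n=400, L=2 * np.pi):
    h = L / n
    x = np.arange(n) * h
    V = np.cos(x)                       # an illustrative smooth potential on S^1
    H = np.zeros((n, n))
    for i in range(n):
        H[i, i] = 1.0 / h**2 + V[i]     # diagonal of -0.5 * second difference + V
        H[i, (i + 1) % n] = -0.5 / h**2 # periodic off-diagonal couplings
        H[i, (i - 1) % n] = -0.5 / h**2
    evals, evecs = np.linalg.eigh(H)
    lam, phi = evals[0], evecs[:, 0]    # principal (smallest) eigenpair
    rho = phi**2
    rho = rho / (rho.sum() * h)         # normalize so that rho integrates to 1
    return lam, x, rho

if __name__ == "__main__":
    lam, x, rho = principal_eigenpair()
    h = x[1] - x[0]
    print("principal eigenvalue:", lam)
    print("rho integrates to:", rho.sum() * h)
```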