Iteration-Complexity of the Subgradient Method on Riemannian Manifolds with Lower Bounded Curvature
The subgradient method for convex optimization problems on complete
Riemannian manifolds with lower bounded sectional curvature is analyzed in this
paper. Iteration-complexity bounds of the subgradient method with exogenous
step-size and Polyak's step-size are established, completing and improving
recent results on the subject.
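The paper itself is not reproduced here; as a rough illustration of the method being analyzed, the following is a minimal sketch of the subgradient method with Polyak's step-size in the Euclidean (zero-curvature) special case of a Riemannian manifold. The test problem f(x) = ||x||_1 and all parameter choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def subgradient_polyak(f, subgrad, x0, f_star, iters=500):
    # Polyak's step-size: t_k = (f(x_k) - f*) / ||g_k||^2,
    # which requires knowing the optimal value f*.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = subgrad(x)
        gsq = float(np.dot(g, g))
        if gsq == 0.0:          # a zero subgradient: x is optimal
            break
        x = x - (f(x) - f_star) / gsq * g
    return x

# Illustrative nonsmooth convex problem: f(x) = ||x||_1, minimized at 0.
f = lambda x: float(np.abs(x).sum())
subgrad = lambda x: np.sign(x)
x_final = subgradient_polyak(f, subgrad, [3.0, -2.0], f_star=0.0)
```

On a general Riemannian manifold the update `x - t * g` would be replaced by the exponential map applied to `-t * g` in the tangent space at `x`.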
Enlargement of Monotone Vector Fields and an Inexact Proximal Point Method for Variational Inequalities in Hadamard Manifolds
In this paper an inexact proximal point method for variational inequalities
in Hadamard manifolds is introduced and its convergence properties are
studied. The main tool used for presenting the method is the concept of
enlargement of monotone vector fields, which generalizes the concept of
enlargement of monotone operators from the linear setting to the Riemannian
context. As an application, an inexact proximal point method for constrained
optimization problems is obtained.
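As a hedged illustration of the underlying iteration (in its exact, Euclidean form rather than the inexact Hadamard-manifold version studied in the paper), here is a minimal proximal point sketch for f(x) = |x|, whose proximal operator is soft-thresholding. The step length and test value are illustrative assumptions.

```python
import numpy as np

def prox_abs(v, lam):
    # Proximal operator of f(x) = |x|: soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def proximal_point(prox, x0, lam=0.5, iters=20):
    # Exact proximal point iteration:
    # x_{k+1} = argmin_y f(y) + ||y - x_k||^2 / (2*lam) = prox_{lam f}(x_k)
    x = x0
    for _ in range(iters):
        x = prox(x, lam)
    return x

x_pp = proximal_point(prox_abs, 3.0, lam=0.5, iters=20)
```

The inexact variant in the paper allows each subproblem to be solved approximately, with the error controlled through the enlargement of the underlying monotone vector field.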
Convergence Analysis of Gradient Algorithms on Riemannian Manifolds Without Curvature Constraints and Application to Riemannian Mass
We study the convergence of the gradient algorithm (employing general
step sizes) for optimization problems on general Riemannian manifolds
(without curvature constraints). Under the assumption of local
convexity/quasi-convexity (resp. weak sharp minima), local/global convergence
(resp. linear convergence) results are established. As an application, the
linear convergence properties of the gradient algorithm employing constant
step sizes and Armijo step sizes for finding Riemannian centers of mass are
explored, which in particular extend and/or improve the corresponding results
in \cite{Afsari2013}.
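As a hedged illustration of an Armijo-type gradient method, here is a minimal Euclidean sketch applied to the center-of-mass problem (the flat special case of the Riemannian setting above). The objective, backtracking constants, and test data are illustrative assumptions, not from the paper.

```python
import numpy as np

def armijo_gd(points, x0, beta=0.5, sigma=1e-4, iters=100):
    # Minimize f(x) = (1/2n) * sum_i ||x - a_i||^2, whose unique
    # minimizer is the (Euclidean) center of mass of the points.
    a = np.asarray(points, dtype=float)
    x = np.asarray(x0, dtype=float)
    f = lambda y: 0.5 * float(np.mean(np.sum((a - y) ** 2, axis=1)))
    for _ in range(iters):
        g = x - a.mean(axis=0)          # gradient of f at x
        if np.linalg.norm(g) < 1e-12:
            break
        t = 1.0
        # Backtrack until the Armijo sufficient-decrease condition holds.
        while f(x - t * g) > f(x) - sigma * t * float(np.dot(g, g)):
            t *= beta
        x = x - t * g
    return x

pts = [[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]]
x_am = armijo_gd(pts, [10.0, -5.0])
```

On a manifold, the same backtracking rule is applied along geodesics via the exponential map, and the center of mass is defined with geodesic distances.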
Subgradient Descent Learns Orthogonal Dictionaries
This paper concerns dictionary learning, i.e., sparse coding, a fundamental
representation learning problem. We show that a subgradient descent algorithm,
with random initialization, can provably recover orthogonal dictionaries on a
natural nonsmooth, nonconvex minimization formulation of the problem,
under mild statistical assumptions on the data. This is in contrast to previous
provable methods that require either expensive computation or delicate
initialization schemes. Our analysis develops several tools for characterizing
landscapes of nonsmooth functions, which might be of independent interest for
provable training of deep networks with nonsmooth activations (e.g., ReLU),
among numerous other applications. Preliminary experiments corroborate our
analysis and show that our algorithm works well empirically in recovering
orthogonal dictionaries.
Iteration-complexity and asymptotic analysis of steepest descent method for multiobjective optimization on Riemannian manifolds
The steepest descent method for multiobjective optimization on Riemannian
manifolds with lower bounded sectional curvature is analyzed in this paper. The
aim of the paper is twofold. Firstly, an asymptotic analysis of the method is
presented with three different finite procedures for determining the stepsize,
namely, Lipschitz stepsize, adaptive stepsize and Armijo-type stepsize. The
second aim is to present, by assuming that the Jacobian of the objective
function is componentwise Lipschitz continuous, iteration-complexity bounds for
the method with these three stepsize strategies. In addition, some examples
are presented to emphasize the importance of working in this new context.
Numerical experiments are provided to illustrate the effectiveness of the
method in this new setting and to certify the obtained theoretical results.
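As a hedged illustration of steepest descent for multiobjective optimization (in the Euclidean special case, with a constant stepsize rather than the three strategies analyzed above), the sketch below uses the standard descent direction: the negative of the minimum-norm element of the convex hull of the objective gradients. The two quadratic objectives and all constants are illustrative assumptions.

```python
import numpy as np

def min_norm_convex_combo(g1, g2):
    # Minimum-norm point in conv{g1, g2}; closed form for two vectors.
    d = g2 - g1
    denom = float(np.dot(d, d))
    t = 0.5 if denom == 0.0 else float(np.clip(-np.dot(g1, d) / denom, 0.0, 1.0))
    return (1.0 - t) * g1 + t * g2

def multiobjective_sd(grads, x0, step=0.1, iters=500):
    # Steepest descent: move against the minimum-norm convex
    # combination of the gradients; stop at Pareto-critical points.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = min_norm_convex_combo(*[grad(x) for grad in grads])
        if np.linalg.norm(g) < 1e-10:   # Pareto-critical
            break
        x = x - step * g
    return x

# Two objectives f_i(x) = ||x - c_i||^2 / 2; the Pareto set is the
# segment between c_1 = (0,0) and c_2 = (1,0).
grads = [lambda x: x - np.array([0.0, 0.0]),
         lambda x: x - np.array([1.0, 0.0])]
x_mo = multiobjective_sd(grads, [0.5, 1.0])
```

For more than two objectives the minimum-norm element is found by solving a small quadratic program over the simplex.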
An analysis of the superiorization method via the principle of concentration of measure
The superiorization methodology is intended to work with the input data of
constrained minimization problems, i.e., a target function and a constraint
set. However, it is based on a way of thinking antipodal to the one that
guides constrained minimization methods. Instead of adapting unconstrained
minimization algorithms to handle constraints, it adapts feasibility-seeking
algorithms to reduce (not necessarily minimize) target function values. This is
done while retaining the feasibility-seeking nature of the algorithm and
without paying a high computational price. A guarantee that the local target
function reduction steps properly accumulate to a global target function value
reduction is still missing in spite of an ever-growing body of publications
that supply evidence of the success of the superiorization method in various
problems. We propose an analysis based on the principle of concentration of
measure that attempts to address this missing guarantee for the
superiorization method.
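As a hedged illustration of the superiorization pattern described above, here is a minimal sketch: a feasibility-seeking step (projection onto a hyperplane, then a box) is interleaved with bounded, summable perturbations that reduce a target function. The constraints, target f(x) = ||x||^2, and the summable coefficients 0.5^k are illustrative assumptions, not from the paper.

```python
import numpy as np

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def project_hyperplane(x, a, b):
    # Orthogonal projection onto {x : a.x = b}.
    return x - (np.dot(a, x) - b) / np.dot(a, a) * a

def superiorized_feasibility(x0, a, b, lo, hi, iters=50):
    # Target f(x) = ||x||^2; perturbations use summable steps 0.5**k,
    # so the feasibility-seeking behavior is retained.
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = 2.0 * x                      # gradient of the target
        gn = np.linalg.norm(g)
        if gn > 0.0:
            x = x - (0.5 ** k) * g / gn  # bounded perturbation step
        # Feasibility-seeking step of the basic algorithm.
        x = project_box(project_hyperplane(x, a, b), lo, hi)
    return x

a = np.array([1.0, 1.0])
b = 1.0
x_sup = superiorized_feasibility([1.0, 1.0], a, b, lo=0.0, hi=1.0)
```

The open question discussed in the abstract is precisely whether such local reductions provably accumulate to a global reduction of the target value.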
International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book
The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions.
This book comprises the full conference program. It contains, in particular, the scientific program in survey style as well as with all details, and information on the social program, the venue, special meetings, and more.
Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview
Substantial progress has been made recently on developing provably accurate
and efficient algorithms for low-rank matrix factorization via nonconvex
optimization. While conventional wisdom often takes a dim view of nonconvex
optimization algorithms due to their susceptibility to spurious local minima,
simple iterative methods such as gradient descent have been remarkably
successful in practice. The theoretical footings, however, had been largely
lacking until recently.
In this tutorial-style overview, we highlight the important role of
statistical models in enabling efficient nonconvex optimization with
performance guarantees. We review two contrasting approaches: (1) two-stage
algorithms, which consist of a tailored initialization step followed by
successive refinement; and (2) global landscape analysis and
initialization-free algorithms. Several canonical matrix factorization problems
are discussed, including but not limited to matrix sensing, phase retrieval,
matrix completion, blind deconvolution, robust principal component analysis,
phase synchronization, and joint alignment. Special care is taken to illustrate
the key technical insights underlying their analyses. This article serves as a
testament that the integrated consideration of optimization and statistics
leads to fruitful research findings.
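As a hedged illustration of the two-stage approach reviewed above (a tailored initialization followed by successive refinement), here is a minimal sketch of factored gradient descent for symmetric low-rank approximation with spectral initialization. The objective, step size, and synthetic data are illustrative assumptions, not a method from the article.

```python
import numpy as np

def factored_gd(M, r, eta=0.01, iters=500):
    # Minimize f(X) = ||X X^T - M||_F^2 over X in R^{n x r},
    # starting from a spectral initialization (scaled top-r eigenvectors).
    w, V = np.linalg.eigh(M)
    idx = np.argsort(w)[::-1][:r]
    X = V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
    for _ in range(iters):
        G = 4.0 * (X @ X.T - M) @ X   # gradient of f at X
        X = X - eta * G
    return X

rng = np.random.default_rng(1)
Z = rng.standard_normal((5, 2))
M = Z @ Z.T                            # rank-2 PSD ground truth
X_hat = factored_gd(M, r=2)
err = float(np.linalg.norm(X_hat @ X_hat.T - M))
```

In the statistical settings surveyed in the article, `M` is only observed indirectly (e.g., through random measurements or missing entries), which is where the interplay between statistical models and optimization guarantees becomes essential.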
Variational Methods and Numerical Algorithms for Geometry Processing
In this work we address the problem of shape partitioning which enables the decomposition of an arbitrary topology object into smaller and more manageable pieces called partitions. Several applications in Computer Aided Design (CAD), Computer Aided Manufacturing (CAM) and Finite Element Analysis (FEA) rely on object partitioning that provides high-level insight into the data, useful for further processing. In particular, we are interested in 2-manifold partitioning, since the boundaries of tangible physical objects can be mathematically defined by two-dimensional manifolds embedded into three-dimensional Euclidean space. To that aim, a preliminary shape analysis is performed based on shape characterizing scalar/vector functions defined on a closed Riemannian 2-manifold. The detected shape features are used to drive the partitioning process into two directions – a human-based partitioning and a thickness-based partitioning. In particular, we focus on the Shape Diameter Function, which recovers volumetric information from the surface, thus providing a natural link between the object’s volume and its boundary; we consider the spectral decomposition of suitably-defined affinity matrices, which provides multi-dimensional spectral coordinates of the object’s vertices; and we introduce a novel basis of sparse and localized quasi-eigenfunctions of the Laplace-Beltrami operator called Lp Compressed Manifold Modes.
The partitioning problem, which can be considered as a particular inverse problem, is formulated as a variational regularization problem whose solution provides the so-called piecewise constant/smooth partitioning function. The functional to be minimized consists of a fidelity term to a given data set and a regularization term which promotes sparsity, such as, for example, the Lp norm with p ∈ (0, 1) and other parameterized, non-convex penalty functions with a positive parameter that controls the degree of non-convexity.
The proposed partitioning variational models, inspired by the well-known Mumford-Shah models for recovering piecewise smooth/constant functions, incorporate a non-convex regularizer for minimizing the boundary lengths. The derived non-convex non-smooth optimization problems are solved by efficient numerical algorithms based on Proximal Forward-Backward Splitting and Alternating Direction Method of Multipliers strategies, also employing Convex Non-Convex approaches.
Finally, we investigate the application of surface partitioning to patch-based surface quadrangulation. To that aim, the 2-manifold is first partitioned into zero-genus patches that capture the object’s arbitrary topology; then for each patch a quad-based minimal surface is created and evolved by a Lagrangian-based PDE evolution model to the original shape to obtain the final semi-regular quad mesh. The evolution is supervised by asymptotically area-uniform tangential redistribution for the quads.
Mathematical and Data-driven Pattern Representation with Applications in Image Processing, Computer Graphics, and Infinite Dimensional Dynamical Data Mining
Patterns represent the spatial or temporal regularities intrinsic to various phenomena in nature, society, art, and science. From rigid ones with well-defined generative rules to flexible ones implied by unstructured data, patterns can be assigned to a spectrum. At one extreme, patterns are completely described by algebraic systems where each individual pattern is obtained by repeatedly applying simple operations on primitive elements. At the other extreme, patterns are perceived as visual or frequency regularities without any prior knowledge of the underlying mechanisms. In this thesis, we aim at demonstrating some mathematical techniques for representing patterns traversing the aforementioned spectrum, which leads to qualitative analysis of the patterns' properties and quantitative prediction of the modeled behaviors from various perspectives. We investigate lattice patterns from material science, shape patterns from computer graphics, submanifold patterns encountered in point cloud processing, color perception patterns applied in underwater image processing, dynamic patterns from spatial-temporal data, and low-rank patterns exploited in medical image reconstruction. For different patterns and based on their dependence on structured or unstructured data, we present suitable mathematical representations using techniques ranging from group theory to deep neural networks.