Inverse Optimization of Convex Risk Functions
The theory of convex risk functions has now been well established as the
basis for identifying the families of risk functions that should be used in
risk averse optimization problems. Despite its theoretical appeal, the
implementation of a convex risk function remains difficult, as there is little
guidance regarding how a convex risk function should be chosen so that it also
well represents one's own risk preferences. In this paper, we address this
issue through the lens of inverse optimization. Specifically, given solution
data from some (forward) risk-averse optimization problems, we develop an
inverse optimization framework that generates a risk function that renders the
solutions optimal for the forward problems. The framework incorporates the
well-known properties of convex risk functions, namely monotonicity,
convexity, translation invariance, and law invariance, as general information
about candidate risk functions, and it incorporates feedback from
individuals, namely an initial estimate of the risk function and pairwise
comparisons among random losses, as more specific information. Our framework
is particularly novel in that, unlike classical inverse optimization, it
makes no parametric assumption about the risk function, i.e., it is
non-parametric. We show how the resulting inverse optimization problems can be
reformulated as convex programs and are polynomially solvable if the
corresponding forward problems are polynomially solvable. We illustrate the
imputed risk functions in a portfolio selection problem and demonstrate their
practical value using real-life data.
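To make the elicitation step concrete, here is a minimal sketch of how pairwise-comparison feedback can pin down a law-invariant convex risk function. The paper's framework is non-parametric; purely for brevity, this sketch restricts the candidate to a mixture of CVaRs (a Kusuoka-style family), and the scenario losses, CVaR levels, and comparison pairs are all illustrative assumptions, not the paper's data or formulation.

```python
# Sketch: impute rho(X) = sum_k w_k * CVaR_{alpha_k}(X) from pairwise comparisons.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_scenarios, n_losses = 200, 6
L = rng.normal(size=(n_losses, n_scenarios))  # rows: equally likely loss scenarios

alphas = np.array([0.5, 0.75, 0.9, 0.95])     # assumed grid of CVaR levels

def cvar(losses, alpha):
    """Empirical CVaR_alpha of each row: mean of the worst (1 - alpha) fraction."""
    k = max(1, int(round((1 - alpha) * losses.shape[1])))
    return np.sort(losses, axis=1)[:, -k:].mean(axis=1)

C = np.stack([cvar(L, a) for a in alphas], axis=1)   # (n_losses, n_levels)

# Elicited comparisons: (i, j) means "loss i is at most as risky as loss j".
pairs = [(0, 1), (2, 3), (4, 5), (0, 5)]

w = cp.Variable(len(alphas), nonneg=True)            # mixture weights on the simplex
s = cp.Variable(len(pairs), nonneg=True)             # slack absorbs inconsistent answers
cons = [cp.sum(w) == 1]
cons += [C[i] @ w <= C[j] @ w + s[p] for p, (i, j) in enumerate(pairs)]
cp.Problem(cp.Minimize(cp.sum(s)), cons).solve()
print("imputed CVaR mixture weights:", np.round(w.value, 3))
```

With zero optimal slack, the imputed mixture reproduces every elicited preference; positive slack flags comparisons that no risk function in this family can honor.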
Variance Reduction for Faster Non-Convex Optimization
We consider the fundamental problem in non-convex optimization of efficiently
reaching a stationary point. In contrast to the convex case, in the long
history of this basic problem, the only known theoretical results on
first-order non-convex optimization remain full gradient descent, which
converges in $O(1/\varepsilon)$ iterations for smooth objectives, and
stochastic gradient descent, which converges in $O(1/\varepsilon^2)$
iterations for objectives that are a sum of smooth functions.
We provide the first improvement in this line of research. Our result is
based on the variance reduction trick recently introduced to convex
optimization, as well as a brand new analysis of variance reduction that is
suitable for non-convex optimization. For objectives that are a sum of smooth
functions, our first-order minibatch stochastic method converges at an
$O(1/\varepsilon)$ rate and is faster than full gradient descent by a factor
of $\Omega(n^{1/3})$.
We demonstrate the effectiveness of our methods on empirical risk
minimization with non-convex loss functions and on training neural nets.
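As a rough illustration of the variance-reduction trick described above (not the authors' exact algorithm or tuning), here is an SVRG-style loop for a non-convex finite sum: each epoch records a full gradient at a snapshot, and each inner stochastic step corrects the sampled gradient with the snapshot's, so the estimator stays unbiased while its variance shrinks as the iterate nears the snapshot. The squared-sigmoid loss, step size, and epoch length are illustrative choices.

```python
# Sketch of SVRG on a non-convex finite sum f(x) = (1/n) sum_i f_i(x).
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 10
A = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(x, idx):
    """Gradient of the non-convex loss f_i(x) = (sigmoid(a_i.x) - y_i)^2 over idx."""
    s = sigmoid(A[idx] @ x)
    return ((2 * (s - y[idx]) * s * (1 - s))[:, None] * A[idx]).mean(axis=0)

def svrg(x0, step=0.2, epochs=20, m=None):
    x, m = x0.copy(), m or n
    for _ in range(epochs):
        snapshot = x.copy()
        mu = grad(snapshot, np.arange(n))        # full gradient at the snapshot
        for _ in range(m):
            i = rng.integers(n, size=1)
            # Variance-reduced estimator: unbiased for the full gradient,
            # with variance vanishing as x approaches the snapshot.
            x -= step * (grad(x, i) - grad(snapshot, i) + mu)
    return x

x = svrg(np.zeros(d))
print("final full-gradient norm:", np.linalg.norm(grad(x, np.arange(n))))
```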
Optimization of Convex Risk Functions
We consider optimization problems involving convex risk functions. By employing techniques of convex analysis and optimization theory in vector spaces of measurable functions, we develop new representation theorems for risk models, as well as optimality and duality theory for problems involving risk functions.
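One classical representation theorem in this spirit is the Rockafellar-Uryasev formula $\mathrm{CVaR}_\alpha(X) = \min_t \{\, t + \mathbb{E}[(X - t)_+]/(1-\alpha) \,\}$, which converts a risk-averse problem into an ordinary convex program. A minimal sketch, with illustrative scenario data and a long-only budget constraint (not an example from the paper):

```python
# Minimize CVaR of portfolio losses via the Rockafellar-Uryasev representation.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n_scenarios, n_assets, alpha = 1000, 5, 0.95
R = rng.normal(0.01, 0.05, size=(n_scenarios, n_assets))   # scenario returns

w = cp.Variable(n_assets, nonneg=True)                     # long-only weights
t = cp.Variable()                                          # VaR-level auxiliary variable
losses = -R @ w
cvar = t + cp.sum(cp.pos(losses - t)) / ((1 - alpha) * n_scenarios)

cp.Problem(cp.Minimize(cvar), [cp.sum(w) == 1]).solve()
print("optimal weights:", np.round(w.value, 3))
print("minimized CVaR:", round(cvar.value, 4))
```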
Set optimization - a rather short introduction
Recent developments in set optimization are surveyed and extended, including
various set relations, fundamental constructions of a convex analysis for
set- and vector-valued functions, and duality for set optimization problems.
Extensive sections with bibliographical comments summarize the state of the
art. Applications to vector optimization and financial risk measures are
discussed, along with algorithmic approaches to set optimization problems.
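For a flavor of the set relations mentioned above, consider the lower ("l-type") set less relation: $A \preceq_l B$ iff $B \subseteq A + C$ for an ordering cone $C$. A minimal sketch for finite sets with $C = \mathbb{R}^n_+$, where the relation reduces to componentwise domination (the sample sets are illustrative):

```python
# Check the l-type set less relation for finite sets with cone C = R^n_+.
import numpy as np

def lower_set_less(A, B):
    """A <=_l B w.r.t. C = R^n_+: every b in B is dominated by some a in A."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    return all(np.any(np.all(A <= b, axis=1)) for b in B)

A = [(0.0, 1.0), (1.0, 0.0)]
B = [(1.0, 1.0), (2.0, 0.5)]
print(lower_set_less(A, B))   # True: both points of B lie in A + R^2_+
print(lower_set_less(B, A))   # False: (0, 1) is dominated by no point of B
```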
Differentially Private Empirical Risk Minimization with Sparsity-Inducing Norms
Differentially private learning is concerned with preserving prediction
quality while limiting the privacy impact on the individuals whose
information is contained in the data. We consider differentially private
risk minimization problems with
regularizers that induce structured sparsity. These regularizers are known to
be convex but they are often non-differentiable. We analyze the standard
differentially private algorithms, such as output perturbation, Frank-Wolfe, and
objective perturbation. Output perturbation is a differentially private
algorithm that is known to perform well for minimizing risks that are strongly
convex. Previous works have derived excess risk bounds that are independent of
the dimensionality. In this paper, we consider a particular class of convex
but non-smooth regularizers that induce structured sparsity, together with
loss functions for generalized linear models. We also consider differentially private Frank-Wolfe
algorithms to optimize the dual of the risk minimization problem. We derive
excess risk bounds for both these algorithms. Both the bounds depend on the
Gaussian width of the unit ball of the dual norm. We also show that objective
perturbation of the risk minimization problems is equivalent to the output
perturbation of a dual optimization problem. This is the first work that
analyzes the dual optimization problems of risk minimization problems in the
context of differential privacy.
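As a concrete reference point for the output perturbation baseline discussed above, here is a minimal sketch of the generic mechanism (not this paper's algorithms or bounds): solve a strongly convex regularized ERM, then release the minimizer plus Gaussian noise scaled to its $\ell_2$-sensitivity $2L/(n\lambda)$ for $L$-Lipschitz losses and a $\lambda$-strongly-convex regularizer. The logistic loss, constants, and privacy parameters below are illustrative assumptions.

```python
# Output perturbation for DP-ERM: exact minimizer + calibrated Gaussian noise.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, d = 1000, 10
X = rng.normal(size=(n, d)) / np.sqrt(d)     # rows scaled so ||x_i|| is ~1
y = rng.choice([-1.0, 1.0], size=n)
lam, eps, delta = 0.1, 1.0, 1e-5

def objective(w):
    # Logistic loss (1-Lipschitz in w when ||x_i|| <= 1) + strongly convex regularizer.
    return np.mean(np.logaddexp(0.0, -y * (X @ w))) + 0.5 * lam * (w @ w)

w_star = minimize(objective, np.zeros(d)).x

sensitivity = 2.0 / (n * lam)                               # 2L/(n*lam) with L = 1
sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
w_private = w_star + rng.normal(scale=sigma, size=d)        # (eps, delta)-DP release
print("noise std:", sigma)
```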