OSQP: An Operator Splitting Solver for Quadratic Programs
We present a general-purpose solver for convex quadratic programs based on
the alternating direction method of multipliers, employing a novel operator
splitting technique that requires the solution of a quasi-definite linear
system with the same coefficient matrix at almost every iteration. Our
algorithm is very robust, placing no requirements on the problem data such as
positive definiteness of the objective function or linear independence of the
constraint functions. It can be configured to be division-free once an initial
matrix factorization is carried out, making it suitable for real-time
applications in embedded systems. In addition, our technique is the first
operator splitting method for quadratic programs able to reliably detect primal
and dual infeasible problems from the algorithm iterates. The method also
supports factorization caching and warm starting, making it particularly
efficient when solving parametrized problems arising in finance, control, and
machine learning. Our open-source C implementation OSQP has a small footprint,
is library-free, and has been extensively tested on many problem instances from
a wide variety of application areas. It is typically ten times faster than
competing interior-point methods, and sometimes much more when factorization
caching or warm start is used. OSQP has already had a large impact, with tens
of thousands of users in both academia and large corporations.
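The splitting described in the abstract can be illustrated with a generic ADMM iteration for the QP minimize 0.5 x'Px + q'x subject to l <= Ax <= u. This is a hedged sketch, not OSQP's actual algorithm: it uses a dense LU factorization of P + rho*A'A instead of OSQP's quasi-definite KKT system, and the parameter value rho=1.0 is arbitrary. It does show the key idea of factoring the coefficient matrix once and reusing it at every iteration.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def admm_qp(P, q, A, l, u, rho=1.0, iters=200):
    """Sketch of ADMM for: minimize 0.5 x'Px + q'x  s.t.  l <= Ax <= u.
    Simplified splitting (not OSQP's exact quasi-definite KKT solve):
    the matrix P + rho*A'A is factored once and the factorization is
    reused every iteration ("factorization caching")."""
    n, m = P.shape[0], A.shape[0]
    M = lu_factor(P + rho * A.T @ A)        # factor once, reuse below
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
    for _ in range(iters):
        # x-step: solve the cached linear system
        x = lu_solve(M, -q + A.T @ (rho * z - y))
        # z-step: project onto the box [l, u]
        z = np.clip(A @ x + y / rho, l, u)
        # y-step: dual ascent on the consensus constraint Ax = z
        y = y + rho * (A @ x - z)
    return x

# Tiny example: minimize 0.5*(x1^2 + x2^2) subject to x1 + x2 = 1,
# whose solution is x = (0.5, 0.5)
P = np.eye(2); q = np.zeros(2)
A = np.array([[1.0, 1.0]]); l = u = np.array([1.0])
x = admm_qp(P, q, A, l, u)
```

Warm starting corresponds to initializing x, z, y from a previous solve instead of zeros; when only q, l, or u change, the cached factorization remains valid.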
Constrained Quadratic Risk Minimization via Forward and Backward Stochastic Differential Equations
In this paper we study a continuous-time stochastic linear quadratic control
problem arising from mathematical finance. We model the asset dynamics with
random market coefficients and portfolio strategies with convex constraints.
Following the convex duality approach, we show that the necessary and
sufficient optimality conditions for both the primal and dual problems can be
written in terms of processes satisfying a system of FBSDEs together with other
conditions. We characterise explicitly the optimal wealth and portfolio
processes as functions of adjoint processes from the dual FBSDEs in a dynamic
fashion and vice versa. We apply the results to solve quadratic risk
minimization problems with cone-constraints and derive the explicit
representations of solutions to the extended stochastic Riccati equations for
such problems.
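For readers unfamiliar with the acronym, an FBSDE couples a forward SDE with a backward equation whose terminal condition depends on the forward state. In generic notation (this is the standard textbook form, not the specific optimality system of the paper):

```latex
\begin{aligned}
dX_t &= b(t, X_t)\,dt + \sigma(t, X_t)\,dW_t, & X_0 &= x_0, \\
dY_t &= -f(t, X_t, Y_t, Z_t)\,dt + Z_t\,dW_t, & Y_T &= g(X_T),
\end{aligned}
```

where the pair (Y, Z) is part of the unknown: Z is the process that makes Y adapted despite the terminal condition being imposed at time T. In the duality approach above, the adjoint processes solving such a system characterise the optimal wealth and portfolio.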
Multistage Portfolio Optimization: A Duality Result in Conic Market Models
We prove a general duality result for multi-stage portfolio optimization
problems in markets with proportional transaction costs. The financial market
is described by Kabanov's model of foreign exchange markets over a finite
probability space and finite-horizon discrete time steps. This framework allows
us to compare vector-valued portfolios under a partial ordering, so that our
model does not require liquidation into some numeraire at terminal time.
We embed the vector-valued portfolio problem into the set-optimization
framework, and generate a problem dual to portfolio optimization. Using recent
results in the development of set optimization, we then show that a strong
duality relationship holds between the problems.
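The partial ordering of vector-valued portfolios mentioned above is induced by a solvency cone. As a hedged illustration (a two-asset toy market with made-up bid/ask prices, not the paper's general Kabanov construction), membership in the solvency cone can be checked with a small feasibility LP: a portfolio is solvent if some nonnegative set of trades brings every position to zero or above.

```python
import numpy as np
from scipy.optimize import linprog

def is_solvent(cash, stock, ask=1.1, bid=0.9):
    """Toy two-asset solvency-cone check (illustrative numbers only).
    Trades: buy b1 >= 0 stock paying `ask` cash each, sell b2 >= 0 stock
    receiving `bid` cash each. Solvent iff some trade leaves both
    positions nonnegative:
        cash - ask*b1 + bid*b2 >= 0   and   stock + b1 - b2 >= 0."""
    # Feasibility LP: minimize 0 subject to A_ub @ [b1, b2] <= b_ub
    res = linprog(c=[0.0, 0.0],
                  A_ub=[[ask, -bid],    #  ask*b1 - bid*b2 <= cash
                        [-1.0, 1.0]],   # -b1 + b2 <= stock
                  b_ub=[cash, stock],
                  bounds=[(0, None), (0, None)])
    return res.status == 0  # 0 = feasible optimum found, 2 = infeasible

print(is_solvent(-1.0, 2.0))  # True: selling 2 stock yields 1.8 cash
print(is_solvent(1.0, -1.0))  # False: covering 1 stock costs 1.1 cash
```

Note that no liquidation into a single numeraire is needed: the check works directly on the vector of positions, which is exactly the point of the partial-ordering formulation above.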
Linear vector optimization and European option pricing under proportional transaction costs
A method for pricing and superhedging European options under proportional
transaction costs based on linear vector optimisation and geometric duality
developed by Löhne & Rudloff (2014) is compared to a special case of the
algorithms for American type derivatives due to Roux & Zastawniak (2014). An
equivalence between these two approaches is established by means of a general
result linking the support function of the upper image of a linear vector
optimisation problem with the lower image of the dual linear optimisation
problem.
An ADMM Algorithm for a Class of Total Variation Regularized Estimation Problems
We present an alternating augmented Lagrangian method for convex optimization
problems where the cost function is the sum of two terms, one that is separable
in the variable blocks, and a second that is separable in the difference
between consecutive variable blocks. Examples of such problems include Fused
Lasso estimation, total variation denoising, and multi-period portfolio
optimization with transaction costs. In each iteration of our method, the first
step involves separately optimizing over each variable block, which can be
carried out in parallel. The second step is not separable in the variables, but
can be carried out very efficiently. We apply the algorithm to segmentation of
data based on changes in mean (l_1 mean filtering) or changes in variance (l_1
variance filtering). In a numerical example, we show that our implementation is
around 10,000 times faster than the generic optimization solver SDPT3.
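The l_1 mean filtering problem mentioned above, a form of 1-D total variation denoising, can be sketched with a generic ADMM loop. This is a minimal illustration rather than the paper's parallel block method, and the values of lambda and rho are arbitrary: the cost splits into a quadratic fidelity term and an l_1 penalty on differences of consecutive entries.

```python
import numpy as np

def soft_threshold(v, k):
    """Elementwise soft-thresholding: the prox operator of k*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def tv_denoise(b, lam, rho=1.0, iters=300):
    """Generic ADMM sketch for: minimize 0.5*||x - b||^2 + lam*||D x||_1,
    where D differences consecutive entries (1-D total variation).
    Not the paper's specialized parallel implementation."""
    n = len(b)
    D = np.diff(np.eye(n), axis=0)                # (n-1) x n difference matrix
    M = np.linalg.inv(np.eye(n) + rho * D.T @ D)  # cached x-step solve
    z = np.zeros(n - 1); y = np.zeros(n - 1)
    for _ in range(iters):
        x = M @ (b + D.T @ (rho * z - y))               # quadratic x-step
        z = soft_threshold(D @ x + y / rho, lam / rho)  # separable z-step
        y = y + rho * (D @ x - z)                       # dual update
    return x

# Noisy piecewise-constant signal: one jump from 0 to 1
rng = np.random.default_rng(0)
b = np.concatenate([np.zeros(20), np.ones(20)]) + 0.1 * rng.standard_normal(40)
x = tv_denoise(b, lam=0.5)
```

The z-step is fully separable across entries (soft-thresholding each difference independently), which is what makes this class of splittings attractive for parallel implementations like the one described in the abstract.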
Parameter Selection and Pre-Conditioning for a Graph Form Solver
In a recent paper, Parikh and Boyd describe a method for solving a convex
optimization problem, where each iteration involves evaluating a proximal
operator and projection onto a subspace. In this paper we address the critical
practical issues of how to select the proximal parameter in each iteration, and
how to scale the original problem variables, so as to achieve reliable
practical performance. The resulting method has been implemented as an
open-source software package called POGS (Proximal Graph Solver), which targets
multi-core and GPU-based systems, and has been tested on a wide variety of
practical problems. Numerical results show that POGS can solve very large
problems (with, say, more than a billion coefficients in the data), to modest
accuracy in a few tens of seconds. As just one example, a radiation treatment
planning problem with around 100 million coefficients in the data can be solved
in a few seconds, compared with around one hour for an interior-point method.
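The subspace projection at the core of the graph form method can be sketched directly. Projecting a point (c, d) onto the graph {(x, y) : y = Ax} reduces to a single linear solve; this is a hedged sketch of the general idea only (POGS itself caches the factorization and combines the projection with the proximal step and the scaling discussed above).

```python
import numpy as np

def project_graph(A, c, d):
    """Project (c, d) onto the subspace {(x, y) : y = A x}.
    Minimizing ||x - c||^2 + ||y - d||^2 subject to y = A x gives the
    normal equations (I + A'A) x = c + A'd, then y = A x."""
    n = A.shape[1]
    x = np.linalg.solve(np.eye(n) + A.T @ A, c + A.T @ d)
    return x, A @ x

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = np.array([1.0, -1.0]); d = np.array([0.5, 0.0, -0.5])
x, y = project_graph(A, c, d)
# Optimality of the projection: the residual (x - c, y - d) is
# orthogonal to the graph, i.e. (x - c) + A'(y - d) = 0.
```

Since A is fixed across iterations, I + A'A can be factored once and reused, which is why the per-iteration cost of such solvers stays low even for very large problems.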