
    Numerical Optimisation Problems in Finance

    This thesis consists of four projects on numerical optimisation and financial derivative pricing. The first project deals with the calibration of the Heston stochastic volatility model. A method using the Levenberg-Marquardt algorithm with the analytical gradient is developed; it is so far the fastest Heston model calibrator and meets the speed requirements of practical trading. In the second project, a triply-nested iterative method for the implementation of interior-point methods for linear programs is proposed. It is the first time that an interior-point method based entirely on iterative solvers has succeeded in solving a fairly large number of linear programming instances from benchmark libraries under the standard stopping criteria. The third project extends the Black-Scholes valuation to a complex volatility parameter and presents its singularities at zero and infinity. Fractals that describe the chaotic nature of the Newton-Raphson calculation of the implied volatility are shown for different moneyness values. Among other things, these fractals dramatically visualise the effect of an existing modification for improving the stability and convergence of the search. The project studies scientifically an interesting problem widespread in the financial industry, while revealing artistic value stemming from the mathematics. The fourth project investigates the consistency of a class of stochastic volatility models under spot rate inversion, and hence their suitability for the foreign exchange market. A general formula for the model parameters of the inverted rate is given, which provides a basis for further investigation; the result is then extended to the affine stochastic volatility model. The Heston model is the only member of the stochastic volatility family that we found to be consistent under spot inversion. The conclusion on the Heston model verifies the arbitrage opportunity in the variance swap.
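
    The Newton-Raphson implied-volatility search studied in the third project can be sketched in a few lines. The following is a minimal illustration of the iteration whose basins of attraction produce the fractals described above; it is our own sketch (function names, starting value and tolerances are assumptions), not code from the thesis.

```python
# Minimal sketch of Newton-Raphson implied volatility under Black-Scholes.
from math import log, sqrt, exp, erf, pi

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def vega(S, K, T, r, sigma):
    """Derivative of the call price with respect to sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * sqrt(T) * exp(-0.5 * d1**2) / sqrt(2.0 * pi)

def implied_vol(price, S, K, T, r, sigma0=0.2, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration; its sensitivity to the starting point
    sigma0 is what the fractal pictures in the abstract visualise."""
    sigma = sigma0
    for _ in range(max_iter):
        diff = bs_call(S, K, T, r, sigma) - price
        if abs(diff) < tol:
            break
        sigma -= diff / vega(S, K, T, r, sigma)
    return sigma

# Example: recover sigma = 0.25 from its own Black-Scholes price.
p = bs_call(100.0, 110.0, 1.0, 0.02, 0.25)
print(implied_vol(p, 100.0, 110.0, 1.0, 0.02))
```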

    Efficient interior point algorithms for large scale convex optimization problems

    Interior point methods (IPMs) are among the most widely used algorithms for convex optimization problems. They are applicable to a wide range of problems, including linear, quadratic, nonlinear, conic and semidefinite programming problems, requiring a polynomial number of iterations to find an accurate approximation of the primal-dual solution. The formidable convergence properties of IPMs come with a fundamental drawback: the numerical linear algebra involved becomes progressively more challenging as the IPM converges towards optimality. In particular, solving the linear systems to find the Newton directions requires most of the computational effort of an IPM. Proposed remedies to alleviate this phenomenon include regularization techniques, predictor-corrector schemes, purposely developed preconditioners, and low-rank update strategies, to mention a few. For problems of very large scale, this unpleasant characteristic of IPMs becomes increasingly problematic, since any technique used must be efficient and scalable to maintain acceptable computational requirements. In this thesis, we deal with convex linear and quadratic problems of large “dimension”: we use this term in a broader sense than as just a synonym for the “size” of the problem. The instances considered can be either problems with a large number of variables and/or constraints but with a sparse structure, or problems with a moderate number of variables and/or constraints but with a dense structure. Both these types of problems require very efficient strategies to be used during the algorithm, even though the corresponding difficulties arise for different reasons. The first application that we consider deals with a moderate-size quadratic problem where the quadratic term is 100% dense; this problem arises from X-ray tomographic imaging reconstruction, in particular with the goal of separating the distribution of two materials present in the observed sample. A novel non-convex regularizer is introduced for this purpose; convexity of the overall problem is maintained by careful choice of the parameters. We derive a specialized interior point method for this problem and an appropriate preconditioner for the normal equations linear system, to be used without ever forming the fully dense matrices involved. The next major contribution is related to the issue of efficiently computing the Newton direction within IPMs. When an iterative method is applied to solve the linear equation system in IPMs, attention is usually placed on accelerating its convergence by designing appropriate preconditioners, while the linear solver is applied as a black box with a standard termination criterion that asks for a sufficient reduction of the residual in the linear system. Such an approach often leads to an unnecessary “over-solving” of linear equations. We propose new indicators for the early termination of the inner iterations and test them on a set of large-scale quadratic optimization problems. Evidence gathered from these computational experiments shows that the new technique delivers significant improvements in terms of inner (linear) iterations, and that those translate into significant savings in IPM solution time. The last application considered is discrete optimal transport (OT) problems; problems of this kind give rise to very large linear programs with highly structured matrices.
Solutions of such problems are expected to be sparse, that is, only a small subset of entries in the optimal solution is expected to be nonzero. We derive an IPM for the standard OT formulation, which exploits a column-generation-like technique to force all intermediate iterates to be as sparse as possible. We prove theoretical results about the sparsity pattern of the optimal solution and we propose to mix iterative and direct linear solvers in an efficient way, to keep computational time and memory requirements as low as possible. We compare the proposed method with two state-of-the-art solvers and show that it can compete with the best network optimization tools in terms of computational time and memory usage. We perform experiments with problems reaching more than four billion variables and demonstrate the robustness of the proposed method. We also consider the optimal transport problem on sparse graphs and present a primal-dual regularized IPM to solve it. We prove that the introduction of the regularization allows us to use sparsified versions of the normal equations system to inexpensively generate inexact IPM directions. The proposed method is shown to have polynomial complexity and to outperform a very efficient network simplex implementation, for problems with up to 50 million variables.
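
    The “over-solving” idea can be illustrated with a small sketch: terminate the inner conjugate gradient solve of the normal equations A*Theta*A^T dy = r once the residual is small relative to the current barrier parameter mu, rather than to a fixed tiny tolerance. This is our own toy (the coupling rule tol = 0.1*mu and the random data are assumptions, not the indicators proposed in the thesis).

```python
# Toy demonstration: looser inner tolerances in early IPM iterations
# avoid over-solving; the tolerance tightens only as mu -> 0.
import numpy as np

def cg(matvec, b, tol, max_iter=500):
    """Plain conjugate gradients with a relative-residual stopping test."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for k in range(max_iter):
        if np.sqrt(rs) <= tol * np.linalg.norm(b):
            break
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, k

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 200))
theta = rng.uniform(0.1, 10.0, 200)       # stand-in for the IPM scaling matrix
r = rng.standard_normal(40)
matvec = lambda y: A @ (theta * (A.T @ y))  # normal equations operator

for mu in (1e-1, 1e-4, 1e-8):             # as the IPM drives mu to zero...
    _, iters = cg(matvec, r, tol=max(0.1 * mu, 1e-10))
    print(f"mu = {mu:.0e}: {iters} inner CG iterations")
```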

    New solution approaches for the quadratic assignment problem

    MSc., Faculty of Science, University of the Witwatersrand, 2011.
    A vast array of important practical problems, in many different fields, can be modelled and solved as quadratic assignment problems (QAP). This includes problems such as university campus layout, forest management, assignment of runners in a relay team, parallel and distributed computing, etc. The QAP is a difficult combinatorial optimization problem, and solving QAP instances of size greater than 22 within a reasonable amount of time is still challenging. In this dissertation, we propose two new solution approaches to the QAP, namely, a Branch-and-Bound method and a discrete dynamic convexized method. These two methods use the standard quadratic integer programming formulation of the QAP. We also present a lower bounding technique for the QAP based on an equivalent separable convex quadratic formulation of the QAP. We finally develop two different new techniques for finding initial strictly feasible points for the interior point method used in the Branch-and-Bound method. Numerical results are presented showing the robustness of both methods.
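
    To fix ideas, in the standard Koopmans-Beckmann form the QAP seeks a permutation p minimising the total flow-times-distance cost. The brute-force sketch below (our own illustration; the flow and distance data are random placeholders) shows why exact solution is factorial in n and becomes impractical well before the size-22 frontier mentioned above.

```python
# The QAP objective sum_ij F[i,j] * D[p[i], p[j]], minimised by exhaustive
# search over all n! permutations; feasible only for very small n.
import itertools
import numpy as np

def qap_cost(F, D, p):
    """Flow matrix F, distance matrix D, permutation p (facility -> site)."""
    n = len(p)
    return sum(F[i, j] * D[p[i], p[j]] for i in range(n) for j in range(n))

rng = np.random.default_rng(1)
n = 7
F = rng.integers(0, 10, (n, n))
D = rng.integers(0, 10, (n, n))
best = min(itertools.permutations(range(n)), key=lambda p: qap_cost(F, D, p))
print("best assignment:", best, "cost:", qap_cost(F, D, best))
```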

    Regularized interior point methods for convex programming

    Interior point methods (IPMs) constitute one of the most important classes of optimization methods, due to their unparalleled robustness, as well as their generality. It is well known that a very large class of convex optimization problems can be solved by means of IPMs, in a polynomial number of iterations. As a result, IPMs are being used to solve problems arising in a plethora of fields, ranging from physics, engineering, and mathematics, to the social sciences, to name just a few. Nevertheless, there remain certain numerical issues that have not yet been addressed. More specifically, the main drawback of IPMs is that the linear algebra task involved is inherently ill-conditioned. At every iteration of the method, one has to solve a (possibly large-scale) linear system of equations (also known as the Newton system), the conditioning of which deteriorates as the IPM converges to an optimal solution. If these linear systems are of very large dimension, prohibiting the use of direct factorization, then iterative schemes may have to be employed. Such schemes are significantly affected by the inherent ill-conditioning within IPMs. One common approach for improving the aforementioned numerical issues, is to employ regularized IPM variants. Such methods tend to be more robust and numerically stable in practice. Over the last two decades, the theory behind regularization has been significantly advanced. In particular, it is well known that regularized IPM variants can be interpreted as hybrid approaches combining IPMs with the proximal point method. However, it remained unknown whether regularized IPMs retain the polynomial complexity of their non-regularized counterparts. Furthermore, the very important issue of tuning the regularization parameters appropriately, which is also crucial in augmented Lagrangian methods, was not addressed. In this thesis, we focus on addressing the previous open questions, as well as on creating robust implementations that solve various convex optimization problems. We discuss in detail the effect of regularization, and derive two different regularization strategies; one based on the proximal method of multipliers, and another one based on a Bregman proximal point method. The latter tends to be more efficient, while the former is more robust and has better convergence guarantees. In addition, we discuss the use of iterative linear algebra within the presented algorithms, by proposing some general purpose preconditioning strategies (used to accelerate the iterative schemes) that take advantage of the regularized nature of the systems being solved. In Chapter 2 we present a dynamic non-diagonal regularization for IPMs. The non-diagonal aspect of this regularization is implicit, since all the off-diagonal elements of the regularization matrices are cancelled out by those elements present in the Newton system, which do not contribute important information in the computation of the Newton direction. Such a regularization, which can be interpreted as the application of a Bregman proximal point method, has multiple goals. The obvious one is to improve the spectral properties of the Newton system solved at each IPM iteration. On the other hand, the regularization matrices introduce sparsity to the aforementioned linear system, allowing for more efficient factorizations. We propose a rule for tuning the regularization dynamically based on the properties of the problem, such that sufficiently large eigenvalues of the non-regularized system are perturbed insignificantly. 
This alleviates the need to find specific regularization values through experimentation, which is the most common approach in the literature. We provide perturbation bounds for the eigenvalues of the non-regularized system matrix, and then discuss the spectral properties of the regularized matrix. Finally, we demonstrate the efficiency of the method applied to solve standard small- and medium-scale linear and convex quadratic programming test problems. In Chapter 3 we combine an IPM with the proximal method of multipliers (PMM). The resulting algorithm (IP-PMM) is interpreted as a primal-dual regularized IPM, suitable for solving linearly constrained convex quadratic programming problems. We apply a few iterations of the interior point method to each sub-problem of the proximal method of multipliers. Once a satisfactory solution of the PMM sub-problem is found, we update the PMM parameters, form a new IPM neighbourhood, and repeat this process. Given this framework, we prove polynomial complexity of the algorithm, under standard assumptions. To our knowledge, this is the first polynomial complexity result for a primal-dual regularized IPM. The algorithm is guided by the use of a single penalty parameter: that of the logarithmic barrier. In other words, we show that IP-PMM inherits the polynomial complexity of IPMs, as well as the strong convexity of the PMM sub-problems. The updates of the penalty parameter are controlled by the IPM, and hence are well-tuned and do not depend on the problem being solved. Furthermore, we study the behaviour of the method when it is applied to an infeasible problem, and identify a necessary condition for infeasibility. The latter is used to construct an infeasibility detection mechanism. Subsequently, we provide a robust implementation of the presented algorithm and test it over a set of small- to large-scale linear and convex quadratic programming problems, demonstrating the benefits of using regularization in IPMs as well as the reliability of the approach. In Chapter 4 we extend IP-PMM to the case of linear semi-definite programming (SDP) problems. In particular, we prove polynomial complexity of the algorithm, under mild assumptions, and without requiring exact computations for the Newton directions. We furthermore provide a necessary condition for lack of strong duality, which can be used as a basis for constructing detection mechanisms for identifying pathological cases within IP-PMM. In Chapter 5 we present general-purpose preconditioners for regularized Newton systems arising within regularized interior point methods. We discuss positive definite preconditioners, suitable for iterative schemes like the conjugate gradient (CG) or the minimal residual (MINRES) method. We study the spectral properties of the preconditioned systems, and discuss the use of each presented approach, depending on the properties of the problem under consideration. All preconditioning strategies are numerically tested on various medium- to large-scale problems coming from standard test sets, as well as problems arising from partial differential equation (PDE) optimization. In Chapter 6 we apply specialized regularized IPM variants to problems arising from portfolio optimization, machine learning, image processing, and statistics. Such problems are usually solved by specialized first-order approaches. The efficiency of the proposed regularized IPM variants is confirmed by comparing them against problem-specific state-of-the-art first-order alternatives given in the literature.
Finally, in Chapter 7 we present some conclusions as well as open questions, and possible future research directions.
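
    A small numerical illustration (our own construction, not code from the thesis) shows why primal-dual regularization helps: adding rho*I and delta*I to the diagonal blocks of the IPM augmented system makes it symmetric quasi-definite, hence nonsingular and safely factorizable even when the constraint matrix has dependent rows and the scaling matrix Theta degenerates near optimality. The data sizes and regularization values below are assumptions for illustration only.

```python
# Conditioning of the (LP-case) augmented system with and without
# primal-dual regularization:  K = [[-(Theta^-1 + rho I), A^T], [A, delta I]].
import numpy as np

rng = np.random.default_rng(2)
m, n = 10, 30
A = rng.standard_normal((m, n))
A[-1] = A[0]                               # dependent rows: rank-deficient A
theta = 10.0 ** rng.uniform(-6, 6, n)      # ill-conditioned late-IPM scaling

def augmented(rho, delta):
    K = np.zeros((n + m, n + m))
    K[:n, :n] = -np.diag(1.0 / theta + rho)
    K[:n, n:] = A.T
    K[n:, :n] = A
    K[n:, n:] = delta * np.eye(m)
    return K

for rho, delta in ((0.0, 0.0), (1e-6, 1e-6)):
    c = np.linalg.cond(augmented(rho, delta))
    print(f"rho = delta = {rho:.0e}: condition number = {c:.2e}")
```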

    High performance Cholesky and symmetric indefinite factorizations with applications

    The process of factorizing a symmetric matrix A using the Cholesky (LL^T) or indefinite (LDL^T) factorization allows the efficient solution of systems Ax = b when A is symmetric. This thesis describes the development of new serial and parallel techniques for this problem and demonstrates them in the setting of interior point methods. In serial, the effects of various scalings are reported, and a fast and robust mixed-precision sparse solver is developed. In parallel, DAG-driven dense and sparse factorizations are developed for the positive definite case. These achieve performance comparable with other world-leading implementations using a novel algorithm in the same family as those given by Buttari et al. for the dense problem. The performance of these techniques in the context of an interior point method is assessed.
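
    The mixed-precision idea can be sketched briefly: factorize in cheap single precision, then recover double-precision accuracy by iterative refinement. The following is our own dense, unpivoted simplification under that assumption, not the thesis's sparse solver.

```python
# Sketch: single-precision Cholesky factor + double-precision refinement.
import numpy as np

def cholesky_refine(A, b, sweeps=5):
    L32 = np.linalg.cholesky(A.astype(np.float32))   # low-precision factor
    def solve32(r):
        y = np.linalg.solve(L32, r.astype(np.float32))
        return np.linalg.solve(L32.T, y).astype(np.float64)
    x = solve32(b)
    for _ in range(sweeps):                          # refinement in float64
        x += solve32(b - A @ x)
    return x

rng = np.random.default_rng(3)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)                        # SPD test matrix
b = rng.standard_normal(50)
x = cholesky_refine(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```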

    Using Interior Point Methods for Large-scale Support Vector Machine training

    Support Vector Machines (SVMs) are powerful machine learning techniques for classification and regression, but the training stage involves a convex quadratic optimization program that is most often computationally expensive. Traditionally, active-set methods have been used rather than interior point methods, due to the Hessian in the standard dual formulation being completely dense. But as active-set methods are essentially sequential, they may not be adequate for the machine learning challenges of the future. Additionally, training time may be limited, or data may grow so large that cluster-computing approaches need to be considered. Interior point methods have the potential to answer these concerns directly. They scale efficiently, they can provide good early approximations, and they are suitable for parallel and multi-core environments. To apply them to SVM training, it is necessary to address directly the most computationally expensive aspect of the algorithm. We therefore present an exact reformulation of the standard linear SVM training optimization problem that exploits separability of terms in the objective. By so doing, per-iteration computational complexity is reduced from O(n^3) to O(n). We show how this reformulation can be applied to many machine learning problems in the SVM family. Implementation issues relating to specializing the algorithm are explored through extensive numerical experiments. They show that the performance of our algorithm for large dense or noisy data sets is consistent and highly competitive, and in some cases can outperform all other approaches by a large margin. Unlike active-set methods, performance is largely unaffected by noisy data. We also show how, by exploiting the block structure of the augmented system matrix, a hybrid MPI/OpenMP implementation of the algorithm enables data and linear algebra computations to be efficiently partitioned amongst parallel processing nodes in a clustered computing environment. The applicability of our technique is extended to nonlinear SVMs by low-rank approximation of the kernel matrix. We develop a heuristic designed to represent clusters using a small number of features. Additionally, an early approximation scheme reduces the number of samples that need to be considered. Both elements improve the computational efficiency of the training phase. Taken as a whole, this thesis shows that with suitable problem formulation and efficient implementation techniques, interior point methods are a viable optimization technology to apply to large-scale SVM training, and are able to provide state-of-the-art performance.
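
    The separability trick can be illustrated as follows (our own sketch under the usual low-rank assumption, not the thesis implementation): for a linear SVM with n samples and m << n features, the dense n-by-n matrix D + X X^T never needs to be formed, since the Sherman-Morrison-Woodbury identity reduces each solve to an m-by-m system, giving linear cost in n per iteration.

```python
# Solve (diag(d) + X X^T) z = b using only m-by-m dense algebra.
import numpy as np

def solve_woodbury(d, X, b):
    """Woodbury: (D + X X^T)^-1 = D^-1 - D^-1 X (I + X^T D^-1 X)^-1 X^T D^-1."""
    u = b / d
    S = np.eye(X.shape[1]) + X.T @ (X / d[:, None])  # m-by-m capacitance
    return u - (X @ np.linalg.solve(S, X.T @ u)) / d

rng = np.random.default_rng(4)
n, m = 100_000, 20                       # many samples, few features
X = rng.standard_normal((n, m))
d = rng.uniform(0.5, 2.0, n)
b = rng.standard_normal(n)
z = solve_woodbury(d, X, b)
print("residual:", np.linalg.norm(d * z + X @ (X.T @ z) - b))
```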

    Semidefinite Programming: methods and algorithms for energy management

    The present thesis aims at exploring the potential of a powerful conic optimization technique, namely semidefinite programming (SDP), for addressing difficult problems of energy management, specifically those related to satisfying the electricity and gas supply-demand equilibria. We pursue two main objectives. The first consists of using SDP to provide tight relaxations of combinatorial and quadratic problems. A first relaxation, called standard, can be derived in a generic way, but it is generally desirable to reinforce it by means of cuts, which can be determined by studying the structure of the problem or with more systematic tools. These two approaches are implemented on different models of the nuclear outage scheduling problem, well known for its combinatorial difficulty. We conclude this topic by experimenting with the Lasserre hierarchy on this problem, leading to a sequence of semidefinite relaxations whose optimal values tend to the optimal value of the initial problem. The second objective deals with the use of SDP for the treatment of uncertainty. We investigate an original approach called distributionally robust optimization, which can be seen as a compromise between stochastic and robust optimization and admits approximations in the form of an SDP. We assess the benefits of this approach, compared with classical ones, on a supply-demand equilibrium problem under uncertainty. Finally, we propose a scheme for deriving SDP relaxations of MISOCPs and report promising computational results indicating that the semidefinite relaxation improves significantly on the continuous relaxation, while requiring a reasonable computational effort. SDP therefore proves to be a promising optimization method that offers great opportunities for innovation in energy management.
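
    To make the "standard" relaxation concrete, the sketch below (our own minimal example, assuming the cvxpy package with its bundled SDP-capable solver) relaxes a quadratic problem over x in {-1, 1}^n by replacing the rank-one matrix x x^T with a positive semidefinite matrix of unit diagonal; its optimal value is a lower bound on the combinatorial optimum.

```python
# Standard SDP relaxation of min x^T Q x over x in {-1, 1}^n.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(5)
n = 8
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                         # symmetric cost matrix

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.diag(X) == 1]   # relaxes X = x x^T
prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)), constraints)
prob.solve()
print("SDP lower bound:", prob.value)
```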

    Extended Trust-Tech Methodology For Nonlinear Optimization: Analyses, Methods And Applications

    Many theoretical and practical problems can be formulated as global optimization problems. Traditional local optimization methods can only attain a local optimal solution and become entrapped there, while existing global optimization algorithms usually only approximate the global optimal solution sparsely, in a stochastic manner. In contrast, the transformation under stability-retaining equilibrium characterization (TRUST-TECH) methodology prevails over existing algorithms due to its capability of locating multiple, if not all, local optimal solutions to the optimization problem deterministically and systematically in a tier-by-tier manner. The TRUST-TECH methodology was developed to solve unconstrained and constrained nonlinear optimization problems. This work extends the TRUST-TECH methodology by incorporating new analytical results, developing new solution methods and solving new problems in practical applications. This work first provides analytical results regarding the invariance of the partial stability region in quasi-gradient systems. Our motivation is to resolve numerical difficulties arising in implementations of trajectory-based methods, including TRUST-TECH. Improved algorithms were developed to resolve these issues by altering the original problem to speed up movement of the trajectory. However, such operations can lead the trajectory to converge to a different solution, which could be undesired in specific situations. This work attempts to answer the question regarding invariant convergence for a special class of numerical operations whose dynamical behaviours can be characterized by a quasi-gradient dynamical system. To this end, we study the relationship between a gradient dynamical system and its associated quasi-gradient system and reveal the invariance of the partial stability region in the quasi-gradient system. These analytical results lead to methods for checking invariant convergence of the trajectory starting from a given point in the quasi-gradient system and an algorithm to maintain invariant convergence. This work also develops new solution methods to enhance TRUST-TECH's capability of solving constrained nonlinear optimization problems and applies them to solve practical problems arising in different applications. Specifically, TRUST-TECH based methods are first developed for feasibility computation and restoration and are applied to power system applications, including power flow computation and feasibility restoration for infeasible optimal power flow problems. Indeed, a unified framework based on TRUST-TECH is introduced for analysing feasibility and infeasibility of nonlinear problems. Secondly, the TRUST-TECH based interior point method (TT-IPM) and the reduced projected gradient method are developed to better tackle constrained nonlinear optimization problems. As an application, the TT-IPM method is used to solve mixed-integer nonlinear programs (MINLPs). Finally, this work develops ELITE, a TRUST-TECH based method for constructing high-quality ensembles of optimal, input-pruned neural networks, and applies it to build a short-term load forecaster named ELITE-STLF with promising performance. Possible extensions of the TRUST-TECH methodology to a much broader range of optimization models, including multi-objective optimization and variational optimization, are suggested for future research efforts.
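
    A toy illustration of the gradient-flow ingredient behind such trajectory-based methods (our own construction; TRUST-TECH itself additionally exploits exit points on stability boundaries to move tier-by-tier between basins): the trajectory of dx/dt = -grad f(x) converges to the local minimum whose stability region contains the starting point.

```python
# Forward-Euler integration of a (quasi-)gradient system for a two-well
# function f(x) = (x^2 - 1)^2 + 0.3*x; different starting points land in
# different local minima, one per stability region.
def grad(x):
    return 4 * x * (x**2 - 1) + 0.3      # f'(x)

def gradient_flow(x0, h=1e-2, steps=10_000):
    x = x0
    for _ in range(steps):
        x -= h * grad(x)
    return x

for x0 in (-2.0, 0.4, 2.0):
    print(f"x0 = {x0:+.1f}  ->  local minimum near {gradient_flow(x0):+.4f}")
```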

    Multistage quadratic stochastic programming

    Multistage stochastic programming is an important tool in medium- to long-term planning where there are uncertainties in the data. In this thesis, we consider a special case of multistage stochastic programming in which each subprogram is a convex quadratic program. The results are also applicable if the quadratic objectives are replaced by convex piecewise quadratic functions. Convex piecewise quadratic functions have important applications in financial planning problems as they can be used as very flexible risk measures. The stochastic programming problems can be used as multi-period portfolio planning problems tailored to the needs of individual investors. Using techniques from convex analysis and sensitivity analysis, we show that each subproblem of a multistage quadratic stochastic program is a polyhedral piecewise quadratic program with convex Lipschitz objective. The objective of any subproblem is differentiable with Lipschitz gradient if all its descendent problems have unique dual variables, which can be guaranteed if the linear independence constraint qualification is satisfied. Expressions for arbitrary elements of the subdifferential and generalized Hessian at a point can be calculated for quadratic pieces that are active at the point. Generalized Newton methods with linesearch are proposed for solving multistage quadratic stochastic programs. The algorithms converge globally. If the piecewise quadratic objective is differentiable and strictly convex at the solution, then convergence is also finite. A generalized Newton algorithm is implemented in Matlab. Numerical experiments have been carried out to demonstrate its effectiveness. The algorithm is tested on random data with 3, 4 and 5 stages with a maximum of 315 scenarios. The algorithm has also been successfully applied to two sets of test data from a capacity expansion problem and a portfolio management problem. Various strategies have been implemented to improve the efficiency of the proposed algorithm. We experimented with trust-region methods with different parameters, with using an advanced solution from a smaller version of the original problem, and with sorting the stochastic right-hand sides to encourage faster convergence. The numerical results show that the proposed generalized Newton method is a highly accurate and effective method for multistage quadratic stochastic programs. For problems with the same number of stages, solution times increase linearly with the number of scenarios.
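
    The piecewise quadratic structure can be made concrete with a one-dimensional two-stage toy (our own construction, not from the thesis): with recourse Q(x, xi) = min{ y^2/2 : y >= xi - x } = max(xi - x, 0)^2 / 2, the expected objective is convex and piecewise quadratic with a Lipschitz gradient, and a generalized Newton step uses an element of the generalized Hessian; convergence here is finite, as the abstract predicts.

```python
# Generalized Newton on F(x) = x^2/2 + E[ max(xi - x, 0)^2 / 2 ].
import numpy as np

xi = np.array([0.5, 1.0, 1.5, 2.5])      # equiprobable demand scenarios
p = np.full(4, 0.25)

def grad(x):
    # F'(x) = x - E[ max(xi - x, 0) ]; piecewise linear and Lipschitz.
    return x - p @ np.maximum(xi - x, 0.0)

def gen_hess(x):
    # An element of the generalized Hessian: 1 + probability-weighted
    # count of scenarios whose recourse constraint is active.
    return 1.0 + p @ (xi - x > 0.0)

x = 0.0
for k in range(50):
    g = grad(x)
    if abs(g) < 1e-12:
        break
    x -= g / gen_hess(x)                 # generalized Newton step
print(f"converged in {k} iterations to x* = {x:.6f}")
```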

    Advances in interior point methods and column generation

    In this thesis we study how to efficiently combine the column generation technique (CG) and interior point methods (IPMs) for solving the relaxations of a selection of integer programming problems. In order to obtain an efficient method, a change in the column generation technique and a new reoptimization strategy for a primal-dual interior point method are proposed. It is well known that the standard column generation technique suffers from unstable behaviour due to the use of optimal dual solutions that are extreme points of the restricted master problem (RMP). This unstable behaviour slows down column generation, so variations of the standard technique which rely on interior points of the dual feasible set of the RMP have been proposed in the literature. Among these techniques is the primal-dual column generation method (PDCGM), which relies on sub-optimal and well-centred dual solutions. This technique dynamically adjusts the column generation tolerance as the method approaches optimality. Also, it relies on the notion of the symmetric neighbourhood of the central path, so sub-optimal and well-centred solutions are obtained. We provide a thorough theoretical analysis that guarantees the convergence of the primal-dual approach even though sub-optimal solutions are used in the course of the algorithm. Additionally, we present a comprehensive computational study of the solution of linear relaxed formulations obtained after applying the Dantzig-Wolfe decomposition principle to the cutting stock problem (CSP), the vehicle routing problem with time windows (VRPTW), and the capacitated lot sizing problem with setup times (CLSPST). We compare the performance of the PDCGM with the standard column generation method (SCGM) and the analytic centre cutting plane method (ACCPM). Overall, the PDCGM achieves the best performance when compared to the SCGM and the ACCPM when solving challenging instances from a column generation perspective. One important characteristic of this column generation strategy is that no specific tuning is necessary, and the algorithm poses the same level of difficulty as the standard column generation method. The natural stabilization available in the PDCGM due to the use of sub-optimal well-centred interior point solutions is a very attractive feature of this method. Moreover, the larger the instance, the better is the relative performance of the PDCGM in terms of column generation iterations and CPU time. The second part of this thesis is concerned with the development of a new warmstarting strategy for the PDCGM. It is well known that taking advantage of the previously solved RMP could lead to important savings in solving the modified RMP. However, this is still an open question for applications arising in an integer optimization context and the PDCGM. Although the current warmstarting strategy in the PDCGM works well in practice, it neither guarantees full feasibility restoration nor considers the quality of the warmstarted iterate after new columns are added. The main motivation for the new warmstarting strategy presented in this thesis is to close this theoretical gap. Under suitable assumptions, the warmstarting procedure proposed in this thesis restores primal and dual feasibilities after the addition of new columns in one step. The direction is determined so that the modification of small components at a particular solution is not large.
Additionally, the strategy enables control over the new duality gap by considering an expanded symmetric neighbourhood of the central path. From our computational experiments solving CSP and VRPTW, one can conclude that the warmstarting strategies for the PDCGM are useful when dense columns are added to the RMP (CSP), since they consistently reduce the CPU time and also the number of iterations required to solve the RMPs on average. On the other hand, when sparse columns are added (VRPTW), the coldstart used by the interior point solver HOPDM becomes very efficient, so warmstarting does not make the task of solving the RMPs any easier.
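
    The column generation loop being stabilised can be sketched for the cutting stock problem as follows. This is our own compact illustration (assuming SciPy's HiGHS-based linprog, whose ineqlin.marginals field exposes the constraint duals); it uses a simplex/HiGHS master rather than the primal-dual interior point master of the PDCGM. Duals of the restricted master price out new cutting patterns via a knapsack subproblem.

```python
# Column generation for cutting stock: master LP over patterns + knapsack pricing.
import numpy as np
from scipy.optimize import linprog

widths = np.array([3, 5, 7])              # item widths
demand = np.array([25, 20, 18])
roll = 16                                  # raw roll width

def knapsack(vals, w, cap):
    """Unbounded integer knapsack by DP: best pattern value and item counts."""
    best = np.zeros(cap + 1)
    pat = [np.zeros(len(w), dtype=int) for _ in range(cap + 1)]
    for c in range(1, cap + 1):
        best[c] = best[c - 1]
        pat[c] = pat[c - 1].copy()
        for i in range(len(w)):
            if w[i] <= c and best[c - w[i]] + vals[i] > best[c]:
                best[c] = best[c - w[i]] + vals[i]
                pat[c] = pat[c - w[i]].copy()
                pat[c][i] += 1
    return best[cap], pat[cap]

P = np.diag(roll // widths)                # initial patterns: one item type each
while True:
    res = linprog(np.ones(P.shape[1]), A_ub=-P, b_ub=-demand,
                  bounds=(0, None), method="highs")
    y = -res.ineqlin.marginals             # duals of the covering constraints
    value, pattern = knapsack(y, widths, roll)
    if value <= 1 + 1e-9:                  # no column with negative reduced cost
        break
    P = np.column_stack([P, pattern])      # add the priced-out pattern
print(f"LP bound: {res.fun:.3f} rolls using {P.shape[1]} patterns")
```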