
    Portfolio selection problems in practice: a comparison between linear and quadratic optimization models

    Several portfolio selection models take into account practical limitations on the number of assets to include in the portfolio and on their weights. We present here a study of the Limited Asset Markowitz (LAM), Limited Asset Mean Absolute Deviation (LAMAD) and Limited Asset Conditional Value-at-Risk (LACVaR) models, in which the assets are limited through quantity and cardinality constraints. We propose a completely new approach for solving the LAM model, based on a reformulation as a Standard Quadratic Program and on some recent theoretical results. With this approach we obtain optimal solutions both for some well-known financial data sets used by several other authors and for some previously unsolved large-size portfolio problems. We also test our method on five new data sets involving real-world capital market indices from major stock markets. Our computational experience shows that, rather unexpectedly, it is easier to solve the quadratic LAM model with our algorithm than to solve the linear LACVaR and LAMAD models with CPLEX, one of the best commercial codes for mixed-integer linear programming (MILP) problems. Finally, on the new data sets we have also compared, using out-of-sample analysis, the performance of the portfolios obtained with the Limited Asset models against the performance of the unconstrained models and that of the official capital market indices.
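
    A quick way to see the quantity and cardinality constraints in the Limited Asset models is a generic mixed-integer quadratic formulation with one binary selection variable per asset. The sketch below is only an illustration on made-up data, not the authors' Standard Quadratic Programming reformulation; it assumes cvxpy together with a MIQP-capable solver (ECOS_BB is named here, but SCIP or GUROBI would do).

    import numpy as np
    import cvxpy as cp

    # Toy data: expected returns and a positive-definite covariance for 8 assets.
    rng = np.random.default_rng(0)
    n = 8
    mu = rng.uniform(0.02, 0.10, n)
    A = rng.normal(size=(n, n))
    Sigma = A @ A.T / n + 0.01 * np.eye(n)

    K, lo, hi, target = 3, 0.05, 0.60, 0.05   # cardinality, weight bounds, return target

    x = cp.Variable(n)                  # portfolio weights
    y = cp.Variable(n, boolean=True)    # 1 if an asset enters the portfolio, 0 otherwise

    constraints = [
        cp.sum(x) == 1,
        mu @ x >= target,
        x >= lo * y,                    # quantity: minimum weight when selected
        x <= hi * y,                    # quantity: maximum weight, and x_i = 0 when excluded
        cp.sum(y) <= K,                 # cardinality: at most K assets in the portfolio
    ]
    prob = cp.Problem(cp.Minimize(cp.quad_form(x, Sigma)), constraints)
    prob.solve(solver=cp.ECOS_BB)       # any mixed-integer QP solver can be substituted
    print(np.round(x.value, 3), prob.value)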

    A Better Alternative to Piecewise Linear Time Series Segmentation

    Time series are difficult to monitor, summarize and predict. Segmentation organizes a time series into a few intervals with uniform characteristics (flatness, linearity, modality, monotonicity and so on). For scalability, we require fast, linear-time algorithms. The popular piecewise linear model can determine where the data goes up or down and at what rate. Unfortunately, when the data does not follow a linear model, the computation of the local slope creates overfitting. We propose an adaptive time series model where the polynomial degree of each interval varies (constant, linear and so on). Given a number of regressors, the cost of each interval is the number of regressors it uses: constant intervals cost 1 regressor, linear intervals cost 2 regressors, and so on. Our goal is to minimize the Euclidean ($\ell_2$) error for a given model complexity. Experimentally, we investigate the model where intervals can be either constant or linear. Over synthetic random walks, historical stock market prices, and electrocardiograms, the adaptive model provides a more accurate segmentation than the piecewise linear model without increasing the cross-validation error or the running time, while providing a richer vocabulary to applications. Implementation issues, such as numerical stability and real-world performance, are also discussed. Comment: to appear in SIAM Data Mining 200
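
    As a naive reading of the adaptive model, the sketch below segments a signal with a plain dynamic program in which every interval is fitted either by a constant (1 regressor) or by a line (2 regressors), under a total regressor budget. It is quadratic in the signal length, unlike the linear-time algorithms the abstract calls for, and the data and parameters are made up.

    import numpy as np

    def sse(y, x, deg):
        """Squared error of the best degree-`deg` polynomial fit on one interval."""
        if len(y) <= deg:                        # underdetermined fit: zero residual
            return 0.0
        coef = np.polyfit(x, y, deg)
        return float(np.sum((np.polyval(coef, x) - y) ** 2))

    def adaptive_segmentation(y, k):
        """O(n^2 k) dynamic program: each interval is constant (1 regressor) or
        linear (2 regressors); total regressors <= k; minimize the l_2 error."""
        n = len(y)
        x = np.arange(n, dtype=float)
        best = np.full((n + 1, k + 1), np.inf)   # best[i, b]: error of y[:i] with <= b regressors
        best[0, :] = 0.0
        choice = {}                              # back-pointers for reconstruction
        for i in range(1, n + 1):
            for j in range(i):                   # previous breakpoint
                for deg, cost in ((0, 1), (1, 2)):
                    err = sse(y[j:i], x[j:i], deg)
                    for b in range(cost, k + 1):
                        cand = best[j, b - cost] + err
                        if cand < best[i, b]:
                            best[i, b] = cand
                            choice[(i, b)] = (j, deg, b - cost)
        segs, i, b = [], n, k                    # recover the chosen intervals
        while i > 0:
            j, deg, b = choice[(i, b)]
            segs.append((j, i, "constant" if deg == 0 else "linear"))
            i = j
        return best[n, k], segs[::-1]

    rng = np.random.default_rng(1)
    y = np.concatenate([np.full(30, 2.0), np.linspace(2, 8, 40)]) + 0.1 * rng.normal(size=70)
    print(adaptive_segmentation(y, k=3))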

    Large-scale Binary Quadratic Optimization Using Semidefinite Relaxation and Applications

    In computer vision, many problems such as image segmentation, pixel labelling, and scene parsing can be formulated as binary quadratic programs (BQPs). For submodular problems, cut-based methods can be employed to solve large-scale instances efficiently. However, general non-submodular problems are significantly more challenging to solve. Finding a solution for problems large enough to be of practical interest typically requires relaxation. Two standard relaxation methods are widely used for solving general BQPs: spectral methods and semidefinite programming (SDP), each with its own advantages and disadvantages. Spectral relaxation is simple and easy to implement, but its bound is loose. Semidefinite relaxation has a tighter bound, but its computational complexity is high, especially for large-scale problems. In this work, we present a new SDP formulation for BQPs with two desirable properties. First, it has a relaxation bound similar to that of conventional SDP formulations. Second, compared with conventional SDP methods, the new formulation leads to a significantly more efficient and scalable dual optimization approach, which has the same degree of complexity as spectral methods. We then propose two solvers, namely quasi-Newton and smoothing Newton methods, for the dual problem. Both are significantly more efficient than standard interior-point methods. In practice, the smoothing Newton solver is faster than the quasi-Newton solver for dense or medium-sized problems, while the quasi-Newton solver is preferable for large sparse or structured problems. Our experiments on several computer vision applications, including clustering, image segmentation, co-segmentation and registration, show the potential of our SDP formulation for solving large-scale BQPs. Comment: Fixed some typos. 18 pages. Accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence
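
    For reference, the conventional SDP relaxation that the abstract compares against fits in a few lines: lift x x^T to a matrix variable X, keep diag(X) = 1 and X positive semidefinite, and round the result. The sketch below uses cvxpy on a made-up cost matrix; it is not the paper's new formulation or its quasi-Newton / smoothing Newton dual solvers.

    import numpy as np
    import cvxpy as cp

    # Conventional SDP relaxation of a toy BQP: minimize x^T A x over x in {-1, +1}^n.
    rng = np.random.default_rng(0)
    n = 12
    A = rng.normal(size=(n, n))
    A = (A + A.T) / 2                        # symmetric cost matrix (made-up data)

    X = cp.Variable((n, n), symmetric=True)
    prob = cp.Problem(cp.Minimize(cp.trace(A @ X)),
                      [X >> 0, cp.diag(X) == 1])   # lift x x^T -> X, drop the rank-1 constraint
    prob.solve()                             # cvxpy picks an installed SDP-capable solver (e.g. SCS)
    print("SDP lower bound:", prob.value)

    # Goemans-Williamson style randomized rounding from a Gram factorization of X.
    w, U = np.linalg.eigh(X.value)
    V = U * np.sqrt(np.clip(w, 0.0, None))   # V @ V.T ~ X; rows of V are the Gram vectors
    best = np.inf
    for _ in range(100):
        xr = np.sign(V @ rng.normal(size=n))
        xr[xr == 0] = 1.0
        best = min(best, float(xr @ A @ xr))
    print("objective of the rounded +-1 solution:", best)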

    New efficient algorithms for multiple change-point detection with kernels

    Several statistical approaches based on reproducing kernels have been proposed to detect abrupt changes arising in the full distribution of the observations, and not only in the mean or variance. Some of these approaches enjoy good statistical properties (oracle inequalities, \ldots). Nonetheless, they have a high computational cost in terms of both time and memory. This makes their application difficult even for small and medium sample sizes ($n < 10^4$). This computational issue is addressed by first describing a new efficient and exact algorithm for kernel multiple change-point detection with an improved worst-case complexity that is quadratic in time and linear in space. It allows dealing with medium-size signals (up to $n \approx 10^5$). Second, a faster but approximate algorithm is described. It is based on a low-rank approximation to the Gram matrix and is linear in time and space. This approximation algorithm can be applied to large-scale signals ($n \geq 10^6$). These exact and approximate algorithms have been implemented in \texttt{R} and \texttt{C} for various kernels. The computational and statistical performances of these new algorithms have been assessed through empirical experiments. The runtime of the new algorithms is observed to be faster than that of other considered procedures. Finally, simulations confirmed the higher statistical accuracy of kernel-based approaches for detecting changes that are not only in the mean. These simulations also illustrate the flexibility of kernel-based approaches for analyzing complex biological profiles made of DNA copy number and allele B frequencies. An R package implementing the approach will be made available on GitHub.
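
    To make the optimized quantity concrete: for a kernel k, the cost of a segment is its within-segment scatter in feature space, which only needs the Gram matrix, and the change-points come from a dynamic program over segment boundaries. The sketch below is a plain O(n^2) illustration with an RBF kernel and made-up data; it is not the improved-complexity exact algorithm or the low-rank approximation described in the abstract.

    import numpy as np

    def rbf_gram(x, bandwidth=1.0):
        """Gram matrix of a Gaussian (RBF) kernel on a 1-d signal."""
        d2 = (x[:, None] - x[None, :]) ** 2
        return np.exp(-d2 / (2.0 * bandwidth ** 2))

    def segment_cost(K):
        """cost(a, b) = sum_i K_ii - (1/(b-a)) * sum_{i,j} K_ij over x[a:b],
        evaluated in O(1) from cumulative sums of the Gram matrix."""
        n = len(K)
        diag_cum = np.concatenate([[0.0], np.cumsum(np.diag(K))])
        block = np.zeros((n + 1, n + 1))
        block[1:, 1:] = K.cumsum(axis=0).cumsum(axis=1)
        def cost(a, b):
            s = block[b, b] - block[a, b] - block[b, a] + block[a, a]
            return (diag_cum[b] - diag_cum[a]) - s / (b - a)
        return cost

    def kernel_cpd(x, n_changes, bandwidth=1.0):
        """Exact dynamic program for a fixed number of change-points, O(D n^2) time."""
        n = len(x)
        cost = segment_cost(rbf_gram(np.asarray(x, float), bandwidth))
        best = np.full((n_changes + 2, n + 1), np.inf)
        arg = np.zeros((n_changes + 2, n + 1), dtype=int)
        best[0, 0] = 0.0
        for d in range(1, n_changes + 2):            # d segments so far
            for t in range(d, n + 1):
                vals = [best[d - 1, s] + cost(s, t) for s in range(d - 1, t)]
                i = int(np.argmin(vals))
                best[d, t], arg[d, t] = vals[i], i + d - 1
        cps, t = [], n                               # backtrack the boundaries
        for d in range(n_changes + 1, 1, -1):
            t = arg[d, t]
            cps.append(int(t))
        return best[n_changes + 1, n], sorted(cps)

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50), rng.normal(0, 3, 50)])
    print(kernel_cpd(x, n_changes=2))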

    On the complexity of nonlinear mixed-integer optimization

    This is a survey on the computational complexity of nonlinear mixed-integer optimization. It highlights a selection of important topics, ranging from incomputability results that arise from number theory and logic, to recently obtained fully polynomial-time approximation schemes in fixed dimension, and to strongly polynomial-time algorithms for special cases. Comment: 26 pages, 5 figures; to appear in: Mixed-Integer Nonlinear Optimization, IMA Volumes, Springer-Verlag

    Model predictive control techniques for hybrid systems

    This paper describes the main issues encountered when applying model predictive control to hybrid processes. Hybrid model predictive control (HMPC) is a research field that is not yet fully developed, with many open challenges. The paper describes some of the techniques proposed by the research community to overcome the main problems encountered. Issues related to stability and to the solution of the optimization problem are also discussed. The paper ends by describing the results of a benchmark exercise in which several HMPC schemes were applied to a solar air-conditioning plant. Ministerio de Educación y Ciencia DPI2007-66718-C04-01; Ministerio de Educación y Ciencia DPI2008-0581
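
    To make the optimization problem referred to above concrete: in a hybrid (e.g. mixed logical dynamical) setting, each MPC step becomes a mixed-integer program because some decisions are on/off. The sketch below is a toy one-state example with a binary switch, solved with cvxpy; the model, horizon and weights are made up and unrelated to the solar air-conditioning benchmark, and a MIQP-capable solver is assumed.

    import numpy as np
    import cvxpy as cp

    # Toy hybrid dynamics: x_{k+1} = a*x_k + b*u_k + c*d_k, where the binary d_k
    # switches an affine term on or off (think of a chiller that either runs or not).
    a, b, c = 0.9, 0.5, -1.0
    N, x0, x_ref = 10, 8.0, 2.0            # horizon, initial state, set-point

    x = cp.Variable(N + 1)                 # predicted states
    u = cp.Variable(N)                     # continuous inputs
    d = cp.Variable(N, boolean=True)       # discrete (logical) decisions

    cost, constr = 0, [x[0] == x0]
    for k in range(N):
        constr += [x[k + 1] == a * x[k] + b * u[k] + c * d[k],
                   cp.abs(u[k]) <= 1.0]                        # input limits
        cost += cp.square(x[k + 1] - x_ref) + 0.1 * cp.square(u[k]) + 0.05 * d[k]

    prob = cp.Problem(cp.Minimize(cost), constr)
    prob.solve(solver=cp.ECOS_BB)          # any mixed-integer QP solver can be substituted
    print("on/off plan:", d.value.round().astype(int))
    print("first input to apply:", float(u.value[0]))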