
    Robust regression with optimisation heuristics

    Linear regression is widely used in finance. While the standard method for obtaining parameter estimates, Least Squares, has very appealing theoretical and numerical properties, the estimates it produces are often unstable in the presence of extreme observations, which are rather common in financial time series. One approach to dealing with such extreme observations is the application of robust or resistant estimators, such as Least Quantile of Squares estimators. Unfortunately, for many such alternative approaches, estimation is much more difficult than in the Least Squares case, as the objective function is not convex and often has many local optima. We apply different heuristic methods, such as Differential Evolution, Particle Swarm and Threshold Accepting, to obtain parameter estimates. Particular emphasis is put on the convergence properties of these techniques for fixed computational resources, and on the techniques' sensitivity to different parameter settings.
    Keywords: Optimisation heuristics, Robust Regression, Least Median of Squares
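    A minimal sketch of the idea behind the abstract: fitting a Least Median of Squares (LMS) regression by minimising the non-convex median-of-squared-residuals objective with Differential Evolution. The synthetic data, parameter bounds and solver settings below are illustrative assumptions, not the paper's setup (which also considers Particle Swarm and Threshold Accepting on financial data).

```python
# Sketch: LMS regression via Differential Evolution (illustrative data/bounds).
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.5 * x + 0.5 + rng.normal(scale=0.3, size=n)
y[:20] += 8.0  # a block of extreme observations (outliers)

def lms_objective(beta):
    """Median of squared residuals -- non-convex, hence the heuristic solver."""
    intercept, slope = beta
    residuals = y - (intercept + slope * x)
    return np.median(residuals ** 2)

result = differential_evolution(lms_objective, bounds=[(-10, 10), (-10, 10)],
                                seed=0, maxiter=200)
print("LMS estimate (intercept, slope):", result.x)

# Ordinary Least Squares for comparison -- pulled toward the outliers.
slope_ols, intercept_ols = np.polyfit(x, y, 1)
print("OLS estimate (intercept, slope):", intercept_ols, slope_ols)
```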

    An adaptive weighted least square support vector regression for hysteresis in piezoelectric actuators

    To overcome the low positioning accuracy of piezoelectric actuators (PZAs) caused by hysteresis nonlinearity, this paper proposes an adaptive weighted least squares support vector regression (AWLSSVR) to model the rate-dependent hysteresis of a PZA. Firstly, the AWLSSVR hyperparameters are optimized using particle swarm optimization. Then an adaptive weighting strategy is proposed to eliminate the effects of noise in the training dataset and, at the same time, reduce the sample size. Finally, the proposed approach is applied to predict the hysteresis of the PZA. The results show that the proposed method is more accurate than other versions of least squares support vector regression for training samples containing noise, while also reducing the sample size and speeding up the calculation.
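    For orientation, a sketch of weighted least squares support vector regression with an RBF kernel. The down-weighting rule, the hyperparameters (gamma, sigma) and the toy data below are standard weighted-LSSVM assumptions, not the paper's adaptive strategy or its PSO-tuned settings.

```python
# Sketch: weighted LSSVR via the standard (n+1)x(n+1) LSSVM dual system.
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_weighted_lssvr(X, y, weights, gamma=10.0, sigma=1.0):
    """Solve [[0, 1^T], [1, K + diag(1/(gamma*w))]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.diag(1.0 / (gamma * weights))
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, b

def predict(X_train, alpha, b, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy noisy curve standing in for measured actuator response data.
rng = np.random.default_rng(1)
X = np.linspace(0, 2 * np.pi, 80)[:, None]
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)

# First pass with uniform weights, then down-weight large-residual samples.
alpha, b = fit_weighted_lssvr(X, y, np.ones(80))
res = y - predict(X, alpha, b, X)
s = 1.4826 * np.median(np.abs(res))            # robust scale estimate
w = np.clip(2.5 - np.abs(res) / s, 0.05, 1.0)  # simple down-weighting rule
alpha, b = fit_weighted_lssvr(X, y, w)
```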

    A survey of outlier detection methodologies

    Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise from mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error, or simply natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can also identify errors and remove their contaminating effect on the data set and, in doing so, purify the data for processing. The original outlier detection methods were arbitrary, but now principled and systematic techniques are used, drawn from the full gamut of Computer Science and Statistics. In this paper, we present a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.
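    As a small illustration of the class of statistical detectors the survey covers, a median-absolute-deviation (MAD) based flagger; the threshold and data are made up and this is only one of many techniques reviewed.

```python
# Sketch: flag outliers via the modified z-score (MAD-based).
import numpy as np

def mad_outliers(x, threshold=3.5):
    """Return a boolean mask of points whose modified z-score exceeds the threshold."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    modified_z = 0.6745 * (x - med) / mad
    return np.abs(modified_z) > threshold

data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 25.0, 9.7])
print(mad_outliers(data))   # only the 25.0 reading is flagged
```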

    Extension of Sparse Randomized Kaczmarz Algorithm for Multiple Measurement Vectors

    The Kaczmarz algorithm is popular for iteratively solving an overdetermined system of linear equations. The traditional Kaczmarz algorithm can approximate the solution in a few sweeps through the equations, but a randomized version of the algorithm was shown to converge exponentially, at a rate independent of the number of equations. Recently, an algorithm for finding a sparse solution to a linear system of equations was proposed based on the weighted randomized Kaczmarz algorithm. These algorithms solve the single measurement vector problem; however, there are applications where multiple measurements are available. In this work, the objective is to solve a multiple measurement vector problem with common sparse support by modifying the randomized Kaczmarz algorithm. We have also modeled the problem of face recognition from video as a multiple measurement vector problem and solved it using the proposed technique. We have compared the proposed algorithm with the state-of-the-art spectral projected gradient algorithm for multiple measurement vectors on both real and synthetic datasets. Monte Carlo simulations confirm that the proposed algorithm has better recovery and convergence rates than the MMV version of the spectral projected gradient algorithm under fairness constraints.
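    For reference, a sketch of the basic randomized Kaczmarz iteration (rows sampled with probability proportional to their squared norms). The sparse/MMV extension proposed in the paper adds support-aware weighting that is not shown here; the system below is a synthetic consistent example.

```python
# Sketch: randomized Kaczmarz for a consistent overdetermined system Ax = b.
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    m, n = A.shape
    rng = np.random.default_rng(seed)
    row_norms2 = (A ** 2).sum(axis=1)
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)                        # sample a row
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]     # project onto its hyperplane
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(300, 20))
x_true = rng.normal(size=20)
b = A @ x_true
x_hat = randomized_kaczmarz(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```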

    Heuristic Optimisation in Financial Modelling

    There is a large number of optimisation problems in theoretical and applied finance that are difficult to solve because they exhibit multiple local optima or are not 'well-behaved' in other ways (e.g., discontinuities in the objective function). One way to deal with such problems is to adjust and simplify them, for instance by dropping constraints, until they can be solved with standard numerical methods. This paper argues that an alternative approach is the application of optimisation heuristics such as Simulated Annealing or Genetic Algorithms. These methods have been shown to be capable of handling non-convex optimisation problems with all kinds of constraints. To motivate the use of such techniques in finance, the paper presents several actual problems where classical methods fail. Next, several well-known heuristic techniques that may be deployed in such cases are described. Since such presentations are necessarily general, the paper describes in some detail how a particular problem, portfolio selection, can be tackled by a particular heuristic method, Threshold Accepting. Finally, the stochastics of the solutions obtained from heuristics are discussed. It is shown, again for the portfolio selection example, how this random character of the solutions can be exploited to inform the distribution of computations.
    Keywords: Optimisation heuristics, Financial Optimisation, Portfolio Optimisation
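    A minimal sketch of Threshold Accepting applied to a long-only portfolio problem. The objective (portfolio variance), the neighbour move, the threshold schedule and the synthetic return scenarios are illustrative assumptions, not the setup used in the paper.

```python
# Sketch: Threshold Accepting for a long-only, fully invested portfolio.
import numpy as np

rng = np.random.default_rng(3)
n_assets = 10
R = rng.normal(0.0005, 0.01, size=(500, n_assets))   # synthetic return scenarios
cov = np.cov(R, rowvar=False)

def objective(w):
    return w @ cov @ w                                # portfolio variance

def neighbour(w, step=0.05):
    """Shift a small amount of weight from one asset to another."""
    w = w.copy()
    i, j = rng.choice(n_assets, size=2, replace=False)
    delta = min(step * rng.random(), w[i])            # keep weights non-negative
    w[i] -= delta
    w[j] += delta
    return w

# Threshold sequence: start tolerant of uphill moves, end at zero (local search).
thresholds = np.linspace(1e-5, 0.0, 10)
w = np.full(n_assets, 1.0 / n_assets)                 # equal-weight start
for tau in thresholds:
    for _ in range(1000):
        w_new = neighbour(w)
        if objective(w_new) - objective(w) < tau:     # accept if not much worse
            w = w_new
print("weights:", np.round(w, 3), "variance:", objective(w))
```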
