
    Unconstrained Optimization Methods: Conjugate Gradient Methods and Trust-Region Methods

    Here, we consider two important classes of unconstrained optimization methods: conjugate gradient methods and trust-region methods. Both classes have proved remarkably durable and remain in active development. First, we consider conjugate gradient methods and illustrate the practical behavior of some of them. Then, we study trust-region methods. For both classes, we analyze some recent results.
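    The abstract itself contains no algorithmic detail, but as a rough illustration of the conjugate gradient family it surveys, a minimal nonlinear conjugate gradient iteration with the Fletcher-Reeves beta might look like the sketch below. This is an illustrative reconstruction, not the chapter's own code; the Armijo backtracking line search is a simplification, since practical CG implementations typically enforce (strong) Wolfe conditions.

    ```python
    import numpy as np

    def fletcher_reeves_cg(f, grad, x0, tol=1e-6, max_iter=200):
        """Minimal nonlinear conjugate gradient sketch (Fletcher-Reeves beta).

        `f` and `grad` are the objective and its gradient; a crude Armijo
        backtracking search stands in for a Wolfe line search here.
        """
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g  # first direction: steepest descent
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            t = 1.0  # Armijo backtracking line search
            while f(x + t * d) > f(x) + 1e-4 * t * g.dot(d) and t > 1e-12:
                t *= 0.5
            x_new = x + t * d
            g_new = grad(x_new)
            beta = g_new.dot(g_new) / g.dot(g)  # Fletcher-Reeves formula
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x

    # Usage on a toy quadratic: converges to the origin.
    x_star = fletcher_reeves_cg(lambda x: x.dot(x), lambda x: 2 * x, np.ones(5))
    ```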

    Improvements to steepest descent method for multi-objective optimization

    In this paper, we propose a simple yet efficient strategy for improving the multi-objective steepest descent method proposed by Fliege and Svaiter (Math Methods Oper Res, 2000, 51(3): 479-494). The core idea is to incorporate a positive modification parameter multiplicatively into the iterative formulation of the multi-objective steepest descent algorithm. This parameter captures certain second-order information associated with the objective functions. We provide two distinct methods for calculating it, leading to two improved multi-objective steepest descent algorithms for multi-objective optimization problems. Under reasonable assumptions, we demonstrate that the sequence generated by the first algorithm converges to a critical point. Moreover, for strongly convex multi-objective optimization problems, we establish linear convergence of the generated sequence to a Pareto optimal point. The performance of the new algorithms is empirically evaluated through a computational comparison on a set of multi-objective test instances. The numerical results show that the proposed algorithms consistently outperform the original multi-objective steepest descent algorithm.
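    The paper's modification parameter is not reproduced here; the sketch below only illustrates the baseline Fliege-Svaiter direction subproblem in its standard dual form (minimize the norm of a convex combination of the gradients over the unit simplex), which the proposed algorithms then rescale multiplicatively.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def fliege_svaiter_direction(grads):
        """Multi-objective steepest descent direction (dual formulation).

        `grads` is an (m, n) array of objective gradients at the current
        point. Minimizing ||sum_i lam_i g_i||^2 over the unit simplex and
        negating the optimal combination yields a common descent
        direction; the paper's algorithms would then scale the step by a
        modification parameter carrying second-order information
        (omitted in this sketch).
        """
        m = grads.shape[0]
        obj = lambda lam: 0.5 * np.sum((lam @ grads) ** 2)
        cons = ({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},)
        res = minimize(obj, np.full(m, 1.0 / m),
                       bounds=[(0.0, 1.0)] * m, constraints=cons)
        return -(res.x @ grads)
    ```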

    Estimation of discrete choice models with hybrid stochastic adaptive batch size algorithms

    The emergence of Big Data has opened new research perspectives in the discrete choice community. While techniques for estimating Machine Learning models on massive amounts of data are well established, they have not yet been fully explored for the estimation of statistical Discrete Choice Models based on the random utility framework. In this article, we provide new ways of dealing with large datasets in the context of Discrete Choice Models. We achieve this by proposing new efficient stochastic optimization algorithms and extensively testing them alongside existing approaches. These algorithms rest on three main contributions: the use of a stochastic Hessian, the modification of the batch size, and a change of optimization algorithm depending on the batch size. A comprehensive experimental comparison of fifteen optimization algorithms is conducted across ten benchmark Discrete Choice Model cases. The results indicate that the HAMABS algorithm, a hybrid adaptive batch size stochastic method, is the best-performing algorithm across the optimization benchmarks. On the largest model it speeds up optimization by a factor of 23 compared to existing algorithms used in practice. Integrating the new algorithms into Discrete Choice Model estimation software will significantly reduce the time required for model estimation and thereby enable researchers and practitioners to explore new approaches for the specification of choice models.
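    The HAMABS algorithm itself is not reproduced here; as a purely hypothetical skeleton of the adaptive batch size idea, a loop that grows the batch geometrically (and, in a hybrid method such as the one described, would switch to a second-order update once the batch is large enough) might look like this. The function `grad_batch` and all parameter values are assumptions for illustration.

    ```python
    import numpy as np

    def adaptive_batch_sgd(grad_batch, n_samples, theta0, lr=0.1,
                           batch0=128, growth=2.0, epochs=20):
        """Hypothetical skeleton of an adaptive batch size stochastic method.

        `grad_batch(theta, idx)` returns the average gradient of the loss
        over the samples indexed by `idx`. The batch grows geometrically;
        a real hybrid method would also switch optimizer (e.g., to a
        stochastic-Hessian Newton step) once the batch is large enough to
        make curvature estimates worthwhile.
        """
        theta = np.asarray(theta0, dtype=float)
        batch = batch0
        rng = np.random.default_rng(0)
        for _ in range(epochs):
            idx = rng.choice(n_samples, size=min(batch, n_samples),
                             replace=False)
            theta -= lr * grad_batch(theta, idx)
            batch = int(batch * growth)  # adaptive batch size schedule
        return theta
    ```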

    Censored Data Regression in High-Dimension and Low-Sample Size Settings for Genomic Applications

    New high-throughput technologies are generating various types of high-dimensional genomic and proteomic data and meta-data (e.g., networks and pathways) in order to obtain a systems-level understanding of various complex diseases such as human cancers and cardiovascular diseases. As the amount and complexity of the data increase and as the questions being addressed become more sophisticated, we face the major challenge of modeling such data so as to draw valid statistical and biological conclusions. One important problem in genomic research is to relate these high-throughput genomic data to various clinical outcomes, including possibly censored survival outcomes such as age at disease onset or time to cancer recurrence. We review some recently developed methods for censored data regression in the high-dimension and low-sample size setting, with emphasis on applications to genomic data. These methods include dimension reduction-based methods, regularized estimation methods such as the Lasso and the threshold gradient descent method, gradient descent boosting methods, and nonparametric pathways-based regression models. These methods are demonstrated and compared through analysis of a data set of microarray gene expression profiles of 240 patients with diffuse large B-cell lymphoma, together with follow-up survival information. Areas of further research are also presented. One of the surveyed techniques is sketched after this abstract.
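    As one concrete example of the surveyed techniques, a generic sketch of the threshold gradient descent idea follows: at each step, only coefficients whose gradient components are within a factor `tau` of the largest component are updated, which yields Lasso-like sparse solution paths. This is an illustrative reconstruction, not the reviewed authors' code; in the censored-survival setting, `grad` would be the gradient of a negative Cox partial log-likelihood.

    ```python
    import numpy as np

    def threshold_gradient_descent(grad, beta0, tau=0.9, lr=0.01, steps=500):
        """Sketch of threshold gradient descent for sparse model fitting.

        Only coefficients whose gradient magnitudes pass the threshold
        rule are updated at each step; larger `tau` gives sparser paths.
        """
        beta = np.asarray(beta0, dtype=float)
        for _ in range(steps):
            g = grad(beta)
            mask = np.abs(g) >= tau * np.abs(g).max()  # threshold rule
            beta[mask] -= lr * g[mask]
        return beta
    ```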

    The group fused Lasso for multiple change-point detection

    We present the group fused Lasso for detection of multiple change-points shared by a set of co-occurring one-dimensional signals. Change-points are detected by approximating the original signals under a constraint on the multidimensional total variation, leading to piecewise-constant approximations. Fast algorithms are proposed to solve the resulting optimization problems, either exactly or approximately. Conditions are given for the consistency of both algorithms as the number of signals increases, and empirical evidence supporting the results is provided on simulated data and on array comparative genomic hybridization data.
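    As a minimal sketch of the objective (using the off-the-shelf modeler CVXPY rather than the paper's fast dedicated algorithms), the group fused Lasso for an n × p signal matrix penalizes the Euclidean norms of consecutive row differences, which couples change-points across all signals. The toy data and the value of `lam` are assumptions for illustration.

    ```python
    import cvxpy as cp
    import numpy as np

    # Toy data: p = 3 co-occurring signals of length n = 100 sharing one
    # change-point at index 50.
    rng = np.random.default_rng(0)
    Y = np.vstack([np.zeros((50, 3)), np.ones((50, 3))])
    Y += 0.1 * rng.standard_normal(Y.shape)

    U = cp.Variable(Y.shape)
    lam = 5.0  # regularization strength (assumed value)
    # Multidimensional total variation: sum of Euclidean norms of the
    # row increments, coupling change-points across the signals.
    tv = cp.sum(cp.norm(U[1:, :] - U[:-1, :], 2, axis=1))
    problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(Y - U) + lam * tv))
    problem.solve()
    # Rows where consecutive fitted values differ mark detected change-points.
    ```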

    Addressing the speed-accuracy simulation trade-off for adaptive spiking neurons

    The adaptive leaky integrate-and-fire (ALIF) model is fundamental within computational neuroscience and has been instrumental in studying our brains in silico. Due to the sequential nature of simulating these neural models, a commonly faced issue is the speed-accuracy trade-off: either accurately simulate a neuron using a small discretisation time-step (DT), which is slow, or more quickly simulate a neuron using a larger DT and incur a loss in simulation accuracy. Here we provide a solution to this dilemma by algorithmically reinterpreting the ALIF model, reducing the sequential simulation complexity and permitting more efficient parallelisation on GPUs. We computationally validate our implementation, obtaining over a 50× training speedup using small DTs on synthetic benchmarks. We also obtain performance comparable to the standard ALIF implementation on different supervised classification tasks, yet in a fraction of the training time. Lastly, we showcase how our model makes it possible to quickly and accurately fit real electrophysiological recordings of cortical neurons, where very fine sub-millisecond DTs are crucial for capturing exact spike timing.
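    The paper's GPU reformulation is not reproduced here, but the sequential bottleneck it removes is visible in a naive (non-adaptive) leaky integrate-and-fire loop such as the sketch below: each membrane update depends on the previous time-step, so cost grows as DT shrinks. All parameter values are assumptions, and ALIF proper would add an adaptive threshold term.

    ```python
    import numpy as np

    def simulate_lif(I, dt=1e-4, tau=20e-3, v_th=1.0, v_reset=0.0):
        """Naive sequential leaky integrate-and-fire simulation.

        `I` is the input drive per time-step. The step-to-step dependence
        of the membrane potential is the sequential bottleneck that the
        paper's reinterpretation parallelises on GPUs.
        """
        v = v_reset
        spikes = np.zeros(len(I), dtype=bool)
        decay = np.exp(-dt / tau)  # exact leak decay for this time-step
        for t, i_t in enumerate(I):
            v = decay * v + (1.0 - decay) * i_t
            if v >= v_th:            # threshold crossing: spike and reset
                spikes[t] = True
                v = v_reset
            # Smaller dt gives finer spike timing but more iterations:
            # the speed-accuracy trade-off discussed in the abstract.
        return spikes

    # Usage: constant suprathreshold drive produces regular spiking.
    spikes = simulate_lif(np.full(2000, 1.5))
    ```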