
    Global Minimization of Nonsmooth Constrained Global Optimization with Filled Function

    A novel filled function is constructed to locate a global optimizer, or an approximate global optimizer, of smooth or nonsmooth constrained global minimization problems. The constructed filled function contains only one parameter, which can be easily adjusted during the minimization. The theoretical properties of the filled function are discussed and a corresponding solution algorithm is proposed. The solution algorithm comprises two phases: local minimization and filling. The first phase minimizes the original problem and obtains one of its local optimizers, while the second phase minimizes the constructed filled function and identifies a better initial point for the first phase. Some preliminary numerical results are also reported.
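
    The two-phase loop described above is easy to picture in code. Below is a minimal sketch, assuming a generic objective and a Ge-style filled function with a fixed Gaussian width; the paper's actual one-parameter construction and its parameter update rule differ, and all names here (filled_function, two_phase_search, r) are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def filled_function(x, x_star, f, r=1.0):
    # Ge-style filled function: x_star (the current local minimizer) becomes
    # a local maximizer, so descent escapes its basin. Requires r + f(x) > 0.
    return np.exp(-np.linalg.norm(x - x_star) ** 2) / (r + f(x))

def two_phase_search(f, x0, n_restarts=20, seed=0):
    rng = np.random.default_rng(seed)
    best = minimize(f, x0, method="Nelder-Mead").x          # phase 1: local search
    for _ in range(n_restarts):
        start = best + 0.1 * rng.standard_normal(best.shape)
        # Phase 2 ("filling"): minimize the filled function to leave the
        # current basin and land near a lower basin of f.
        y = minimize(lambda x: filled_function(x, best, f), start,
                     method="Nelder-Mead").x
        cand = minimize(f, y, method="Nelder-Mead").x       # phase 1 again
        if f(cand) < f(best):
            best = cand
    return best

# Example: a one-dimensional multimodal test function (nonnegative, so r + f > 0).
print(two_phase_search(lambda x: x[0] ** 2 - np.cos(5 * x[0]) + 1, np.array([2.0])))
```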

    Nonsmooth Newton methods for set-valued saddle point problems

    We present a new class of iterative schemes for large-scale set-valued saddle point problems as arising, e.g., from optimization problems in the presence of linear and inequality constraints. Our algorithms can be regarded either as nonsmooth Newton-type methods for the nonlinear Schur complement or as Uzawa-type iterations with active set preconditioners. Numerical experiments with a control-constrained optimal control problem and a discretized Cahn–Hilliard equation with obstacle potential illustrate the reliability and efficiency of the new approach.
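
    As a concrete illustration of the nonsmooth Newton idea, here is a minimal semismooth Newton sketch for the simplest related problem: a bound-constrained quadratic written as the nonsmooth equation F(x) = min(x, Ax - b) = 0, where the row selection in the generalized Jacobian plays the role of the active set. This is an assumption-laden toy, not the paper's set-valued saddle point solver.

```python
import numpy as np

def semismooth_newton(A, b, x0, tol=1e-10, max_iter=50):
    # Solve x >= 0, Ax - b >= 0, x^T (Ax - b) = 0 via F(x) = min(x, Ax - b) = 0.
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(max_iter):
        g = A @ x - b
        F = np.minimum(x, g)
        if np.linalg.norm(F) < tol:
            break
        active = x <= g                       # components where the min picks x_i
        # Generalized Jacobian: identity rows on the active set, rows of A elsewhere.
        J = np.where(active[:, None], np.eye(n), A)
        x = x + np.linalg.solve(J, -F)        # Newton step on the nonsmooth residual
    return x

A = np.array([[2.0, -1.0], [-1.0, 2.0]])      # symmetric positive definite
b = np.array([1.0, -1.0])
print(semismooth_newton(A, b, np.zeros(2)))   # -> [0.5, 0.0]; second bound is active
```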

    Economic inexact restoration for derivative-free expensive function minimization and applications

    The Inexact Restoration approach has proved to be an adequate tool for handling the problem of minimizing an expensive function within an arbitrary feasible set by using different degrees of precision in the objective function. The Inexact Restoration framework allows one to obtain suitable convergence and complexity results for an approach that rationally combines low- and high-precision evaluations. In the present research, it is recognized that many problems with expensive objective functions are nonsmooth and, sometimes, even discontinuous. With this in mind, the Inexact Restoration approach is extended to the nonsmooth or discontinuous case. Although optimization phases that rely on smoothness cannot be used in this case, basic convergence and complexity results are recovered. A derivative-free optimization phase is defined, and the subproblems that arise at this phase are solved using a regularization approach that takes advantage of different notions of stationarity. The new methodology is applied to the problem of reproducing a controlled experiment that mimics the failure of a dam.
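
    To make the low/high-precision idea concrete, here is a hedged toy sketch, not the paper's Inexact Restoration algorithm: trial points are screened with cheap, low-precision evaluations, and only promising ones are confirmed at high precision. The simulate function, its precision knob, and the acceptance slack are all illustrative assumptions.

```python
import numpy as np

def simulate(x, precision, rng):
    # Stand-in for an expensive, possibly nonsmooth blackbox; higher
    # `precision` means smaller evaluation error and higher cost.
    return np.abs(x).sum() + rng.standard_normal() / precision

def two_precision_search(x0, iters=100, lo=10.0, hi=1e4, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = simulate(x, hi, rng)                  # high-precision reference value
    for _ in range(iters):
        cand = x + 0.3 * rng.standard_normal(x.shape)
        # Cheap screening pass; the 1/lo slack accounts for low-precision noise.
        if simulate(cand, lo, rng) < fx + 1.0 / lo:
            fc = simulate(cand, hi, rng)       # expensive confirmation
            if fc < fx:
                x, fx = cand, fc
    return x

print(two_precision_search(np.array([3.0, -2.0])))   # converges toward the origin
```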

    Finding global minimum using filled function method

    The filled function method is an optimization method for finding global minimizers. It combines a local search, which finds local solutions, with a mechanism for escaping them on the way to the global solution: an auxiliary function, called the filled function, is constructed and incorporated into the algorithm. Optimizing the objective function from an initial point yields only a local minimizer. The auxiliary function is then used to shift the search from that local minimizer into a new, lower basin of the objective function. The shifted point serves as the new initial solution for the local search, which finds the next local minimizer with a lower function value. The process continues until the global minimizer is reached. This research used several test functions to examine the effectiveness of the method in finding global solutions. The results show that the method works successfully, and further research directions are discussed.
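
    For concreteness, one classic auxiliary function from the filled function literature is Ge's two-parameter form at a known local minimizer of f; the construction used in this research may differ:

```latex
% Ge's classic filled function at a local minimizer x_1^* of f, with
% parameters r and \rho chosen so that r + f(x) > 0 on the feasible region:
P(x) = \frac{1}{r + f(x)}\,
       \exp\!\left(-\frac{\lVert x - x_1^\ast \rVert^2}{\rho^2}\right)
```

    With suitable r and rho, x_1^* becomes a local maximizer of P while P retains a minimizer inside any basin where f falls below f(x_1^*), which is exactly the "shift to a new, lower basin" step described above.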

    OSCAR: Optimal subset cardinality regression using the L0-pseudonorm with applications to prognostic modelling of prostate cancer

    Author summary: Feature subset selection has become a crucial part of building biomedical models, due to the abundance of available predictors in many applications, yet there remains uncertainty about their importance and generalization ability. Regularized regression methods have become popular approaches to tackle this challenge by balancing the model goodness-of-fit against the increasing complexity of the model in terms of coefficients that deviate from zero. Regularization norms are pivotal in formulating the model complexity, and currently the L1-norm (LASSO), the L2-norm (ridge regression), and their hybrid (elastic net) dominate the field. In this paper, we present a novel methodology based on the L0-pseudonorm, also known as best subset selection, which has largely been overlooked due to its challenging discrete nature. Our methodology makes use of a continuous transformation of the discrete optimization problem and provides effective solvers implemented in a user-friendly R software package. We exemplify the use of the oscar package in the context of prostate cancer prognostic prediction using both real-world hospital registry and clinical cohort data. By benchmarking the methodology against existing regularization methods, we illustrate the advantages of the L0-pseudonorm for better clinical applicability and selection of grouped features, and demonstrate its applicability to high-dimensional transcriptomics datasets.

    Abstract: In many real-world applications, such as those based on electronic health records, prognostic prediction of patient survival is based on heterogeneous sets of clinical laboratory measurements. To address the trade-off between the predictive accuracy of a prognostic model and the costs related to its clinical implementation, we propose an optimized L0-pseudonorm approach to learn sparse solutions in multivariable regression. The model sparsity is maintained by restricting the number of nonzero coefficients in the model with a cardinality constraint, which makes the optimization problem NP-hard. In addition, we generalize the cardinality constraint to grouped feature selection, which makes it possible to identify key sets of predictors that may be measured together in a kit in clinical practice. We demonstrate the operation of our cardinality constraint-based feature subset selection method, named OSCAR, in the context of prognostic prediction of prostate cancer patients, where it enables one to determine the key explanatory predictors at different levels of model sparsity. We further explore how the model sparsity affects the model accuracy and implementation cost. Lastly, we demonstrate generalization of the presented methodology to high-dimensional transcriptomics data.
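
    To make the L0-pseudonorm constraint tangible, here is a minimal iterative hard thresholding (IHT) sketch for cardinality-constrained least squares, minimizing ||y - Xw||^2 subject to ||w||_0 <= k. OSCAR itself solves a continuous transformation of this discrete problem via its R package; IHT is shown only as a simple, well-known baseline for the same constraint.

```python
import numpy as np

def hard_threshold(w, k):
    # Projection onto the L0 ball: keep the k largest-magnitude coefficients.
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-k:]
    out[keep] = w[keep]
    return out

def iht(X, y, k, n_iter=500):
    step = 1.0 / np.linalg.norm(X, 2) ** 2    # 1/L, L = largest squared singular value
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)              # gradient of 0.5 * ||y - Xw||^2
        w = hard_threshold(w - step * grad, k)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[[2, 7, 11]] = [1.5, -2.0, 1.0]
y = X @ w_true + 0.05 * rng.standard_normal(100)
print(np.nonzero(iht(X, y, k=3))[0])          # typically recovers the support {2, 7, 11}
```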

    Quantifying uncertainty with ensembles of surrogates for blackbox optimization

    This work is in the context of blackbox optimization, where the functions defining the problem are expensive to evaluate and no derivatives are available. A tried and tested technique is to build surrogates of the objective and the constraints in order to conduct the optimization at a cheaper computational cost. This work proposes different uncertainty measures for use with ensembles of surrogates. The resulting combination of an ensemble of surrogates with our measures behaves as a stochastic model and allows the use of efficient Bayesian optimization tools. The method is incorporated in the search step of the mesh adaptive direct search (MADS) algorithm to improve the exploration of the search space. Computational experiments are conducted on seven analytical problems, two multidisciplinary optimization problems, and two simulation problems. The results show that the proposed approach solves expensive simulation-based problems at a greater precision and with a lower computational effort than stochastic models.
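
    A minimal sketch of the ensemble idea, assuming three off-the-shelf scikit-learn regressors as surrogates, the cross-model standard deviation as the uncertainty measure, and a lower-confidence-bound acquisition with a trade-off parameter kappa. The paper's uncertainty measures and their integration into the MADS search step are more elaborate.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neighbors import KNeighborsRegressor

def ensemble_predict(models, X):
    # Mean across surrogates plays the role of the model; their disagreement
    # (standard deviation) plays the role of the uncertainty.
    preds = np.column_stack([m.predict(X) for m in models])
    return preds.mean(axis=1), preds.std(axis=1)

def next_candidate(models, candidates, kappa=2.0):
    mu, sigma = ensemble_predict(models, candidates)
    return candidates[np.argmin(mu - kappa * sigma)]   # lower confidence bound

# Usage: fit each surrogate on the points evaluated so far, then pick the
# next point to send to the expensive blackbox.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 2))
y = np.sum(X ** 2, axis=1)                             # cheap stand-in blackbox
models = [RandomForestRegressor(random_state=0).fit(X, y),
          GaussianProcessRegressor().fit(X, y),
          KNeighborsRegressor(n_neighbors=3).fit(X, y)]
print(next_candidate(models, rng.uniform(-2, 2, size=(200, 2))))
```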