
    Optimality-based bound contraction with multiparametric disaggregation for the global optimization of mixed-integer bilinear problems

    We address nonconvex mixed-integer bilinear problems whose main challenge is computing a tight upper bound for the objective function to be maximized. Such a bound can be obtained with the recently developed concept of multiparametric disaggregation, by solving a mixed-integer linear relaxation of the bilinear problem. Besides showing that this approach can provide tighter bounds than a commercial global optimization solver within a given computational time, we propose to also exploit the relaxed formulation to contract the variables' domains and further reduce the optimality gap. Through the solution of a real-life case study from a hydroelectric power system, we show that this can be an efficient approach depending on the problem size. The relaxed formulation from multiparametric disaggregation is provided for a generic numeric representation system featuring a base between 2 (binary) and 10 (decimal).
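The core idea can be illustrated with a minimal sketch (an assumed simplification, not the paper's full MILP relaxation): a variable y is written in a base-b positional system, so the bilinear term w = x*y distributes into a sum of terms x*d_k*b^k, where each digit d_k is a discrete choice that a mixed-integer linear model can encode exactly.

```python
# Sketch of the digit expansion behind multiparametric disaggregation
# (illustrative only; the actual MILP relaxation in the paper also
# handles the continuous remainder and the linearization constraints).

def disaggregate(y, base=10, precision=2):
    """Digits of y in the given base, keeping `precision` fractional
    digits; digits[i] multiplies base**(i - precision)."""
    scaled = round(y * base**precision)
    digits = []
    while scaled:
        digits.append(scaled % base)
        scaled //= base
    return digits

y = 37.41
digits = disaggregate(y)                        # base 10: [1, 4, 7, 3]
approx = sum(d * 10**(i - 2) for i, d in enumerate(digits))
assert abs(approx - y) < 1e-9                   # exact up to rounding

# The bilinear product x*y distributes over the digits, so each
# product x*d_k is linear once the digit d_k is fixed:
x = 3.0
w = sum(x * d * 10**(i - 2) for i, d in enumerate(digits))
assert abs(w - x * y) < 1e-9
```

The same function works for any base between 2 and 10, which is the generic representation the abstract refers to; e.g. `disaggregate(2.5, base=2, precision=1)` returns the binary digits `[1, 0, 1]`.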

    Kernel methods with mixed data types and their applications

    Support Vector Machines (SVMs) are a family of supervised machine learning algorithms widely used for both classification and regression tasks. In these algorithms, kernel functions measure the similarity between input samples in order to build models and make predictions. For SVMs to tackle data analysis tasks involving mixed data, a valid kernel function for this purpose is required; however, the current literature offers hardly any kernel functions specifically designed to measure similarity between mixed data, and significant examples of such kernels being implemented in practice are entirely lacking. Another notable characteristic of SVMs is their remarkable efficacy on high-dimensional problems, although they can become inefficient when dealing with large volumes of data. In this project, we propose a kernel function capable of accurately capturing the similarity between samples of mixed data, together with an SVM algorithm based on bagging techniques that enables efficient analysis of large volumes of data. We implement both proposals in an updated version of the successful SVM library LIBSVM, and evaluate their effectiveness, robustness and efficiency, obtaining promising results.
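One simple way to build a valid kernel over mixed data (a hedged sketch under assumed choices, not the kernel proposed in this work) is to combine an RBF kernel on the numeric features with an overlap (matching) kernel on the categorical ones; a convex combination of positive semi-definite kernels is itself positive semi-definite, so the result remains a valid SVM kernel.

```python
import numpy as np

def mixed_kernel(a, b, num_idx, cat_idx, gamma=1.0, w=0.5):
    """Similarity between two mixed-type samples: RBF on the numeric
    coordinates, fraction of matching values on the categorical ones,
    combined as a convex sum (weights w and 1-w are assumptions)."""
    a_num = np.array([a[i] for i in num_idx], dtype=float)
    b_num = np.array([b[i] for i in num_idx], dtype=float)
    k_num = np.exp(-gamma * np.sum((a_num - b_num) ** 2))
    k_cat = np.mean([a[i] == b[i] for i in cat_idx])
    return w * k_num + (1 - w) * k_cat

x1 = [1.0, 2.0, "red", "yes"]
x2 = [1.0, 2.0, "red", "no"]
# identical numeric parts -> k_num = 1; one of two categories
# matches -> k_cat = 0.5; so k = 0.5*1 + 0.5*0.5 = 0.75
k = mixed_kernel(x1, x2, num_idx=[0, 1], cat_idx=[2, 3])
assert abs(k - 0.75) < 1e-12
```

A kernel of this form can be plugged into any SVM implementation that accepts a precomputed Gram matrix, which is also how LIBSVM supports user-defined kernels.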

    Constrained Polynomial Likelihood

    Full text link
    Starting from a distribution $z$, we develop a non-negative polynomial minimum-norm likelihood ratio $\xi$ such that $dp = \xi\,dz$ satisfies a certain type of shape restriction. The coefficients of the polynomial are the unique solution of a mixed conic semi-definite program. The approach is widely applicable: for example, it can be used to incorporate expert opinion into a model, or as an objective function in machine learning algorithms.
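The reweighting $dp = \xi\,dz$ can be illustrated with a small Monte Carlo sketch (assumed setup, not the paper's conic semi-definite program): take $z$ standard normal and a fixed non-negative polynomial $\xi(x) \propto (1+x)^2$, normalized so that $E_z[\xi] = 1$, making $\xi$ a valid density ratio.

```python
import numpy as np

# Sketch of dp = xi dz: reweighting samples from a base distribution z
# by a non-negative polynomial ratio xi (the polynomial here is an
# assumed example; the paper solves for its coefficients via an SDP).
rng = np.random.default_rng(0)
samples = rng.standard_normal(200_000)        # draws from z = N(0, 1)

xi_unnorm = (1.0 + samples) ** 2              # non-negative polynomial
c = 1.0 / xi_unnorm.mean()                    # normalize so E_z[xi] ~ 1
weights = c * xi_unnorm

# Under p, expectations become weighted averages: E_p[f(X)] = E_z[f(X) xi(X)].
# Analytically, E[(1+X)^2] = 2 and E[X (1+X)^2] = 2 for X ~ N(0,1),
# so c = 1/2 and the mean shifts from 0 under z to 1 under p.
mean_p = np.mean(samples * weights)
assert weights.min() >= 0.0                   # xi stays non-negative
assert abs(weights.mean() - 1.0) < 1e-9      # normalization holds
assert abs(mean_p - 1.0) < 0.1               # Monte Carlo estimate of E_p[X]
```

In the paper the polynomial coefficients are not fixed in advance as here, but are chosen as the minimum-norm solution of a conic semi-definite program subject to the shape restrictions.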