    Second order cone programming approaches for handling missing and uncertain data

    We propose a novel second order cone programming formulation for designing robust classifiers which can handle uncertainty in observations. Similar formulations are also derived for designing regression functions which are robust to uncertainties in the regression setting. The proposed formulations are independent of the underlying distribution, requiring only the existence of second order moments. These formulations are then specialized to the case of missing values in observations for both classification and regression problems. Experiments show that the proposed formulations outperform imputation-based approaches.
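    The robustness constraint alluded to here requires each training point, described by its mean mu_i and covariance Sigma_i, to be classified correctly with probability at least kappa; a Chebyshev-type bound turns this into the second order cone constraint y_i (w . mu_i + b) >= 1 - xi_i + gamma * ||Sigma_i^(1/2) w||_2 with gamma = sqrt(kappa / (1 - kappa)). The sketch below writes that constraint in cvxpy; the synthetic data, kappa = 0.9 and the trade-off constant C are illustrative assumptions, not values from the paper.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, d = 20, 2
    # Illustrative class means and per-point covariance square roots.
    mu = np.vstack([rng.normal(+1.5, 0.5, size=(n // 2, d)),
                    rng.normal(-1.5, 0.5, size=(n // 2, d))])
    y = np.array([+1.0] * (n // 2) + [-1.0] * (n // 2))
    S_half = np.array([0.3 * np.eye(d) for _ in range(n)])   # Sigma_i^(1/2)

    kappa = 0.9                                # required classification probability (assumed)
    gamma = np.sqrt(kappa / (1.0 - kappa))     # multiplier from the Chebyshev-type bound

    w, b = cp.Variable(d), cp.Variable()
    xi = cp.Variable(n, nonneg=True)
    C = 1.0

    # One second order cone constraint per training point.
    constraints = [y[i] * (mu[i] @ w + b) >= 1 - xi[i] + gamma * cp.norm(S_half[i] @ w, 2)
                   for i in range(n)]
    prob = cp.Problem(cp.Minimize(cp.norm(w, 2) + C * cp.sum(xi)), constraints)
    prob.solve()
    print(w.value, b.value)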

    Oracle-Based Robust Optimization via Online Learning

    Robust optimization is a common framework in optimization under uncertainty when the problem parameters are not known exactly, but are only known to belong to some given uncertainty set. In the robust optimization framework the problem solved is a min-max problem where a solution is judged according to its performance on the worst possible realization of the parameters. In many cases, a straightforward solution of a robust optimization problem of a certain type requires solving an optimization problem of a more complicated type, which in some cases is even NP-hard. For example, solving a robust conic quadratic program, such as those arising in robust SVMs with ellipsoidal uncertainty, leads in general to a semidefinite program. In this paper we develop a method for approximately solving a robust optimization problem using tools from online convex optimization, where in every stage a standard (non-robust) optimization program is solved. Our algorithms find an approximate robust solution using a number of calls to an oracle that solves the original (non-robust) problem, and this number of calls is inversely proportional to the square of the target accuracy.
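    The interplay between the oracle and the online learner can be sketched as follows: in each round the adversary updates the uncertain parameter with a projected online gradient step, a standard (non-robust) oracle is called on that fixed parameter value, and the averaged oracle answers form the approximate robust solution. The toy bilinear objective, the box uncertainty set, the step size and all names below are illustrative assumptions rather than the paper's actual algorithms, which come with formal regret-based guarantees.

    import numpy as np

    def robust_via_oracle(oracle, grad_u, project_U, u0, rounds=300, eta=0.05):
        """Approximate min_x max_{u in U} f(x, u) with one non-robust oracle call per round."""
        u = np.asarray(u0, dtype=float).copy()
        xs = []
        for _ in range(rounds):
            x = oracle(u)                          # solve the standard problem for this fixed u
            xs.append(x)
            u = project_U(u + eta * grad_u(x, u))  # adversary: projected online gradient ascent
        return np.mean(xs, axis=0)                 # averaged solution approximates the robust one

    # Toy instance: f(x, u) = u . x with x on the simplex and u in the box [lo, hi].
    lo = np.zeros(3)
    hi = np.array([1.0, 0.6, 0.9])

    def oracle(u):
        # argmin over the simplex of u . x puts all mass on the smallest coordinate of u.
        x = np.zeros_like(u)
        x[np.argmin(u)] = 1.0
        return x

    grad_u = lambda x, u: x                   # gradient of u . x with respect to u
    project_U = lambda u: np.clip(u, lo, hi)  # projection onto the box

    x_avg = robust_via_oracle(oracle, grad_u, project_U, u0=lo)
    print(x_avg)   # mass concentrates on coordinate 1, whose worst case hi[1] = 0.6 is smallest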

    Classification under input uncertainty with support vector machines

    Uncertainty can exist in any measurement of data describing the real world. Many machine learning approaches attempt to model any uncertainty in the form of additive noise on the target, which can be effective for simple models. However, for more complex models, and where a richer description of anisotropic uncertainty is available, these approaches can suffer. The principal focus of this thesis is the development of advanced classification approaches that can incorporate the known input uncertainties into support vector machines (SVMs), accommodating isotropic uncertainty information in the classification. This new method is termed uncertainty support vector classification (USVC). Kernel functions can also be used, through the derivation of a novel kernelisation formulation that generalises the proposed technique to non-linear models; the resulting optimisation problem is a second order cone program (SOCP) with a unique solution. Based on statistical models of the input uncertainty, Bi and Zhang (2005) developed total support vector classification (TSVC), which has a similar geometric interpretation and optimisation formulation to USVC, but assigns much lower probabilities than USVC that the corresponding original inputs are correctly classified by the optimal solution. Adaptive uncertainty support vector classification (AUSVC) is then developed based on the combination of TSVC and USVC, in which the probabilities of the original inputs being correctly classified are adaptively adjusted in accordance with the corresponding uncertain inputs. Inheriting the advantages of AUSVC and the minimax probability machine (MPM), minimax probability support vector classification (MPSVC) is developed to maximise the probabilities of the original inputs being correctly classified. Statistical tests are used to evaluate the experimental results of the different approaches. Experiments illustrate that AUSVC and MPSVC are suitable for classifying the observed uncertain inputs and recovering the true target function, respectively, since the contamination is normally unknown to the learner.
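    For intuition about how spherical (isotropic) input uncertainty can enter the SVM constraints, one common construction replaces each margin requirement by its worst case over a ball of radius r_i around the input, which shrinks the classification score by r_i * ||w||_2 and turns training into an SOCP. The cvxpy sketch below follows that generic construction; the data, the radii and the constant C are illustrative assumptions, and this is not claimed to be the exact USVC formulation of the thesis.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(1)
    n, d = 30, 2
    X = np.vstack([rng.normal(+1.0, 0.4, size=(n // 2, d)),
                   rng.normal(-1.0, 0.4, size=(n // 2, d))])
    y = np.array([+1.0] * (n // 2) + [-1.0] * (n // 2))
    r = np.full(n, 0.2)          # radius of the uncertainty ball around each input (assumed)

    w, b = cp.Variable(d), cp.Variable()
    xi = cp.Variable(n, nonneg=True)
    C = 10.0

    # Worst case over ||delta_i|| <= r_i of y_i ((x_i + delta_i) . w + b)
    # equals y_i (x_i . w + b) - r_i ||w||_2, so that quantity must reach the margin.
    constraints = [y[i] * (X[i] @ w + b) - r[i] * cp.norm(w, 2) >= 1 - xi[i]
                   for i in range(n)]
    prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)), constraints)
    prob.solve()
    print(w.value, b.value)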

    Robustness and Regularization of Support Vector Machines

    We consider regularized support vector machines (SVMs) and show that they are precisely equivalent to a new robust optimization formulation. We show that this equivalence of robust optimization and regularization has implications for both algorithms and analysis. In terms of algorithms, the equivalence suggests more general SVM-like algorithms for classification that explicitly build in protection against noise while at the same time controlling overfitting. On the analysis front, the equivalence of robustness and regularization provides a robust optimization interpretation for the success of regularized SVMs. We use this new robustness interpretation of SVMs to give a new proof of consistency of (kernelized) SVMs, thus establishing robustness as the reason regularized SVMs generalize well.
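    The identity behind this equivalence, in its simplest form, is that the worst-case hinge loss of a sample under a Euclidean-norm-bounded disturbance of radius rho equals the nominal hinge loss with the classification score reduced by rho * ||w||_2. The numpy check below illustrates that identity on random data; it is a hedged illustration of the idea, not the paper's general result, which covers arbitrary norms, kernels and the full regularized objective.

    import numpy as np

    rng = np.random.default_rng(2)
    n, d, rho = 50, 3, 0.3
    X = rng.normal(size=(n, d))
    y = rng.choice([-1.0, 1.0], size=n)
    w = rng.normal(size=d)

    hinge = lambda z: np.maximum(0.0, 1.0 - z)

    # Closed form: the worst perturbation for sample i is delta_i = -rho * y_i * w / ||w||.
    worst_delta = -rho * y[:, None] * w / np.linalg.norm(w)
    worst_case = hinge(y * ((X + worst_delta) @ w)).sum()

    # Equivalent "regularization-like" expression: nominal score shrunk by rho * ||w||.
    regularized = hinge(y * (X @ w) - rho * np.linalg.norm(w)).sum()

    print(worst_case, regularized)            # the two numbers coincide
    assert np.isclose(worst_case, regularized)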

    Regularization and Kernelization of the Maximin Correlation Approach

    Robust classification becomes challenging when each class consists of multiple subclasses. Examples include multi-font optical character recognition and automated protein function prediction. In correlation-based nearest-neighbor classification, the maximin correlation approach (MCA) provides the worst-case optimal solution by minimizing the maximum misclassification risk through an iterative procedure. Despite this optimality, the original MCA has drawbacks that have limited its wide applicability in practice: it tends to be sensitive to outliers, cannot effectively handle nonlinearities in datasets, and suffers from high computational complexity. To address these limitations, we propose an improved solution, named regularized maximin correlation approach (R-MCA). We first reformulate MCA as a quadratically constrained linear programming (QCLP) problem, incorporate regularization by introducing slack variables in the primal problem of the QCLP, and derive the corresponding Lagrangian dual. The dual formulation enables us to apply the kernel trick to R-MCA so that it can better handle nonlinearities. Our experimental results demonstrate that the regularization and kernelization make the proposed R-MCA more robust and accurate for various classification tasks than the original MCA. Furthermore, when the data size or dimensionality grows, R-MCA runs substantially faster by solving either the primal or dual (whichever has a smaller variable dimension) of the QCLP.
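    To make the "maximin correlation as a quadratically constrained linear program" idea concrete, the representative of a class can be found by maximizing the smallest correlation between a unit-norm vector w and the normalized class members. The cvxpy sketch below solves that core problem, without the regularization slack variables or the kernelized dual that R-MCA adds; the data and names are illustrative assumptions.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(3)
    m, d = 15, 4
    Xc = rng.normal(size=(m, d)) + 2.0                 # samples of one class (illustrative)
    Xc /= np.linalg.norm(Xc, axis=1, keepdims=True)    # unit norm: correlation = inner product

    w = cp.Variable(d)
    t = cp.Variable()
    prob = cp.Problem(cp.Maximize(t),
                      [Xc @ w >= t,             # every within-class correlation is at least t
                       cp.norm(w, 2) <= 1])     # quadratic (norm) constraint -> QCLP
    prob.solve()
    print("worst-case correlation:", t.value)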

    A Categorical Model for Faceted Ontologies with Data Repositories


    Supervised classification and mathematical optimization

    Data Mining techniques often call for the resolution of optimization problems. Supervised Classification, and, in particular, Support Vector Machines, can be seen as a paradigmatic instance. In this paper, some links between Mathematical Optimization methods and Supervised Classification are emphasized. It is shown that many different areas of Mathematical Optimization play a central role in off-the-shelf Supervised Classification methods. Moreover, Mathematical Optimization turns out to be extremely useful to address important issues in Classification, such as identifying relevant variables, improving the interpretability of classifiers or dealing with vagueness/noise in the data.
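    Since Support Vector Machines are singled out as the paradigmatic instance, here is a minimal soft-margin linear SVM written directly as the convex quadratic program it is, using cvxpy; the synthetic data and the constant C are illustrative assumptions only.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(4)
    n, d = 40, 2
    X = np.vstack([rng.normal(+1.0, 0.6, size=(n // 2, d)),
                   rng.normal(-1.0, 0.6, size=(n // 2, d))])
    y = np.array([+1.0] * (n // 2) + [-1.0] * (n // 2))

    w, b = cp.Variable(d), cp.Variable()
    xi = cp.Variable(n, nonneg=True)
    C = 1.0

    # Classical soft-margin SVM as a convex quadratic program.
    objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))
    constraints = [cp.multiply(y, X @ w + b) >= 1 - xi]
    prob = cp.Problem(objective, constraints)
    prob.solve()
    print(w.value, b.value)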
