
    Difference of Convex programming in adversarial SVM

    We present two models in adversarial machine learning, focusing on the Support Vector Machine (SVM) framework. In particular, we consider both an evasion and a poisoning problem. The first model aims at constructing effective sparse perturbations of the dataset samples, while the objective of the second is to induce a substantial rotation of the hyperplane defining the classifier. Both models are formulated as nonsmooth Difference of Convex optimization problems. Numerical results on both synthetic and real-life datasets are reported.
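    The abstract does not spell out the two formulations, so here is only a hedged sketch of the common structure: a Difference of Convex (DC) program minimizes an objective that splits into two convex pieces, and sparsity of a perturbation \delta is often promoted through a DC penalty such as the \ell_1 - \ell_2 term (an illustrative assumption, not necessarily the paper's exact model):

        \min_{\delta}\; g(\delta) - h(\delta), \qquad g,\,h \ \text{convex}; \qquad \text{e.g.}\quad \|\delta\|_1 - \|\delta\|_2 \ \ \text{as a sparsity-promoting DC penalty}.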

    Polyhedral separation via difference of convex (DC) programming

    We consider polyhedral separation of sets as a possible tool in supervised classification. In particular, we focus on the optimization model introduced by Astorino and Gaudioso (J Optim Theory Appl 112(2):265–293, 2002) and adopt its reformulation in difference of convex (DC) form. We tackle the problem by adapting the algorithm for DC programming known as DCA. We present the results of the implementation of DCA on a number of benchmark classification datasets.
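    For reference, the DCA scheme adapted here handles f = g - h (with g, h convex) by linearizing the concave part at the current iterate, so that each iteration reduces to a convex subproblem:

        y_k \in \partial h(x_k), \qquad x_{k+1} \in \arg\min_x \,\{\, g(x) - \langle y_k,\, x \rangle \,\}.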

    Conic separation of finite sets. I. The homogeneous case

    This work addresses the issue of separating two finite sets in R^n by means of a suitable revolution cone. The specific challenge at hand is to determine the aperture coefficient s, the axis y, and the apex z of the cone. These parameters have to be selected in such a way as to meet certain optimal separation criteria. Part I of this work focuses on the homogeneous case, in which the apex of the revolution cone is the origin of the space. The homogeneous case deserves a separate treatment, not just because of its intrinsic interest, but also because it helps to build up the general theory. Part II of this work concerns the non-homogeneous case, in which the apex of the cone can move in some admissible region. The non-homogeneous case is structurally more involved and leads to challenging nonconvex nonsmooth optimization problems. Mathematics Subject Classification: 90C25, 90C26
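    As a sketch of the geometry (a standard description of a revolution cone; the paper's notation may differ), the cone with apex z, unit axis y, and aperture coefficient s \in [0, 1] is

        C(z, y, s) = \{\, x \in \mathbb{R}^n : s\,\|x - z\| \le \langle y,\, x - z \rangle \,\},

    and the homogeneous case of Part I fixes z = 0, leaving only y and s as decision variables.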

    An illumination problem with tradeoff between coverage of a dataset and aperture angle of a conic light beam


    Ellipsoidal classification via semidefinite programming

    We propose a classification approach exploiting the relationships between ellipsoidal separation and the Support Vector Machine (SVM) with quadratic kernel. By adding a semidefinite programming (SDP) constraint to the SVM model, we ensure that the chosen hyperplane in feature space represents a non-degenerate ellipsoid in input space. This allows us to exploit SDP techniques within Support Vector Regression (SVR) approaches, yielding better results when ellipsoid-shaped separators are appropriate for the classification task. We compare our approach with spherical separation and SVM on some classification problems.
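    A hedged sketch of the underlying correspondence (notation assumed, not taken from the paper): a hyperplane in the quadratic-kernel feature space induces in input space a decision function

        f(x) = x^\top W x + b^\top x + c, \qquad W = W^\top,

    and adding the SDP constraint W \succeq \varepsilon I for some \varepsilon > 0 rules out degenerate quadrics, so the zero level set of f, when nonempty, is an ellipsoid.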

    SVM-Based Multiple Instance Classification via DC Optimization

    A multiple instance learning problem consists of categorizing objects, each represented as a set (bag) of points. Unlike the supervised classification paradigm, where each point of the training set is labeled, labels are only associated with bags, while the labels of the points inside the bags are unknown. We focus on the binary classification case, where the objective is to discriminate between positive and negative bags using a separating surface. Adopting a support vector machine setting at the training level, the problem of minimizing the classification-error function can be formulated as a nonconvex nonsmooth unconstrained program. We propose a difference-of-convex (DC) decomposition of the nonconvex function, which we tackle by means of an appropriate nonsmooth DC algorithm. Numerical results on benchmark datasets are reported.
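    One standard DC decomposition in this setting (an illustrative possibility, not necessarily the paper's exact one) uses the fact that the score of a bag B is a max of affine functions: with \varphi(w, b) = \max_{x_j \in B} (w^\top x_j + b),

        \max\{0,\ 1 - \varphi(w, b)\} \;=\; \max\{1,\ \varphi(w, b)\} \;-\; \varphi(w, b),

    where both right-hand terms are convex, so the nonconvex bag-level hinge loss is indeed a difference of convex functions.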

    Ellipsoidal Classification via SemiDefinite Programming

    Separating two finite sets of points in a Euclidean space is a fundamental problem in classification. Customarily, linear separation is used, but nonlinear separators such as spheres have been shown to perform better on some tasks, such as edge detection in images. We exploit the relationships between a more general version of spherical separation, based on general ellipsoids, and the SVM model with quadratic kernel to propose a new classification approach. The implementation basically boils down to adding an SDP constraint to the standard SVM model in order to ensure that the chosen hyperplane in the feature space represents a non-degenerate ellipsoid in the input space; although somewhat more costly than the original formulation, this still allows us to exploit many of the techniques developed for SVR in combination with SDP approaches. We test our approach on several classification tasks, among which the edge detection problem for gray-scale images, showing that the approach is competitive with both spherical classification and quadratic-kernel SVM without the ellipsoidal restriction.
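    To make the construction concrete, here is a minimal sketch in a cvxpy-style formulation; the variable names, the trace(W) regularizer, and the solver setup are illustrative assumptions, not taken from the paper:

        # Minimal sketch: soft-margin quadratic separator with an SDP constraint
        # forcing the quadratic part to define a non-degenerate ellipsoid.
        import cvxpy as cp
        import numpy as np

        def ellipsoidal_classifier(X, y, C=1.0, eps=1e-3):
            # X: (m, n) array of samples; y: (m,) labels in {-1, +1}
            m, n = X.shape
            W = cp.Variable((n, n), symmetric=True)   # quadratic part
            b = cp.Variable(n)                        # linear part
            c = cp.Variable()                         # offset
            xi = cp.Variable(m, nonneg=True)          # slack variables
            # decision value f(x_i) = x_i' W x_i + b' x_i + c; affine in (W, b, c)
            f = cp.hstack([cp.quad_form(X[i], W) + X[i] @ b + c for i in range(m)])
            constraints = [cp.multiply(y, f) >= 1 - xi,   # soft-margin separation
                           W >> eps * np.eye(n)]          # non-degeneracy: W - eps*I PSD
            # trace(W) is used here as an assumed regularizer
            prob = cp.Problem(cp.Minimize(cp.trace(W) + C * cp.sum(xi)), constraints)
            prob.solve()                                  # requires an SDP-capable solver, e.g. SCS
            return W.value, b.value, c.value

    Here eps controls how far W must stay from singularity: larger values enforce rounder separators at the price of a smaller feasible margin.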

    A Nonmonotone Proximal Bundle Method With (Potentially) Continuous Step Decisions

    We discuss a numerical algorithm for the minimization of a convex nondifferentiable function, belonging to the family of proximal bundle methods. Unlike all of its brethren, the approach does not rely on measuring descent of the objective function at the so-called "serious steps", while "null steps" only serve to improve the descent direction in case of unsuccessful steps. Rather, a merit function is defined which is decreased at each iteration, leading to a (potentially) continuous choice of the stepsize between zero (the null step) and one (the serious step). By avoiding the discrete choice, the convergence analysis is simplified, and efficiency estimates for the method are more easily obtained. Simple choices for the step selection actually reproduce the dichotomic 0/1 behavior of standard proximal bundle methods, although with different rules and shedding new light on the rationale behind the process. Yet, using nonlinear upper models of the function in the step selection process can lead to actual fractional steps.
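    As a hedged sketch of the setting (the interpolation formula below is our reading of the "continuous stepsize", not a formula from the paper): classical proximal bundle methods solve a stabilized master problem over a cutting-plane model \hat{f}_k and then take a binary serious/null decision; here that decision becomes a step t_k \in [0, 1]:

        \bar{x}_k = \arg\min_x \Big\{\, \hat{f}_k(x) + \tfrac{\mu_k}{2}\,\|x - x_k\|^2 \,\Big\}, \qquad x_{k+1} = x_k + t_k\,(\bar{x}_k - x_k),

    with t_k = 1 recovering the classical serious step and t_k = 0 the null step.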