6 research outputs found

    Tight Bound for Sum of Heterogeneous Random Variables: Application to Chance Constrained Programming

    Full text link
    We study a tight Bennett-type concentration inequality for sums of heterogeneous and independent random variables, expressed as a one-dimensional minimization. We show that this refinement, which outperforms the standard known bounds, remains computationally tractable: we develop a polynomial-time algorithm to compute confidence bounds, proven to terminate with an epsilon-solution. From the proposed inequality, we deduce tight distributionally robust bounds for Chance-Constrained Programming problems. To illustrate the efficiency of our approach, we consider two use cases. First, we study the chance-constrained binary knapsack problem and highlight the efficiency of our cutting-plane approach, which yields stronger solutions than classical inequalities (such as Chebyshev-Cantelli or Hoeffding). Second, we address the Support Vector Machine problem, where the convex conservative approximation we obtain improves the robustness of the separating hyperplane while remaining computationally tractable.
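
    A minimal sketch of the kind of comparison the abstract describes: for a sum of independent, bounded, heterogeneous variables, a Bennett-type tail bound obtained by a one-dimensional minimization over the Chernoff parameter is compared against the Hoeffding and Chebyshev-Cantelli bounds. The variable names, the specific moment-generating-function bound, and the use of scipy.optimize are illustrative assumptions, not the paper's actual algorithm or its epsilon-termination guarantee.

    # Hypothetical comparison of one-sided tail bounds for S = sum_i X_i, where the
    # X_i are independent, bounded, and heterogeneous (per-variable ranges and variances).
    # The Bennett-type bound is computed by minimizing the Chernoff exponent over a
    # single parameter lambda; this is an illustration of the idea only.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def bennett_type_bound(t, s2, c):
        """P(S - E[S] >= t) via min over lambda of exp(-lambda*t + sum_i psi_i(lambda)),
        using the standard MGF bound psi_i(l) = (s2_i / c_i^2) * (exp(l*c_i) - l*c_i - 1)
        valid when X_i - E[X_i] <= c_i; heterogeneous c_i are kept per variable."""
        def log_bound(lam):
            return -lam * t + np.sum(s2 / c**2 * (np.exp(lam * c) - lam * c - 1.0))
        res = minimize_scalar(log_bound, bounds=(1e-8, 50.0), method="bounded")
        return np.exp(res.fun)

    def hoeffding_bound(t, ranges):
        """Hoeffding: P(S - E[S] >= t) <= exp(-2 t^2 / sum_i (b_i - a_i)^2)."""
        return np.exp(-2.0 * t**2 / np.sum(ranges**2))

    def chebyshev_cantelli_bound(t, s2):
        """Chebyshev-Cantelli: P(S - E[S] >= t) <= V / (V + t^2), V = sum_i var(X_i)."""
        V = np.sum(s2)
        return V / (V + t**2)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = 50
        ranges = rng.uniform(0.5, 2.0, size=n)                     # b_i - a_i, heterogeneous widths
        s2 = (ranges / 4.0) ** 2 * rng.uniform(0.1, 1.0, size=n)   # variances within the bounded-range limit
        c = ranges                                                  # upper deviation bound per variable
        t = 5.0
        print("Bennett-type (1-D minimization):", bennett_type_bound(t, s2, c))
        print("Hoeffding:                      ", hoeffding_bound(t, ranges))
        print("Chebyshev-Cantelli:             ", chebyshev_cantelli_bound(t, s2))

    Because each psi_i keeps its own variance and range, the minimized exponent adapts to the heterogeneity of the summands, which is what makes this family of bounds tighter than a single worst-case range plugged into Hoeffding.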

    Chance constrained uncertain classification via robust optimization

    No full text
    This paper studies the problem of constructing robust classifiers when the training data is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately, such a CCP turns out to be intractable. The key novelty is in employing Bernstein bounding schemes to relax the CCP as a convex second-order cone program whose solution is guaranteed to satisfy the probabilistic constraint. Prior to this work, only Chebyshev-based relaxations were exploited in learning algorithms. Bernstein bounds employ richer partial information and hence can be far less conservative than Chebyshev bounds. Owing to this more efficient modeling of uncertainty, the resulting classifiers achieve higher classification margins and hence better generalization. Methodologies for classifying uncertain test data points and error measures for evaluating classifiers robust to uncertain data are discussed. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle data uncertainty and outperform the state of the art in many cases.
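
    A minimal sketch of how a chance constraint on correct classification can be replaced by a second-order cone constraint. The version below uses the well-known Chebyshev-Cantelli relaxation (kappa = sqrt((1 - eta) / eta)); the paper's contribution is a tighter Bernstein-based relaxation that yields the same second-order cone structure with different coefficients, which is not reproduced here. The data shapes, function name, and cvxpy modeling choices are assumptions for illustration.

    # Sketch of a robust (chance-constrained) soft-margin classifier. Each uncertain
    # training point i is summarized by a mean mu[i] and a covariance square root
    # S_sqrt[i]. The chance constraint
    #     P( y_i (w . x_i + b) >= 1 - xi_i ) >= 1 - eta
    # is conservatively enforced by the cone constraint
    #     y_i (w . mu_i + b) >= 1 - xi_i + kappa * || S_sqrt_i @ w ||_2 .
    import numpy as np
    import cvxpy as cp

    def robust_svm_socp(mu, S_sqrt, y, eta=0.1, C=1.0):
        n, d = mu.shape
        kappa = np.sqrt((1.0 - eta) / eta)     # Chebyshev-Cantelli multiplier (illustrative)
        w = cp.Variable(d)
        b = cp.Variable()
        xi = cp.Variable(n, nonneg=True)       # slack variables of the soft margin
        cons = []
        for i in range(n):
            cons.append(y[i] * (mu[i] @ w + b)
                        >= 1 - xi[i] + kappa * cp.norm(S_sqrt[i] @ w, 2))
        prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)), cons)
        prob.solve()
        return w.value, b.value

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        n, d = 40, 2
        y = np.sign(rng.standard_normal(n))
        mu = rng.standard_normal((n, d)) + 2.0 * y[:, None]     # class-dependent means
        S_sqrt = np.stack([0.3 * np.eye(d) for _ in range(n)])  # isotropic point uncertainty
        w, b = robust_svm_socp(mu, S_sqrt, y, eta=0.1)
        print("w =", w, "b =", b)

    The larger kappa is, the further the separating hyperplane is pushed from the uncertainty ellipsoids; a less conservative bound (such as the Bernstein scheme the abstract describes) certifies the same probability level with a smaller safety margin, hence the higher classification margins reported.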