
    The Cost of Uncertainty in Curing Epidemics

    Motivated by the study of controlling (curing) epidemics, we consider the spread of an SI process on a known graph, where we have a limited budget with which to transition infected nodes back to the susceptible state (i.e., to cure nodes). Recent work has demonstrated that under perfect and instantaneous information (knowing which nodes are and are not infected), the budget required to cure a graph depends precisely on a combinatorial property called the CutWidth. We show that this assumption is in fact necessary: even a minor degradation of perfect information, e.g., a diagnostic test that is 99% accurate, drastically alters the landscape. Infections that could previously be cured in sublinear time may now require exponential time, or an order-wise larger budget, to cure. The crux of the issue is a tension not present in the full-information case: if a node is suspected (but not certain) to be infected, do we risk wasting our budget trying to cure a node that may be uninfected, or do we observe longer to increase our certainty, at the risk that the infection spreads further? Our results present fundamental, algorithm-independent bounds that trade off the budget required against uncertainty.
    Comment: 35 pages, 3 figures
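    To make the budget-vs-uncertainty tension concrete, below is a minimal toy simulation of an SI process on a path graph with a limited curing budget and a noisy diagnostic test. It illustrates the setup only, not the paper's algorithm or bounds; the graph, budget, test accuracy, and the naive frontier-chasing curing policy are all illustrative assumptions.

```python
import random

# Toy SI-with-curing simulation on a path graph. All parameters (graph,
# budget, test accuracy, curing policy) are illustrative assumptions,
# not values or methods from the paper.

def simulate(n=50, budget=2, test_accuracy=0.99, steps=200, seed=0):
    rng = random.Random(seed)
    infected = {0}                      # infection starts at one end of the path
    for _ in range(steps):
        # SI spread: every infected node infects its path neighbors.
        newly = set()
        for v in infected:
            for u in (v - 1, v + 1):
                if 0 <= u < n:
                    newly.add(u)
        infected |= newly
        # Noisy diagnosis: each node's test is wrong with prob 1 - accuracy,
        # so the "suspected" set can miss infected nodes or flag healthy ones.
        suspected = {v for v in range(n)
                     if (v in infected) == (rng.random() < test_accuracy)}
        # Spend the curing budget on the highest-index suspected nodes
        # (a naive frontier-chasing policy; curing a healthy node wastes budget).
        for v in sorted(suspected, reverse=True)[:budget]:
            infected.discard(v)
        if not infected:
            return True                 # epidemic cured
    return False

if __name__ == "__main__":
    print("cured:", simulate())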

    Robustness and Regularization of Support Vector Machines

    We consider regularized support vector machines (SVMs) and show that they are precisely equivalent to a new robust optimization formulation. We show that this equivalence of robust optimization and regularization has implications for both algorithms and analysis. In terms of algorithms, the equivalence suggests more general SVM-like classification algorithms that explicitly build in protection against noise while simultaneously controlling overfitting. On the analysis front, the equivalence of robustness and regularization provides a robust optimization interpretation for the success of regularized SVMs. We use this new robustness interpretation of SVMs to give a new proof of consistency of (kernelized) SVMs, thus establishing robustness as the reason regularized SVMs generalize well.
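    The objective in question is the norm-regularized (not norm-squared) soft-margin SVM, whose regularizer, per the paper's equivalence, plays the role of the worst-case perturbation term in a robust formulation. The sketch below minimizes that objective by plain subgradient descent; the synthetic data and the solver are illustrative assumptions, not the paper's code.

```python
import numpy as np

# Minimal sketch of the norm-regularized soft-margin SVM objective
#     min_w  c * ||w||_2  +  (1/m) * sum_i max(0, 1 - y_i * <w, x_i>)
# solved by plain subgradient descent. Data and hyperparameters are
# illustrative assumptions.

def train_svm(X, y, c=0.1, lr=0.01, epochs=500):
    m, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = y * (X @ w)
        active = margins < 1                              # nonzero hinge loss
        hinge_grad = -(y[active, None] * X[active]).sum(axis=0) / m
        reg_grad = c * w / (np.linalg.norm(w) + 1e-12)    # subgradient of c*||w||_2
        w -= lr * (hinge_grad + reg_grad)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1])                  # linearly separable labels
    w = train_svm(X, y)
    print("train accuracy:", float(np.mean(np.sign(X @ w) == y)))
```

    Increasing c corresponds, under the equivalence, to guarding against larger norm-bounded perturbations of the training points, which is exactly the noise-protection-versus-overfitting control the abstract describes.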

    A Convex Formulation for Mixed Regression with Two Components: Minimax Optimal Rates

    We consider the mixed regression problem with two components, under adversarial and stochastic noise. We give a convex optimization formulation that provably recovers the true solution, and we provide upper bounds on the recovery errors in both the arbitrary-noise and stochastic-noise settings. We also give matching minimax lower bounds (up to log factors), showing that under certain assumptions our algorithm is information-theoretically optimal. Ours is the first tractable algorithm guaranteeing successful recovery with tight bounds on recovery error and sample complexity.
    Comment: Added results on minimax lower bounds, which match our upper bounds on recovery errors up to log factors. Appeared in the Conference on Learning Theory (COLT), 2014 (JMLR W&CP 35:560-604, 2014).
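    For intuition about the problem setup, the sketch below generates two-component mixed regression data and runs a standard alternating-minimization heuristic as a baseline. This baseline is shown only to make the recovery task concrete; it is not the paper's convex formulation, and all dimensions and noise levels are illustrative assumptions.

```python
import numpy as np

# Two-component mixed regression: each response y_i comes from one of two
# unknown regressors, with the assignment hidden. The alternating-minimization
# routine below is a common heuristic, NOT the paper's convex method.

def generate(n=400, d=10, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    b1, b2 = rng.normal(size=d), rng.normal(size=d)
    X = rng.normal(size=(n, d))
    z = rng.integers(0, 2, size=n)                        # hidden labels
    y = np.where(z == 0, X @ b1, X @ b2) + sigma * rng.normal(size=n)
    return X, y, (b1, b2)

def alt_min(X, y, iters=50, seed=1):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    b1, b2 = rng.normal(size=d), rng.normal(size=d)       # random init
    for _ in range(iters):
        # Assign each sample to the regressor that fits it better ...
        m = (y - X @ b1) ** 2 <= (y - X @ b2) ** 2
        # ... then refit each regressor by least squares on its samples.
        b1 = np.linalg.lstsq(X[m], y[m], rcond=None)[0]
        b2 = np.linalg.lstsq(X[~m], y[~m], rcond=None)[0]
    return b1, b2

if __name__ == "__main__":
    X, y, (b1, b2) = generate()
    e1, e2 = alt_min(X, y)
    # Recovery error up to the unavoidable label swap between the components.
    err = min(np.linalg.norm(e1 - b1) + np.linalg.norm(e2 - b2),
              np.linalg.norm(e1 - b2) + np.linalg.norm(e2 - b1))
    print("recovery error (up to label swap):", round(float(err), 3))
```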