
    Oracle-Based Robust Optimization via Online Learning

    Robust optimization is a common framework for optimization under uncertainty when the problem parameters are not known exactly, but are known to belong to some given uncertainty set. In the robust optimization framework, the problem solved is a min-max problem where a solution is judged by its performance on the worst possible realization of the parameters. In many cases, a straightforward solution of a robust optimization problem of a certain type requires solving an optimization problem of a more complicated type, which may even be NP-hard. For example, solving a robust conic quadratic program, such as those arising in robust SVM, under ellipsoidal uncertainty leads in general to a semidefinite program. In this paper we develop a method for approximately solving a robust optimization problem using tools from online convex optimization, where in every stage a standard (non-robust) optimization program is solved. Our algorithms find an approximate robust solution using a number of calls to an oracle that solves the original (non-robust) problem that is inversely proportional to the square of the target accuracy.
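    The dual-player scheme the abstract describes can be sketched compactly: an online-learning adversary updates the uncertain parameters by gradient ascent while the oracle repeatedly solves the standard problem, and the iterates are averaged. Below is a minimal sketch, assuming $f(x,u)$ is concave in $u$; the names `oracle`, `grad_u`, and `project_u` (Euclidean projection onto the uncertainty set) are illustrative stand-ins supplied by the user, not the paper's API.

        import numpy as np

        def robust_solve(oracle, grad_u, project_u, u0, T, eta):
            # Approximates min_x max_{u in U} f(x, u) using T oracle calls;
            # per the abstract, T scales like 1/eps^2 for target accuracy eps.
            u = np.asarray(u0, dtype=float)
            xs = []
            for _ in range(T):
                x = oracle(u)                          # solve the standard (non-robust) problem
                xs.append(x)
                u = project_u(u + eta * grad_u(x, u))  # adversary: online gradient ascent on u
            return np.mean(xs, axis=0)                 # averaged iterate is approximately robust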

    Quantum-Inspired Sublinear Algorithm for Solving Low-Rank Semidefinite Programming

    Semidefinite programming (SDP) is a central topic in mathematical optimization with extensive studies on its efficient solvers. In this paper, we present a proof-of-principle sublinear-time algorithm for solving SDPs with low-rank constraints; specifically, given an SDP with $m$ constraint matrices, each of dimension $n$ and rank $r$, our algorithm can compute any entry and efficient descriptions of the spectral decomposition of the solution matrix. The algorithm runs in time $O(m \cdot \mathrm{poly}(\log n, r, 1/\varepsilon))$ given access to a sampling-based low-overhead data structure for the constraint matrices, where $\varepsilon$ is the precision of the solution. In addition, we apply our algorithm to a quantum state learning task. Technically, our approach aligns with 1) SDP solvers based on the matrix multiplicative weights (MMW) framework by Arora and Kale [TOC '12]; 2) the sampling-based dequantizing framework pioneered by Tang [STOC '19]. In order to compute the matrix exponential required in the MMW framework, we introduce two new techniques that may be of independent interest:
    - Weighted sampling: assuming sampling access to each individual constraint matrix $A_1, \ldots, A_\tau$, we propose a procedure that gives a good approximation of $A = A_1 + \cdots + A_\tau$.
    - Symmetric approximation: we propose a sampling procedure that gives the spectral decomposition of a low-rank Hermitian matrix $A$. To the best of our knowledge, this is the first sampling-based algorithm for spectral decomposition, as previous works only give singular values and vectors.

    Comment: 37 pages, 1 figure. To appear in the Proceedings of the 45th International Symposium on Mathematical Foundations of Computer Science (MFCS 2020).
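    For orientation, here is a dense (non-sublinear) sketch of the MMW feasibility loop that the Arora-Kale framework builds on; the paper's contribution is to replace the exact matrix exponential and constraint sums below with the sampling procedures described above. The feasibility formulation and all names here are illustrative, not the paper's algorithm.

        import numpy as np
        from scipy.linalg import expm

        def mmw_sdp_feasibility(As, bs, T, eta):
            # Seek a density matrix X (X >= 0, tr X = 1) with tr(A_j X) <= b_j for all j.
            n = As[0].shape[0]
            S = np.zeros((n, n))                      # running sum of violated constraint matrices
            for _ in range(T):
                X = expm(-eta * S)                    # matrix exponential: the expensive step
                X /= np.trace(X)
                viol = [np.trace(A @ X) - b for A, b in zip(As, bs)]
                j = int(np.argmax(viol))
                if viol[j] <= 0:
                    return X                          # all constraints satisfied
                S += As[j]                            # penalize the most violated direction
            return None                               # no feasible point found within T rounds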

    Efficient Semidefinite Branch-and-Cut for MAP-MRF Inference

    We propose a Branch-and-Cut (B&C) method for solving general MAP-MRF inference problems. The core of our method is a very efficient bounding procedure, which combines scalable semidefinite programming (SDP) and a cutting-plane method for seeking violated constraints. To further speed up the computation, several strategies are exploited, including model reduction, warm start, and removal of inactive constraints. We analyze the performance of the proposed method under different settings, and demonstrate that it either outperforms or performs on par with state-of-the-art approaches. Especially when the connectivities are dense or when the relative magnitudes of the unary costs are low, we achieve the best reported results. Experiments show that the proposed algorithm achieves better approximation than the state-of-the-art methods within a variety of time budgets on challenging non-submodular MAP-MRF inference problems.

    Comment: 21 pages.
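    Independent of the SDP bounding details, the overall control flow is the standard best-first branch-and-cut loop sketched below; `lower_bound` stands in for the paper's SDP relaxation and `find_cuts` for its cutting-plane separation, and the node interface (`add_constraint`, `solution`) is hypothetical.

        import heapq

        def branch_and_cut(root, lower_bound, find_cuts, branch, is_integral):
            # Minimize MAP-MRF energy: keep the best labeling found so far (incumbent)
            # and prune any subproblem whose relaxation bound cannot beat it.
            best_val, best_sol = float('inf'), None
            heap, tie = [(lower_bound(root), 0, root)], 1
            while heap:
                bound, _, node = heapq.heappop(heap)
                if bound >= best_val:
                    continue                          # prune by bound
                for cut in find_cuts(node):           # cutting planes tighten the relaxation
                    node.add_constraint(cut)
                bound = lower_bound(node)             # re-solve the strengthened relaxation
                if bound >= best_val:
                    continue
                if is_integral(node):
                    best_val, best_sol = bound, node.solution()
                else:
                    for child in branch(node):
                        heapq.heappush(heap, (lower_bound(child), tie, child))
                        tie += 1
            return best_sol, best_val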

    Stability Yields Sublinear Time Algorithms for Geometric Optimization in Machine Learning

    In this paper, we study several important geometric optimization problems arising in machine learning. First, we revisit the Minimum Enclosing Ball (MEB) problem in Euclidean space $\mathbb{R}^d$. The problem has been extensively studied, but real-world machine learning tasks often involve datasets so large that we cannot afford even linear-time algorithms. Motivated by recent developments in beyond-worst-case analysis, we introduce the notion of stability for MEB, which is natural and easy to understand. Roughly speaking, an instance of MEB is stable if the radius of the resulting ball cannot be significantly reduced by removing a small fraction of the input points. Under the stability assumption, we present two sampling algorithms for computing a radius-approximate MEB with sample complexities independent of the number of input points $n$; the sample complexity of the second algorithm is independent even of the dimensionality $d$. We also consider the general case without the stability assumption, presenting a hybrid algorithm that outputs either a radius-approximate MEB or a covering-approximate MEB and improves the running time and the number of passes of previous sublinear MEB algorithms. Further, we extend our notion of stability and design sublinear-time algorithms for other geometric optimization problems, including MEB with outliers, polytope distance, and one-class and two-class linear SVMs (with or without outliers). Our proposed algorithms also work for the kernelized versions of these problems.
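    As a concrete illustration of the sampling idea, the sketch below runs the classic Badoiu-Clarkson $(1+\varepsilon)$-approximation iteration on a uniform sample of the input; under the stability assumption the abstract introduces, a sample of size independent of $n$ already pins down the radius. The function and parameter names are ours, not the paper's.

        import numpy as np

        def sampled_meb(P, eps, sample_size, seed=0):
            # P: (n, d) array of points. Returns an approximate center and radius
            # computed only from a random sample, never touching all n points.
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(P), size=min(sample_size, len(P)), replace=False)
            S = P[idx]
            c = S[0].astype(float)
            for i in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
                far = S[np.argmax(np.linalg.norm(S - c, axis=1))]  # farthest sampled point
                c += (far - c) / (i + 1)          # Badoiu-Clarkson center update
            r = np.linalg.norm(S - c, axis=1).max()
            return c, r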

    Efficient Primal-Dual Algorithms for Large-Scale Multiclass Classification

    We develop efficient algorithms to train $\ell_1$-regularized linear classifiers with large feature dimensionality $d$, number of classes $k$, and sample size $n$. Our focus is on a special class of losses that includes, in particular, the multiclass hinge and logistic losses. Our approach combines several ideas: (i) passing to the equivalent saddle-point problem with a quasi-bilinear objective; (ii) applying stochastic mirror descent with a proper choice of geometry which guarantees a favorable accuracy bound; (iii) devising non-uniform sampling schemes to approximate the matrix products. In particular, for the multiclass hinge loss we propose a sublinear algorithm with iterations performed in $O(d+n+k)$ arithmetic operations.
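    A toy version of ingredients (i)-(iii) is easy to write down for a plain bilinear saddle point over two probability simplices: entropic mirror descent gives multiplicative updates, and sampling a single column or row with the current probabilities yields unbiased estimates of the matrix products. This is a minimal model of the approach under those simplifying assumptions, not the paper's algorithm.

        import numpy as np

        def bilinear_smd(A, T, eta, seed=0):
            # Approximates min_{x in simplex} max_{y in simplex} x^T A y.
            rng = np.random.default_rng(seed)
            n, k = A.shape
            x, y = np.full(n, 1.0 / n), np.full(k, 1.0 / k)
            x_sum, y_sum = np.zeros(n), np.zeros(k)
            for _ in range(T):
                j = rng.choice(k, p=y)                       # E[A[:, j]] = A @ y  (unbiased)
                i = rng.choice(n, p=x)                       # E[A[i, :]] = A.T @ x (unbiased)
                x *= np.exp(-eta * A[:, j]); x /= x.sum()    # entropic mirror descent (min player)
                y *= np.exp(eta * A[i, :]);  y /= y.sum()    # entropic mirror ascent (max player)
                x_sum += x; y_sum += y
            return x_sum / T, y_sum / T                      # averaged iterates

    Each iteration touches one row and one column of $A$, so it costs $O(n+k)$ rather than $O(nk)$ arithmetic operations, which is the same sublinear flavor as the abstract's $O(d+n+k)$ claim.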