
    Parametric Polynomial Time Perceptron Rescaling Algorithm

    Let us consider a linear feasibility problem with a possibly infinite number of inequality constraints, posed in an on-line setting: an algorithm suggests a candidate solution, and an oracle either confirms its feasibility or outputs a violated constraint vector. This model can be solved by subgradient optimisation algorithms for non-smooth functions, known in the machine learning community as perceptron algorithms, and its solvability depends on the problem dimension and the radius of the constraint set. The classical perceptron algorithm may have exponential complexity in the worst case when the radius is infinitesimal [1]. To overcome this difficulty, the space dilation technique was exploited in the ellipsoid algorithm to make its running time polynomial [3]. A special case of space dilation, the rescaling procedure, is utilised in the perceptron rescaling algorithm [2], with a probabilistic approach to choosing the direction of dilation. A parametric version of the perceptron rescaling algorithm is the focus of this work. It is demonstrated that certain fixed parameters of the latter algorithm (the initial estimate of the radius and the relaxation parameter) may be modified and adapted for particular problems. The generalised theoretical framework makes it possible to establish convergence of the algorithm for any chosen set of values of these parameters, and suggests a potential way of decreasing the complexity of the algorithm, which remains the subject of current research.
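    A minimal sketch of the classical perceptron update in this oracle model may make the setting concrete. The `oracle` callback, the iteration budget, and the toy constraint matrix are illustrative assumptions; the rescaling step of the algorithm discussed above is deliberately omitted.

```python
import numpy as np

def perceptron_feasibility(oracle, dim, max_iters=10_000):
    """Classical perceptron update for the on-line feasibility model:
    seek x with a . x > 0 for every constraint vector a the oracle may return.
    oracle(x) returns a violated constraint vector, or None if x is feasible."""
    x = np.zeros(dim)
    for _ in range(max_iters):
        a = oracle(x)
        if a is None:        # oracle confirms feasibility
            return x
        x = x + a            # step towards the half-space of the violated constraint
    return None              # iteration budget exhausted

# Toy oracle over a finite constraint matrix A: report any row with A[i] . x <= 0.
A = np.array([[1.0, 2.0], [2.0, -1.0], [0.5, 1.5]])
oracle = lambda x: next((a for a in A if a @ x <= 0), None)
print(perceptron_feasibility(oracle, dim=2))
```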

    A Combinatorial, Strongly Polynomial-Time Algorithm for Minimizing Submodular Functions

    This paper presents the first combinatorial polynomial-time algorithm for minimizing submodular set functions, answering an open question posed in 1981 by Grötschel, Lovász, and Schrijver. The algorithm employs a scaling scheme that uses a flow in the complete directed graph on the underlying set, with each arc capacity equal to the scaled parameter. The resulting algorithm runs in time bounded by a polynomial in the size of the underlying set and the length of the largest function value. The paper also presents a strongly polynomial-time version that runs in time bounded by a polynomial in the size of the underlying set, independent of the function values.
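    To make the problem concrete, here is a brute-force sketch of submodular minimization over an explicit ground set; it runs in exponential time, whereas the algorithm above achieves (strongly) polynomial time. The cut-minus-weights example function is an illustrative assumption, not taken from the paper.

```python
from itertools import combinations

def brute_force_submodular_min(f, ground_set):
    """Exponential-time illustration of the problem: minimize f over all subsets."""
    best_set, best_val = frozenset(), f(frozenset())
    for r in range(1, len(ground_set) + 1):
        for subset in combinations(ground_set, r):
            s = frozenset(subset)
            if f(s) < best_val:
                best_set, best_val = s, f(s)
    return best_set, best_val

# Example: a graph cut function is submodular, and subtracting a modular
# (per-element) weight function keeps it submodular.
edges = [(1, 2), (2, 3), (3, 4), (1, 4), (2, 4)]
weights = {1: 1, 2: 2, 3: 1, 4: 2}
cut = lambda s: sum(1 for u, v in edges if (u in s) != (v in s))
f = lambda s: cut(s) - sum(weights[v] for v in s)
print(brute_force_submodular_min(f, {1, 2, 3, 4}))
```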

    Plantinga-Vegter algorithm takes average polynomial time

    We exhibit a condition-based analysis of the adaptive subdivision algorithm due to Plantinga and Vegter. The first complexity analysis of the PV Algorithm is due to Burr, Gao and Tsigaridas, who proved a worst-case cost bound of $O\big(2^{\tau d^{4}\log d}\big)$ for degree $d$ plane curves with maximum coefficient bit-size $\tau$. This exponential bound, it was observed, is in stark contrast with the good performance of the algorithm in practice. More in line with this performance, we show that, with respect to a broad family of measures, the expected time complexity of the PV Algorithm is bounded by $O(d^7)$ for real, degree $d$, plane curves. We also exhibit a smoothed analysis of the PV Algorithm that yields similar complexity estimates. To obtain these results we combine robust probabilistic techniques coming from geometric functional analysis with condition numbers and the continuous amortization paradigm introduced by Burr, Krahmer and Yap. We hope this will motivate a fruitful exchange of ideas between the different approaches to numerical computation.
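    A much-simplified subdivision sketch may help illustrate the flavour of adaptive subdivision. It discards a box whenever a crude Lipschitz-style bound certifies the curve cannot pass through it; the actual PV criterion uses interval arithmetic on the function and its gradient, so the exclusion test, the tolerance-based stopping rule, and the circle example below are all illustrative assumptions.

```python
def subdivide(f, grad_bound, box, tol, cells):
    """Crude adaptive-subdivision sketch (not the actual PV criterion):
    discard a box if |f(center)| exceeds grad_bound times the box radius,
    i.e. f cannot vanish inside; otherwise split until boxes are small."""
    (x0, x1), (y0, y1) = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    radius = max(x1 - x0, y1 - y0) / 2
    if abs(f(cx, cy)) > grad_bound * radius * 2**0.5:
        return                                  # curve cannot enter this box
    if radius < tol:
        cells.append(box)                       # small box possibly containing the curve
        return
    for bx in ((x0, cx), (cx, x1)):
        for by in ((y0, cy), (cy, y1)):
            subdivide(f, grad_bound, (bx, by), tol, cells)

# Example: approximate the unit circle x^2 + y^2 - 1 = 0 on [-2, 2]^2.
# grad_bound = sup |grad f| over the region; here |grad f| <= 4 * sqrt(2).
cells = []
subdivide(lambda x, y: x*x + y*y - 1, 4 * 2**0.5, ((-2, 2), (-2, 2)), 0.05, cells)
print(len(cells), "candidate cells")
```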

    A Polynomial Time Algorithm for Lossy Population Recovery

    We give a polynomial time algorithm for the lossy population recovery problem. In this problem, the goal is to approximately learn an unknown distribution on binary strings of length $n$ from lossy samples: for some parameter $\mu$, each coordinate of the sample is preserved with probability $\mu$ and otherwise is replaced by a `?'. The running time and number of samples needed for our algorithm are polynomial in $n$ and $1/\varepsilon$ for each fixed $\mu > 0$. This improves on the algorithm of Wigderson and Yehudayoff, which runs in quasi-polynomial time for any $\mu > 0$, and on the polynomial time algorithm of Dvir et al., which was shown by Batman et al. to work for $\mu \gtrapprox 0.30$. In fact, our algorithm also works in the more general framework of Batman et al., in which there is no a priori bound on the size of the support of the distribution. The algorithm we analyze is implicit in previous work; our main contribution is to analyze the algorithm by showing (via linear programming duality and connections to complex analysis) that a certain matrix associated with the problem has a robust local inverse even though its condition number is exponentially small. A corollary of our result is the first polynomial time algorithm for learning DNFs in the restriction access model of Dvir et al.
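    The sampling model itself is easy to simulate. The sketch below draws lossy samples from a small hypothetical support and estimates per-coordinate marginals by conditioning on the coordinates that survive erasure; this only illustrates the model, not the paper's recovery algorithm.

```python
import random

def lossy_sample(x, mu):
    """Pass a binary string through the lossy channel: each coordinate is kept
    with probability mu and replaced by '?' otherwise."""
    return ''.join(c if random.random() < mu else '?' for c in x)

# Toy illustration of the sampling model (not the recovery algorithm):
# estimate per-coordinate marginals of a hidden distribution from lossy samples.
hidden = ['1010', '1100', '1111']        # hypothetical support, sampled uniformly
mu, n_samples = 0.3, 20000
samples = [lossy_sample(random.choice(hidden), mu) for _ in range(n_samples)]

for i in range(4):
    seen = [s[i] for s in samples if s[i] != '?']
    # Erasures are independent of the value, so conditioning on surviving
    # coordinates gives an unbiased estimate of the marginal.
    est = sum(c == '1' for c in seen) / len(seen)
    print(f"coordinate {i}: P[x_i = 1] is roughly {est:.2f}")
```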

    Ramanujan Graphs in Polynomial Time

    The recent work by Marcus, Spielman and Srivastava proves the existence of bipartite Ramanujan (multi)graphs of all degrees and all sizes. However, that paper did not provide a polynomial time algorithm to actually compute such graphs. Here, we provide a polynomial time algorithm to compute certain expected characteristic polynomials related to this construction. This leads to a deterministic polynomial time algorithm to compute bipartite Ramanujan (multi)graphs of all degrees and all sizes.
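    As a small companion, here is a numerical check of the bipartite Ramanujan condition (all non-trivial adjacency eigenvalues bounded by $2\sqrt{d-1}$ in absolute value). This is a verifier, not the construction discussed above; the $K_{3,3}$ example and the tolerance are illustrative assumptions.

```python
import numpy as np

def is_bipartite_ramanujan(adj, d, tol=1e-9):
    """Check the bipartite Ramanujan condition numerically: every adjacency
    eigenvalue other than the trivial +/- d lies in [-2*sqrt(d-1), 2*sqrt(d-1)]."""
    eigs = np.linalg.eigvalsh(adj)
    nontrivial = [ev for ev in eigs if abs(abs(ev) - d) > tol]
    return all(abs(ev) <= 2 * np.sqrt(d - 1) + tol for ev in nontrivial)

# Example: the complete bipartite graph K_{3,3} is 3-regular and Ramanujan.
d = 3
adj = np.block([[np.zeros((d, d)), np.ones((d, d))],
                [np.ones((d, d)), np.zeros((d, d))]])
print(is_bipartite_ramanujan(adj, d))   # True: the non-trivial eigenvalues are all 0
```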

    A Polynomial-time Algorithm for Outerplanar Diameter Improvement

    The Outerplanar Diameter Improvement problem asks, given a graph $G$ and an integer $D$, whether it is possible to add edges to $G$ in such a way that the resulting graph is outerplanar and has diameter at most $D$. We provide a dynamic programming algorithm that solves this problem in polynomial time. Outerplanar Diameter Improvement exhibits several structural analogies with the celebrated and challenging Planar Diameter Improvement problem, in which the resulting graph should instead be planar. The complexity status of this latter problem is open.
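    For intuition, a brute-force check for tiny instances is sketched below: it tries edge supersets directly and tests outerplanarity via the standard apex-vertex planarity trick. It runs in exponential time, unlike the paper's polynomial dynamic programming algorithm; the networkx-based helpers and the path-graph example are illustrative assumptions.

```python
from itertools import combinations
import networkx as nx

def outerplanar(G):
    """A graph is outerplanar iff adding a vertex adjacent to all others keeps it planar."""
    H = G.copy()
    apex = "__apex__"
    H.add_edges_from((apex, v) for v in G.nodes)
    return nx.check_planarity(H)[0]

def opdi_brute_force(G, D):
    """Brute force for tiny instances (not the paper's dynamic programming):
    can edges be added so the graph becomes outerplanar with diameter <= D?"""
    non_edges = [e for e in combinations(G.nodes, 2) if not G.has_edge(*e)]
    for r in range(len(non_edges) + 1):
        for extra in combinations(non_edges, r):
            H = G.copy()
            H.add_edges_from(extra)
            if outerplanar(H) and nx.is_connected(H) and nx.diameter(H) <= D:
                return extra
    return None

# Example: a path on 5 vertices has diameter 4; closing it into a cycle gives diameter 2.
print(opdi_brute_force(nx.path_graph(5), 2))
```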

    On Integer Programming, Discrepancy, and Convolution

    Integer programs with a constant number of constraints are solvable in pseudo-polynomial time. We give a new algorithm with a better pseudo-polynomial running time than previous results. Moreover, we establish a strong connection to the (min, +)-convolution problem. (min, +)-convolution has a trivial quadratic time algorithm, and it has been conjectured that this cannot be improved significantly. We show that further improvements to our pseudo-polynomial algorithm for any fixed number of constraints are equivalent to improvements for (min, +)-convolution. This is strong evidence that our algorithm's running time is the best possible. We also present a faster specialized algorithm for testing feasibility of an integer program with few constraints, and for this we also give a tight lower bound, which is based on the SETH.
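    The conjectured-hard primitive is simple to state in code: below is the trivial quadratic-time (min, +)-convolution mentioned above. The example inputs are illustrative.

```python
def min_plus_convolution(a, b):
    """Trivial quadratic-time (min, +)-convolution:
    c[k] = min over i + j = k of a[i] + b[j]."""
    n, m = len(a), len(b)
    c = [float("inf")] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            c[i + j] = min(c[i + j], a[i] + b[j])
    return c

print(min_plus_convolution([0, 2, 5], [0, 1, 4]))  # [0, 1, 3, 6, 9]
```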