Parametric Polynomial Time Perceptron Rescaling Algorithm
Let us consider a linear feasibility problem with a possibly infinite number of inequality constraints, posed in an on-line setting: an algorithm suggests a candidate solution, and an oracle either confirms its feasibility or outputs a violated constraint vector. This model can be solved by subgradient optimisation algorithms for non-smooth functions, known in the machine learning community as perceptron algorithms, and its solvability depends on the problem dimension and the radius of the constraint set. The classical perceptron algorithm may have exponential complexity in the worst case when the radius is infinitesimal [1]. To overcome this difficulty, the space dilation technique was exploited in the ellipsoid algorithm to make its running time polynomial [3]. A special case of space dilation, the rescaling procedure, is utilised in the perceptron rescaling algorithm [2] with a probabilistic approach to choosing the direction of dilation. A parametric version of the perceptron rescaling algorithm is the focus of this work. It is demonstrated that some fixed parameters of the latter algorithm (the initial estimate of the radius and the relaxation parameter) may be modified and adapted for particular problems. The generalised theoretical framework makes it possible to establish convergence of the algorithm for any chosen set of values of these parameters, and suggests a potential way of decreasing the complexity of the algorithm, which remains the subject of current research.
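As a rough illustration of the oracle model described in this abstract (and not of the rescaling step itself), here is a minimal sketch of the classical perceptron update for a homogeneous linear feasibility system. The `oracle` interface and the toy constraint matrix `A` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def perceptron_feasibility(oracle, dim, max_iters=10_000):
    """Classical perceptron for finding x with a.x > 0 for every constraint
    vector a.  `oracle(x)` returns None if x is feasible, otherwise a violated
    constraint vector a (one with a.x <= 0).  Illustrative only: the number of
    iterations degrades as the feasible cone becomes thin, which is exactly
    what the rescaling procedure is designed to repair."""
    x = np.zeros(dim)
    for _ in range(max_iters):
        a = oracle(x)
        if a is None:
            return x                      # feasible point found
        x = x + a / np.linalg.norm(a)     # subgradient / perceptron step
    return None                           # iteration budget exhausted

# Toy usage: a finite set of constraints a_i . x > 0.
A = np.array([[1.0, 0.2], [0.5, 1.0], [0.9, -0.1]])

def toy_oracle(x):
    for a in A:
        if a @ x <= 0:
            return a
    return None

print(perceptron_feasibility(toy_oracle, dim=2))
```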
A Combinatorial, Strongly Polynomial-Time Algorithm for Minimizing Submodular Functions
This paper presents the first combinatorial polynomial-time algorithm for minimizing submodular set functions, answering an open question posed in 1981 by Grötschel, Lovász, and Schrijver. The algorithm employs a scaling scheme that uses a flow in the complete directed graph on the underlying set, with each arc capacity equal to the scaled parameter. The resulting algorithm runs in time bounded by a polynomial in the size of the underlying set and the length of the largest function value. The paper also presents a strongly polynomial-time version that runs in time bounded by a polynomial in the size of the underlying set, independent of the function values.
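The combinatorial algorithm itself is too involved to reproduce here, but the problem it solves can be made concrete. The sketch below, assuming the usual value-oracle model, shows a standard submodular example (a graph cut function) together with a brute-force minimizer used only as an exponential-time baseline; the paper's point is that this enumeration can be replaced by a (strongly) polynomial-time combinatorial procedure.

```python
from itertools import chain, combinations

def cut_function(adjacency):
    """Value oracle for the cut function of an undirected graph, a standard
    example of a submodular set function on the vertex set."""
    n = len(adjacency)
    def f(S):
        S = set(S)
        return sum(1 for u in S for v in range(n)
                   if v not in S and adjacency[u][v])
    return f

def brute_force_minimizer(f, ground_set):
    """Exponential-time baseline: enumerate all subsets and keep the best.
    (For the unconstrained cut function the minimum is trivially the empty
    set with value 0; this is purely to illustrate the oracle interface.)"""
    subsets = chain.from_iterable(combinations(ground_set, r)
                                  for r in range(len(ground_set) + 1))
    return min(subsets, key=f)

adj = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 1],
       [0, 0, 1, 0]]
f = cut_function(adj)
S = brute_force_minimizer(f, range(4))
print("minimizing set:", S, "with value", f(S))
```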
Plantinga-Vegter algorithm takes average polynomial time
We exhibit a condition-based analysis of the adaptive subdivision algorithm due to Plantinga and Vegter. The first complexity analysis of the PV Algorithm is due to Burr, Gao and Tsigaridas, who proved an exponential worst-case cost bound for degree-d plane curves with maximum coefficient bit-size τ. This exponential bound, it was observed, is in stark contrast with the good performance of the algorithm in practice. More in line with this performance, we show that, with respect to a broad family of measures, the expected time complexity of the PV Algorithm is bounded by a polynomial in d for real plane curves of degree d. We also exhibit a smoothed analysis of the PV Algorithm that yields similar complexity estimates. To obtain these results we combine robust probabilistic techniques coming from geometric functional analysis with condition numbers and the continuous amortization paradigm introduced by Burr, Krahmer and Yap. We hope this will motivate a fruitful exchange of ideas between the different approaches to numerical computation.
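To make the kind of adaptive subdivision analysed here concrete, the following sketch implements only the interval exclusion test on a fixed example curve x^2 + y^2 - 1 = 0. The full Plantinga-Vegter algorithm additionally uses a gradient-based termination test and general interval arithmetic, neither of which is reproduced here.

```python
def f_range(xlo, xhi, ylo, yhi):
    """Crude interval enclosure of f(x, y) = x^2 + y^2 - 1 on a box.
    For this particular f the enclosure is easy to write by hand; a real
    implementation would use interval arithmetic on an arbitrary f."""
    def sq_range(lo, hi):
        cands = [lo * lo, hi * hi]
        return (0.0 if lo <= 0.0 <= hi else min(cands), max(cands))
    xlo2, xhi2 = sq_range(xlo, xhi)
    ylo2, yhi2 = sq_range(ylo, yhi)
    return xlo2 + ylo2 - 1.0, xhi2 + yhi2 - 1.0

def subdivide(box, depth, out):
    """Adaptive subdivision: discard boxes on which the enclosure excludes 0,
    keep boxes at the maximum depth as an approximation of the curve."""
    xlo, xhi, ylo, yhi = box
    lo, hi = f_range(xlo, xhi, ylo, yhi)
    if lo > 0.0 or hi < 0.0:
        return                      # exclusion test: the curve misses this box
    if depth == 0:
        out.append(box)             # small box that may contain the curve
        return
    xm, ym = (xlo + xhi) / 2.0, (ylo + yhi) / 2.0
    for child in [(xlo, xm, ylo, ym), (xm, xhi, ylo, ym),
                  (xlo, xm, ym, yhi), (xm, xhi, ym, yhi)]:
        subdivide(child, depth - 1, out)

boxes = []
subdivide((-2.0, 2.0, -2.0, 2.0), depth=6, out=boxes)
print(len(boxes), "boxes retained around the unit circle")
```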
A Polynomial Time Algorithm for Lossy Population Recovery
We give a polynomial time algorithm for the lossy population recovery problem. In this problem, the goal is to approximately learn an unknown distribution on binary strings of length n from lossy samples: for some parameter μ, each coordinate of the sample is preserved with probability μ and is otherwise replaced by a '?'. The running time and number of samples needed for our algorithm are polynomial in n for each fixed μ > 0. This improves on the algorithm of Wigderson and Yehudayoff, which runs in quasi-polynomial time for any μ > 0, and the polynomial time algorithm of Dvir et al., which was shown to work only for sufficiently large μ by Batman et al. In fact, our algorithm also works in the more general framework of Batman et al., in which there is no a priori bound on the size of the support of the distribution. The algorithm we analyze is implicit in previous work; our main contribution is to analyze the algorithm by showing (via linear programming duality and connections to complex analysis) that a certain matrix associated with the problem has a robust local inverse even though its condition number is exponentially small. A corollary of our result is the first polynomial time algorithm for learning DNFs in the restriction access model of Dvir et al.
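The lossy sampling model in this abstract is easy to simulate. The sketch below assumes, purely for illustration, that the unknown distribution is given as a dictionary of probabilities, and passes one draw through the channel that keeps each coordinate with probability μ and otherwise replaces it by '?'.

```python
import random

def lossy_sample(distribution, mu, rng=random):
    """Draw one string from `distribution` (a dict mapping binary strings to
    probabilities) and pass it through the lossy channel: each coordinate is
    kept with probability mu and otherwise replaced by '?'."""
    strings, weights = zip(*distribution.items())
    x = rng.choices(strings, weights=weights)[0]
    return "".join(c if rng.random() < mu else "?" for c in x)

# Toy unknown distribution on binary strings of length n = 4.
dist = {"0110": 0.5, "1111": 0.3, "0000": 0.2}
samples = [lossy_sample(dist, mu=0.3) for _ in range(5)]
print(samples)
```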
Ramanujan Graphs in Polynomial Time
The recent work by Marcus, Spielman and Srivastava proves the existence of
bipartite Ramanujan (multi)graphs of all degrees and all sizes. However, that
paper did not provide a polynomial time algorithm to actually compute such
graphs. Here, we provide a polynomial time algorithm to compute certain
expected characteristic polynomials related to this construction. This leads to
a deterministic polynomial time algorithm to compute bipartite Ramanujan
(multi)graphs of all degrees and all sizes.
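For reference, the defining spectral property can be verified directly for a given graph. The sketch below is not the paper's construction (which goes through expected characteristic polynomials); it merely checks whether a connected d-regular bipartite (multi)graph is Ramanujan, i.e. whether all of its non-trivial adjacency eigenvalues have absolute value at most 2√(d−1).

```python
import numpy as np

def is_bipartite_ramanujan(adjacency, d, tol=1e-9):
    """Check the Ramanujan property of a connected d-regular bipartite
    (multi)graph given its adjacency matrix: every eigenvalue other than the
    trivial ones +/- d must have absolute value at most 2*sqrt(d-1)."""
    eigs = np.linalg.eigvalsh(np.asarray(adjacency, dtype=float))
    nontrivial = [x for x in eigs if abs(abs(x) - d) > tol]
    return all(abs(x) <= 2.0 * np.sqrt(d - 1) + tol for x in nontrivial)

# Toy usage: the complete bipartite graph K_{3,3} is 3-regular and Ramanujan
# (its non-trivial eigenvalues are all 0).
K33 = np.block([[np.zeros((3, 3)), np.ones((3, 3))],
                [np.ones((3, 3)), np.zeros((3, 3))]])
print(is_bipartite_ramanujan(K33, d=3))
```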
A Polynomial-time Algorithm for Outerplanar Diameter Improvement
The Outerplanar Diameter Improvement problem asks, given a graph G and an integer D, whether it is possible to add edges to G in such a way that the resulting graph is outerplanar and has diameter at most D. We provide a dynamic programming algorithm that solves this problem in polynomial time. Outerplanar Diameter Improvement demonstrates several structural analogies with the celebrated and challenging Planar Diameter Improvement problem, where the resulting graph should, instead, be planar. The complexity status of this latter problem is open.
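The quantity being improved is the graph diameter, which is easy to compute. The sketch below does only that, by repeated BFS, on a small example where a single added edge keeps the graph outerplanar and cuts the diameter from 4 to 2; testing outerplanarity and choosing which edges to add are the hard part handled by the paper's dynamic programming and are not attempted here.

```python
from collections import deque

def diameter(adj):
    """Diameter of a connected undirected graph given as an adjacency list
    (dict: vertex -> list of neighbours), via BFS from every vertex."""
    def ecc(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())
    return max(ecc(v) for v in adj)

# A path on 5 vertices has diameter 4; adding the edge (0, 4) turns it into
# a cycle (still outerplanar) and drops the diameter to 2.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
cycle = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [0, 3]}
print(diameter(path), diameter(cycle))
```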
On Integer Programming, Discrepancy, and Convolution
Integer programs with a constant number of constraints are solvable in pseudo-polynomial time. We give a new algorithm with a better pseudo-polynomial running time than previous results. Moreover, we establish a strong connection to the problem of (min, +)-convolution: (min, +)-convolution has a trivial quadratic time algorithm, and it has been conjectured that this cannot be improved significantly. We show that further improvements to our pseudo-polynomial algorithm for any fixed number of constraints are equivalent to improvements for (min, +)-convolution. This is strong evidence that our algorithm's running time is the best possible. We also present a faster specialized algorithm for testing feasibility of an integer program with few constraints, and for this we also give a tight lower bound, which is based on the SETH.
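The "trivial quadratic time algorithm" for (min, +)-convolution mentioned in this abstract is simply c[k] = min over i + j = k of a[i] + b[j]; a minimal sketch:

```python
def min_plus_convolution(a, b):
    """Trivial O(n*m) (min, +)-convolution: c[k] = min_{i+j=k} (a[i] + b[j]).
    The conjecture referenced in the abstract is that this quadratic bound
    cannot be beaten by a polynomial factor."""
    n, m = len(a), len(b)
    INF = float("inf")
    c = [INF] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            c[i + j] = min(c[i + j], a[i] + b[j])
    return c

print(min_plus_convolution([0, 2, 5], [0, 1, 4]))   # -> [0, 1, 3, 6, 9]
```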
