Complexity of Discrete Energy Minimization Problems
Discrete energy minimization is widely used in computer vision and machine
learning for problems such as MAP inference in graphical models. The problem,
in general, is notoriously intractable, and finding the globally optimal solution
is known to be NP-hard. However, is it possible to approximate this problem
with a reasonable ratio bound on the solution quality in polynomial time? We
show in this paper that the answer is no. Specifically, we show that general
energy minimization, even in the 2-label pairwise case, and planar energy
minimization with three or more labels are exp-APX-complete. This finding rules
out the existence of any approximation algorithm with a sub-exponential
approximation ratio in the input size for these two problems, including
constant factor approximations. Moreover, we collect and review the
computational complexity of several subclass problems and arrange them on a
complexity scale consisting of three major complexity classes -- PO, APX, and
exp-APX, corresponding to problems that are solvable, approximable, and
inapproximable in polynomial time. Problems in the first two complexity classes
can serve as alternative tractable formulations to the inapproximable ones.
This paper can help vision researchers select an appropriate model for an
application or guide them in designing new algorithms. Comment: accepted to ECCV'16.
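To make the object of study concrete, here is a minimal Python sketch (illustrative only, not from the paper) of the 2-label pairwise energy minimization problem the hardness result concerns; the chain graph and potentials are toy choices, and the brute-force search is exponential, which is exactly the regime the exp-APX-completeness result says cannot be escaped by approximation in general:

    import itertools

    # Toy 2-label pairwise model:
    # E(x) = sum_i unary_i(x_i) + sum_{(i,j)} pairwise_ij(x_i, x_j)
    n = 4                               # variables, each with label in {0, 1}
    edges = [(0, 1), (1, 2), (2, 3)]    # a small chain graph
    unary = [[0.0, 1.0], [0.5, 0.2], [1.0, 0.0], [0.3, 0.7]]  # unary[i][label]
    pairwise = {e: [[0.0, 1.0], [1.0, 0.0]] for e in edges}   # disagreement cost

    def energy(x):
        u = sum(unary[i][x[i]] for i in range(n))
        p = sum(pairwise[i, j][x[i]][x[j]] for (i, j) in edges)
        return u + p

    # Exhaustive MAP search over all 2^n labelings.
    best = min(itertools.product([0, 1], repeat=n), key=energy)
    print(best, energy(best))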
Moment-Matching Polynomials
We give a new framework for proving the existence of low-degree, polynomial
approximators for Boolean functions with respect to broad classes of
non-product distributions. Our proofs use techniques related to the classical
moment problem and deviate significantly from known Fourier-based methods,
which require the underlying distribution to have some product structure.
Our main application is the first polynomial-time algorithm for agnostically
learning any function of a constant number of halfspaces with respect to any
log-concave distribution (for any constant accuracy parameter). This result was
not known even for the case of learning the intersection of two halfspaces
without noise. Additionally, we show that in the "smoothed-analysis" setting,
the above results hold with respect to distributions that have sub-exponential
tails, a property satisfied by many natural and well-studied distributions in
machine learning.
Given that our algorithms can be implemented using Support Vector Machines
(SVMs) with a polynomial kernel, these results give a rigorous theoretical
explanation for why many kernel methods work so well in practice.
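As a hedged illustration of that closing remark (not the paper's construction), the sketch below fits an SVM with a polynomial kernel to labels produced by an intersection of two halfspaces under a Gaussian, hence log-concave, distribution; the degree, sample sizes, and other settings are arbitrary stand-ins rather than the paper's parameter choices:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 5))   # Gaussian data: log-concave

    # Target concept: intersection of two random halfspaces.
    w1, w2 = rng.standard_normal(5), rng.standard_normal(5)
    y = ((X @ w1 > 0) & (X @ w2 > 0)).astype(int)

    # "SVM with a polynomial kernel": degree 4 is an illustrative choice.
    clf = SVC(kernel="poly", degree=4, coef0=1.0)
    clf.fit(X[:1500], y[:1500])
    print("held-out accuracy:", clf.score(X[1500:], y[1500:]))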
Approximating Hereditary Discrepancy via Small Width Ellipsoids
The Discrepancy of a hypergraph is the minimum attainable value, over
two-colorings of its vertices, of the maximum absolute imbalance of any
hyperedge. The Hereditary Discrepancy of a hypergraph, defined as the maximum
discrepancy of a restriction of the hypergraph to a subset of its vertices, is
a measure of its complexity. Lovasz, Spencer and Vesztergombi (1986) related
the natural extension of this quantity to matrices to rounding algorithms for
linear programs, and gave a determinant based lower bound on the hereditary
discrepancy. Matousek (2011) showed that this bound is tight up to a
polylogarithmic factor, leaving open the question of actually computing this
bound. Recent work by Nikolov, Talwar and Zhang (2013) showed a polynomial-time
O(log^3 n)-approximation to hereditary discrepancy, as a by-product of their
work in differential privacy. In this paper, we give a direct, simple
O(log^{3/2} n)-approximation algorithm for this problem. We show that, up to
this approximation factor, the hereditary discrepancy of a matrix A is
characterized by the optimal value of a simple geometric convex program that
seeks to minimize the largest ℓ∞ norm of any point in an ellipsoid containing
the columns of A. This characterization promises to be a useful tool in
discrepancy theory.
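To pin down the two definitions, here is a brute-force Python sketch (exponential, toy instances only; the paper's actual algorithm solves a convex program and is not reproduced here):

    import itertools

    V = range(4)
    hyperedges = [{0, 1, 2}, {1, 2, 3}, {0, 3}]

    def disc(vertices, edges):
        """Min over +/-1 colorings of the max absolute edge imbalance."""
        vs = list(vertices)
        best = float("inf")
        for signs in itertools.product([-1, 1], repeat=len(vs)):
            chi = dict(zip(vs, signs))
            worst = max(abs(sum(chi[v] for v in e)) for e in edges)
            best = min(best, worst)
        return best

    def herdisc(vertices, edges):
        """Max discrepancy over all restrictions to vertex subsets."""
        vs = list(vertices)
        return max(disc(sub, [e & set(sub) for e in edges])
                   for r in range(len(vs) + 1)
                   for sub in itertools.combinations(vs, r))

    print(disc(V, hyperedges), herdisc(V, hyperedges))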
The complexity of approximating conservative counting CSPs
We study the complexity of approximately solving the weighted counting
constraint satisfaction problem #CSP(F). In the conservative case, where F
contains all unary functions, there is a classification known for the case in
which the domain of functions in F is Boolean. In this paper, we give a
classification for the more general problem where functions in F have an
arbitrary finite domain. We define the notions of weak log-modularity and weak
log-supermodularity. We show that if F is weakly log-modular, then #CSP(F) is in
FP. Otherwise, it is at least as difficult to approximate as #BIS, the problem
of counting independent sets in bipartite graphs. #BIS is complete with respect
to approximation-preserving reductions for a logically-defined complexity class
#RHΠ1, and is believed to be intractable. We further sub-divide the #BIS-hard
case. If F is weakly log-supermodular, then we show that #CSP(F) is as easy as
a (Boolean) log-supermodular weighted #CSP. Otherwise, we show that it is
NP-hard to approximate. Finally, we give a full trichotomy for the arity-2
case, where #CSP(F) is in FP, or is #BIS-equivalent, or is equivalent in
difficulty to #SAT, the problem of approximately counting the satisfying
assignments of a Boolean formula in conjunctive normal form. We also discuss
the algorithmic aspects of our classification. Comment: minor revision.
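For concreteness, a brute-force sketch of the #BIS benchmark referenced above, counting the independent sets of a toy bipartite graph (exponential time; the point of #BIS-hardness is precisely that no efficient approximate counter is believed to exist):

    import itertools

    left, right = [0, 1], [2, 3]
    edges = {(0, 2), (0, 3), (1, 3)}   # all edges cross the bipartition

    nodes = left + right
    count = 0
    for r in range(len(nodes) + 1):
        for subset in itertools.combinations(nodes, r):
            s = set(subset)
            if all(not (u in s and v in s) for (u, v) in edges):
                count += 1            # no edge inside: independent set
    print(count)                      # includes the empty set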
Algorithms for Approximate Minimization of the Difference Between Submodular Functions, with Applications
We extend the work of Narasimhan and Bilmes [30] for minimizing set functions
representable as a difference between submodular functions. Similar to [30],
our new algorithms are guaranteed to monotonically reduce the objective
function at every step. We empirically and theoretically show that the
per-iteration cost of our algorithms is much lower than that of [30], and our algorithms
can be used to efficiently minimize a difference between submodular functions
under various combinatorial constraints, a problem not previously addressed. We
provide computational bounds and a hardness result on the multiplicative
inapproximability of minimizing the difference between submodular functions. We
show, however, that it is possible to give worst-case additive bounds by
providing a polynomial time computable lower-bound on the minima. Finally we
show how a number of machine learning problems can be modeled as minimizing the
difference between submodular functions. We experimentally show the validity of
our algorithms by testing them on the problem of feature selection with
submodular cost features. Comment: 17 pages, 8 figures. A shorter version of this appeared in Proc.
Uncertainty in Artificial Intelligence (UAI), Catalina Islands, 2012.
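A hedged sketch of the descent structure the abstract describes, using a SubSup-style step (replace the subtracted submodular g by a modular lower bound that is tight at the current set, then minimize the surrogate); the toy functions and the brute-force inner minimization are illustrative only, and convergence is to a local, not necessarily global, minimum:

    import itertools

    V = [0, 1, 2, 3]

    def f(S):   # submodular: coverage of parities plus a modular term
        return 2.0 * len({x % 2 for x in S}) + 0.5 * len(S)

    def g(S):   # submodular: concave function of cardinality
        return 3.0 * len(S) ** 0.5

    def modular_lower_bound(g, X):
        """Permutation subgradient of g, tight at X (elements of X first)."""
        order = list(X) + [v for v in V if v not in X]
        h, chain, prev = {}, [], g([])
        for v in order:
            chain = chain + [v]
            h[v] = g(chain) - prev
            prev = g(chain)
        return h

    X = set(V)
    while True:
        h = modular_lower_bound(g, X)
        # The surrogate f(S) - h(S) upper-bounds f(S) - g(S) and matches it
        # at X, so any improving S strictly decreases the true objective.
        best = min((set(S) for r in range(len(V) + 1)
                    for S in itertools.combinations(V, r)),
                   key=lambda S: f(S) - sum(h[v] for v in S))
        if f(best) - g(best) >= f(X) - g(X) - 1e-12:
            break
        X = best
    print(sorted(X), f(X) - g(X))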
Smoothed Complexity Theory
Smoothed analysis is a new way of analyzing algorithms introduced by Spielman
and Teng (J. ACM, 2004). Classical methods like worst-case or average-case
analysis have accompanying complexity classes, like P and AvgP, respectively.
While worst-case and average-case analyses give us a means to talk about the
running time of a particular algorithm, complexity classes allow us to talk
about the inherent difficulty of problems.
Smoothed analysis is a hybrid of worst-case and average-case analysis and
compensates for some of their drawbacks. Despite its success in the analysis of
single algorithms and problems, there is no embedding of smoothed analysis into
computational complexity theory, which is necessary to classify problems
according to their intrinsic difficulty.
We propose a framework for smoothed complexity theory, define the relevant
classes, and prove some first hardness results (of bounded halting and tiling)
and tractability results (binary optimization problems, graph coloring,
satisfiability). Furthermore, we discuss extensions and shortcomings of our
model and relate it to semi-random models. Comment: to be presented at MFCS 2012.
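As a hedged pointer to the kind of measure such a framework rests on (a Spielman-Teng-style definition; the paper's actual class definitions are more refined), smoothed complexity is a max-of-expectations hybrid of the two classical measures:

    % Sketch under the stated assumption; pert_phi(x) denotes a random
    % phi-perturbation of the adversarial instance x.
    \[
      T_A^{\mathrm{smooth}}(n, \phi)
        \;=\; \max_{x \,:\, |x| \le n}\;
              \mathbb{E}_{y \sim \mathrm{pert}_{\phi}(x)}\bigl[T_A(y)\bigr]
    \]

As the perturbation magnitude vanishes, the expectation degenerates to worst-case analysis; a fully random instance recovers average-case analysis.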