The Complexity of Optimizing over a Simplex, Hypercube or Sphere: A Short Survey
We consider the computational complexity of optimizing various classes of continuous functions over a simplex, hypercube or sphere. These relatively simple optimization problems have many applications. We review known approximation results as well as negative (inapproximability) results from the recent literature.

Keywords: computational complexity; global optimization; linear and semidefinite programming; approximation algorithms
The World of Combinatorial Fuzzy Problems and the Efficiency of Fuzzy Approximation Algorithms
We re-examine a practical aspect of combinatorial fuzzy problems of various
types, including search, counting, optimization, and decision problems. We are
focused only on those fuzzy problems that take series of fuzzy input objects
and produce fuzzy values. To solve such problems efficiently, we design fast
fuzzy algorithms, which are modeled by polynomial-time deterministic fuzzy
Turing machines equipped with read-only auxiliary tapes and write-only output
tapes and also modeled by polynomial-size fuzzy circuits composed of fuzzy
gates. We also introduce fuzzy proof verification systems to model the
fuzzification of nondeterminism. Those models help us identify four complexity
classes: Fuzzy-FPA of fuzzy functions, Fuzzy-PA and Fuzzy-NPA of fuzzy decision
problems, and Fuzzy-NPAO of fuzzy optimization problems. Based on a relative
approximation scheme targeting fuzzy membership degree, we formulate two
notions of "reducibility" in order to compare the computational complexity of
two fuzzy problems. These reducibility notions make it possible to locate the
most difficult fuzzy problems in Fuzzy-NPA and in Fuzzy-NPAO.

Comment: A4, 10pt, 10 pages. This extended abstract already appeared in the
Proceedings of the Joint 7th International Conference on Soft Computing and
Intelligent Systems (SCIS 2014) and 15th International Symposium on Advanced
Intelligent Systems (ISIS 2014), December 3-6, 2014, Institute of Electrical
and Electronics Engineers (IEEE), pp. 29-35, 2014.
Rounding Methods for Discrete Linear Classification (Extended Version)
Learning discrete linear classifiers is known to be a difficult challenge. In this paper, this learning task is cast as a combinatorial optimization problem: given a training sample formed by positive and negative feature vectors in the Euclidean space, the goal is to find a discrete linear function that minimizes the cumulative hinge loss of the sample. Since this problem is NP-hard, we examine two simple rounding algorithms that discretize the fractional solution of the problem. Generalization bounds are derived for several classes of binary-weighted linear functions, by analyzing the Rademacher complexity of these classes and by establishing approximation bounds for our rounding algorithms. Our methods are evaluated on both synthetic and real-world data.
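The rounding idea in this abstract can be illustrated with a minimal sketch: solve (or assume) a fractional relaxation, then round each weight to {-1, +1}. The randomized rounding rule below (round w_i up with probability (1 + w_i)/2, so the rounded weight is unbiased) is a standard technique, not necessarily the paper's exact algorithm; all names and the stand-in fractional solution are our own.

```python
import numpy as np

def hinge_loss(w, X, y):
    """Cumulative hinge loss of linear classifier w on sample (X, y)."""
    margins = y * (X @ w)
    return np.maximum(0.0, 1.0 - margins).sum()

def randomized_round(w_frac, rng):
    """Round fractional weights in [-1, 1] to {-1, +1}.

    w_i is rounded to +1 with probability (1 + w_i) / 2, so that
    E[round(w_i)] = w_i (an unbiased randomized rounding)."""
    p = (1.0 + np.clip(w_frac, -1.0, 1.0)) / 2.0
    return np.where(rng.random(w_frac.shape) < p, 1.0, -1.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
w_star = np.sign(rng.normal(size=5))        # hidden binary concept
y = np.sign(X @ w_star)
w_frac = 0.8 * w_star                       # stand-in for a relaxed solution
w_bin = randomized_round(w_frac, rng)       # discrete classifier in {-1, +1}^d
```

A deterministic alternative is simply `np.sign(w_frac)`; the randomized rule is what approximation bounds of this flavor are usually proved for.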
Complexity of Discrete Energy Minimization Problems
Discrete energy minimization is widely-used in computer vision and machine
learning for problems such as MAP inference in graphical models. The problem,
in general, is notoriously intractable, and finding the global optimal solution
is known to be NP-hard. However, is it possible to approximate this problem
with a reasonable ratio bound on the solution quality in polynomial time? We
show in this paper that the answer is no. Specifically, we show that general
energy minimization, even in the 2-label pairwise case, and planar energy
minimization with three or more labels are exp-APX-complete. This finding rules
out the existence of any approximation algorithm with a sub-exponential
approximation ratio in the input size for these two problems, including
constant factor approximations. Moreover, we collect and review the
computational complexity of several subclass problems and arrange them on a
complexity scale consisting of three major complexity classes -- PO, APX, and
exp-APX, corresponding to problems that are solvable, approximable, and
inapproximable in polynomial time. Problems in the first two complexity classes
can serve as alternative tractable formulations to the inapproximable ones.
This paper can help vision researchers select an appropriate model for an
application or guide them in designing new algorithms.

Comment: ECCV'16 accepted.
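The object being minimized here can be made concrete with a small sketch: a pairwise energy (unary terms per node plus smoothness terms per edge), minimized by brute-force enumeration on a tiny chain. All numbers and names are illustrative; exhaustive search is exponential in the number of nodes, which is exactly why the hardness results above matter for realistic problem sizes.

```python
import itertools

def energy(labels, unary, pairwise, edges):
    """E(x) = sum_i unary[i][x_i] + sum_{(i,j) in edges} pairwise[x_i][x_j]."""
    e = sum(unary[i][labels[i]] for i in range(len(labels)))
    e += sum(pairwise[labels[i]][labels[j]] for (i, j) in edges)
    return e

# Toy 2-label chain of 3 nodes (Potts-style smoothness: cost 0.6 if labels differ).
unary = [[0.0, 1.0], [0.5, 0.2], [1.0, 0.0]]
pairwise = [[0.0, 0.6], [0.6, 0.0]]
edges = [(0, 1), (1, 2)]

# Brute force over all 2^3 labelings: feasible here, intractable in general.
best = min(itertools.product([0, 1], repeat=3),
           key=lambda x: energy(x, unary, pairwise, edges))
```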
Inapproximability of Combinatorial Optimization Problems
We survey results on the hardness of approximating combinatorial optimization problems.
Multi-view Metric Learning in Vector-valued Kernel Spaces
We consider the problem of metric learning for multi-view data and present a
novel method for learning within-view as well as between-view metrics in
vector-valued kernel spaces, as a way to capture multi-modal structure of the
data. We formulate two convex optimization problems to jointly learn the metric
and the classifier or regressor in kernel feature spaces. An iterative
three-step multi-view metric learning algorithm is derived from the
optimization problems. In order to scale the computation to large training
sets, a block-wise Nystr{\"o}m approximation of the multi-view kernel matrix is
introduced. We justify our approach theoretically and experimentally, and show
its performance on real-world datasets against relevant state-of-the-art
methods.
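The Nyström approximation mentioned in this abstract can be sketched in its basic single-view form (the paper applies it block-wise to a multi-view kernel matrix; the version below, with our own helper names, only shows the core K ≈ C W⁺ Cᵀ construction from a random landmark subset).

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def nystrom(X, m, rng):
    """Rank-m Nystrom approximation of K(X, X) from m random landmarks."""
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx])            # n x m cross-kernel block
    W = C[idx, :]                        # m x m landmark-landmark block
    return C @ np.linalg.pinv(W) @ C.T   # K approx C W^+ C^T

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
K = rbf_kernel(X, X)
K_approx = nystrom(X, 15, rng)
rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
```

The payoff is that only the n × m block C and the m × m block W are ever formed, so downstream solvers can avoid materializing the full n × n kernel matrix.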