Quantum Optimization Problems
Krentel [J. Comput. System Sci., 36, pp. 490--509] presented a framework for
NP optimization problems, which search for an optimal value among
exponentially many outcomes of polynomial-time computations. This paper expands
his framework to quantum optimization problems using polynomial-time quantum
computations and introduces the notion of a ``universal'' quantum optimization
problem, analogous to a classical ``complete'' optimization problem. We exhibit
a canonical quantum optimization problem that is universal for the class of
polynomial-time quantum optimization problems. We show that, in a certain
relativized world, quantum optimization problems cannot in general be
approximated closely by quantum polynomial-time computations. We also study the
complexity of quantum optimization problems in connection with well-known
complexity classes.
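For context, Krentel's classical framework can be sketched as follows (our paraphrase, not the paper's notation); the quantum extension replaces the polynomial-time function below with a polynomial-time quantum computation:

\[
  f \in \mathrm{OptP} \iff f(x) = \operatorname{opt}_{y \in \{0,1\}^{p(|x|)}} g(x, y), \qquad \operatorname{opt} \in \{\max, \min\},
\]

where p is a polynomial and g is computable in deterministic polynomial time.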
Spectral Optimization Problems
In this survey paper we present a class of shape optimization problems where
the cost function involves the solution of a PDE of elliptic type in the
unknown domain. In particular, we consider cost functions which depend on the
spectrum of an elliptic operator and we focus on the existence of an optimal
domain. The known results are presented together with a list of still open
problems. Related fields, such as optimal partition problems, evolution flows,
and Cheeger-type problems, are also considered.
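A model problem of the kind surveyed (a generic example for orientation, not quoted from the paper) is the minimization of a Dirichlet-Laplacian eigenvalue under a volume constraint:

\[
  \min\bigl\{ \lambda_k(\Omega) \;:\; \Omega \subset D,\ |\Omega| \le m \bigr\},
\]

where \lambda_k(\Omega) is the k-th eigenvalue of the Dirichlet Laplacian on \Omega, D is a fixed design region, and m > 0 is a volume bound; the existence question is whether the infimum is attained by an admissible domain.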
Set optimization - a rather short introduction
Recent developments in set optimization are surveyed and extended, including
various set relations, fundamental constructions of a convex analysis for set-
and vector-valued functions, and duality for set optimization problems.
Extensive sections with bibliographical comments summarize the state of the
art. Applications to vector optimization and financial risk measures are
discussed, along with algorithmic approaches to set optimization problems.
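One standard set relation of the kind surveyed (a common example from the set optimization literature, stated here under the assumption of an ordering cone C in a vector space Z, not quoted from the survey) is the ``lower'' set relation

\[
  A \preceq_C B \iff B \subseteq A + C,
\]

and a set optimization problem then asks for minimizers of a set-valued map with respect to such a relation.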
Application of reduced-set Pareto-Lipschitzian optimization to truss optimization
In this paper, a recently proposed global Lipschitz optimization algorithm, Pareto-Lipschitzian Optimization with Reduced-set (PLOR), is further developed, investigated, and applied to truss optimization problems. The partition patterns of the PLOR algorithm are similar to those of DIviding RECTangles (DIRECT), which has been widely applied to real-life problems. Here, however, the set of all Lipschitz constants is reduced to just two: the maximal and the minimal one. In this way the PLOR approach is independent of any user-defined parameters and balances local and global search equally during the optimization process. An expanded list of other well-known DIRECT-type algorithms is used in the experimental comparison on standard test problems and on truss optimization problems. The experiments show that PLOR is very competitive with other DIRECT-type algorithms on the standard test problems and performs well on the real truss optimization problems.
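A schematic sketch of the two-constant idea as we read it from the abstract (the selection rule, trisection details, and all names below are illustrative assumptions, not the authors' code): in the limit of a minimal Lipschitz constant the most promising partition element is the one with the best observed value, while in the limit of a maximal constant it is the largest one.

import numpy as np

def plor_minimize(f, lb, ub, n_iter=50):
    # Hypothetical PLOR-style sketch: DIRECT-like trisection, but each
    # iteration refines only two hyperrectangles -- the one with the best
    # center value (minimal Lipschitz constant, local search) and the
    # widest one (maximal Lipschitz constant, global search).
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    rects = [(lb, ub, f((lb + ub) / 2))]
    for _ in range(n_iter):
        best = min(range(len(rects)), key=lambda i: rects[i][2])
        widest = max(range(len(rects)),
                     key=lambda i: np.linalg.norm(rects[i][1] - rects[i][0]))
        for idx in sorted({best, widest}, reverse=True):
            l, u, _ = rects.pop(idx)
            k = int(np.argmax(u - l))          # trisect along the longest side
            w = (u[k] - l[k]) / 3.0
            for j in range(3):
                nl, nu = l.copy(), u.copy()
                nl[k], nu[k] = l[k] + j * w, l[k] + (j + 1) * w
                rects.append((nl, nu, f((nl + nu) / 2)))
    l, u, fv = min(rects, key=lambda r: r[2])
    return (l + u) / 2, fv

# e.g. plor_minimize(lambda x: float(np.sum((x - 0.7) ** 2)), [0, 0], [1, 1])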
Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
In this paper, we present a new stochastic algorithm, namely the stochastic
block mirror descent (SBMD) method for solving large-scale nonsmooth and
stochastic optimization problems. The basic idea of this algorithm is to
incorporate the block-coordinate decomposition and an incremental block
averaging scheme into the classic (stochastic) mirror-descent method, in order
to significantly reduce the cost per iteration of the latter algorithm. We
establish the rate of convergence of the SBMD method along with its associated
large-deviation results for solving general nonsmooth and stochastic
optimization problems. We also introduce different variants of this method and
establish their rate of convergence for solving strongly convex, smooth, and
composite optimization problems, as well as certain nonconvex optimization
problems. To the best of our knowledge, all these developments related to the
SBMD methods are new in the stochastic optimization literature. Moreover, some
of our results also seem to be new for block coordinate descent methods for
deterministic optimization.
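A minimal sketch of the block-coordinate idea under simplifying assumptions (Euclidean prox function, uniform block sampling, and a plain running average in place of the paper's incremental block averaging scheme; all names below are illustrative):

import numpy as np

def sbmd_sketch(stoch_grad, x0, blocks, steps, seed=0):
    # stoch_grad(x, i) must return an unbiased stochastic gradient of the
    # objective restricted to block i (an index array). Each iteration
    # touches one block only, which is what cuts the per-iteration cost.
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    avg = x.copy()
    for t, gamma in enumerate(steps, start=1):
        i = rng.integers(len(blocks))               # sample a block uniformly
        x[blocks[i]] -= gamma * stoch_grad(x, i)    # gradient step on that block
        avg += (x - avg) / (t + 1)                  # ergodic average as output
    return avg

For nonsmooth objectives, the gradient step would be replaced by a mirror/prox step with a block-wise distance-generating function; the sketch keeps the Euclidean case for brevity.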
Hardness Amplification of Optimization Problems
In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products.
We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π into one large instance of Π such that, given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows:
If there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationship between α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'.
As a consequence of the above theorem, we show hardness amplification for problems in various classes: NP-hard problems such as Max-Clique, Knapsack, and Max-SAT; problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication; and even problems in TFNP such as Factoring and computing a Nash equilibrium.
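To make the ``direct product feasible'' notion concrete, here is a sketch for Max-SAT (our illustration, not the paper's construction): k formulas are aggregated by renaming variables into disjoint ranges, so the combined objective is the sum of the k separable ones, and an optimal assignment to the big formula restricts to optimal assignments of the small ones.

def aggregate(instances):
    # instances: list of (clauses, n_vars), clauses in DIMACS style
    # (a clause is a tuple of nonzero signed variable indices).
    big, offsets, off = [], [], 0
    for clauses, n_vars in instances:
        offsets.append(off)
        big += [tuple(l + off if l > 0 else l - off for l in c) for c in clauses]
        off += n_vars
    return big, offsets

def split_solution(assignment, instances, offsets):
    # Restrict a 0/1 assignment of the big formula to each small instance.
    return [assignment[off:off + n] for off, (_, n) in zip(offsets, instances)]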
Inapproximability of Combinatorial Optimization Problems
We survey results on the hardness of approximating combinatorial optimization
problems.
