Interior Point Decoding for Linear Vector Channels
In this paper, a novel decoding algorithm for low-density parity-check (LDPC)
codes based on convex optimization is presented. The decoding algorithm, called
interior point decoding, is designed for linear vector channels. The linear
vector channels include many practically important channels, such as
intersymbol interference channels and partial response channels. It is shown that
the maximum likelihood decoding (MLD) rule for a linear vector channel can be
relaxed to a convex optimization problem, which is called a relaxed MLD
problem. The proposed decoding algorithm is based on a numerical optimization
technique known as the interior point method with a barrier function.
Approximate variants of the gradient descent and Newton methods are used to
solve the convex optimization problem. Throughout the decoding process, the
search point always lies in the fundamental polytope defined by the
low-density parity-check matrix. Compared with a conventional joint message
passing decoder, the proposed decoding algorithm achieves better BER
performance with lower complexity for partial response channels in many cases.
Comment: 18 pages, 17 figures. Submitted to IEEE Transactions on Information Theory.
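To make the abstract's barrier-function idea concrete, here is a minimal sketch (not the paper's algorithm; the objective gradient grad_f and the constraint system A x <= b standing in for the fundamental polytope are placeholders) of gradient descent on a convex objective plus a logarithmic barrier that keeps the search point strictly inside the polytope:

    import numpy as np

    def barrier_gradient_descent(grad_f, A, b, x0, t=10.0, step=0.01, iters=500):
        """Minimize f(x) + (1/t) * log-barrier over {x : A x <= b}.
        x0 must be strictly feasible (A @ x0 < b)."""
        x = x0.copy()
        for _ in range(iters):
            slack = b - A @ x                          # strictly positive while feasible
            g = grad_f(x) + (A.T @ (1.0 / slack)) / t  # objective + barrier gradient
            x_next = x - step * g
            if np.all(A @ x_next < b):                 # damped step keeps strict feasibility
                x = x_next
            else:
                step *= 0.5
        return x

In the paper's setting, f would be the relaxed MLD objective and the polytope would be the fundamental polytope of the LDPC code; both are left abstract here.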
Average case polyhedral complexity of the maximum stable set problem
We study the minimum number of constraints needed to formulate random
instances of the maximum stable set problem via linear programs (LPs), in two
distinct models. In the uniform model, the constraints of the LP are not
allowed to depend on the input graph, which should be encoded solely in the
objective function. There we prove a lower bound that holds with high
probability for every LP that is exact for a randomly selected set of
instances, with each graph on at most n vertices selected independently. In the
non-uniform model, the constraints of the LP may depend on the input graph, but
we allow weights on the vertices. The input graph is sampled according to the
G(n, p) model. There we obtain upper and lower bounds holding with high
probability for various ranges of p. We obtain a super-polynomial lower bound
across a wide range of densities p. Our upper bound is close to this, as there
is only an essentially quadratic gap in the exponent, which currently also
exists in the worst-case model. Finally, we state a conjecture that would close
this gap in both the average-case and worst-case models.
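For background (our notation; the paper's two models add further requirements), an LP formulation of the weighted maximum stable set problem on a graph G = (V, E) is a polytope Q together with, for each weight vector w, a linear objective c_w such that

    \max_{y \in Q} \langle c_w, y \rangle
      \;=\; \max\Bigl\{ \sum_{v \in S} w_v \;:\; S \subseteq V \text{ is stable in } G \Bigr\},

and its size is the number of inequalities describing Q; the question studied here is how small this size can be when the instances are random.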
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory, and it sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in it keeps growing because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time ("efficient") algorithms, while most of them are NP-hard, i.e. no polynomial-time algorithm for them is known (and none exists unless P = NP). In practice this means that an exact solution cannot be guaranteed within reasonable time, and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find "quickly" (in reasonable run-times), with "high" probability, provably "good" solutions (with low error relative to the true optimum). In the last 20 years, a new class of algorithms, commonly called metaheuristics, has emerged: they combine heuristics in high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two significant forces of intensification and diversification, which largely determine the behavior of a metaheuristic, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods.
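To make the intensification/diversification trade-off concrete, here is a minimal simulated annealing skeleton, one classic metaheuristic (our sketch; random_solution, neighbor and cost are problem-specific placeholders):

    import math, random

    def simulated_annealing(random_solution, neighbor, cost,
                            t0=1.0, cooling=0.995, iters=10000):
        """Generic simulated annealing. A high temperature favors
        diversification (worse moves are often accepted); as the
        temperature cools, the search intensifies around good solutions."""
        x = random_solution()
        best, best_cost = x, cost(x)
        t = t0
        for _ in range(iters):
            y = neighbor(x)
            delta = cost(y) - cost(x)
            # accept improvements always, worse moves with prob exp(-delta/t)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x = y
                if cost(x) < best_cost:
                    best, best_cost = x, cost(x)
            t *= cooling  # cooling schedule shifts exploration toward exploitation
        return best, best_cost

Other metaheuristics (tabu search, genetic algorithms, ant colony optimization) balance the same two forces through different mechanisms.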
Approximation Limits of Linear Programs (Beyond Hierarchies)
We develop a framework for approximation limits of polynomial-size linear
programs from lower bounds on the nonnegative ranks of suitably defined
matrices. This framework yields unconditional impossibility results that are
applicable to any linear program as opposed to only programs generated by
hierarchies. Using our framework, we prove that O(n^{1/2-eps})-approximations
for CLIQUE require linear programs of size 2^{n^\Omega(eps)}. (This lower bound
applies to linear programs using a certain encoding of CLIQUE as a linear
optimization problem.) Moreover, we establish a similar result for
approximations of semidefinite programs by linear programs. Our main ingredient
is a quantitative improvement of Razborov's rectangle corruption lemma for the
high error regime, which gives strong lower bounds on the nonnegative rank of
certain perturbations of the unique disjointness matrix.
Comment: 23 pages, 2 figures.
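As background for the framework (standard definitions, not spelled out in the abstract): the nonnegative rank of a matrix S is the smallest r such that S factors through nonnegative matrices, and by Yannakakis' factorization theorem the smallest LP extended formulation of a polytope P has size equal to the nonnegative rank of its slack matrix S_P:

    \operatorname{rank}_+(S)
      \;=\; \min\bigl\{\, r \;:\; S = TU,\ T \in \mathbb{R}^{m \times r}_{\ge 0},\ U \in \mathbb{R}^{r \times n}_{\ge 0} \,\bigr\},
    \qquad
    \operatorname{xc}(P) \;=\; \operatorname{rank}_+(S_P).

Lower bounds on the nonnegative rank of (perturbed) slack matrices therefore translate directly into lower bounds on LP size.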
A New Multilayered PCP and the Hardness of Hypergraph Vertex Cover
Given a k-uniform hypergraph, the Ek-Vertex-Cover problem is to find the
smallest subset of vertices that intersects every hyperedge. We present a new
multilayered PCP construction that extends the Raz verifier. This enables us to
prove that Ek-Vertex-Cover is NP-hard to approximate within factor (k - 1 - eps)
for any k >= 3 and any eps > 0. The result is essentially tight, as this problem
can easily be approximated within factor k.
Our construction makes use of the biased Long-Code and is analyzed using
combinatorial properties of s-wise t-intersecting families of subsets.
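The factor-k upper bound mentioned above is the classical greedy matching argument; a short sketch (our code, with the hypergraph given as a list of k-element vertex sets):

    def vertex_cover_k_approx(hyperedges):
        """k-approximation for vertex cover in a k-uniform hypergraph:
        greedily pick a maximal set of pairwise disjoint edges and return
        all their vertices. Any cover needs at least one vertex from each
        picked edge, so the output is at most k times the optimum."""
        cover = set()
        for edge in hyperedges:
            if cover.isdisjoint(edge):  # edge still uncovered: take all k vertices
                cover |= set(edge)
        return cover

The hardness result says that, assuming P != NP, no polynomial-time algorithm can do substantially better than this trivial bound.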
Algorithms as Mechanisms: The Price of Anarchy of Relax-and-Round
Many algorithms that are originally designed without explicitly considering
incentive properties are later combined with simple pricing rules and used as
mechanisms. The resulting mechanisms are often natural and simple to
understand. But how good are these algorithms as mechanisms? Truthful reporting
of valuations is typically not a dominant strategy (certainly not with a
pay-your-bid, first-price rule, but it is likely not a good strategy even with
a critical value, or second-price style rule either). Our goal is to show that
a wide class of approximation algorithms yields, in this way, mechanisms with a
low Price of Anarchy.
The seminal result of Lucier and Borodin [SODA 2010] shows that combining a
greedy algorithm that is an alpha-approximation algorithm with a pay-your-bid
payment rule yields a mechanism whose Price of Anarchy is O(alpha). In this
paper we significantly extend the class of algorithms for
which such a result is available by showing that this close connection between
approximation ratio on the one hand and Price of Anarchy on the other also
holds for the design principle of relaxation and rounding provided that the
relaxation is smooth and the rounding is oblivious.
We demonstrate the far-reaching consequences of our result by showing its
implications for sparse packing integer programs, such as multi-unit auctions
and generalized matching, for the maximum traveling salesman problem, for
combinatorial auctions, and for single source unsplittable flow problems. In
all these problems our approach leads to novel simple, near-optimal mechanisms
whose Price of Anarchy either matches or beats the performance guarantees of
known mechanisms.
Comment: Extended abstract appeared in Proc. of the 16th ACM Conference on Economics and Computation (EC'15).
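As a toy illustration of the pay-your-bid construction discussed above (our sketch; approx_alg stands in for any allocation algorithm, e.g. one obtained by relax-and-round, and the single-winner-set setting is a simplification):

    def pay_your_bid_mechanism(bids, approx_alg):
        """Turn an allocation algorithm into a first-price mechanism:
        run the algorithm on the reported bids and charge every winning
        bidder exactly what they bid. Truthful bidding is not a dominant
        strategy here, but smooth relaxations with oblivious roundings
        still yield a low Price of Anarchy."""
        winners = approx_alg(bids)                # e.g. set of bidder ids served
        payments = {i: bids[i] for i in winners}  # pay-your-bid (first-price) rule
        return winners, payments

Usage: pay_your_bid_mechanism({'a': 3, 'b': 5}, lambda bids: {max(bids, key=bids.get)}) allocates to the highest bidder and charges their bid.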