1,832 research outputs found
Derandomized Construction of Combinatorial Batch Codes
Combinatorial Batch Codes (CBCs), a replication-based variant of the Batch Codes introduced by Ishai et al. in STOC 2004, abstract the following data distribution problem: n data items are to be replicated among m servers in such a way that any k of the data items can be retrieved by reading at most one item from each server, with the total amount of storage over the m servers restricted to N. Given parameters m, c, and k, where c and k are constants, one of the challenging problems is to construct c-uniform CBCs (CBCs in which each data item is replicated among exactly c servers) that maximize the value of n. In this work, we present an explicit construction of c-uniform CBCs supporting a large number of data items. The construction has the property that the servers are almost regular, i.e., the number of data items stored in each server lies in a narrow range around the average load. The construction is obtained through a sharper analysis and a derandomization of the randomized construction presented by Ishai et al. The analysis reveals the almost-regularity of the servers, an aspect that so far has not been addressed in the literature. The derandomization yields explicit constructions for a wide range of parameter values for which no other explicit construction with comparable parameters is known. Finally, we discuss the possibility of a parallel derandomization of the construction.
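As a toy illustration of the randomized baseline that derandomizations of this kind start from, the sketch below places each of n items on c servers chosen uniformly at random, checks k-retrievability by matching requested items to distinct servers, and reports how balanced the server loads are. The parameter values and the brute-force check are illustrative assumptions, not the paper's construction.

```python
import random
from itertools import combinations

def random_uniform_cbc(n, m, c, seed=0):
    """Place each of n data items on c servers chosen uniformly at random
    (the randomized construction that results of this kind derandomize)."""
    rng = random.Random(seed)
    return [set(rng.sample(range(m), c)) for _ in range(n)]

def retrievable(placement, request):
    """Can the requested items be read with at most one item per server?
    Equivalent to matching each requested item to a distinct server."""
    match = {}                      # server -> item currently assigned to it
    def augment(item, seen):
        for s in placement[item]:
            if s in seen:
                continue
            seen.add(s)
            if s not in match or augment(match[s], seen):
                match[s] = item
                return True
        return False
    return all(augment(i, set()) for i in request)

def server_loads(placement, m):
    loads = [0] * m
    for servers in placement:
        for s in servers:
            loads[s] += 1
    return loads

# toy usage with small, hypothetical parameters
n, m, c, k = 24, 12, 3, 4
placement = random_uniform_cbc(n, m, c)
ok = all(retrievable(placement, req) for req in combinations(range(n), k))  # brute force
loads = server_loads(placement, m)
print(ok, min(loads), max(loads), c * n / m)  # near-regular loads cluster around c*n/m
```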
Deterministic parallel algorithms for bilinear objective functions
Many randomized algorithms can be derandomized efficiently using either the
method of conditional expectations or probability spaces with low independence.
A series of papers, beginning with work by Luby (1988), showed that in many
cases these techniques can be combined to give deterministic parallel (NC)
algorithms for a variety of combinatorial optimization problems, with low time-
and processor-complexity.
We extend and generalize a technique of Luby for efficiently handling
bilinear objective functions. One noteworthy application is an NC algorithm for
maximal independent set. On a graph with m edges and n vertices, the algorithm's time and processor complexity nearly match those of the best randomized parallel algorithms. Other applications include
reduced processor counts for algorithms of Berger (1997) for maximum acyclic
subgraph and Gale-Berlekamp switching games.
This bilinear factorization also gives better algorithms for problems
involving discrepancy. An important application of this is to automata-fooling
probability spaces, which are the basis of a notable derandomization technique
of Sivakumar (2002). Our method leads to a large reduction in processor complexity for a number of derandomization algorithms based on automata-fooling, including set discrepancy and the Johnson-Lindenstrauss Lemma.
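As background for the kind of derandomization being parallelized here, the sketch below applies the sequential method of conditional expectations to set discrepancy: signs are fixed one element at a time so that a pessimistic estimator (the expected sum of cosh of the signed set sums, over the still-random elements) never increases. This is a minimal sequential sketch of the classical technique, not the paper's NC algorithm, and the potential and parameter choices are illustrative assumptions.

```python
import math

def conditional_expectation_signs(sets, n):
    """Fix +/-1 signs for elements 0..n-1 one at a time, greedily minimizing
    the exact conditional expectation of sum_j cosh(lam * S_j), where S_j is
    the signed sum over set j and unfixed elements are uniform +/-1."""
    lam = math.sqrt(2.0 * math.log(2.0 * max(len(sets), 2)) / max(n, 1))  # illustrative
    signs = [0] * n
    partial = [0.0] * len(sets)            # signed sum of already-fixed members
    unfixed = [len(s) for s in sets]       # number of members not yet assigned
    for i in range(n):
        best_sign, best_val = +1, None
        for s in (+1, -1):
            val = 0.0
            for j, members in enumerate(sets):
                hit = 1 if i in members else 0
                # each remaining unfixed member of set j contributes a factor cosh(lam)
                tail = math.cosh(lam) ** (unfixed[j] - hit)
                val += math.cosh(lam * (partial[j] + s * hit)) * tail
            if best_val is None or val < best_val:
                best_sign, best_val = s, val
        signs[i] = best_sign
        for j, members in enumerate(sets):
            if i in members:
                partial[j] += best_sign
                unfixed[j] -= 1
    return signs

# toy usage: a few hypothetical sets over 8 elements
sets = [{0, 1, 2, 3}, {2, 3, 4, 5}, {1, 3, 5, 7}, {0, 4, 6, 7}]
signs = conditional_expectation_signs(sets, 8)
print(signs, max(abs(sum(signs[i] for i in s)) for s in sets))
```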
Quantified Derandomization of Linear Threshold Circuits
One of the prominent current challenges in complexity theory is the attempt to prove lower bounds for TC^0, the class of constant-depth, polynomial-size circuits with majority gates. Relying on the results of Williams (2013), an appealing approach to prove such lower bounds is to construct a non-trivial derandomization algorithm for TC^0. In this work we take a first step towards the latter goal, by proving the first positive results regarding the derandomization of TC^0 circuits of depth d > 2.
Our first main result is a quantified derandomization algorithm for TC^0 circuits with a super-linear number of wires. Specifically, we construct an algorithm that gets as input a TC^0 circuit C over n input bits with depth d and a slightly super-linear number of wires, runs in almost-polynomial time, and distinguishes between the case that C rejects at most a prescribed (small) number of inputs and the case that C accepts at most that many inputs. In fact, our algorithm works even when the circuit C is a linear threshold circuit, rather than just a TC^0 circuit (i.e., is a circuit with linear threshold gates, which are stronger than majority gates).
Our second main result is that even a modest improvement of our quantified derandomization algorithm would yield a non-trivial algorithm for standard derandomization of all of TC^0, and would consequently imply that NEXP is not contained in TC^0. Specifically, if there exists a quantified derandomization algorithm that gets as input a TC^0 circuit with depth d and a somewhat larger (but still super-linear) number of wires than our algorithm handles, runs within a correspondingly relaxed time bound, and distinguishes between the case that the circuit rejects at most a prescribed number of inputs and the case that it accepts at most that many inputs, then there exists a non-trivial algorithm for standard derandomization of TC^0.
Comment: Changes in this revision: an additional result (a PRG for quantified derandomization of depth-2 LTF circuits); a rewrite of some of the exposition; minor corrections.
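For readers unfamiliar with the notion, the display below states the generic promise problem behind quantified derandomization, with an unspecified bound B(n) on the number of exceptional inputs (the paper's concrete bounds are not reproduced above); standard derandomization corresponds to B(n) = 2^n/3.

```latex
% Quantified derandomization with an exceptional-input bound B(n):
% the input circuit C is promised to be extreme on one side, and the task is
% to decide deterministically which side.  Standard derandomization is the
% special case B(n) = 2^n / 3.
\[
  \text{Promise: } \min\!\bigl(|C^{-1}(0)|,\ |C^{-1}(1)|\bigr) \le B(n),
  \qquad
  \text{Task: decide whether } |C^{-1}(0)| \le B(n) \ \text{ or } \ |C^{-1}(1)| \le B(n).
\]
```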
Space-Bounded Kolmogorov Extractors
An extractor is a function that receives some randomness and either "improves" it or produces "new" randomness. There are statistical and algorithmic specifications of this notion. We study an algorithmic one, called Kolmogorov extractors, and adapt it to a resource-bounded version of Kolmogorov complexity. Following Zimand, we prove the existence of such objects with certain parameters. The technique used is "naive" derandomization: we replace the random constructions employed by Zimand with pseudo-random ones obtained from the Nisan-Wigderson generator.
Comment: 12 pages, accepted to CSR201
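For context, the display below recalls the standard statistical notion of an extractor that the Kolmogorov-complexity variant mirrors; the parameters (k, epsilon) are the usual generic ones, not those of this paper.

```latex
% The standard statistical notion: a (k, eps)-extractor is a function
% Ext : {0,1}^n x {0,1}^d -> {0,1}^m such that for every random source X
% over {0,1}^n with min-entropy at least k, the output is close to uniform.
\[
  H_\infty(X) \ge k
  \;\Longrightarrow\;
  \Delta\bigl(\mathrm{Ext}(X, U_d),\, U_m\bigr) \le \varepsilon,
\]
% where U_d, U_m denote uniform distributions and \Delta is statistical distance.
```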
An Atypical Survey of Typical-Case Heuristic Algorithms
Heuristic approaches often do so well that they seem to pretty much always
give the right answer. How close can heuristic algorithms get to always giving
the right answer, without inducing seismic complexity-theoretic consequences?
This article first discusses how a series of results by Berman, Buhrman,
Hartmanis, Homer, Longpr\'{e}, Ogiwara, Sch\"{o}ning, and Watanabe, from the
early 1970s through the early 1990s, explicitly or implicitly limited how well
heuristic algorithms can do on NP-hard problems. In particular, many desirable
levels of heuristic success cannot be obtained unless severe, highly unlikely
complexity class collapses occur. Second, we survey work initiated by Goldreich
and Wigderson, who showed how under plausible assumptions deterministic
heuristics for randomized computation can achieve a very high frequency of
correctness. Finally, we consider formal ways in which theory can help explain
the effectiveness of heuristics that solve NP-hard problems in practice.
Comment: This article is currently scheduled to appear in the December 2012 issue of SIGACT News.
Peak Power Reduction of OFDM Signals with Sign Adjustment
It has recently been shown that a significant reduction in the peak-to-mean envelope power ratio (PMEPR) can be obtained by altering the sign of each subcarrier in a multicarrier system with n subcarriers. However, finding the best signs not only requires a search over the 2^n possible sign patterns but may also lead to a substantial rate loss for small constellations. In this paper, we first propose a greedy algorithm that chooses the signs based on p-norm minimization and prove that the resulting PMEPR is guaranteed to be less than c log n, where c is a constant independent of n, for any n. This approach has lower complexity in each iteration than the previously proposed derandomization approach while achieving a similar PMEPR reduction. We further improve the performance of the proposed algorithm by enlarging the search space using pruning. Simulation results show that the PMEPR of a multicarrier signal with 128 subcarriers can be reduced to within 1.6 dB of the PMEPR of a single-carrier system.
In the second part of the paper, we address the rate loss by proposing a block coding scheme in which only one sign vector is chosen for K different modulating vectors. The sign vector can be computed using the greedy algorithm in n iterations. We show that this multi-symbol encoding approach reduces the rate loss by a factor of K while achieving a PMEPR of c log(Kn), i.e., only logarithmic growth in K. Simulation results show that the rate loss can be made smaller than 10% at the cost of only a 1 dB increase in the resulting PMEPR for a system with 128 subcarriers.
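As a rough sketch of the greedy idea described above (not the paper's exact algorithm), the snippet below picks a +/-1 sign per subcarrier so as to keep a p-norm of the oversampled time-domain signal small, and then measures the resulting PMEPR; the oversampling factor and the choice p = 8 are illustrative assumptions.

```python
import numpy as np

def greedy_sign_selection(symbols, oversample=4, p=8):
    """Choose a +/-1 sign for each subcarrier greedily, keeping the p-norm of
    the partially built time-domain signal small (sketch of sign adjustment
    via p-norm minimization; parameters here are illustrative)."""
    symbols = np.asarray(symbols, dtype=complex)
    n = len(symbols)
    L = oversample * n                      # oversampled time grid
    t = np.arange(L) / L
    # time-domain waveform contributed by each subcarrier
    carriers = symbols[:, None] * np.exp(2j * np.pi * np.outer(np.arange(n), t))
    signs = np.ones(n)
    partial = np.zeros(L, dtype=complex)
    for k in range(n):
        # try both signs and keep the one giving the smaller p-norm so far
        norms = [np.linalg.norm(partial + s * carriers[k], ord=p) for s in (+1, -1)]
        signs[k] = +1 if norms[0] <= norms[1] else -1
        partial += signs[k] * carriers[k]
    return signs

def pmepr(symbols, signs, oversample=4):
    """Peak-to-mean envelope power ratio of the signed multicarrier signal."""
    symbols = np.asarray(symbols, dtype=complex)
    n = len(symbols)
    L = oversample * n
    t = np.arange(L) / L
    x = (signs * symbols) @ np.exp(2j * np.pi * np.outer(np.arange(n), t))
    power = np.abs(x) ** 2
    return power.max() / power.mean()

# toy usage with random QPSK symbols on 128 subcarriers
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 128) + 1j * rng.choice([-1, 1], 128)) / np.sqrt(2)
s = greedy_sign_selection(qpsk)
print(pmepr(qpsk, np.ones(128)), pmepr(qpsk, s))  # PMEPR before vs. after sign adjustment
```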