
    Derandomized Construction of Combinatorial Batch Codes

    Combinatorial Batch Codes (CBCs), a replication-based variant of Batch Codes introduced by Ishai et al. in STOC 2004, abstract the following data distribution problem: $n$ data items are to be replicated among $m$ servers in such a way that any $k$ of the $n$ data items can be retrieved by reading at most one item from each server, with the total amount of storage over the $m$ servers restricted to $N$. Given parameters $m$, $c$, and $k$, where $c$ and $k$ are constants, one of the challenging problems is to construct $c$-uniform CBCs (CBCs where each data item is replicated among exactly $c$ servers) which maximize the value of $n$. In this work, we present an explicit construction of $c$-uniform CBCs with $\Omega(m^{c-1+\frac{1}{k}})$ data items. The construction has the property that the servers are almost regular, i.e., the number of data items stored in each server lies in the range $[\frac{nc}{m}-\sqrt{\frac{n}{2}\ln(4m)},\ \frac{nc}{m}+\sqrt{\frac{n}{2}\ln(4m)}]$. The construction is obtained through a sharper analysis and derandomization of the randomized construction presented by Ishai et al. The analysis reveals the almost-regularity of the servers, an aspect that so far has not been addressed in the literature. The derandomization yields explicit constructions for a wide range of values of $c$ (for given $m$ and $k$) for which no other explicit construction with similar parameters, i.e., with $n = \Omega(m^{c-1+\frac{1}{k}})$, is known. Finally, we discuss the possibility of parallel derandomization of the construction.
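    For context, the randomized construction that the paper derandomizes is easy to state: each of the $n$ items independently picks $c$ of the $m$ servers uniformly at random, and a Hoeffding-type bound gives the almost-regularity window quoted above. The Python sketch below illustrates only that randomized baseline; all function names and parameter values are ours, not the paper's, and the paper's actual contribution is the deterministic replacement of the random choices.

        import math
        import random

        def random_cbc(n, m, c, seed=0):
            """Randomized c-uniform CBC baseline: replicate each of the n items
            on c servers chosen uniformly at random. Returns, for each item,
            the set of servers that store it."""
            rng = random.Random(seed)
            return [frozenset(rng.sample(range(m), c)) for _ in range(n)]

        def server_loads(placement, m):
            """Number of items stored on each server."""
            loads = [0] * m
            for servers in placement:
                for s in servers:
                    loads[s] += 1
            return loads

        def almost_regular(loads, n, m, c):
            """Check every load lies in nc/m +- sqrt((n/2) ln(4m))."""
            mean = n * c / m
            dev = math.sqrt((n / 2) * math.log(4 * m))
            return all(mean - dev <= load <= mean + dev for load in loads)

        if __name__ == "__main__":
            n, m, c = 2000, 50, 3
            loads = server_loads(random_cbc(n, m, c), m)
            print("almost regular:", almost_regular(loads, n, m, c))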

    Deterministic parallel algorithms for bilinear objective functions

    Many randomized algorithms can be derandomized efficiently using either the method of conditional expectations or probability spaces with low independence. A series of papers, beginning with work by Luby (1988), showed that in many cases these techniques can be combined to give deterministic parallel (NC) algorithms for a variety of combinatorial optimization problems, with low time- and processor-complexity. We extend and generalize a technique of Luby for efficiently handling bilinear objective functions. One noteworthy application is an NC algorithm for maximal independent set. On a graph $G$ with $m$ edges and $n$ vertices, this takes $\tilde O(\log^2 n)$ time and $(m + n) n^{o(1)}$ processors, nearly matching the best randomized parallel algorithms. Other applications include reduced processor counts for the algorithms of Berger (1997) for maximum acyclic subgraph and Gale-Berlekamp switching games. This bilinear factorization also gives better algorithms for problems involving discrepancy. An important application of this is to automata-fooling probability spaces, which are the basis of a notable derandomization technique of Sivakumar (2002). Our method leads to a large reduction in processor complexity for a number of derandomization algorithms based on automata-fooling, including set discrepancy and the Johnson-Lindenstrauss Lemma.
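    For background, the randomized parallel MIS algorithm that this line of work derandomizes is Luby-style: in each round every surviving vertex marks itself with probability inversely proportional to its degree, conflicts along edges are resolved in favour of the higher-degree endpoint, and selected vertices plus their neighbours are removed. The sequential Python simulation below is illustrative only; the abstract's deterministic NC version via bilinear objectives is not reproduced here.

        import random

        def luby_mis(adj, seed=0):
            """Sequential simulation of a Luby-style randomized MIS.
            adj: dict mapping each vertex to the set of its neighbours."""
            rng = random.Random(seed)
            live = {v: set(nbrs) for v, nbrs in adj.items()}
            mis = set()
            while live:
                # Each live vertex marks itself with probability 1/(2*deg);
                # degree-0 vertices use probability 1/2.
                marked = {v for v, nbrs in live.items()
                          if rng.random() < 1.0 / (2 * max(len(nbrs), 1))}
                # Resolve marked edges: the endpoint with the smaller
                # (degree, id) pair backs off, so exactly one survives.
                selected = set(marked)
                for v in marked:
                    for u in live[v] & marked:
                        if (len(live[v]), v) < (len(live[u]), u):
                            selected.discard(v)
                mis |= selected
                # Remove selected vertices together with all their neighbours.
                removed = set(selected)
                for v in selected:
                    removed |= live[v]
                live = {v: nbrs - removed for v, nbrs in live.items() if v not in removed}
            return mis

        if __name__ == "__main__":
            cycle5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
            print(sorted(luby_mis(cycle5)))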

    Quantified Derandomization of Linear Threshold Circuits

    One of the prominent current challenges in complexity theory is the attempt to prove lower bounds for $TC^0$, the class of constant-depth, polynomial-size circuits with majority gates. Relying on the results of Williams (2013), an appealing approach to prove such lower bounds is to construct a non-trivial derandomization algorithm for $TC^0$. In this work we take a first step towards the latter goal, by proving the first positive results regarding the derandomization of $TC^0$ circuits of depth $d>2$. Our first main result is a quantified derandomization algorithm for $TC^0$ circuits with a super-linear number of wires. Specifically, we construct an algorithm that gets as input a $TC^0$ circuit $C$ over $n$ input bits with depth $d$ and $n^{1+\exp(-d)}$ wires, runs in almost-polynomial time, and distinguishes between the case that $C$ rejects at most $2^{n^{1-1/5d}}$ inputs and the case that $C$ accepts at most $2^{n^{1-1/5d}}$ inputs. In fact, our algorithm works even when the circuit $C$ is a linear threshold circuit, rather than just a $TC^0$ circuit (i.e., $C$ is a circuit with linear threshold gates, which are stronger than majority gates). Our second main result is that even a modest improvement of our quantified derandomization algorithm would yield a non-trivial algorithm for standard derandomization of all of $TC^0$, and would consequently imply that $NEXP \not\subseteq TC^0$. Specifically, if there exists a quantified derandomization algorithm that gets as input a $TC^0$ circuit with depth $d$ and $n^{1+O(1/d)}$ wires (rather than $n^{1+\exp(-d)}$ wires), runs in time at most $2^{n^{\exp(-d)}}$, and distinguishes between the case that $C$ rejects at most $2^{n^{1-1/5d}}$ inputs and the case that $C$ accepts at most $2^{n^{1-1/5d}}$ inputs, then there exists an algorithm with running time $2^{n^{1-\Omega(1)}}$ for standard derandomization of $TC^0$. Comment: Changes in this revision: an additional result (a PRG for quantified derandomization of depth-2 LTF circuits); rewrite of some of the exposition; minor correction
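    As a reminder of the circuit model in question, a linear threshold gate outputs 1 exactly when a weighted sum of its inputs reaches a threshold; majority is the unweighted special case, which is why linear threshold circuits are at least as strong as $TC^0$ circuits. The small Python evaluator below is purely illustrative (all names are ours) and only shows what a depth-2 linear threshold circuit computes, not any of the paper's algorithms.

        def ltf_gate(weights, theta, bits):
            """Linear threshold gate: outputs 1 iff sum_i w_i * x_i >= theta.
            Majority corresponds to all weights equal to 1 and theta about n/2;
            general (real) weights make these gates stronger than majority."""
            return int(sum(w * x for w, x in zip(weights, bits)) >= theta)

        def depth2_ltf_circuit(bottom_gates, top_gate, bits):
            """Evaluate a depth-2 linear threshold circuit.
            bottom_gates: list of (weights, theta) pairs fed by the input bits.
            top_gate:     (weights, theta) pair fed by the bottom-gate outputs."""
            mid = [ltf_gate(w, t, bits) for w, t in bottom_gates]
            w_top, t_top = top_gate
            return ltf_gate(w_top, t_top, mid)

        if __name__ == "__main__":
            # XOR of two bits as a depth-2 LTF circuit:
            # (x1 + x2 >= 1) AND (-x1 - x2 >= -1).
            bottom = [([1, 1], 1), ([-1, -1], -1)]
            top = ([1, 1], 2)  # AND of the two bottom gates
            for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
                print(x, depth2_ltf_circuit(bottom, top, x))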

    Space-Bounded Kolmogorov Extractors

    An extractor is a function that receives some randomness and either "improves" it or produces "new" randomness. There are statistical and algorithmic specifications of this notion. We study an algorithmic one, called Kolmogorov extractors, and adapt it to a resource-bounded version of Kolmogorov complexity. Following Zimand, we prove the existence of such objects with certain parameters. The technique used is "naive" derandomization: we replace the random constructions employed by Zimand with pseudo-random ones obtained from the Nisan-Wigderson generator. Comment: 12 pages, accepted to CSR201
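    The Nisan-Wigderson generator mentioned above has a simple overall shape: fix a combinatorial design (many subsets of a short seed with small pairwise overlaps) and output, for each design set, the value of a suitably hard function on the seed restricted to that set. The toy Python sketch below uses a brute-force greedy design and parity as a stand-in for the hard function, purely to show the shape of the construction; it is our illustration, not the generator with the parameters the paper needs.

        from itertools import combinations

        def greedy_design(num_sets, set_size, overlap, universe_size):
            """Greedily pick num_sets subsets of size set_size from a universe of
            universe_size elements so that any two subsets intersect in at most
            `overlap` elements. Brute force; only for tiny toy parameters."""
            design = []
            for cand in combinations(range(universe_size), set_size):
                cand = set(cand)
                if all(len(cand & s) <= overlap for s in design):
                    design.append(cand)
                    if len(design) == num_sets:
                        return design
            raise ValueError("no design with these parameters found greedily")

        def nw_generator(seed_bits, design, hard_fn):
            """Nisan-Wigderson-style generator: the i-th output bit is hard_fn
            applied to the seed restricted to the i-th design set. hard_fn is a
            placeholder; the real construction needs a function that is hard on
            average for the relevant class of algorithms."""
            out = []
            for s in design:
                restricted = [seed_bits[j] for j in sorted(s)]
                out.append(hard_fn(restricted))
            return out

        if __name__ == "__main__":
            # Toy parameters: 8 output bits from a 12-bit seed, sets of size 4,
            # pairwise overlaps at most 2. Parity stands in for the hard function
            # purely for illustration (it is, of course, not actually hard).
            design = greedy_design(num_sets=8, set_size=4, overlap=2, universe_size=12)
            seed = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
            print(nw_generator(seed, design, hard_fn=lambda bits: sum(bits) % 2))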

    An Atypical Survey of Typical-Case Heuristic Algorithms

    Heuristic approaches often do so well that they seem to pretty much always give the right answer. How close can heuristic algorithms get to always giving the right answer, without inducing seismic complexity-theoretic consequences? This article first discusses how a series of results by Berman, Buhrman, Hartmanis, Homer, Longpré, Ogiwara, Schöning, and Watanabe, from the early 1970s through the early 1990s, explicitly or implicitly limited how well heuristic algorithms can do on NP-hard problems. In particular, many desirable levels of heuristic success cannot be obtained unless severe, highly unlikely complexity class collapses occur. Second, we survey work initiated by Goldreich and Wigderson, who showed how, under plausible assumptions, deterministic heuristics for randomized computation can achieve a very high frequency of correctness. Finally, we consider formal ways in which theory can help explain the effectiveness of heuristics that solve NP-hard problems in practice. Comment: This article is currently scheduled to appear in the December 2012 issue of SIGACT News

    Peak Power Reduction of OFDM Signals with Sign Adjustment

    It has recently been shown that significant reduction in the peak-to-mean envelope power ratio (PMEPR) can be obtained by altering the sign of each subcarrier in a multicarrier system with $n$ subcarriers. However, finding the best signs not only requires a search over $2^n$ possible sign vectors but may also lead to a substantial rate loss for small constellations. In this paper, we first propose a greedy algorithm that chooses the signs based on $p$-norm minimization, and we prove that the resulting PMEPR is guaranteed to be less than $c \log n$, where $c$ is a constant independent of $n$, for any $n$. This approach has lower complexity in each iteration than the existing derandomization approach while achieving similar PMEPR reduction. We further improve the performance of the proposed algorithm by enlarging the search space using pruning. Simulation results show that the PMEPR of a multicarrier signal with 128 subcarriers can be reduced to within 1.6 dB of the PMEPR of a single-carrier system. In the second part of the paper, we address the rate loss by proposing a block coding scheme in which only one sign vector is chosen for $K$ different modulating vectors. The sign vector can be computed using the greedy algorithm in $n$ iterations. We show that this multi-symbol encoding approach reduces the rate loss by a factor of $K$ while achieving a PMEPR of $c \log(Kn)$, i.e., only logarithmic growth in $K$. Simulation results show that the rate loss can be made smaller than 10% at the cost of only a 1 dB increase in the resulting PMEPR for a system with 128 subcarriers.
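    The greedy idea described above is straightforward to sketch: fix the subcarrier signs one at a time, each time keeping the sign that minimizes the $p$-norm of the partial time-domain signal. The NumPy sketch below is our illustration of that idea, not the authors' exact algorithm; in particular the oversampling factor, the choice $p=8$, and the QPSK test data are assumptions.

        import numpy as np

        def greedy_sign_selection(symbols, oversample=4, p=8):
            """Greedy sign selection for PMEPR reduction: fix signs one
            subcarrier at a time, keeping the sign that minimizes the p-norm
            of the partial time-domain OFDM signal.
            symbols: 1-D array of complex constellation points, one per subcarrier.
            Returns (signs, pmepr_db) for the chosen sign vector."""
            n = len(symbols)
            t = np.arange(oversample * n) / (oversample * n)           # time samples
            carriers = np.exp(2j * np.pi * np.outer(t, np.arange(n)))  # e^{j 2 pi k t}
            signal = np.zeros(len(t), dtype=complex)
            signs = np.ones(n)
            for k in range(n):
                plus = signal + symbols[k] * carriers[:, k]
                minus = signal - symbols[k] * carriers[:, k]
                if np.linalg.norm(minus, ord=p) < np.linalg.norm(plus, ord=p):
                    signs[k] = -1.0
                    signal = minus
                else:
                    signal = plus
            # Mean envelope power of an n-carrier signal is sum_k |c_k|^2.
            mean_power = np.sum(np.abs(symbols) ** 2)
            pmepr = np.max(np.abs(signal) ** 2) / mean_power
            return signs, 10 * np.log10(pmepr)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            n = 128
            qpsk = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
            signs, pmepr_db = greedy_sign_selection(qpsk)
            print(f"PMEPR after greedy sign selection: {pmepr_db:.2f} dB")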