
    Algorithms for Testing Monomials in Multivariate Polynomials

    This paper is our second step towards developing a theory of testing monomials in multivariate polynomials. The central question is to ask whether a polynomial represented by an arithmetic circuit has some types of monomials in its sum-product expansion. The complexity aspects of this problem and its variants were investigated in our first paper, by Chen and Fu (2010), laying a foundation for further study. In this paper, we present two pairs of algorithms. First, we prove that there is a randomized O^*(p^k) time algorithm for testing p-monomials in an n-variate polynomial of degree k represented by an arithmetic circuit, while a deterministic O^*(6.4^k + p^k) time algorithm is devised when the circuit is a formula; here p is a given prime number. Second, we present a deterministic O^*(2^k) time algorithm for testing multilinear monomials in Π_mΣ_2Π_t × Π_kΠ_3 polynomials, while a randomized O^*(1.5^k) algorithm is given for these polynomials. The first algorithm extends the recent work by Koutis (2008) and Williams (2009) on testing multilinear monomials. Group algebra is exploited in the algorithm designs, in conjunction with the randomized polynomial identity testing over a finite field by Agrawal and Biswas (2003), the deterministic noncommutative polynomial identity testing by Raz and Shpilka (2005), and the perfect hashing functions by Chen et al. (2007). Finally, we prove that testing some special types of multilinear monomials is W[1]-hard, giving evidence that testing for specific monomials is not fixed-parameter tractable.
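
    The paper's algorithms combine group algebra with randomized polynomial identity testing over a finite field. As an illustration of that last ingredient only (not of the monomial-testing algorithms themselves), here is a minimal Schwartz-Zippel-style identity test; the prime and the toy polynomials are assumptions chosen for the sketch.

```python
# Minimal sketch of randomized polynomial identity testing over a finite
# field -- the PIT ingredient the abstract cites, not the paper's algorithm.
import random

P = 2_147_483_647  # a prime; identities are tested over GF(P)

def is_probably_zero(poly, nvars, trials=20):
    """Schwartz-Zippel: a nonzero polynomial of degree d vanishes at a
    uniformly random point of GF(P)^n with probability at most d/P."""
    for _ in range(trials):
        pt = [random.randrange(P) for _ in range(nvars)]
        if poly(*pt) % P != 0:
            return False  # a nonzero evaluation certifies "not identically zero"
    return True  # identically zero with high probability

f = lambda x, y: (x + y) * (x - y) - (x * x - y * y)  # identically zero
g = lambda x, y: x * y + 1
print(is_probably_zero(f, 2))  # True
print(is_probably_zero(g, 2))  # False (a random point is nonzero w.h.p.)
```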

    Uses of randomness in computation

    Random number generators are widely used in practical algorithms. Examples include simulation, number theory (primality testing and integer factorization), fault tolerance, routing, cryptography, optimization by simulated annealing, and perfect hashing. Complexity theory usually considers the worst-case behaviour of deterministic algorithms, but it can also consider average-case behaviour if it is assumed that the input data is drawn randomly from a given distribution. Rabin popularised the idea of "probabilistic" algorithms, where randomness is incorporated into the algorithm instead of being assumed in the input data. Yao showed that there is a close connection between the complexity of probabilistic algorithms and the average-case complexity of deterministic algorithms. We give examples of the uses of randomness in computation, discuss the contributions of Rabin, Yao and others, and mention some open questions. This is the text of an invited talk presented at "Theory Day", University of NSW, Sydney, 22 April 1994.
    Comment: An old Technical Report, not published elsewhere. 14 pages. For further details see http://wwwmaths.anu.edu.au/~brent/pub/pub147.htm
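
    Probabilistic primality testing, one of the uses of randomness the talk surveys, is captured by the Miller-Rabin test popularised by Rabin. A minimal sketch follows; the round count is an illustrative choice.

```python
# Miller-Rabin probabilistic primality test. Each random witness that fails
# to expose n as composite cuts the error probability by a factor >= 4.
import random

def is_probably_prime(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # quick trial division
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True  # composite with probability < 4^(-rounds)

print(is_probably_prime(2**61 - 1))  # True: a Mersenne prime
print(is_probably_prime(2**61 + 1))  # False: divisible by 3
```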

    Optimized Entanglement Purification

    We investigate novel protocols for entanglement purification of qubit Bell pairs. Employing genetic algorithms for the design of the purification circuit, we obtain shorter circuits achieving higher success rates and better final fidelities than what is currently available in the literature. We provide a software tool for analytical and numerical study of the generated purification circuits, under customizable error models. These new purification protocols pave the way to practical implementations of modular quantum computers and quantum repeaters. Our approach is particularly attentive to the effects of finite resources and imperfect local operations, phenomena neglected in the usual asymptotic approach to the problem. The choice of the building blocks permitted in the construction of the circuits is based on a thorough enumeration of the local Clifford operations that act as permutations on the basis of Bell states.
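
    A generic genetic-algorithm skeleton of the kind used here to search over circuits is sketched below. The gate alphabet, fitness function, and all parameters are illustrative placeholders, not the paper's actual setup.

```python
# Generic genetic-algorithm search over gate sequences. The gate names and
# the dummy fitness are assumptions; a real implementation would simulate
# each candidate circuit under an error model and score output fidelity
# and success probability.
import random

GATES = ["CNOT_AB", "CNOT_BA", "H_A", "H_B", "MEASURE"]  # hypothetical alphabet

def random_circuit(length=8):
    return [random.choice(GATES) for _ in range(length)]

def fitness(circuit):
    """Placeholder score so the sketch runs end to end."""
    return -len(set(circuit))

def mutate(circuit, rate=0.1):
    return [random.choice(GATES) if random.random() < rate else g
            for g in circuit]

def crossover(a, b):
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=100):
    pop = [random_circuit() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 4]  # truncation selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```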

    Pseudorandomness for Multilinear Read-Once Algebraic Branching Programs, in any Order

    We give deterministic black-box polynomial identity testing algorithms for multilinear read-once oblivious algebraic branching programs (ROABPs), in n^(lg^2 n) time. Further, our algorithm is oblivious to the order of the variables. This is the first sub-exponential time algorithm for this model. Furthermore, our result has no known analogue in the model of read-once oblivious boolean branching programs with unknown order, as despite recent work there is no known pseudorandom generator for this model with sub-polynomial seed-length (for unbounded-width branching programs). This result extends and generalizes the result of Forbes and Shpilka that obtained an n^(lg n)-time algorithm when given the order. We also extend and strengthen the work of Agrawal, Saha and Saxena that gave a black-box algorithm running in time exp((lg n)^d) for set-multilinear formulas of depth d. We note that the model of multilinear ROABPs contains the model of set-multilinear algebraic branching programs, which itself contains the model of set-multilinear formulas of arbitrary depth. We obtain our results by recasting, and improving upon, the ideas of Agrawal, Saha and Saxena. We phrase the ideas in terms of rank condensers and Wronskians, and show that our results improve upon the classical multivariate Wronskian, which may be of independent interest. In addition, we give the first n^(lg lg n) black-box polynomial identity testing algorithm for the so-called model of diagonal circuits. This model, introduced by Saxena, has recently found applications in the work of Mulmuley, as well as in the work of Gupta, Kamath, Kayal, Saptharishi. Previous work had given n^(lg n)-time algorithms for this class. More generally, our result holds for any model computing polynomials whose partial derivatives (of all orders) span a low dimensional linear space.
    Comment: 38 pages
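
    Operationally, "black-box" PIT means the tester sees the polynomial only through evaluation queries on a fixed point set (a hitting set). The sketch below shows that query pattern; the paper's actual contribution is the quasipolynomial-size hitting-set construction via rank condensers and Wronskians, which is not reproduced here. The stand-in set {0,1}^n does hit multilinear polynomials, but has exponential size.

```python
# Black-box PIT: declare the polynomial nonzero iff it is nonzero on some
# point of a fixed hitting set. Sound for any set; complete only when the
# set hits the polynomial class in question.
from itertools import product

def blackbox_pit(poly, hitting_set):
    return any(poly(*pt) != 0 for pt in hitting_set)

# Stand-in hitting set: the Boolean cube {0,1}^n. A nonzero multilinear
# polynomial cannot vanish on all of it, but the cube is exponentially
# large; the paper's point is an n^(lg^2 n)-size explicit set.
n = 3
cube = list(product([0, 1], repeat=n))

f = lambda x, y, z: x * y - x * y  # identically zero
g = lambda x, y, z: x * y * z - x
print(blackbox_pit(f, cube))  # False
print(blackbox_pit(g, cube))  # True
```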

    Essentially optimal interactive certificates in linear algebra

    Certificates to a linear algebra computation are additional data structures for each output, which can be used by a (possibly randomized) verification algorithm that proves the correctness of each output. The certificates are essentially optimal if the time (and space) complexity of verification is essentially linear in the input size N, meaning N times a factor N^{o(1)}, i.e., a factor N^{η(N)} with lim_{N→∞} η(N) = 0. We give algorithms that compute essentially optimal certificates for the positive semidefiniteness, Frobenius form, characteristic and minimal polynomial of an n×n dense integer matrix A. Our certificates can be verified in Monte Carlo bit complexity (n^2 log‖A‖)^{1+o(1)}, where log‖A‖ is the bit size of the integer entries, solving an open problem in [Kaltofen, Nehring, Saunders, Proc. ISSAC 2011] subject to computational hardness assumptions. Second, we give algorithms that compute certificates for the rank of sparse or structured n×n matrices over an abstract field, whose Monte Carlo verification complexity is 2 matrix-times-vector products + n^{1+o(1)} arithmetic operations in the field. For example, if the n×n input matrix is sparse with n^{1+o(1)} non-zero entries, our rank certificate can be verified in n^{1+o(1)} field operations. This extends also to integer matrices with only an extra ‖A‖^{1+o(1)} factor. All our certificates are based on interactive verification protocols with the interaction removed by a Fiat-Shamir identification heuristic. The validity of our verification procedure is subject to standard computational hardness assumptions from cryptography.
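
    The textbook instance of the paradigm these certificates optimize, though not the paper's own protocol, is Freivalds' check: verifying a claimed matrix product in O(n^2) time per trial, far cheaper than recomputing it.

```python
# Freivalds' verification: check the claim A @ B == C by multiplying both
# sides by a random 0/1 vector. Each trial costs O(n^2) and catches a
# wrong product with probability >= 1/2, so error <= 2^(-trials).
import random

def freivalds(A, B, C, trials=20):
    n = len(A)
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        Bx = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
        ABx = [sum(A[i][j] * Bx[j] for j in range(n)) for i in range(n)]
        Cx = [sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]
        if ABx != Cx:
            return False  # certainly A @ B != C
    return True  # equal with probability >= 1 - 2^(-trials)

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]  # the true product
print(freivalds(A, B, C))  # True
```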

    Dynamic Ordered Sets with Exponential Search Trees

    We introduce exponential search trees as a novel technique for converting static polynomial space search structures for ordered sets into fully-dynamic linear space data structures. This leads to an optimal bound of O(sqrt(log n/loglog n)) for searching and updating a dynamic set of n integer keys in linear space. Here searching an integer y means finding the maximum key in the set which is smaller than or equal to y. This problem is equivalent to the standard text book problem of maintaining an ordered set (see, e.g., Cormen, Leiserson, Rivest, and Stein: Introduction to Algorithms, 2nd ed., MIT Press, 2001). The best previous deterministic linear space bound was O(log n/loglog n), due to Fredman and Willard from STOC 1990. No better deterministic search bound was known using polynomial space. We also get the following worst-case linear space trade-offs between the number n, the word length w, and the maximal key U < 2^w: O(min{loglog n+log n/log w, (loglog n)(loglog U)/(logloglog U)}). These trade-offs are, however, not likely to be optimal. Our results are generalized to finger searching and string searching, providing optimal results for both in terms of n.
    Comment: Revision corrects some typos and states things better for applications in a subsequent paper
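
    The operation being supported, predecessor search, is easy to state on a static sorted array; the point of exponential search trees is to keep O(sqrt(log n/loglog n)) time per operation while also allowing insertions and deletions in linear space, which the static array below does not.

```python
# Predecessor search on a static sorted array: return the largest key <= y.
# This is the search operation from the abstract, in its simplest setting
# (O(log n) per query, no updates).
import bisect

def predecessor(sorted_keys, y):
    i = bisect.bisect_right(sorted_keys, y)
    return sorted_keys[i - 1] if i > 0 else None  # None: no key <= y

keys = [3, 9, 14, 27, 31]
print(predecessor(keys, 14))  # 14 (an equal key counts)
print(predecessor(keys, 2))   # None
print(predecessor(keys, 30))  # 27
```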

    Algebra in Computational Complexity

    At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection to a better-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. The algebraic theme continues in some of the most exciting recent progress in computational complexity. There have been significant recent advances in algebraic circuit lower bounds, and the so-called "chasm at depth 4" suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model, and these are tied to central questions regarding the power of randomness in computation. Representation theory has emerged as an important tool in three separate lines of work: the "Geometric Complexity Theory" approach to P vs. NP and circuit lower bounds, the effort to resolve the complexity of matrix multiplication, and a framework for constructing locally testable codes. Coding theory has seen several algebraic innovations in recent years, including multiplicity codes and new lower bounds. This seminar brought together researchers who are using a diverse array of algebraic methods in a variety of settings. It played an important role in educating a diverse community about the latest techniques, spurring further progress.

    Non-Interactive Statistically-Hiding Quantum Bit Commitment from Any Quantum One-Way Function

    We provide a non-interactive quantum bit commitment scheme which has statistically-hiding and computationally-binding properties from any quantum one-way function. Our protocol is basically a parallel composition of the previous non-interactive quantum bit commitment schemes (based on quantum one-way permutations, due to Dumais, Mayers and Salvail (EUROCRYPT 2000)) with pairwise independent hash functions. To construct our non-interactive quantum bit commitment scheme from any quantum one-way function, we follow the procedure below: (i) from the Dumais-Mayers-Salvail scheme to a weakly-hiding and 1-out-of-2 binding commitment (of a parallel variant); (ii) from the weakly-hiding and 1-out-of-2 binding commitment to a strongly-hiding and 1-out-of-2 binding commitment; (iii) from the strongly-hiding and 1-out-of-2 binding commitment to a normal statistically-hiding commitment. In the classical case, a statistically-hiding bit commitment scheme, due to Haitner, Nguyen, Ong, Reingold and Vadhan (SIAM J. Comput., Vol. 39, 2009), is also constructible from any one-way function. While the classical statistically-hiding bit commitment schemes have large round complexity, our quantum scheme is non-interactive, which is advantageous over the classical schemes. A main technical contribution is to provide a quantum analogue of the new interactive hashing theorem, due to Haitner and Reingold (CCC 2007). Moreover, the parallel composition enables us to simplify the security analysis drastically.
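
    One ingredient named in the abstract, pairwise independent hashing, has a classic construction over a prime field: h_{a,b}(x) = (a*x + b) mod p. A minimal sketch follows; the prime and parameters are illustrative, and this is the generic primitive, not the paper's specific instantiation.

```python
# Pairwise independent hash family h_{a,b}(x) = (a*x + b) mod P. For any
# fixed x != x', the map (a, b) -> (h(x), h(x')) is a bijection on Z_P^2,
# so the output pair is exactly uniform -- the pairwise independence the
# hiding argument relies on.
import random

P = 2_305_843_009_213_693_951  # the Mersenne prime 2^61 - 1

def sample_hash():
    """Draw h uniformly from the family {x -> (a*x + b) % P : a, b in Z_P}."""
    a = random.randrange(P)
    b = random.randrange(P)
    return lambda x: (a * x + b) % P

h = sample_hash()
print(h(42), h(43))  # a uniformly distributed pair for distinct inputs
```

    In practice one often truncates the output to fewer bits, at a small, quantifiable cost in uniformity.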

    Purifying GHZ States Using Degenerate Quantum Codes

    Degenerate quantum codes are codes that do not reveal the complete error syndrome. Their ability to conceal the complete error syndrome makes them powerful resources in certain quantum information processing tasks. In particular, the most error-tolerant way to purify depolarized Bell states using one-way communication known to date involves degenerate quantum codes. Here we study three closely related purification schemes for depolarized GHZ states shared among m ≥ 3 players by means of degenerate quantum codes and one-way classical communication. We find that our schemes tolerate more noise than all other one-way schemes known to date, further demonstrating the effectiveness of degenerate quantum codes in quantum information processing.
    Comment: Significantly revised with a few new results added, 33 pages, 7 figures
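
    A small numerical illustration of the input to such schemes: the fidelity of a 3-party GHZ state after each qubit passes through a depolarizing channel of strength p. The purification protocols themselves (degenerate codes plus one-way communication) are beyond this sketch, and the depolarizing parameterization below is one common convention.

```python
# Fidelity of a 3-qubit GHZ state under independent per-qubit depolarizing
# noise: rho -> (1 - 3p/4) rho + (p/4)(X rho X + Y rho Y + Z rho Z) on
# each qubit, i.e. full depolarization with probability p.
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def depolarize_qubit(rho, p, k, m=3):
    """Apply a depolarizing channel of strength p to qubit k of m qubits."""
    out = np.zeros_like(rho)
    for weight, pauli in [(1 - 3 * p / 4, I), (p / 4, X), (p / 4, Y), (p / 4, Z)]:
        K = np.array([[1]], dtype=complex)
        for i in range(m):
            K = np.kron(K, pauli if i == k else I)
        out += weight * K @ rho @ K.conj().T
    return out

ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)        # (|000> + |111>)/sqrt(2)
rho = np.outer(ghz, ghz.conj())
for k in range(3):
    rho = depolarize_qubit(rho, p=0.1, k=k)
print(np.real(ghz.conj() @ rho @ ghz))  # fidelity with the ideal GHZ state
```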

    In-materio neuromimetic devices: Dynamics, information processing and pattern recognition

    The story of information processing is a story of great success. Today's microprocessors are devices of unprecedented complexity, and the MOSFET transistor is considered the most widely produced artifact in the history of mankind. The current miniaturization of electronic circuits is pushed almost to the physical limit and begins to suffer from various parasitic effects. These facts stimulate intense research on neuromimetic devices. This feature article is devoted to various in-materio implementations of neuromimetic processes, including neuronal dynamics, synaptic plasticity, and higher-level signal and information processing, along with more sophisticated implementations, including signal processing, speech recognition and data security. Due to the vast number of papers in the field, only a subjective selection of topics is presented in this review.
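
    The canonical minimal model of the neuronal dynamics such devices mimic is the leaky integrate-and-fire neuron. A short simulation sketch follows; all parameters are generic textbook values, not tied to any specific material implementation from the review.

```python
# Leaky integrate-and-fire neuron, simulated with forward Euler:
# dv/dt = (v_rest - v + R * I_in) / tau, spike and reset at threshold.
dt, T = 1e-4, 0.2            # time step and duration (s)
tau, v_rest = 20e-3, -70e-3  # membrane time constant (s), rest potential (V)
v_th, v_reset = -54e-3, -70e-3
R, I_in = 1e7, 2.0e-9        # membrane resistance (ohm), input current (A)

v, spikes = v_rest, []
for step in range(int(T / dt)):
    # leak toward rest plus drive from the constant input current
    v += dt / tau * (v_rest - v + R * I_in)
    if v >= v_th:            # threshold crossing -> spike, then reset
        spikes.append(step * dt)
        v = v_reset
print(f"{len(spikes)} spikes in {T} s")
```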