
    Multiple Product Modulo Arbitrary Numbers

    Let n binary numbers of length n be given. The Boolean function "Multiple Product" MP_n asks for (some binary representation of) the value of their product. It has been shown (K.-Y. Siu and V. Roychowdhury, On optimal depth threshold circuits for multiplication and related problems, SIAM J. Discrete Math. 7, 285–292 (1994)) that this function can be computed by polynomial-size threshold circuits of depth 4. For many other arithmetic functions, circuits of depth 3 are known. They are mostly based on the fact that the value of the considered function modulo some prime number p can be computed easily in threshold circuits of depth 2. In this paper, we investigate the complexity of computing MP_n modulo m by depth-2 threshold circuits. It turns out that for all but a few integers m, exponential size is required. In particular, it is shown that for m ∈ {2, 4, 8}, polynomial-size circuits exist; for m ∈ {3, 6, 12, 24}, the question remains open; and in all other cases, exponential-size circuits are required. The result still holds if we allow m to grow with n.
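    To make the function under study concrete, here is a minimal illustrative sketch (not from the paper) of MP_n reduced modulo m, i.e. the residue of the product of n given n-bit numbers:

```python
from functools import reduce

def multiple_product_mod(numbers, m):
    """Product of the given integers, reduced modulo m after each step."""
    return reduce(lambda acc, x: (acc * x) % m, numbers, 1 % m)

# n = 3 numbers of length n = 3 bits each
print(multiple_product_mod([0b101, 0b011, 0b110], 8))  # (5*3*6) % 8 = 90 % 8 = 2
```

    The paper's question is how large a depth-2 threshold circuit must be to compute this residue, not how to compute it sequentially as above.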

    Neural computation of arithmetic functions

    A neuron is modeled as a linear threshold gate, and the network architecture considered is the layered feedforward network. It is shown how common arithmetic functions such as multiplication and sorting can be efficiently computed in a shallow neural network. Some known results are improved by showing that the product of two n-bit numbers and sorting of n n-bit numbers can be computed by a polynomial-size neural network using only four and five unit delays, respectively. Moreover, the weights of each threshold element in the neural networks require O(log n)-bit (instead of n-bit) accuracy. These results can be extended to more complicated functions such as multiple products, division, rational functions, and approximation of analytic functions.
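    The basic computational unit in this model can be sketched as follows (an illustrative example, not code from the paper): a linear threshold gate outputs 1 exactly when the weighted sum of its Boolean inputs reaches a threshold.

```python
def threshold_gate(weights, threshold, inputs):
    """Linear threshold gate: fires iff the weighted input sum reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# A 3-input majority gate realized as a threshold gate: unit weights, threshold 2
print(threshold_gate([1, 1, 1], 2, [1, 0, 1]))  # 1 (two inputs are set)
print(threshold_gate([1, 1, 1], 2, [1, 0, 0]))  # 0 (only one input is set)
```

    The "unit delays" in the abstract correspond to layers of such gates; the results say multiplication needs only four layers and sorting five.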

    On the Complexity of Quantum ACC

    For any q > 1, let MOD_q be a quantum gate that determines if the number of 1's in the input is divisible by q. We show that for any q, t > 1, MOD_q is equivalent to MOD_t (up to constant depth). Based on the case q = 2, Moore [moore99] has shown that quantum analogs of AC^(0), ACC[q], and ACC, denoted QAC^(0)_wf, QACC[2], and QACC respectively, define the same class of operators, leaving q > 2 as an open question. Our result resolves this question, proving that QAC^(0)_wf = QACC[q] = QACC for all q. We also develop techniques for proving upper bounds for QACC in terms of related language classes. We define classes of languages EQACC, NQACC and BQACC_ℚ. We define a notion of log-planar QACC operators and show the appropriately restricted versions of EQACC and NQACC are contained in P/poly. We also define a notion of log-gate restricted QACC operators and show the appropriately restricted versions of EQACC and NQACC are contained in TC^(0). To do this last proof, we show that TC^(0) can perform iterated addition and multiplication in certain field extensions. We also introduce the notion of a polynomial-size tensor graph and show that families of such graphs can encode the amplitudes resulting from applying an arbitrary QACC operator to an initial state.
    Comment: 22 pages, 4 figures. This version will appear in the July 2000 Computational Complexity conference. Section 4 has been significantly revised and many typos corrected
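    The classical predicate that MOD_q decides is simple to state; a small sketch (illustrative only, the paper's gate is its quantum analog):

```python
def mod_q(bits, q):
    """Classical MOD_q: 1 iff the number of 1's among the input bits is divisible by q."""
    return int(sum(bits) % q == 0)

print(mod_q([1, 0, 1, 1], 3))  # three 1's, divisible by 3 -> 1
print(mod_q([1, 0, 1, 1], 2))  # three 1's, not divisible by 2 -> 0
```

    The paper's equivalence result says that, in constant-depth quantum circuits, any one modulus q gives access to every other modulus t, which is false for classical ACC[q] as far as is known.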

    Continuous-variable quantum neural networks

    Full text link
    We introduce a general method for building neural networks on quantum computers. The quantum neural network is a variational quantum circuit built in the continuous-variable (CV) architecture, which encodes quantum information in continuous degrees of freedom such as the amplitudes of the electromagnetic field. This circuit contains a layered structure of continuously parameterized gates which is universal for CV quantum computation. Affine transformations and nonlinear activation functions, two key elements in neural networks, are enacted in the quantum network using Gaussian and non-Gaussian gates, respectively. The non-Gaussian gates provide both the nonlinearity and the universality of the model. Due to the structure of the CV model, the CV quantum neural network can encode highly nonlinear transformations while remaining completely unitary. We show how a classical network can be embedded into the quantum formalism and propose quantum versions of various specialized models such as convolutional, recurrent, and residual networks. Finally, we present numerous modeling experiments built with the Strawberry Fields software library. These experiments, including a classifier for fraud detection, a network which generates Tetris images, and a hybrid classical-quantum autoencoder, demonstrate the capability and adaptability of CV quantum neural networks.
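    The classical layer structure that the CV circuit mirrors can be sketched as follows (an illustrative classical analogue, not Strawberry Fields code): each layer is an affine map followed by a nonlinear activation, with the Gaussian gates playing the role of the affine part and the non-Gaussian gates the role of the nonlinearity.

```python
import numpy as np

def layer(x, W, b, phi=np.tanh):
    """One classical layer: affine map W x + b followed by a nonlinearity phi.
    In the CV circuit, Gaussian gates enact the affine part and a
    non-Gaussian gate enacts the (here: tanh) nonlinearity."""
    return phi(W @ x + b)

x = np.array([0.5, -1.0])
W = np.array([[1.0, 0.2], [-0.3, 0.8]])
b = np.array([0.1, 0.0])
print(layer(x, W, b))
```

    Stacking such layers gives the feedforward network that the paper embeds, layer by layer, into a unitary CV circuit.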