
    A Simple Quantum Neural Net with a Periodic Activation Function

    In this paper, we propose a simple neural net that requires only O(n log_2 k) qubits and O(nk) quantum gates, where n is the number of input parameters and k is the number of weights applied to these parameters in the proposed neural net. We describe the network as a quantum circuit and then draw its equivalent classical neural net, which involves O(k^n) nodes in the hidden layer. We then show that the network uses a periodic activation function: the cosine of linear combinations of the inputs and weights. Backpropagation is described through gradient descent, and the iris and breast cancer datasets are used for the simulations. The numerical results indicate that the network can be used in machine learning problems and may provide an exponential speedup over the same-structured classical neural net.
    Comment: a discussion section is added. 5 pages, conference paper. To appear in The 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC2018).
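    The classical equivalent described in the abstract (a hidden layer applying a cosine to linear combinations of inputs and weights) can be sketched as follows; the function names and array shapes are illustrative, not from the paper:

```python
import numpy as np

# Minimal classical sketch of the described network: the hidden layer
# applies a periodic (cosine) activation to linear combinations of the
# inputs and weights, followed by a linear readout.
def forward(x, W, v):
    """x: (n,) inputs; W: (k, n) weight matrix; v: (k,) readout weights."""
    h = np.cos(W @ x)   # periodic activation on linear combinations
    return v @ h        # linear readout

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(3, 4))
v = rng.normal(size=3)
y = forward(x, W, v)
```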

    Federated learning with distributed fixed design quantum chips and quantum channels

    Privacy in classical federated learning can be breached through the use of local gradient results together with engineered queries to the clients. Quantum communication channels, however, are considered more secure because a measurement on the channel causes a loss of information that can be detected by the sender. A quantum version of federated learning can therefore provide more privacy. In addition, sending an N-dimensional data vector through a quantum channel requires only log N entangled qubits, which can potentially provide exponential efficiency if the data vector is utilized as a quantum state. In this paper, we propose a quantum federated learning model in which fixed-design quantum chips are operated based on the quantum states sent by a centralized server. Based on the incoming superposition states, the clients compute their local gradients and send them as quantum states to the server, where they are aggregated to update the parameters. Since the server sends the operator as a quantum state rather than the model parameters, the clients are not required to share the model, which allows asynchronous learning models. In addition, the model as a quantum state is fed directly into the client-side chips; therefore, no measurements on the incoming quantum state are required to obtain the model parameters before computing gradients. This can provide efficiency over models in which the parameter vector is sent via a classical or quantum channel and local gradients are obtained from the measured values of these parameters.
    Comment: a few typos are corrected.
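    The aggregation step described above can be sketched classically: each client produces a local gradient and the server averages them to update the shared parameters. The quantum protocol encodes these vectors as quantum states; this plain-vector version, with illustrative names and a hypothetical learning rate, is only a sketch:

```python
import numpy as np

# Classical sketch of the server-side aggregation step: average the
# clients' local gradients and take a gradient-descent step.
def aggregate(theta, client_grads, lr=0.1):
    g = np.mean(client_grads, axis=0)   # server aggregates local gradients
    return theta - lr * g               # parameter update

theta = np.zeros(3)
grads = [np.array([1.0, 0.0, 2.0]), np.array([3.0, 2.0, 0.0])]
theta = aggregate(theta, grads)
# theta = -0.1 * mean(grads) = [-0.2, -0.1, -0.1]
```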

    The quantum version of the shifted power method and its application in quadratic binary optimization

    In this paper, we present a direct quantum adaptation of the classical shifted power method. The method is very similar to the iterative phase estimation algorithm; however, it does not require any initial estimate of an eigenvector, and, as in the classical case, its convergence and the required number of iterations are directly related to the eigengap. If the gap is on the order of 1/poly(n), the algorithm can converge to the dominant eigenvalue in O(poly(n)) time. The method can potentially be used for solving any eigenvalue-related problem and for finding the minimum/maximum of a quantum state in lieu of Grover's search algorithm. In addition, if the solution space of an optimization problem with n parameters can be encoded as the eigenspace of a 2^n-dimensional unitary operator in O(poly(n)) time and the eigengap is not too small, then the solution to such a problem can be found in O(poly(n)) time. As an example, using quantum gates, we show how to generate the solution space of quadratic unconstrained binary optimization as the eigenvectors of a diagonal unitary matrix and find the solution to the problem.
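    The classical iteration that the paper adapts can be sketched as follows; the shift moves the spectrum so the desired eigenvalue becomes dominant, and convergence speed depends on the eigengap, as the abstract notes. Function and parameter names here are illustrative:

```python
import numpy as np

# Classical shifted power method sketch. The shift sigma forms B = A + sigma*I;
# repeated application of B and renormalization converges to the dominant
# eigenvector, at a rate governed by the eigengap of the shifted operator.
def shifted_power_method(A, sigma=0.0, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    B = A + sigma * np.eye(A.shape[0])   # shifted operator
    x = rng.normal(size=A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = B @ x
        x /= np.linalg.norm(x)
    lam = x @ A @ x                      # Rayleigh quotient w.r.t. A
    return lam, x

A = np.diag([3.0, 1.0, -2.0])
lam, vec = shifted_power_method(A)
# converges toward the dominant (largest-magnitude) eigenvalue, 3.0
```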

    A Simple Quantum Blockmodeling with Qubits and Permutations

    A blockmodel of a problem represented by an N×N adjacency matrix can be found by swapping rows and columns of the matrix (i.e., multiplying the matrix from the left and right by a permutation matrix). In general, row and column permutations affect the fitness value in the optimization: for an N×N matrix, it requires O(N) computations to find (or update) the fitness value of a candidate solution. On quantum computers, permutations can be applied efficiently and in parallel, and their implementations can be as simple as a single-qubit operation (a NOT gate on a qubit), which takes an O(1) algorithmic step. In this paper, using permutation matrices, we describe a quantum blockmodeling for data analysis tasks. In the model, the measurement outcomes of a small group of qubits are mapped to the fitness value. Therefore, we show that it is possible to find or update the fitness value in O(log(N)) time. This leads us to show that when the number of iterations is less than log(N), it may be possible to reach the same solution exponentially faster on quantum computers than on classical computers. In addition, since different sequences of permutations can be applied in parallel (in superposition) on quantum circuits, the machine learning task in this model can be implemented more efficiently on quantum computers.
    Comment: 9 pages.
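    The classical operation the abstract starts from (relabeling vertices by multiplying the adjacency matrix from the left and right by a permutation matrix) can be sketched as follows; the example graph is illustrative:

```python
import numpy as np

# Swapping rows and columns of an adjacency matrix via a permutation
# matrix: B = P A P^T relabels the vertices according to `perm`.
def permute(A, perm):
    P = np.eye(A.shape[0])[perm]   # permutation matrix from an index list
    return P @ A @ P.T             # permute rows and columns together

# Two disjoint edges: (0,1) and (2,3).
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])
B = permute(A, [0, 2, 1, 3])       # swap vertices 1 and 2
# B now has edges (0,2) and (1,3)
```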

    A unifying primary framework for quantum graph neural networks from quantum graph states

    Graph states are used to represent mathematical graphs as quantum states on quantum computers. They can be formulated through stabilizer codes or directly through quantum gates and quantum states. In this paper, we show that a quantum graph neural network model can be understood and realized based on graph states. We show that they can be used either as parameterized quantum circuits to represent neural networks or as an underlying structure on which to construct graph neural networks on quantum computers.
    Comment: short version, 6 pages; a few important typos are corrected.
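    The gate-based formulation of a graph state mentioned above can be sketched with a small statevector simulation: put every qubit in |+> and apply a controlled-Z for each graph edge. This is a generic textbook construction, not code from the paper:

```python
import numpy as np

# Statevector sketch of a graph state: H on every qubit gives |+>^n,
# then a CZ per edge flips the sign of basis states where both
# endpoint qubits are 1.
def graph_state(n, edges):
    psi = np.full(2**n, 2**(-n / 2))             # |+>^n amplitudes
    for a, b in edges:
        for idx in range(2**n):
            if (idx >> (n - 1 - a)) & 1 and (idx >> (n - 1 - b)) & 1:
                psi[idx] *= -1                   # CZ phase flip
    return psi

psi = graph_state(2, [(0, 1)])                   # state for a single edge
# amplitudes: [0.5, 0.5, 0.5, -0.5]
```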