
    Quantum Circuit Design for Solving Linear Systems of Equations

    Recently, it has been shown that quantum computers can be used to obtain certain information about the solution of a linear system Ax=b exponentially faster than is possible with classical computation. Here we first review some key aspects of the algorithm from the standpoint of finding its efficient quantum circuit implementation using only elementary quantum operations, which is important for determining the potential usefulness of the algorithm in practical settings. Then we present a small-scale quantum circuit that solves a 2x2 linear system. The quantum circuit uses only 4 qubits, implying a tempting possibility for experimental realization. Furthermore, the circuit is numerically simulated and its performance under different circuit parameter settings is demonstrated. Comment: 7 pages, 3 figures. The errors are corrected. For the general case, discussions are added to account for recent results. The 4x4 example is replaced by a 2x2 one due to recent experimental efforts. The 2x2 example was devised at the time of writing v1 but not included in v1 for brevity.
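
    To illustrate the input-output behaviour the circuit is designed to reproduce, the sketch below classically emulates the core step of the algorithm on an arbitrary 2x2 Hermitian example (the matrix A and vector b are illustrative assumptions, not the instance from the paper): decompose |b> in the eigenbasis of A, rescale each eigen-amplitude by C/lambda_j (the role of the controlled rotation in the circuit), and renormalize. It is a minimal sketch of the algorithm's effect, not a 4-qubit circuit implementation.

```python
import numpy as np

# Classical emulation of the effect of the linear-system algorithm on a
# 2x2 Hermitian example: amplitudes of |b> in the eigenbasis of A are
# rescaled by C / lambda_j and the result is renormalized.
A = np.array([[1.5, 0.5],
              [0.5, 1.5]])          # Hermitian, well-conditioned (illustrative)
b = np.array([1.0, 0.0])

lams, U = np.linalg.eigh(A)          # A = sum_j lambda_j |u_j><u_j|
beta = U.conj().T @ b                # beta_j = <u_j|b>

C = np.min(np.abs(lams))             # any C <= min |lambda_j| keeps amplitudes <= 1
x_amplitudes = (C / lams) * beta     # the "conditional rotation" step

x_emulated = U @ x_amplitudes        # back to the computational basis
x_emulated /= np.linalg.norm(x_emulated)   # the quantum state is defined up to norm

x_classical = np.linalg.solve(A, b)
x_classical /= np.linalg.norm(x_classical)

print("normalized solution (emulated):", x_emulated)
print("normalized solution (direct):  ", x_classical)
```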

    On strongly polynomial algorithms for some classes of quadratic programming problems

    In this paper we survey some results concerning polynomial and/or strongly polynomial solvability of some classes of quadratic programming problems. The discussion of polynomial solvability of continuous convex quadratic programming is followed by a couple of models for quadratic integer programming which, due to their special structure, allow polynomial (or even strongly polynomial) solvability. The theoretical merit of these results stems from the fact that the running time (i.e., the number of elementary arithmetic operations) of a strongly polynomial algorithm is independent of the input size of the problem.
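
    To make the "strongly polynomial" notion concrete for the simplest convex case mentioned above, the sketch below solves an unconstrained convex quadratic program by reducing it to a linear system; the arithmetic-operation count of Gaussian elimination depends only on the dimension, not on the bit size of the data. The matrix Q and vector c are arbitrary illustrative assumptions, and the example does not cover the integer-programming models surveyed in the paper.

```python
import numpy as np

# Unconstrained convex quadratic: min 0.5 * x'Qx + c'x  <=>  solve Qx = -c.
# Gaussian elimination needs O(n^3) arithmetic operations, a count that depends
# only on the dimension n, not on the magnitude of the entries.
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)          # symmetric positive definite => convex problem
c = rng.standard_normal(n)

x_star = np.linalg.solve(Q, -c)      # stationarity condition: Qx + c = 0

grad = Q @ x_star + c                # check: the gradient vanishes at the minimizer
print("||gradient at minimizer|| =", np.linalg.norm(grad))
print("optimal value =", 0.5 * x_star @ Q @ x_star + c @ x_star)
```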

    On the Interpretation of Energy as the Rate of Quantum Computation

    Over the last few decades, developments in the physical limits of computing and quantum computing have increasingly taught us that it can be helpful to think about physics itself in computational terms. For example, work over the last decade has shown that the energy of a quantum system limits the rate at which it can perform significant computational operations, and suggests that we might validly interpret energy as in fact being the speed at which a physical system is "computing," in some appropriate sense of the word. In this paper, we explore the precise nature of this connection. Elementary results in quantum theory show that the Hamiltonian energy of any quantum system corresponds exactly to the angular velocity of state-vector rotation (defined in a certain natural way) in Hilbert space, and also to the rate at which the state-vector's components (in any basis) sweep out area in the complex plane. The total angle traversed (or area swept out) corresponds to the action of the Hamiltonian operator along the trajectory, and we can also consider it to be a measure of the "amount of computational effort exerted" by the system, or effort for short. For any specific quantum or classical computational operation, we can (at least in principle) calculate its difficulty, defined as the minimum effort required to perform that operation on a worst-case input state, and this in turn determines the minimum time required for quantum systems to carry out that operation on worst-case input states of a given energy. As examples, we calculate the difficulty of some basic 1-bit and n-bit quantum and classical operations in a simple unconstrained scenario. Comment: Revised to address reviewer comments. Corrects an error relating to time-ordering, adds some additional references and discussion, and is shortened in a few places. Figures are now incorporated into the text.
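
    The following is a minimal numerical check (with hbar = 1, and an arbitrary two-level Hamiltonian that is an illustrative assumption, not taken from the paper) of the elementary relation the abstract refers to: an energy eigenstate with eigenvalue E acquires phase at rate E, i.e., its components rotate in the complex plane with angular velocity equal to the energy.

```python
import numpy as np

# Energy eigenstate |psi> with eigenvalue E evolves as exp(-i E t) |psi>,
# so its components rotate in the complex plane with angular velocity E.
H = np.array([[1.0, 0.3],
              [0.3, 2.0]])                  # Hermitian "energy" operator (illustrative)

E, V = np.linalg.eigh(H)
psi0 = V[:, 1]                               # eigenstate with energy E[1]

dt = 1e-4
# One step of Schrodinger evolution via the spectral decomposition of H:
psi_dt = V @ (np.exp(-1j * E * dt) * (V.T @ psi0))

# Phase swept by a (nonzero) component during dt, divided by dt.
phase_rate = -np.angle(psi_dt[0] / psi0[0]) / dt
print("angular velocity of the components:", phase_rate)
print("eigenstate energy E:               ", E[1])
```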

    Globally convergent block-coordinate techniques for unconstrained optimization.

    In this paper we define new classes of globally convergent block-coordinate techniques for the unconstrained minimization of a continuously differentiable function. More specifically, we first describe conceptual models of decomposition algorithms based on the interconnection of elementary operations performed on the block components of the variable vector. Then we characterize the elementary operations defined through a suitable line search or the global minimization in a component subspace. Using these models, we establish new results on the convergence of the nonlinear Gauss–Seidel method and we prove that this method with a two-block decomposition is globally convergent towards stationary points, even in the absence of convexity or uniqueness assumptions. In the general case of a nonconvex objective function and an arbitrary decomposition, we define new globally convergent line-search-based schemes that may also include partial global minimizations with respect to some component. Computational aspects are discussed and, in particular, an application to a learning problem in a Radial Basis Function neural network is illustrated.
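
    The sketch below illustrates the two-block nonlinear Gauss–Seidel idea on an arbitrary coupled quadratic instance (an illustrative assumption chosen so that each block subproblem has a closed-form minimizer, making every sweep an exact partial minimization); it only illustrates the basic iteration, not the paper's line-search-based schemes or its convergence analysis.

```python
import numpy as np

# Two-block nonlinear Gauss-Seidel: each sweep exactly minimizes the objective
# over one block of variables while the other block is held fixed.
rng = np.random.default_rng(1)
n = 4
P = 3.0 * np.eye(n)                     # curvature of the x-block
Q = 3.0 * np.eye(n)                     # curvature of the y-block
R = 0.3 * rng.standard_normal((n, n))   # coupling between the blocks
c = rng.standard_normal(n)
d = rng.standard_normal(n)

def f(x, y):
    return 0.5 * x @ P @ x + 0.5 * y @ Q @ y + x @ R @ y + c @ x + d @ y

x = np.zeros(n)
y = np.zeros(n)
for k in range(50):
    # x-step: minimize f(., y)  ->  P x + R y + c = 0
    x = np.linalg.solve(P, -(R @ y + c))
    # y-step: minimize f(x, .)  ->  Q y + R'x + d = 0
    y = np.linalg.solve(Q, -(R.T @ x + d))

# Stationarity check: the full gradient should (approximately) vanish.
grad = np.concatenate([P @ x + R @ y + c, Q @ y + R.T @ x + d])
print("objective value:", f(x, y))
print("||gradient|| =", np.linalg.norm(grad))
```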