
    Probabilistic Logarithmic-Space Algorithms for Laplacian Solvers

    A recent series of breakthroughs initiated by Spielman and Teng culminated in the construction of nearly linear time Laplacian solvers, which approximate the solution of a linear system Lx = b, where L is the normalized Laplacian of an undirected graph. In this paper we study the space complexity of the problem. Surprisingly, we are able to give a probabilistic logspace algorithm solving the problem. We further extend the algorithm to other families of graphs, such as Eulerian graphs (and directed regular graphs) and graphs that mix in polynomial time. Our approach is to pseudo-invert the Laplacian: we first "peel off" the problematic kernel of the operator and then approximate the inverse of the remaining part by a Taylor series. We approximate the Taylor series using earlier work together with the special structure of the problem. For directed graphs, the analysis exploits the Jordan normal form and results from the theory of matrix functions.
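
    A minimal classical sketch of the peel-and-expand strategy (illustrative only: the dense matrices below obviously do not fit in logarithmic space, and all names are mine): project b off the kernel of the normalized Laplacian, then apply the pseudo-inverse through a truncated Taylor (Neumann) series of the lazy walk matrix.

```python
import numpy as np

def apply_laplacian_pinv(W, b, num_terms=2000):
    """Approximate L^+ b for the normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
    of a connected undirected graph with adjacency matrix W."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A = D_inv_sqrt @ W @ D_inv_sqrt        # normalized adjacency, spectrum in [-1, 1]
    n = len(b)
    # "Peel off" the kernel: ker(L) is spanned by the unit vector parallel to D^{1/2} 1.
    v = np.sqrt(d) / np.linalg.norm(np.sqrt(d))
    b_perp = b - v * (v @ b)
    # The lazy walk matrix has spectrum in [0, 1], and I - A_lazy = L / 2,
    # so L^+ b = (1/2) * sum_{i >= 0} A_lazy^i b on the complement of the kernel.
    A_lazy = 0.5 * (np.eye(n) + A)
    x, term = np.zeros(n), b_perp.astype(float)
    for _ in range(num_terms):
        x += term
        term = A_lazy @ term
    return 0.5 * x
```

    The number of terms needed for a given accuracy grows with the inverse spectral gap of the walk, which is where the polynomial-mixing assumption above enters.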

    A Complete Characterization of Unitary Quantum Space

    Motivated by understanding the power of quantum computation with a restricted number of qubits, we give two complete characterizations of unitary quantum space-bounded computation. First, we show that approximating an element of the inverse of a well-conditioned, efficiently encoded 2^k(n) x 2^k(n) matrix is complete for the class of problems solvable by quantum circuits acting on O(k(n)) qubits with all measurements at the end of the computation. Similarly, estimating the minimum eigenvalue of an efficiently encoded Hermitian 2^k(n) x 2^k(n) matrix is also complete for this class. In the logspace case, our results improve on previous results of Ta-Shma by giving new space-efficient quantum algorithms that avoid intermediate measurements, as well as matching hardness results. Additionally, as a consequence we show that preciseQMA, the version of QMA with exponentially small completeness-soundness gap, is equal to PSPACE. Thus, the problem of estimating the minimum eigenvalue of a local Hamiltonian to inverse-exponential precision is PSPACE-complete, which we show holds even in the frustration-free case. Finally, we use this characterization to give a provable setting in which the ability to prepare the ground state of a local Hamiltonian is more powerful than the ability to prepare PEPS states. Interestingly, by suitably changing the parameterization of either of these problems, we can completely characterize the power of quantum computation with simultaneously bounded time and space.
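
    To make the complete problem concrete, here is a brute-force classical reference point (my illustrative example, not from the paper): a transverse-field Ising Hamiltonian on k qubits is a 2^k x 2^k Hermitian matrix specified succinctly by O(k) Pauli terms, and its minimum eigenvalue is computed below by full diagonalization, at exponential classical cost. The characterization says a unitary quantum machine needs only O(k) qubits to estimate this value to polynomial precision, while inverse-exponential precision makes the problem PSPACE-complete.

```python
import numpy as np
from functools import reduce

# Single-qubit Pauli matrices
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron_all(ops):
    """Tensor product of a list of single-qubit operators."""
    return reduce(np.kron, ops)

def transverse_field_ising(k, g=1.0):
    """A 2^k x 2^k Hermitian matrix encoded succinctly by O(k) local terms."""
    H = np.zeros((2**k, 2**k))
    for i in range(k - 1):                  # nearest-neighbor ZZ couplings
        ops = [I2] * k; ops[i] = Z; ops[i + 1] = Z
        H -= kron_all(ops)
    for i in range(k):                      # transverse field on each qubit
        ops = [I2] * k; ops[i] = X
        H -= g * kron_all(ops)
    return H

H = transverse_field_ising(8)
print(np.linalg.eigvalsh(H)[0])  # minimum eigenvalue of a 256 x 256 matrix
```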

    Eigenmeasures and stochastic diagonalization of bilinear maps

    A new stochastic approach is presented for understanding general spectral-type problems for (not necessarily linear) functions between topological spaces. In order to show its potential applications, we construct the theory for the case of bilinear forms acting on pairs consisting of a Banach space and its dual. Our method consists of using integral representations of bilinear maps that satisfy particular domination properties, which is shown to be equivalent to having a certain spectral structure. Thus, we develop a measure-based technique for the characterization of bilinear operators having a spectral representation, introducing the notion of an eigenmeasure, which becomes the central tool of our formalism. Specific applications are provided for operators between finite- and infinite-dimensional linear spaces.

    Funding: Ministerio de Ciencia, Innovación y Universidades; Agencia Estatal de Investigación; FEDER. Grant/Award Number: MTM2016-77054-C2-1-P.

    Erdogan, E.; Sánchez Pérez, E. A. (2021). Eigenmeasures and stochastic diagonalization of bilinear maps. Mathematical Methods in the Applied Sciences, 44(6), 5021-5039. https://doi.org/10.1002/mma.7085
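
    For orientation (this classical analogue is my addition, not the paper's formulation): in the finite-dimensional symmetric case, the spectral representation that the eigenmeasure notion generalizes is the diagonalization of a bilinear form against an orthonormal eigenbasis, which can already be read as integration against a spectral measure.

```latex
% Classical analogue: for a symmetric matrix T with orthonormal eigenbasis
% (e_i) and eigenvalues (\lambda_i), the induced bilinear form diagonalizes as
B(x,y) = \langle Tx, y\rangle
       = \sum_{i=1}^{n} \lambda_i\, \langle x, e_i\rangle\, \langle e_i, y\rangle
       = \int_{\sigma(T)} \lambda \,\mathrm{d}\langle E_\lambda x,\, y\rangle ,
% where E is the projection-valued spectral measure of T; eigenmeasures play
% the role of such measures when no eigenvector decomposition is available.
```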

    Quantum Algorithms for Differential Equations

    This thesis describes quantum algorithms for Hamiltonian simulation, ordinary differential equations (ODEs), and partial differential equations (PDEs).

    Product formulas are used to simulate Hamiltonians that can be expressed as a sum of terms, each of which can be simulated individually. By simulating each of these terms in sequence, the net effect approximately simulates the total Hamiltonian. We find that the error of product formulas can be improved by randomizing the order in which the Hamiltonian terms are simulated. We prove that this approach is asymptotically better than ordinary product formulas and present numerical comparisons for small numbers of qubits.

    The ODE algorithm applies to the initial value problem for time-independent first-order linear ODEs. We approximate the propagator of the ODE by a truncated Taylor series, and we encode the initial value problem in a large linear system. We solve this linear system with a quantum linear system algorithm (QLSA) and perform a post-selective measurement on its output; the resulting state encodes the solution to the initial value problem. We prove that our algorithm is asymptotically optimal with respect to several system parameters.

    The PDE algorithms apply the finite difference method (FDM) to Poisson's equation, the wave equation, and the Klein-Gordon equation. We use high-order FDM approximations of the Laplacian operator to develop linear systems for Poisson's equation in cubic volumes under periodic, Neumann, and Dirichlet boundary conditions. Using QLSAs, we output states encoding solutions to Poisson's equation. We prove that our algorithm is exponentially faster with respect to the spatial dimension than analogous classical algorithms. We also consider how high-order Laplacian approximations can be used to simulate the wave and Klein-Gordon equations, examine under what conditions it suffices to use Hamiltonian simulation for time evolution, and propose an algorithm for these cases that uses QLSAs for state preparation and post-processing.
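
    As a toy numerical illustration of the randomized-ordering idea (small random Hermitian matrices stand in for Hamiltonian terms; this reproduces neither the thesis's exact scheme nor its error analysis), one can compare a fixed-order first-order product formula with one that randomly permutes the term ordering in each step:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

# H = H1 + H2 + H3, where each term is easy to exponentiate on its own
terms = [random_hermitian(8) for _ in range(3)]
H, t, r = sum(terms), 1.0, 100
exact = expm(-1j * t * H)

def product_formula(terms, t, r, randomize):
    """r steps of a first-order product formula; optionally permute
    the order of the term exponentials independently in each step."""
    U = np.eye(terms[0].shape[0], dtype=complex)
    for _ in range(r):
        order = rng.permutation(len(terms)) if randomize else range(len(terms))
        for j in order:
            U = expm(-1j * (t / r) * terms[j]) @ U
    return U

for randomize in (False, True):
    err = np.linalg.norm(product_formula(terms, t, r, randomize) - exact, 2)
    print("randomized" if randomize else "deterministic", err)
```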

    Constructive Approximation and Learning by Greedy Algorithms

    This thesis develops several kernel-based greedy algorithms for different machine learning problems and analyzes their theoretical and empirical properties. Greedy approaches have been used extensively in the past for tackling problems in combinatorial optimization where finding even a feasible solution can be computationally hard (i.e., not solvable in polynomial time). A key feature of greedy algorithms is that a solution is constructed recursively from the smallest constituent parts: in each step of the constructive process a component is added to the partial solution from the previous step, and thus the size of the optimization problem is reduced. The selected components are given by optimization problems that are simpler and easier to solve than the original problem. As such schemes are typically fast at constructing a solution, they can be very effective on complex optimization problems where finding an optimal or good solution has a high computational cost. Moreover, greedy solutions are rather intuitive, and the schemes themselves are simple to design and easy to implement. There is a large class of problems for which greedy schemes generate an optimal solution or a good approximation of the optimum.

    In the first part of the thesis, we develop two deterministic greedy algorithms for optimization problems in which a solution is given by a set of functions mapping an instance space to the reals. The first approach facilitates data understanding through interactive visualization by providing means for experts to incorporate their domain knowledge into otherwise static kernel principal component analysis. This is achieved by greedily constructing embedding directions that maximize the variance at data points (unexplained by the previously constructed embedding directions) while adhering to specified domain-knowledge constraints. The second deterministic greedy approach is a supervised feature construction method capable of addressing the problem of kernel choice. The goal of the approach is to construct a feature representation for which a set of linear hypotheses is of sufficient capacity: large enough to contain a satisfactory solution to the considered problem and small enough to allow good generalization from a small number of training examples. The approach mimics functional gradient descent and constructs features by fitting squared-error residuals. We show that the constructive process is consistent and provide conditions under which it converges to the optimal solution.

    In the second part of the thesis, we investigate two problems for which deterministic greedy schemes can fail to find an optimal solution or a good approximation of the optimum. This happens as a result of making a sequence of choices that take into account only the immediate reward, without considering the consequences for future decisions. To address this shortcoming of deterministic greedy schemes, we propose two efficient randomized greedy algorithms that are guaranteed to find effective solutions to the corresponding problems. The first of the two approaches provides a means to scale kernel methods to problems with millions of instances. An approach frequently used in practice for this type of problem is the Nyström method for low-rank approximation of kernel matrices; a crucial step in this method is the choice of landmarks, which determines the quality of the approximation. We tackle this problem with a randomized greedy algorithm based on the K-means++ cluster seeding scheme, and provide a theoretical and empirical study of its effectiveness (a sketch of the seeding step follows this abstract). In the second problem, the goal is to find a set of objects from a structured space that are likely to exhibit an unknown target property. This discrete optimization problem is of significant interest to cyclic discovery processes such as de novo drug design. We propose to address it with an adaptive Metropolis–Hastings approach that samples candidates from the posterior distribution of structures conditioned on having the target property. The proposed constructive scheme defines a consistent random process, and our empirical evaluation demonstrates its effectiveness across several different application domains.
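
    The landmark-selection step lends itself to a compact sketch. The following is a minimal version of K-means++ style seeding for Nyström landmarks (function names and the RBF kernel are my illustrative choices, not the thesis's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeanspp_landmarks(X, m):
    """K-means++ seeding: each new landmark is sampled with probability
    proportional to its squared distance from the nearest landmark so far."""
    idx = [int(rng.integers(len(X)))]
    d2 = ((X - X[idx[0]]) ** 2).sum(axis=1)
    for _ in range(m - 1):
        j = int(rng.choice(len(X), p=d2 / d2.sum()))
        idx.append(j)
        d2 = np.minimum(d2, ((X - X[j]) ** 2).sum(axis=1))
    return np.array(idx)

def nystrom_factors(X, m, kernel):
    """Nystrom approximation K ~= C @ W_pinv @ C.T with m landmarks."""
    idx = kmeanspp_landmarks(X, m)
    C = kernel(X, X[idx])                        # n x m cross-kernel block
    W_pinv = np.linalg.pinv(kernel(X[idx], X[idx]))  # m x m landmark block, pseudo-inverted
    return C, W_pinv

# Toy usage with an RBF kernel on synthetic data.
rbf = lambda A, B: np.exp(-((A[:, None] - B[None]) ** 2).sum(-1))
X = rng.standard_normal((500, 5))
C, W_pinv = nystrom_factors(X, 20, rbf)
K_approx = C @ W_pinv @ C.T   # low-rank surrogate for the full kernel matrix
```

    Sampling new landmarks proportionally to squared distance lets the scheme recover from early poor picks, which is precisely the failure mode of a purely deterministic greedy selection described above.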