    Quantum Generative Adversarial Networks for Learning and Loading Random Distributions

    Quantum algorithms have the potential to outperform their classical counterparts in a variety of tasks. Realizing this advantage often requires the ability to load classical data efficiently into quantum states. However, the best known methods require $\mathcal{O}(2^n)$ gates to load an exact representation of a generic data structure into an $n$-qubit state. This scaling can easily dominate the complexity of a quantum algorithm and thereby impair the potential quantum advantage. Our work presents a hybrid quantum-classical algorithm for efficient, approximate quantum state loading. More precisely, we use quantum Generative Adversarial Networks (qGANs) to facilitate efficient learning and loading of generic probability distributions -- implicitly given by data samples -- into quantum states. Through the interplay of a quantum channel, such as a variational quantum circuit, and a classical neural network, the qGAN can learn a representation of the probability distribution underlying the data samples and load it into a quantum state. The loading requires $\mathcal{O}(\mathrm{poly}(n))$ gates and can thus enable the use of potentially advantageous quantum algorithms, such as Quantum Amplitude Estimation. We implement the qGAN distribution learning and loading method with Qiskit and test it using quantum simulation as well as actual quantum processors provided by the IBM Q Experience. Furthermore, we employ quantum simulation to demonstrate the use of the trained quantum channel in a quantum finance application.
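    A minimal, purely classical sketch of the adversarial loop may make the setup concrete: a tiny two-qubit variational circuit (statevector-simulated in NumPy) plays the generator, a logistic regression plays the classical discriminator, and finite-difference gradients stand in for the parameter-shift rule. The ansatz, target distribution, and learning rates below are illustrative assumptions, not the paper's Qiskit configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def generator_probs(t):
    """Measurement probabilities of a 2-qubit ansatz: RY layer, CZ, RY layer."""
    psi = np.zeros(4)
    psi[0] = 1.0
    psi = np.kron(ry(t[0]), ry(t[1])) @ psi
    psi = np.diag([1.0, 1.0, 1.0, -1.0]) @ psi      # CZ entangler
    psi = np.kron(ry(t[2]), ry(t[3])) @ psi
    return psi ** 2                                  # real amplitudes -> |psi|^2

xs = np.arange(4)                                    # basis states as integers

def disc(x, w):
    """Tiny classical discriminator: logistic regression on x and x^2."""
    return 1.0 / (1.0 + np.exp(-(w[0] + w[1] * x + w[2] * x * x)))

p_real = np.array([0.05, 0.20, 0.50, 0.25])          # stand-in "data" distribution

def d_loss(w, t):
    d = disc(xs, w)
    return -(p_real * np.log(d + 1e-12) + generator_probs(t) * np.log(1 - d + 1e-12)).sum()

def g_loss(t, w):
    return -(generator_probs(t) * np.log(disc(xs, w) + 1e-12)).sum()

def fd_grad(f, v, eps=1e-4):
    """Central finite differences (a stand-in for parameter-shift gradients)."""
    g = np.zeros_like(v)
    for i in range(v.size):
        e = np.zeros_like(v)
        e[i] = eps
        g[i] = (f(v + e) - f(v - e)) / (2 * eps)
    return g

theta, w = rng.normal(size=4), rng.normal(size=3)
for _ in range(2000):                                # alternate discriminator/generator
    w = w - 0.05 * fd_grad(lambda v: d_loss(v, theta), w)
    theta = theta - 0.05 * fd_grad(lambda v: g_loss(v, w), theta)

print("generator:", np.round(generator_probs(theta), 3), " target:", p_real)
```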

    Rational approximation for solving an implicitly given Colebrook flow friction equation

    The empirical logarithmic Colebrook equation for hydraulic resistance in pipes defines the unknown flow friction factor implicitly. Its explicit approximations, used to avoid iterative computation, should be accurate but also computationally efficient. We present a rational approximate procedure that completely avoids transcendental functions, such as the logarithm and non-integer powers, which require additional floating-point operations in the processor; instead, we use only rational expressions that are evaluated directly in the processor unit. The rational approximation was found using a combination of a Padé approximant and artificial intelligence (symbolic regression). Numerical experiments in Matlab using 2 million quasi-Monte Carlo samples indicate that the relative error of this new rational approximation does not exceed 0.866%. Moreover, these numerical experiments show that the novel rational approximation is approximately two times faster than the exact solution given by the Wright omega function.
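    The paper's rational coefficients are not reproduced here, but a short sketch of the implicit equation the approximation targets may be useful: the standard Colebrook relation solved by fixed-point iteration, which is exactly the transcendental (logarithmic) computation the rational approximant is designed to replace.

```python
import math

def colebrook(re, eps_rel, tol=1e-12, max_iter=50):
    """Darcy friction factor f from the implicit Colebrook equation
        1/sqrt(f) = -2*log10(eps_rel/3.7 + 2.51/(re*sqrt(f))),
    solved by fixed-point iteration on y = 1/sqrt(f)."""
    y = 5.0                                   # typical turbulent-flow starting guess
    for _ in range(max_iter):
        y_new = -2.0 * math.log10(eps_rel / 3.7 + 2.51 * y / re)
        if abs(y_new - y) < tol:
            y = y_new
            break
        y = y_new
    return 1.0 / (y * y)

# example: Re = 1e5, relative roughness eps/D = 1e-4
print(colebrook(1e5, 1e-4))
```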

    Sampling from Gaussian graphical models using subgraph perturbations

    The problem of efficiently drawing samples from a Gaussian graphical model or Gaussian Markov random field is studied. We introduce the subgraph perturbation sampling algorithm, which makes use of any pre-existing tractable inference algorithm for a subgraph by perturbing this algorithm so as to yield asymptotically exact samples for the intended distribution. The subgraph can have any structure for which efficient inference algorithms exist: for example, tree-structured, low tree-width, or having a small feedback vertex set. The experimental results demonstrate that this subgraph perturbation algorithm efficiently yields accurate samples for many graph topologies.
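    For context, a sketch of the baseline the paper improves on: exact sampling from a Gaussian Markov random field in information form, N(mu, J^{-1}) with mu = J^{-1} h, via a global Cholesky factorization. The subgraph perturbation algorithm avoids exactly this global factorization by repeatedly running a tractable subgraph solver with injected noise; the code below shows only the baseline, with an illustrative chain-structured precision matrix.

```python
import numpy as np

def sample_gmrf(J, h, rng):
    """Exact sample from N(mu, J^{-1}) in information form (mu = J^{-1} h)
    via a global Cholesky factorization J = L L^T; the paper's subgraph
    perturbation algorithm is designed to avoid this global step."""
    L = np.linalg.cholesky(J)
    mu = np.linalg.solve(J, h)
    z = rng.standard_normal(J.shape[0])
    return mu + np.linalg.solve(L.T, z)        # Cov = L^{-T} L^{-1} = J^{-1}

rng = np.random.default_rng(1)
# small chain-structured (tree) precision matrix for illustration
J = np.diag([2.0] * 5) + np.diag([-0.8] * 4, 1) + np.diag([-0.8] * 4, -1)
h = np.zeros(5)

X = np.array([sample_gmrf(J, h, rng) for _ in range(20000)])
print(np.round(np.cov(X.T), 2))                # should be close to J^{-1}
print(np.round(np.linalg.inv(J), 2))
```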

    Low-Rank Matrices on Graphs: Generalized Recovery & Applications

    Many real-world datasets possess a linear or non-linear low-rank structure in a very low-dimensional space. Unfortunately, one often has little or no information about the geometry of that space, resulting in a highly under-determined recovery problem. Under certain circumstances, state-of-the-art algorithms provide exact recovery for linear low-rank structures, but at the expense of poorly scalable nuclear-norm minimization; the case of non-linear structures remains unresolved. We revisit the problem of low-rank recovery from a totally different perspective, involving graphs which encode pairwise similarity between the data samples and between the features. Surprisingly, our analysis confirms that it is possible to recover many approximate linear and non-linear low-rank structures, with recovery guarantees, using a set of highly scalable and efficient algorithms. We call such data matrices Low-Rank Matrices on Graphs and show that many real-world datasets satisfy this assumption approximately due to underlying stationarity. Our detailed theoretical and experimental analysis unveils the power of the simple, yet novel, recovery framework Fast Robust PCA on Graphs.
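    A rough sketch of a graph-regularized recovery step of this flavor (an illustrative simplification, not the paper's exact Fast Robust PCA on Graphs algorithm, which uses an l1 fidelity term solved proximally): minimize ||X - U||_F^2 + g1 tr(U Ls U^T) + g2 tr(U^T Lf U), where Ls and Lf are Laplacians of k-NN graphs over samples and features, by plain gradient descent.

```python
import numpy as np

def knn_laplacian(Z, k=5):
    """Combinatorial Laplacian of a symmetrized k-NN similarity graph over
    the rows of Z (Gaussian weights, bandwidth = median squared distance)."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (np.median(d2) + 1e-12))
    np.fill_diagonal(W, 0.0)
    thr = -np.sort(-W, axis=1)[:, k - 1][:, None]   # k-th largest weight per row
    W = np.where(W >= thr, W, 0.0)
    W = np.maximum(W, W.T)                           # symmetrize
    return np.diag(W.sum(1)) - W

def low_rank_on_graphs(X, gamma1=1.0, gamma2=1.0, iters=500):
    """Gradient descent on ||X-U||_F^2 + g1*tr(U Ls U^T) + g2*tr(U^T Lf U)."""
    Ls = knn_laplacian(X.T)                          # graph over samples (columns)
    Lf = knn_laplacian(X)                            # graph over features (rows)
    step = 1.0 / (2.0 * (1.0 + gamma1 * np.linalg.norm(Ls, 2)
                             + gamma2 * np.linalg.norm(Lf, 2)))
    U = X.copy()
    for _ in range(iters):
        U -= step * (2 * (U - X) + 2 * gamma1 * U @ Ls + 2 * gamma2 * Lf @ U)
    return U

rng = np.random.default_rng(3)
X = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 50))   # rank-3 data
X += 0.1 * rng.standard_normal(X.shape)                           # plus noise
U = low_rank_on_graphs(X)
print(np.round(np.linalg.svd(U, compute_uv=False)[:6], 2))        # leading singular values dominate
```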

    Smoothed Analysis of Tensor Decompositions

    Low-rank tensor decompositions are a powerful tool for learning generative models, and uniqueness results give them a significant advantage over matrix decomposition methods. However, tensors pose significant algorithmic challenges, and tensor analogs of much of the matrix-algebra toolkit are unlikely to exist because of hardness results. Efficient decomposition in the overcomplete case (where rank exceeds dimension) is particularly challenging. We introduce a smoothed analysis model for studying these questions and develop an efficient algorithm for tensor decomposition in the highly overcomplete case (rank polynomial in the dimension). In this setting, we show that our algorithm is robust to inverse polynomial error -- a crucial property for applications in learning, since we are only allowed a polynomial number of samples. While algorithms are known for exact tensor decomposition in some overcomplete settings, our main contribution is in analyzing their stability in the framework of smoothed analysis. Our main technical contribution is to show that tensor products of perturbed vectors are linearly independent in a robust sense (i.e., the associated matrix has singular values that are at least an inverse polynomial). This key result paves the way for applying tensor methods to learning problems in the smoothed setting. In particular, we use it to obtain results for learning multi-view models and mixtures of axis-aligned Gaussians where there are many more "components" than dimensions. The assumption here is that the model is not adversarially chosen, formalized by a perturbation of the model parameters. We believe this is an appealing way to analyze realistic instances of learning problems, since this framework allows us to overcome many of the usual limitations of using tensor methods.
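    The key technical claim (robust linear independence of tensor products of perturbed vectors) is easy to probe numerically. A small sketch, with dimensions and perturbation magnitude chosen arbitrarily: draw an overcomplete set of vectors, perturb them, form the matrix whose columns are the flattened products a_i (x) a_i, and check that its smallest singular value stays bounded away from zero.

```python
import numpy as np

rng = np.random.default_rng(0)
d, R, rho = 10, 40, 1e-2        # dimension, rank (overcomplete: d < R <= d^2), noise

A = rng.standard_normal((d, R))                 # "adversarial" base vectors
A_pert = A + rho * rng.standard_normal((d, R))  # smoothed-analysis perturbation

# columns are flattened tensor products a_i (x) a_i (a Khatri-Rao product)
M = np.stack([np.kron(A_pert[:, i], A_pert[:, i]) for i in range(R)], axis=1)
M /= np.linalg.norm(M, axis=0)                  # normalize columns

# robust linear independence: sigma_min stays bounded away from zero
print("sigma_min:", np.linalg.svd(M, compute_uv=False)[-1])
```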

    Ordering Quantiles through Confidence Statements

    Ranking variables according to their relevance for predicting an outcome is an important task in biomedicine. For instance, such a ranking can be used to select a small number of genes on which to run further, more sophisticated experiments. A nonparametric method called Quor is designed to provide a confidence value for the ordering of arbitrary quantiles of different populations, using independent samples. This confidence may provide insight into possible differences among groups and yields a ranking of importance for the variables. Computations are efficient and use exact distributions, with no need for asymptotic considerations. Experiments with simulated data and with multiple real -omics data sets are performed, and they show advantages and disadvantages of the method. Quor makes no assumptions other than the independence of the samples, so it may be a better option when the assumptions of other methods cannot be verified. The software is publicly available on CRAN.
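    The exact-distribution ingredient is the classical fact that, for a continuous population, the number of samples falling below the q-quantile is Binomial(n, q), which yields exact confidence statements about where the quantile sits relative to the order statistics. The sketch below combines these per-population statements into a confidence for the ordering of two quantiles; it is one plausible realization of the idea, not necessarily the published Quor procedure (available on CRAN).

```python
import numpy as np
from scipy.stats import binom

def quantile_order_confidence(a, b, q=0.5):
    """Confidence that the q-quantile of the population behind `a` is below
    that of the population behind `b`, from independent samples of continuous
    distributions. Since #{a_k < xi_q^A} ~ Binomial(n, q):
      P(xi_q^A <= a_(i)) = BinCDF(i-1; n, q)
      P(xi_q^B >= b_(j)) = 1 - BinCDF(j-1; m, q)
    and a_(i) <= b_(j) then implies xi_q^A <= xi_q^B."""
    a, b = np.sort(a), np.sort(b)
    n, m = len(a), len(b)
    best = 0.0
    for i in range(1, n + 1):
        pa = binom.cdf(i - 1, n, q)              # quantile of A below a_(i)
        j = np.searchsorted(b, a[i - 1])         # first 0-based index with b[j] >= a_(i)
        if j >= m:
            continue
        pb = 1.0 - binom.cdf(j, m, q)            # quantile of B above b_(j+1)
        best = max(best, pa * pb)                # independent samples -> product
    return best

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 80)
b = rng.normal(1.0, 1.0, 80)
print(quantile_order_confidence(a, b))           # medians differ -> high confidence
```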