
    Supervised learning with quantum-enhanced feature spaces

    Machine learning and quantum computing are two technologies, each with the potential to alter how computation is performed to address previously untenable problems. Kernel methods for machine learning are ubiquitous in pattern recognition, with support vector machines (SVMs) being the best-known method for classification problems. However, there are limitations to the successful solution of such problems when the feature space becomes large and the kernel functions become computationally expensive to estimate. A core element of the computational speed-ups afforded by quantum algorithms is the exploitation of an exponentially large quantum state space through controllable entanglement and interference. Here, we propose and experimentally implement two novel methods on a superconducting processor. Both methods represent the feature space of a classification problem by a quantum state, taking advantage of the large dimensionality of quantum Hilbert space to obtain an enhanced solution. One method, the quantum variational classifier, builds on [1,2] and operates by using a variational quantum circuit to classify a training set in direct analogy to conventional SVMs. In the second, a quantum kernel estimator, we estimate the kernel function and optimize the classifier directly. The two methods present a new class of tools for exploring the applications of noisy intermediate-scale quantum computers [3] to machine learning. Comment: Fixed typos, added figures and discussion about quantum error mitigation
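
    The quantum kernel estimator idea can be mimicked classically at toy scale: embed each point via a feature map, take K(x, x') = |<phi(x)|phi(x')>|^2 as the kernel, and hand the Gram matrix to an ordinary SVM. The sketch below is a minimal assumed illustration using NumPy and scikit-learn; the product-state feature map is an illustrative stand-in, not the entangling circuit implemented in the paper.

        # Toy classical simulation of a quantum kernel estimator.
        # The feature map is an assumed illustrative product state,
        # not the paper's circuit.
        import numpy as np
        from sklearn.svm import SVC

        def feature_map(x):
            """Map a 2-d point to a 2-qubit product state (a 4-d unit vector)."""
            qubits = [np.array([np.cos(xi / 2), np.sin(xi / 2)]) for xi in x]
            return np.kron(qubits[0], qubits[1])

        def quantum_kernel(X1, X2):
            """Gram matrix K[i, j] = |<phi(x1_i)|phi(x2_j)>|^2."""
            S1 = np.array([feature_map(x) for x in X1])
            S2 = np.array([feature_map(x) for x in X2])
            return np.abs(S1 @ S2.T) ** 2

        rng = np.random.default_rng(0)
        X = rng.uniform(0, np.pi, size=(40, 2))
        y = (np.sin(X[:, 0]) * np.sin(X[:, 1]) > 0.5).astype(int)  # toy labels

        K = quantum_kernel(X, X)
        clf = SVC(kernel="precomputed").fit(K, y)
        print("training accuracy:", clf.score(K, y))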

    On the normality of p-ary bent functions

    Depending on the parity of $n$ and the regularity of a bent function $f$ from $\mathbb{F}_p^n$ to $\mathbb{F}_p$, $f$ can be affine on a subspace of dimension at most $n/2$, $(n-1)/2$ or $n/2 - 1$. We point out that many $p$-ary bent functions attain this bound, and it seems not easy to find examples for which one can show a different behaviour. This resembles the situation for Boolean bent functions, many of which are (weakly) $n/2$-normal, i.e. affine on an $n/2$-dimensional subspace. However, applying an algorithm by Canteaut et al., some Boolean bent functions were shown to be not $n/2$-normal. We develop an algorithm for testing normality for functions from $\mathbb{F}_p^n$ to $\mathbb{F}_p$. Applying the algorithm, we show for some bent functions in small dimension that they do not attain the bound on normality. Applying the direct sum of functions, this yields bent functions with this property in infinitely many dimensions. Comment: 13 pages
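
    To make the normality test concrete, here is a brute-force sketch for very small $p$ and $n$: enumerate candidate $k$-dimensional subspaces by their spans and check affinity via the identity f(u+v) + f(0) = f(u) + f(v) (mod p), which characterizes affine restrictions over a prime field. This is an assumed illustration and far less refined than the algorithm developed in the paper.

        # Brute-force normality check for functions f : F_p^n -> F_p,
        # feasible only for very small parameters; illustrative sketch.
        from itertools import combinations, product

        def subspace_span(basis, p, n):
            # All F_p-linear combinations of the basis vectors.
            span = set()
            for coeffs in product(range(p), repeat=len(basis)):
                span.add(tuple(sum(c * b[i] for c, b in zip(coeffs, basis)) % p
                               for i in range(n)))
            return span

        def is_affine_on(f, span, p, n):
            # f is affine on a subspace U iff f(u+v) + f(0) == f(u) + f(v) (mod p).
            c = f((0,) * n)
            pts = list(span)
            return all((f(tuple((a + b) % p for a, b in zip(u, v))) + c) % p
                       == (f(u) + f(v)) % p
                       for u in pts for v in pts)

        def is_normal(f, p, n, k):
            # Try every k-set of nonzero vectors as a candidate basis.
            nonzero = list(product(range(p), repeat=n))[1:]
            for basis in combinations(nonzero, k):
                span = subspace_span(basis, p, n)
                if len(span) == p ** k and is_affine_on(f, span, p, n):
                    return True
            return False

        # Toy check: x0*x1 + x2*x3 over F_3^4 vanishes (hence is affine) on the
        # 2-dimensional subspace spanned by e0 and e2, so this prints True.
        p, n = 3, 4
        f = lambda x: (x[0] * x[1] + x[2] * x[3]) % p
        print(is_normal(f, p, n, k=2))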

    The Near Field Refractor

    We present an abstract method in the setting of compact metric spaces which is applied to solve a number of problems in geometric optics. In particular, we solve the one-source near field refraction problem. That is, we construct surfaces separating two homogeneous media with different refractive indices that refract radiation emanating from the origin into a target domain contained in an (n-1)-dimensional hypersurface. The input and output energy are prescribed. This implies the existence of lenses focusing radiation in a prescribed manner. Comment: 39 pages, 4 figures, Annales de l'Institut Henri Poincare (C) Analyse Non Lineaire (to appear). Geometric optics, lens design, measure equations, Descartes ovals, Monge-Ampere type equation
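
    For readers less familiar with the optical setting, the sketch below evaluates the textbook vector form of Snell's law at a single interface between media of refractive indices n1 and n2. It illustrates only the local refraction law that the constructed surfaces must obey pointwise, not the paper's existence argument.

        # Vector form of Snell's law: given an incident unit direction d, a unit
        # surface normal nu (pointing into the incidence medium), and refractive
        # indices n1, n2, return the refracted unit direction, or None for total
        # internal reflection. Textbook physics, included only as illustration.
        import numpy as np

        def refract(d, nu, n1, n2):
            r = n1 / n2
            cos_i = -np.dot(d, nu)
            sin2_t = r * r * (1.0 - cos_i * cos_i)
            if sin2_t > 1.0:
                return None  # total internal reflection, no refracted ray
            cos_t = np.sqrt(1.0 - sin2_t)
            return r * d + (r * cos_i - cos_t) * nu

        d = np.array([0.0, -1.0])                  # ray travelling straight down
        nu = np.array([np.sin(0.2), np.cos(0.2)])  # slightly tilted interface normal
        print(refract(d, nu, n1=1.0, n2=1.5))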

    Existence of balanced functions that are not derivative of bent functions

    We disprove Tokareva's conjecture that any balanced Boolean function of appropriate degree is a derivative of some bent function. This result is based on new upper bounds for the numbers of bent and plateaued functions. Comment: 3 pages
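
    As a small illustration of the objects involved: the derivative of a Boolean function f in direction a is D_a f(x) = f(x) XOR f(x XOR a), and every nonzero derivative of a bent function is balanced; the disproved conjecture concerns the converse direction. The toy check below verifies this standard fact for the 4-variable bent function x0x1 XOR x2x3; it is not the paper's counting argument.

        # Check that all nonzero derivatives of a small bent function are
        # balanced (a standard fact, stated here only for illustration).
        from itertools import product

        def derivative(f, a):
            return lambda x: f(x) ^ f(tuple(xi ^ ai for xi, ai in zip(x, a)))

        n = 4
        bent = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])
        for a in product((0, 1), repeat=n):
            if any(a):
                d = derivative(bent, a)
                values = [d(x) for x in product((0, 1), repeat=n)]
                assert sum(values) == 2 ** (n - 1)  # balanced: exactly half ones
        print("all nonzero derivatives of x0x1 XOR x2x3 are balanced")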

    Upper bounds on the numbers of binary plateaued and bent functions

    The logarithm of the number of binary n-variable bent functions is asymptotically less than $(2^n)/3$ as n tends to infinity. Keywords: Boolean function, Walsh-Hadamard transform, plateaued function, bent function, upper bound
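
    The Walsh-Hadamard transform named in the keywords gives a direct bentness test: an n-variable f is bent if and only if every coefficient of the transform of (-1)^f has absolute value 2^(n/2). A minimal sketch, assuming NumPy:

        # Fast Walsh-Hadamard transform on the (+1/-1) truth table of a
        # Boolean function, followed by the spectral bentness criterion.
        import numpy as np

        def walsh_hadamard(signs):
            w = signs.astype(float).copy()
            h = 1
            while h < len(w):
                for i in range(0, len(w), 2 * h):
                    a = w[i:i + h].copy()
                    b = w[i + h:i + 2 * h].copy()
                    w[i:i + h] = a + b
                    w[i + h:i + 2 * h] = a - b
                h *= 2
            return w

        n = 4
        x = np.arange(2 ** n)
        f = (((x >> 3) & (x >> 2)) ^ ((x >> 1) & x)) & 1  # b3*b2 XOR b1*b0, a bent function
        spectrum = walsh_hadamard((-1.0) ** f)
        print("bent:", bool(np.all(np.abs(spectrum) == 2 ** (n // 2))))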

    Representing complex data using localized principal components with application to astronomical data

    Often the relation between the variables constituting a multivariate data space might be characterized by one or more of the terms: "nonlinear", "branched", "disconnected", "bended", "curved", "heterogeneous", or, more generally, "complex". In these cases, simple principal component analysis (PCA) as a tool for dimension reduction can fail badly. Of the many alternative approaches proposed so far, local approximations of PCA are among the most promising. This paper gives a short review of localized versions of PCA, focusing on local principal curves and local partitioning algorithms. Furthermore, we discuss projections other than the local principal components. When performing local dimension reduction for regression or classification problems, it is important to focus not only on the manifold structure of the covariates but also on the response variable(s). Local principal components only achieve the former, whereas localized regression approaches concentrate on the latter. Local projection directions derived from the partial least squares (PLS) algorithm offer an interesting trade-off between these two objectives. We apply these methods to several real data sets. In particular, we consider simulated astrophysical data from the future Galactic survey mission Gaia. Comment: 25 pages. In "Principal Manifolds for Data Visualization and Dimension Reduction", A. Gorban, B. Kegl, D. Wunsch, and A. Zinovyev (eds), Lecture Notes in Computational Science and Engineering, Springer, 2007, pp. 180-204, http://www.springer.com/dal/home/generic/search/results?SGWID=1-40109-22-173750210-
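
    A minimal sketch of the local-partitioning flavour of localized PCA discussed above: partition the data with k-means, then fit PCA separately inside each cluster. The scikit-learn calls are assumed stand-ins, not the authors' implementation.

        # Localized PCA via partitioning: cluster first, then fit a separate
        # one-component PCA inside each cluster of a noisy circle dataset.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        t = rng.uniform(0, 2 * np.pi, 300)
        X = np.column_stack([np.cos(t), np.sin(t)]) + rng.normal(0, 0.05, (300, 2))

        labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
        for k in range(6):
            pc = PCA(n_components=1).fit(X[labels == k])
            print(f"cluster {k}: local direction {pc.components_[0].round(2)}, "
                  f"explained variance {pc.explained_variance_ratio_[0]:.2f}")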