An Exact Quantum Polynomial-Time Algorithm for Simon's Problem
We investigate the power of quantum computers when they are required to
return an answer that is guaranteed to be correct after a time that is
upper-bounded by a polynomial in the worst case. We show that a natural
generalization of Simon's problem can be solved in this way, whereas previous
algorithms required quantum polynomial time in the expected sense only, without
upper bounds on the worst-case running time. This is achieved by generalizing
both Simon's and Grover's algorithms and combining them in a novel way. It
follows that there is a decision problem that can be solved in exact quantum
polynomial time, which would require expected exponential time on any classical
bounded-error probabilistic computer if the data is supplied as a black box.

Comment: 12 pages, LaTeX2e, no figures. To appear in Proceedings of the Fifth
Israeli Symposium on Theory of Computing and Systems (ISTCS'97).
On The Power of Exact Quantum Polynomial Time
We investigate the power of quantum computers when they are required to
return an answer that is guaranteed correct after a time that is upper-bounded
by a polynomial in the worst case. In an oracle setting, it is shown that such
machines can solve problems that would take exponential time on any classical
bounded-error probabilistic computer.

Comment: 10 pages, LaTeX2e, no figures.
Lepskii Principle in Supervised Learning
In the setting of supervised learning using reproducing kernel methods, we
propose a data-dependent regularization parameter selection rule that is
adaptive to the unknown regularity of the target function and is optimal both
for the least-square (prediction) error and for the reproducing kernel Hilbert
space (reconstruction) norm error. It is based on a modified Lepskii balancing
principle using a varying family of norms.
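The balancing idea behind a Lepskii-type rule can be illustrated in a scalar toy form (this sketch is not the paper's rule, which balances a varying family of norms): order the candidate estimates by increasing regularization, so the noise bound shrinks while the bias grows, and keep the last candidate that stays within a constant multiple of the noise bound of every noisier predecessor. The function name, the constant c = 4, and the toy numbers below are illustrative assumptions.

```python
def lepskii_select(estimates, sigmas, c=4.0):
    """Scalar Lepskii balancing principle (toy sketch, not the paper's
    norm-family variant).

    estimates : candidates ordered by increasing regularization, so the
                noise bounds `sigmas` are non-increasing while bias grows.
    sigmas    : non-increasing noise bounds, one per candidate.
    Returns the largest index j such that estimate j lies within
    c * sigmas[i] of every earlier (noisier) estimate i < j.
    """
    best = 0
    for j in range(1, len(estimates)):
        if all(abs(estimates[j] - estimates[i]) <= c * sigmas[i]
               for i in range(j)):
            best = j
    return best

# Toy run: the last candidate is heavily over-regularized (biased),
# so the rule settles on index 2.
chosen = lepskii_select([1.3, 0.9, 0.95, 3.0], [0.8, 0.4, 0.2, 0.1])
```

The rule needs no knowledge of the bias, only the computable noise bounds, which is what makes it adaptive to the unknown regularity of the target.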
Quantum Amplitude Amplification and Estimation
Consider a Boolean function χ : X → {0, 1} that partitions the set X
between its good and bad elements, where x is good if χ(x) = 1 and bad
otherwise. Consider also a quantum algorithm A such that A|0⟩ is a quantum
superposition of the elements of X, and let a denote the probability that a
good element is produced if A|0⟩ is measured. If we repeat the process of
running A, measuring the output, and using χ to check the validity of the
result, we shall expect to repeat 1/a times on the average before a solution
is found. *Amplitude amplification* is a process that allows one to find a
good x after an expected number of applications of A and its inverse which
is proportional to 1/√a, assuming algorithm A makes no measurements. This is
a generalization of Grover's searching algorithm, in which A was restricted
to producing an equal superposition of all members of X and we had a promise
that a single x existed such that χ(x) = 1. Our algorithm works whether or
not the value of a is known ahead of time. In case the value of a is known,
we can find a good x after a number of applications of A and its inverse
which is proportional to 1/√a even in the worst case. We show that this
quadratic speedup can also be obtained for a large family of search problems
for which good classical heuristics exist. Finally, as our main result, we
combine ideas from Grover's and Shor's quantum algorithms to perform
*amplitude estimation*, a process that allows one to estimate the value of
a. We apply amplitude estimation to the problem of *approximate counting*,
in which we wish to estimate the number of x ∈ X such that χ(x) = 1. We
obtain optimal quantum algorithms in a variety of settings.

Comment: 32 pages, no figures.
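The quadratic speedup can be checked numerically: writing a = sin²θ, each amplification round rotates the state by 2θ toward the good subspace, so after k rounds the success probability is sin²((2k+1)θ). A minimal sketch of this arithmetic (the function name and the choice a = 1/1024 are illustrative, not from the paper):

```python
import math

def amplified_success_prob(a: float) -> tuple[int, float]:
    """Optimal round count and resulting success probability for
    amplitude amplification with initial success probability a.

    Writing a = sin(theta)^2, each round rotates the state by 2*theta,
    so after k rounds the success probability is sin((2k+1)*theta)^2.
    """
    theta = math.asin(math.sqrt(a))
    k = math.floor(math.pi / (4 * theta))  # roughly (pi/4) / sqrt(a) rounds
    return k, math.sin((2 * k + 1) * theta) ** 2

# Classical sampling needs about 1/a = 1024 runs of A on average;
# amplification needs only on the order of 1/sqrt(a) = 32 applications.
k, p = amplified_success_prob(1 / 1024)
```

Note that rotating past the optimal k *lowers* the success probability again, which is why knowing a (or estimating it) matters for the worst-case guarantee.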
Convergence analysis of Tikhonov regularization for non-linear statistical inverse problems
We study a non-linear statistical inverse problem, where we observe the noisy image of a quantity through a non-linear operator at some random design points. We consider the widely used Tikhonov regularization (or method of regularization) approach to estimate the quantity for the non-linear ill-posed inverse problem. The estimator is defined as the minimizer of a Tikhonov functional, which is the sum of a data misfit term and a quadratic penalty term. We develop a theoretical analysis for the minimizer of the Tikhonov regularization scheme using the concept of reproducing kernel Hilbert spaces. We discuss optimal rates of convergence for the proposed scheme, uniformly over classes of admissible solutions, defined through appropriate source conditions.
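For intuition, in the special case of a *linear* forward operator the Tikhonov minimizer has a closed form via the normal equations (the paper's setting is non-linear, where no such formula exists; the function name and the toy operator below are illustrative):

```python
import numpy as np

def tikhonov_estimate(A: np.ndarray, y: np.ndarray, lam: float) -> np.ndarray:
    """Minimizer of the Tikhonov functional ||A f - y||^2 + lam * ||f||^2
    for a linear forward operator A, via the normal equations
    (A^T A + lam I) f = A^T y. The non-linear case treated in the paper
    requires iterative minimization instead.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# With A = I and lam = 1, the quadratic penalty shrinks the data by 1/2.
f_hat = tikhonov_estimate(np.eye(2), np.array([2.0, 2.0]), 1.0)
```

The trade-off visible even in this toy case, shrinkage (bias) growing with lam while noise sensitivity shrinks, is what the convergence analysis balances through the source conditions.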
Spectral Reconstruction and Isomorphism of graphs using variable neighbourhood search
The Euclidean distance between the eigenvalue sequences of graphs G and H, on the same number of vertices, is called the spectral distance between G and H. This notion is the basis of a heuristic algorithm for reconstructing a graph with prescribed spectrum. By using a graph Γ constructed from cospectral graphs G and H, we can ensure that G and H are isomorphic if and only if the spectral distance between Γ and G+K2 is zero. This construction is exploited to design a heuristic algorithm for testing graph isomorphism. We present preliminary experimental results obtained by implementing these algorithms in conjunction with a meta-heuristic known as a variable neighbourhood search.
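The spectral distance itself is straightforward to compute from adjacency matrices; a minimal sketch (function and variable names are illustrative; note that a zero distance only certifies cospectrality, not isomorphism, which is exactly why the paper needs the Γ and G+K2 construction):

```python
import numpy as np

def spectral_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Euclidean distance between the sorted eigenvalue sequences of two
    graphs on the same number of vertices, given as symmetric adjacency
    matrices. Zero distance means cospectral, not necessarily isomorphic."""
    ev_a = np.sort(np.linalg.eigvalsh(A))
    ev_b = np.sort(np.linalg.eigvalsh(B))
    return float(np.linalg.norm(ev_a - ev_b))

# Path P3 (spectrum 0, +/- sqrt(2)) vs. triangle K3 (spectrum 2, -1, -1):
# different spectra, so the distance is strictly positive.
P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
K3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
d = spectral_distance(P3, K3)
```

A heuristic search over graphs can then use this distance as the objective to minimize when hunting for a graph with a prescribed spectrum.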