The Classical Complexity of Boson Sampling
We study the classical complexity of the exact Boson Sampling problem where
the objective is to produce provably correct random samples from a particular
quantum mechanical distribution. The computational framework was proposed by
Aaronson and Arkhipov in 2011 as an attainable demonstration of `quantum
supremacy', that is a practical quantum computing experiment able to produce
output at a speed beyond the reach of classical (that is non-quantum) computer
hardware. Since its introduction Boson Sampling has been the subject of intense
international research in the world of quantum computing. On the face of it,
the problem is challenging for classical computation. Aaronson and Arkhipov
show that exact Boson Sampling is not efficiently solvable by a classical
computer unless the polynomial hierarchy collapses to the third level.
The fastest known exact classical algorithm for the standard Boson Sampling
problem takes O(\binom{m+n-1}{n} n 2^n) time to produce samples for a
system with input size n and m output modes, making it infeasible for
anything but the smallest values of n and m. We give an algorithm that is
much faster, running in O(n 2^n + poly(m, n)) time and O(m)
additional space. The algorithm is simple to implement and has low constant
factor overheads. As a consequence our classical algorithm is able to solve the
exact Boson Sampling problem for system sizes far beyond current photonic
quantum computing experimentation, thereby significantly reducing the
likelihood of achieving near-term quantum supremacy in the context of Boson
Sampling.
Comment: 15 pages. To appear in SODA '1
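The O(n 2^n) permanent cost that anchors these running times comes from Ryser's inclusion-exclusion formula. The following sketch is purely illustrative (it is the classical permanent subroutine, not the sampling algorithm of the paper):

```python
def permanent_ryser(A):
    """Permanent of an n x n matrix A via Ryser's formula:
    perm(A) = (-1)^n * sum over subsets S of columns of
    (-1)^|S| * prod_i (sum_{j in S} A[i][j]),
    using O(n * 2^n) arithmetic operations."""
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):  # non-empty column subsets S
        row_sums_product = 1
        for i in range(n):
            s = sum(A[i][j] for j in range(n) if mask >> j & 1)
            row_sums_product *= s
        sign = -1 if bin(mask).count("1") % 2 else 1
        total += sign * row_sums_product
    return (-1) ** n * total
```

For example, permanent_ryser([[1, 2], [3, 4]]) returns 10 (= 1*4 + 2*3), against the n! * n cost of expanding over all permutations directly.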
Bottom Schur functions
We give a basis for the space V spanned by the lowest degree part
\hat{s}_\lambda of the expansion of the Schur symmetric functions s_\lambda in
terms of power sums, where we define the degree of the power sum p_i to be 1.
In particular, the dimension of the subspace V_n spanned by those
\hat{s}_\lambda for which \lambda is a partition of n is equal to the number of
partitions of n whose parts differ by at least 2. We also show that a symmetric
function closely related to \hat{s}_\lambda has the same coefficients when
expanded in terms of power sums or augmented monomial symmetric functions.
Proofs are based on the theory of minimal border strip decompositions of Young
diagrams.
Comment: 16 pages, 13 figures. To appear in the Electronic Journal of
Combinatorics
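The dimension statement can be checked numerically by counting partitions of n whose parts differ by at least 2. A minimal recursive sketch (the helper name is ours, not the paper's):

```python
def count_gap2_partitions(n, min_part=1):
    """Number of partitions of n into parts that pairwise differ by
    at least 2, with every part >= min_part.  Parts are generated in
    increasing order, so each successive part must exceed the previous
    one by at least 2."""
    if n == 0:
        return 1  # the empty partition
    return sum(count_gap2_partitions(n - k, k + 2)
               for k in range(min_part, n + 1))
```

For n = 1, ..., 6 this gives 1, 1, 1, 2, 2, 3 (e.g. for n = 5 the two partitions are 5 and 4+1), which by the abstract equals the dimension of V_n.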
Faster classical boson sampling
Since its introduction, boson sampling has been the subject of intense study in the world of quantum computing. In the context of Fock-state boson sampling, the task is to sample independently from the set of all n × n submatrices built from possibly repeated rows of a larger m × n complex matrix, according to a probability distribution related to the permanents of the submatrices. Experimental systems exploiting quantum photonic effects can in principle perform the task at great speed. For classical computing, Aaronson and Arkhipov (2011) showed that the exact boson sampling problem cannot be solved in polynomial time unless the polynomial hierarchy collapses to the third level. Indeed, for a number of years the fastest known exact classical algorithm ran in O(\binom{m+n-1}{n} n 2^n) time per sample, emphasising the potential speed advantage of quantum computation. The advantage was reduced by Clifford and Clifford (2018), who gave a significantly faster classical solution taking O(n 2^n + poly(m, n)) time and linear space, matching the complexity of computing the permanent of a single matrix when m is polynomial in n. We continue by presenting an algorithm for Fock boson sampling whose average-case time complexity is much faster when m is proportional to n. In particular, when m = n our algorithm runs in approximately O(n · 1.69^n) time on average. This result further increases the problem size needed to establish quantum computational advantage via the Fock scheme of boson sampling.
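For very small m and n, the distribution described above can be enumerated outright, which makes the sampling task concrete. The brute-force sketch below is our own illustration (not any of the papers' algorithms, whose whole point is to avoid this exhaustive enumeration); it assumes the m × n matrix A consists of the first n columns of an m × m unitary:

```python
import itertools
import math
import random

def permanent(M):
    """Naive permutation-expansion permanent; adequate for tiny matrices."""
    n = len(M)
    return sum(math.prod(M[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

def output_distribution(A):
    """Exact Fock boson-sampling output distribution for an m x n
    matrix A (assumed: first n columns of an m x m unitary).
    An outcome is a sorted n-tuple of occupied output modes; its
    probability is |perm(A_s)|^2 / (s_1! ... s_m!), where A_s repeats
    row i of A s_i times."""
    m, n = len(A), len(A[0])
    dist = {}
    for rows in itertools.combinations_with_replacement(range(m), n):
        sub = [A[r] for r in rows]  # submatrix with repeated rows
        mult = math.prod(math.factorial(rows.count(i)) for i in set(rows))
        dist[rows] = abs(permanent(sub)) ** 2 / mult
    return dist

def sample(A, rng=random):
    """Draw one sample from the exact distribution (exponential time)."""
    dist = output_distribution(A)
    outcomes = list(dist)
    return rng.choices(outcomes, weights=[dist[o] for o in outcomes])[0]
```

With A taken as the first two columns of the 3 × 3 orthogonal matrix [[a, a, 0], [a, -a, 0], [0, 0, 1]] for a = 1/sqrt(2), the weights sum to 1 and concentrate on the "bunched" outcomes (0, 0) and (1, 1), a two-mode Hong-Ou-Mandel-style interference.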
Comments on "what the back of the object looks like: 3D reconstruction from line drawings without hidden lines"
I comment on a paper describing a method for deducing the hidden topology of an object portrayed in a 2D natural line drawing. The principal problem with this paper is that it cannot be considered an advance on (or even an equal of) the state of the art, as the approach it describes makes the same limiting assumptions as approaches proposed 10 years ago. There are also important omissions in the review of related work.