    Near-ideal spontaneous photon sources in silicon quantum photonics

    While integrated photonics is a robust platform for quantum information processing, architectures for photonic quantum computing place stringent demands on high quality information carriers. Sources of single photons that are highly indistinguishable and pure, that are either near-deterministic or heralded with high efficiency, and that are suitable for mass-manufacture, have been elusive. Here, we demonstrate on-chip photon sources that simultaneously meet each of these requirements. Our photon sources are fabricated in silicon using mature processes, and exploit a novel dual-mode pump-delayed excitation scheme to engineer the emission of spectrally pure photon pairs through intermodal spontaneous four-wave mixing in low-loss spiralled multi-mode waveguides. We simultaneously measure a spectral purity of 0.9904 ± 0.0006, a mutual indistinguishability of 0.987 ± 0.002, and >90% intrinsic heralding efficiency. We measure on-chip quantum interference with a visibility of 0.96 ± 0.02 between heralded photons from different sources. These results represent a decisive step for scaling quantum information processing in integrated photonics.
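
    To make the spectral-purity figure concrete, the sketch below (my own illustration, not the authors' code) estimates heralded-photon spectral purity from a discretised joint spectral amplitude via its Schmidt (singular-value) decomposition; the Gaussian joint-spectrum model and all bandwidths are assumed toy values.

        # Sketch only: estimate heralded-photon spectral purity from a discretised
        # joint spectral amplitude (JSA) via its singular-value (Schmidt) decomposition.
        # The Gaussian JSA model and all bandwidths below are illustrative assumptions.
        import numpy as np

        def spectral_purity(jsa: np.ndarray) -> float:
            """Purity P = sum_k p_k^2, with p_k the normalised squared singular values."""
            s = np.linalg.svd(jsa, compute_uv=False)
            p = s**2 / np.sum(s**2)          # Schmidt coefficients
            return float(np.sum(p**2))       # P = 1/K, with K the Schmidt number

        # Toy JSA on a detuning grid (arbitrary units): pump envelope x phase matching.
        w_s = np.linspace(-3, 3, 200)                     # signal detuning
        w_i = np.linspace(-3, 3, 200)                     # idler detuning
        WS, WI = np.meshgrid(w_s, w_i, indexing="ij")
        pump = np.exp(-((WS + WI) ** 2) / 2.0)            # assumed pump bandwidth
        phasematch = np.exp(-((WS - WI) ** 2) / 2.0)      # assumed phase-matching width
        print(f"purity ~ {spectral_purity(pump * phasematch):.4f}")

    Because the two toy widths are equal, the joint spectrum factorises and the estimate comes out near 1; unequal widths (spectral correlations) would lower it, which is what the phase-matching engineering described above is designed to avoid.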

    The Computational Complexity of Generating Random Fractals

    In this paper we examine a number of models that generate random fractals. The models are studied using the tools of computational complexity theory from the perspective of parallel computation. Diffusion limited aggregation and several widely used algorithms for equilibrating the Ising model are shown to be highly sequential; it is unlikely they can be simulated efficiently in parallel. This is in contrast to Mandelbrot percolation that can be simulated in constant parallel time. Our research helps shed light on the intrinsic complexity of these models relative to each other and to different growth processes that have been recently studied using complexity theory. In addition, the results may serve as a guide to simulation physics. Comment: 28 pages, LaTeX, 8 Postscript figures available from [email protected]
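
    As a concrete example of the one model above that parallelises trivially, here is a minimal sketch (my own, with illustrative parameters) of Mandelbrot (fractal) percolation: a level-k cell survives exactly when all of its ancestors' independent coin flips succeed, so every cell can be decided independently.

        # Sketch only: Mandelbrot (fractal) percolation with subdivision factor b and
        # retention probability p; parameter values are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)

        def fractal_percolation(levels: int, b: int = 3, p: float = 0.8) -> np.ndarray:
            """Return a (b**levels x b**levels) 0/1 occupancy grid."""
            grid = np.ones((1, 1), dtype=bool)
            for _ in range(levels):
                # Refine: each surviving cell splits into a b x b block of subcells...
                grid = np.repeat(np.repeat(grid, b, axis=0), b, axis=1)
                # ...and every subcell independently survives with probability p.
                grid &= rng.random(grid.shape) < p
            return grid.astype(np.uint8)

        occupancy = fractal_percolation(levels=4)
        print(occupancy.shape, "fraction retained:", occupancy.mean())

    Since a cell's fate depends only on the coin flips along its own ancestry, the whole grid can be generated in constant parallel time, in contrast to the inherently sequential dynamics of diffusion limited aggregation and Ising equilibration discussed above.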

    Classicality of quantum information processing

    The ultimate goal of the classicality programme is to quantify the amount of quantumness of certain processes. Here, classicality is studied for a restricted type of process: quantum information processing (QIP). Under special conditions, one can force some qubits of a quantum computer into a classical state without affecting the outcome of the computation. The minimal set of conditions is described and its structure is studied. Some implications of this formalism are the increase of noise robustness, a proof of the quantumness of mixed state quantum computing and a step forward in understanding the very foundation of QIP. Comment: Minor changes, published in Phys. Rev. A 65, 42319 (2002).
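
    A toy numerical illustration of the kind of statement referred to above (my own sketch, not the paper's formalism): a qubit that only serves as a control for the rest of the circuit and is read out in the computational basis can be dephased, i.e. forced into a classical mixture, without changing the outcome statistics.

        # Toy illustration only (not the paper's formalism). All objects are standard.
        import numpy as np

        ket0 = np.array([[1.0], [0.0]])
        plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
        CNOT = np.array([[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0]], dtype=float)

        def dephase(rho):
            """Drop off-diagonal terms: force the qubit into a classical mixture."""
            return np.diag(np.diag(rho))

        def output_probs(control_rho):
            rho = np.kron(control_rho, ket0 @ ket0.T)   # control (x) target |0><0|
            rho = CNOT @ rho @ CNOT.T                   # controlled-NOT
            return np.real(np.diag(rho))                # computational-basis probabilities

        quantum_control = plus @ plus.T                 # coherent |+><+| control
        classical_control = dephase(quantum_control)    # dephased ("classical") control

        print(output_probs(quantum_control))            # [0.5 0.  0.  0.5]
        print(output_probs(classical_control))          # [0.5 0.  0.  0.5]  (identical)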

    Q-based design equations for resonant metamaterials and experimental validation

    Practical design parameters of resonant metamaterials, such as loss tangent, are derived in terms of the quality factor Q of the resonant effective medium permeability or permittivity. Through electromagnetic simulations of loop-based resonant particles, it is also shown that the Q of the effective medium response is essentially equal to the Q of an individual resonant particle. Thus, by measuring the Q of a single fabricated metamaterial particle, the effective permeability or permittivity of a metamaterial can be calculated simply and accurately without requiring complex simulations, fabrication, or measurements. Experimental validation shows that the complex permeability analytically estimated from the measured Q of a single fabricated self-resonant loop agrees with the complex permeability extracted from S-parameter measurements of a metamaterial slab to better than 20%. This Q equivalence reduces the design of a metamaterial to meet a given loss constraint to the simpler problem of the design of a resonant particle to meet a specific Q constraint. This analysis also yields simple analytical expressions for estimating the loss tangent of a planar loop magnetic metamaterial due to ohmic losses. It is shown that tan ÎŽ ≈ 0.001 is a strong lower bound for magnetic loss tangents for frequencies not too far from 1 GHz. The ohmic loss of the metamaterial varies inversely with the electrical size of the metamaterial particle, indicating that there is a loss penalty for reducing the particle size at a fixed frequency.
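
    A rough numerical sketch of the relationship described above, using an assumed Lorentzian permeability model and illustrative parameter values rather than the paper's exact design equations, shows how the resonator quality factor Q sets the off-resonance magnetic loss tangent.

        # Sketch with assumed values (not the paper's exact design equations): a
        # Lorentzian effective permeability whose resonance quality factor Q sets the
        # off-resonance magnetic loss tangent. F, f0 and f are illustrative numbers.
        import numpy as np

        def mu_eff(f, f0=1.0e9, F=0.3, Q=1000.0):
            """Lorentzian effective permeability of a resonant magnetic metamaterial."""
            return 1.0 + F * f**2 / (f0**2 - f**2 + 1j * f * f0 / Q)

        f = 0.8e9                                   # operating frequency below resonance
        mu = mu_eff(f)
        tan_delta = abs(mu.imag) / mu.real          # magnetic loss tangent mu''/mu'
        print(f"mu' = {mu.real:.3f}, mu'' = {abs(mu.imag):.5f}, tan(delta) ~ {tan_delta:.1e}")

    With Q of order 10^3 and the assumed filling factor, the printed loss tangent comes out near 10^-3, the same order as the tan ÎŽ ≈ 0.001 bound quoted above.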

    Easiness Amplification and Uniform Circuit Lower Bounds

    We present new consequences of the assumption that time-bounded algorithms can be "compressed" with non-uniform circuits. Our main contribution is an "easiness amplification" lemma for circuits. One instantiation of the lemma says: if n^{1+e}-time, tilde{O}(n)-space computations have n^{1+o(1)} size (non-uniform) circuits for some e > 0, then every problem solvable in polynomial time and tilde{O}(n) space has n^{1+o(1)} size (non-uniform) circuits as well. This amplification has several consequences:
    * An easy problem without small LOGSPACE-uniform circuits. For all e > 0, we give a natural decision problem, General Circuit n^e-Composition, that is solvable in about n^{1+e} time, but we prove that polynomial-time and logarithmic-space preprocessing cannot produce n^{1+o(1)}-size circuits for the problem. This shows that there are problems solvable in n^{1+e} time which are not in LOGSPACE-uniform n^{1+o(1)} size, the first result of its kind. We show that our lower bound is non-relativizing, by exhibiting an oracle relative to which the result is false.
    * Problems without low-depth LOGSPACE-uniform circuits. For all e > 0, 1 < d < 2, and e < d, we give another natural circuit composition problem computable in tilde{O}(n^{1+e}) time, or in O((log n)^d) space (though not necessarily simultaneously), that we prove does not have SPACE[(log n)^e]-uniform circuits of tilde{O}(n) size and O((log n)^e) depth. We also show SAT does not have circuits of tilde{O}(n) size and log^{2-o(1)}(n) depth that can be constructed in log^{2-o(1)}(n) space.
    * A strong circuit complexity amplification. For every e > 0, we give a natural circuit composition problem and show that if it has tilde{O}(n)-size circuits (uniform or not), then every problem solvable in 2^{O(n)} time and 2^{O(sqrt{n log n})} space (simultaneously) has 2^{O(sqrt{n log n})}-size circuits (uniform or not). We also show the same consequence holds assuming SAT has tilde{O}(n)-size circuits.
    As a corollary, if n^{1.1} time computations (or O(n) nondeterministic time computations) have tilde{O}(n)-size circuits, then all problems in exponential time and subexponential space (such as quantified Boolean formulas) have significantly subexponential-size circuits. This is a new connection between the relative circuit complexities of easy and hard problems.
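
    A toy rendering (my own simplification, not the paper's exact definition) of the kind of circuit-composition problem in the first bullet above: evaluating a size-n circuit on its own output about n^e times takes roughly n^{1+e} gate evaluations, which is the "easy" upper bound referred to there.

        # Toy rendering with a made-up gate encoding (not the paper's definition of
        # General Circuit n^e-Composition): iterating a circuit on its own output.
        from typing import List, Tuple

        Gate = Tuple[str, int, int, int]   # (op, input wire a, input wire b, output wire)

        def evaluate(circuit: List[Gate], inputs: List[int], out_wires: List[int]) -> List[int]:
            """One pass over the gates in topological order: O(|circuit|) time."""
            wires = list(inputs) + [0] * len(circuit)   # input wires, then gate outputs
            for op, a, b, out in circuit:
                wires[out] = (wires[a] & wires[b]) if op == "AND" else (wires[a] ^ wires[b])
            return [wires[w] for w in out_wires]

        def compose(circuit: List[Gate], inputs: List[int], out_wires: List[int], k: int) -> List[int]:
            """Feed the circuit its own output k times: about k * |circuit| gate evaluations."""
            for _ in range(k):
                inputs = evaluate(circuit, inputs, out_wires)
            return inputs

        # 2-bit example: (x, y) -> (x XOR y, x AND y), iterated 5 times.
        toy = [("XOR", 0, 1, 2), ("AND", 0, 1, 3)]
        print(compose(toy, [1, 1], out_wires=[2, 3], k=5))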

    Pseudorandomness and the Minimum Circuit Size Problem


    “A Considerable Surgical Operation”: Article III, Equity, and Judge-Made Law in the Federal Courts

    This Article examines the history of judge-made law in the federal courts through the lens of the early-nineteenth-century federal courts’ equity powers. In a series of equity cases, and in the Federal Equity Rules promulgated by the Court in 1822 and 1842, the Supreme Court vehemently insisted that lower federal courts employ a uniform corpus of nonstate equity principles with respect to procedure, remedies, and - in certain instances - primary rights and liabilities. Careful attention to the historical sources suggests that the uniform equity doctrine was not simply the product of an overreaching, consolidationist Supreme Court, but is best understood in the context of important and surprisingly underappreciated early-nineteenth-century debates concerning judicial reform. During this period, both Congress and the Court were preoccupied with the disuniformity in the administration of the federal judicial system, especially in the farther reaches of the republic. When reform was not forthcoming through legislation, the Supreme Court achieved a modicum of uniformity in the federal courts through the application of a single body of equity principles drawn from federal and English sources. But the Court did not act unilaterally. Congress’s repeated acquiescence to, and extension of, the Court’s uniform equity doctrine reveals a complex, interbranch dynamic at work. Retelling the story of nonstate, judge-made law in the federal courts through the lens of equity is not intended to demonstrate that such a formulation of federal judicial power was (or is) correct. Rather, by recuperating the history of federal equity power, this Article illuminates the significant metamorphosis of the meaning of Article III’s grant of judicial power. This change has been elided in modern accounts of federal judge-made law in an effort to bolster the legitimacy of a modern vision of federal judicial power

    Color as a Trademark Under the Lanham Act: Confusion in the Circuits and the Need for Uniformity

    The Lanham Act--the Trademark Act of 1946--is examined to determine if it allows the protection of color per se as a trademark. Circuit courts vary in their use of the legislation, but color does satisfy the Act's broad definition of a trademark.
    • 

    corecore