
    Quantum Sampling Problems, BosonSampling and Quantum Supremacy

    There is a large body of evidence that information carriers governed by quantum mechanics offer greater computational power than those governed by the laws of classical mechanics. But the question of the exact nature of the power contributed by quantum mechanics remains only partially answered. Furthermore, there exists doubt over the practicality of achieving a quantum computation large enough to definitively demonstrate quantum supremacy. Recently, the study of computational problems that produce samples from probability distributions has both added to our understanding of the power of quantum algorithms and lowered the requirements for demonstrating fast quantum algorithms. The proposed quantum sampling problems do not require a quantum computer capable of universal operations and also tolerate physically realistic errors in their operation. This is an encouraging step towards an experimental demonstration of quantum algorithmic supremacy. In this paper, we review sampling problems and the arguments that have been used to deduce when sampling problems are hard for classical computers to simulate. Two classes of quantum sampling problems that demonstrate the supremacy of quantum algorithms are BosonSampling and IQP Sampling. We present the details of these classes and recent experimental progress towards demonstrating quantum supremacy in BosonSampling. Comment: Survey paper first submitted for publication in October 2016. 10 pages, 4 figures, 1 table
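    The classical hardness claims for BosonSampling rest on the fact that each output probability is the squared modulus of the permanent of a submatrix of the interferometer unitary, and computing permanents is #P-hard. As a concrete illustration (a sketch of the standard brute-force calculation, not code from the survey), the following computes one collision-free output probability with Ryser's O(2^n n) inclusion-exclusion formula, the best known exact method, which is why direct classical simulation scales exponentially in the photon number:

```python
import numpy as np
from itertools import combinations

def permanent_ryser(M):
    """Permanent of a square matrix via Ryser's inclusion-exclusion formula."""
    n = M.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** (n - k) * np.prod(M[:, list(cols)].sum(axis=1))
    return total

# Toy BosonSampling instance: n photons enter the first n of m modes
# of a Haar-random m x m unitary U (QR of a Ginibre matrix).
rng = np.random.default_rng(0)
m, n = 8, 3
z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
U, _ = np.linalg.qr(z)

# Probability of detecting one photon in each of modes (0, 1, 2):
outputs = (0, 1, 2)
A = U[np.ix_(outputs, range(n))]   # n x n submatrix of U
p = abs(permanent_ryser(A)) ** 2   # collision-free in/out, so no factorials
print(f"P(output {outputs}) = {p:.6f}")
```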

    Coherent state LOQC gates using simplified diagonal superposition resource states

    In this paper we explore the possibility of fundamental tests for coherent state optical quantum computing gates [T. C. Ralph et al., Phys. Rev. A \textbf{68}, 042319 (2003)] using sophisticated but not unrealistic quantum states. The major resource required in these gates is a state diagonal to the basis states. We use the recent observation that a squeezed single-photon state ($\hat{S}(r)\ket{1}$) approximates well an odd superposition of coherent states ($\ket{\alpha} - \ket{-\alpha}$) to address the diagonal resource problem. The approximation only holds for relatively small $\alpha$, and hence these gates cannot be used in a scalable scheme. We explore the effects on fidelities and probabilities in teleportation and a rotated Hadamard gate. Comment: 21 pages, 12 figures
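    The quality of that approximation is easy to probe numerically. The sketch below (a minimal illustration, not code from the paper, with the squeezing convention $\hat{S}(r) = \exp[r(a^2 - a^{\dagger 2})/2]$ assumed for concreteness) expands both states in a truncated Fock basis and maximizes the overlap over $r$; the fidelity sits near unity for small $\alpha$ and falls off as $\alpha$ grows, which is why the scheme does not scale:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

N = 60  # Fock-space truncation (ample for alpha <= 2)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator

def squeezed_single_photon(r):
    """S(r)|1> with S(r) = exp(r (a^2 - a^dag^2) / 2), r real."""
    S = expm(0.5 * r * (a @ a - a.T @ a.T))
    psi = np.zeros(N)
    psi[1] = 1.0
    return S @ psi

def odd_cat(alpha):
    """Normalized |alpha> - |-alpha> in the Fock basis (only odd n survive)."""
    n = np.arange(N)
    fact = np.array([float(factorial(k)) for k in n])
    coh = np.exp(-alpha**2 / 2) * alpha**n / np.sqrt(fact)
    cat = coh * (1 - (-1.0) ** n)
    return cat / np.linalg.norm(cat)

for alpha in (0.5, 1.0, 1.5, 2.0):
    # coarse grid search over the squeezing parameter r
    best = max(abs(np.vdot(odd_cat(alpha), squeezed_single_photon(r))) ** 2
               for r in np.linspace(0.01, 1.5, 150))
    print(f"alpha = {alpha:.1f}: best fidelity ~ {best:.4f}")
```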

    Development of a drive system for a sequential space camera contract modification 4(S)

    The brush-type dc motor and clutch were eliminated from the design of the 16 mm space sequential camera and replaced by an electronically commutated motor. The new drive system reduces the current consumption at 24 fps to 220 mA. The drive can be programmed and controlled externally from the multipurpose programmable timer/intervalometer, as well as controlled locally from the camera.

    Boson Sampling from Gaussian States

    We pose a generalized Boson Sampling problem. Strong evidence exists that such a problem becomes intractable on a classical computer as the number of bosons grows. We describe a quantum optical processor that can solve this problem efficiently based on Gaussian input states, a linear optical network and non-adaptive photon counting measurements. All the elements required to build such a processor currently exist. The demonstration of such a device would provide the first empirical evidence that quantum computers can indeed outperform classical computers, and could lead to applications.
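    One way to see the role of the Gaussian inputs: a two-mode squeezed vacuum heralds Fock states nondeterministically, since detecting n photons in one arm projects the partner arm onto |n>, and its photon-number distribution is geometric, P(n) = tanh^{2n}(r)/cosh^2(r) (a standard result, not specific to this paper). A minimal sketch of how often an array of such sources heralds single photons:

```python
import numpy as np

rng = np.random.default_rng(1)

def tmsv_photon_counts(r, size):
    """Sample photon numbers from one arm of a two-mode squeezed vacuum.
    P(n) = tanh(r)^(2n) / cosh(r)^2 is geometric with success
    probability 1 / cosh(r)^2 (numpy's geometric starts at 1, so shift)."""
    return rng.geometric(p=1.0 / np.cosh(r) ** 2, size=size) - 1

# m heralded sources feeding a linear-optical network; count runs in
# which exactly k of them announce a single photon.
m, r, shots = 8, 0.4, 100_000
counts = tmsv_photon_counts(r, size=(shots, m))
singles_per_shot = (counts == 1).sum(axis=1)
for k in range(4):
    frac = (singles_per_shot == k).mean()
    print(f"exactly {k} single-photon heralds: {frac:.3f}")
```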

    Damping rates and frequency corrections of Kepler LEGACY stars

    Linear damping rates and modal frequency corrections of radial oscillation modes in selected LEGACY main-sequence stars are estimated by means of a nonadiabatic stability analysis. The selected stellar sample covers stars observed by Kepler with a large range of surface temperatures and surface gravities. A nonlocal, time-dependent convection model is perturbed to assess stability against pulsation modes. The mixing-length parameter is calibrated to the surface-convection-zone depth of a stellar model obtained from fitting adiabatic frequencies to the LEGACY observations, and two of the nonlocal convection parameters are calibrated to the corresponding LEGACY linewidth measurements. The remaining nonlocal convection parameters in the 1D calculations are calibrated so as to reproduce profiles of turbulent pressure and of the anisotropy of the turbulent velocity field of corresponding 3D hydrodynamical simulations. The atmospheric structure in the 1D stability analysis adopts a temperature-optical-depth relation derived from 3D hydrodynamical simulations. Despite the small number of parameters to adjust, we find good agreement with the detailed depth profiles of both turbulent pressure and velocity anisotropy, and with damping rates as a function of frequency. Furthermore, we find the absolute modal frequency corrections, relative to a standard adiabatic pulsation calculation, to increase with surface temperature and surface gravity. Comment: accepted for publication in Monthly Notices of the Royal Astronomical Society (MNRAS); 15 pages, 8 figures

    Prospective study of colorectal cancer risk and physical activity, diabetes, blood glucose and BMI: exploring the hyperinsulinaemia hypothesis

    A sedentary lifestyle, obesity, and a Westernized diet have been implicated in the aetiology of both colorectal cancer and non-insulin dependent diabetes mellitus, leading to the hypothesis that hyperinsulinaemia may promote colorectal cancer. We prospectively examined the association between colorectal cancer risk and factors related to insulin resistance and hyperinsulinaemia, including BMI, physical activity, diabetes mellitus, and blood glucose, in a cohort of 75 219 Norwegian men and women. Information on incident cases of colorectal cancer was made available from the Norwegian Cancer Registry. Reported P values are two-sided. During 12 years of follow-up, 730 cases of colorectal cancer were registered. In men, but not in women, we found a negative association with leisure-time physical activity (P for trend = 0.002), with an age-adjusted RR for the highest versus the lowest category of activity of 0.54 (95% CI = 0.37–0.79). Women, but not men, with a history of diabetes were at increased risk of colorectal cancer (age-adjusted RR = 1.55; 95% CI = 1.04–2.31), as were women with non-fasting blood glucose ≥8.0 mmol l−1 (age-adjusted RR = 1.98; 95% CI = 1.31–2.98) compared with glucose <8.0 mmol l−1. Overall, we found no association between BMI and risk of colorectal cancer. Additional adjustment including each of the main variables, marital status, and educational attainment did not materially change the results. We conclude that the inverse association between leisure-time physical activity and colorectal cancer in men, and the positive association between diabetes, blood glucose, and colorectal cancer in women, at least in part, support the hypothesis that insulin may act as a tumour promoter in colorectal carcinogenesis. © 2001 Cancer Research Campaign

    Approximating the minimum directed tree cover

    Given a directed graph $G$ with non-negative costs on the arcs, a directed tree cover of $G$ is a rooted directed tree $T$ such that either the head or the tail (or both) of every arc in $G$ is touched by $T$. The minimum directed tree cover problem (DTCP) is to find a directed tree cover of minimum cost. The problem is known to be NP-hard. In this paper, we show that the weighted Set Cover Problem (SCP) is a special case of DTCP. Hence, one can expect at best to approximate DTCP with the same ratio as for SCP. We show that this expectation can be satisfied in some way by designing a purely combinatorial approximation algorithm for the DTCP and proving that the approximation ratio of the algorithm is $\max\{2, \ln(D^+)\}$, where $D^+$ is the maximum outgoing degree of the nodes in $G$. Comment: 13 pages
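    The logarithmic term in that ratio is inherited from the classic greedy bound for Set Cover, which the paper shows embeds into DTCP. For reference, here is a sketch of the standard greedy algorithm for weighted Set Cover (the baseline being matched, not the paper's DTCP algorithm); it repeatedly picks the set with the smallest cost per newly covered element and achieves an H_n ≈ ln n approximation on feasible instances:

```python
def greedy_weighted_set_cover(universe, sets, costs):
    """Standard greedy heuristic: at each step take the set with the
    smallest cost per newly covered element. Assumes the instance is
    feasible (every element appears in some set)."""
    uncovered = set(universe)
    chosen, total = [], 0.0
    while uncovered:
        best = min(
            (i for i in range(len(sets)) if sets[i] & uncovered),
            key=lambda i: costs[i] / len(sets[i] & uncovered),
        )
        uncovered -= sets[best]
        chosen.append(best)
        total += costs[best]
    return chosen, total

universe = range(1, 7)
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 4, 6}, {2, 5}]
costs = [3.0, 1.0, 3.5, 2.0, 1.5]
print(greedy_weighted_set_cover(universe, sets, costs))
```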

    Experiments with explicit filtering for LES using a finite-difference method

    The equations for large-eddy simulation (LES) are derived formally by applying a spatial filter to the Navier-Stokes equations. The filter width as well as the details of the filter shape are free parameters in LES, and these can be used both to control the effective resolution of the simulation and to establish the relative importance of different portions of the resolved spectrum. An analogous, but less well justified, approach to filtering is more or less universally used in conjunction with LES using finite-difference methods. In this approach, the finite support provided by the computational mesh, as well as the wavenumber-dependent truncation errors associated with the finite-difference operators, are assumed to define the filter operation. This approach has the advantage that it is 'automatic' in the sense that no explicit filtering operations need to be performed.

    While it is certainly convenient to avoid the explicit filtering operation, there are some practical considerations associated with finite-difference methods that favor the use of an explicit filter. Foremost among these considerations is the issue of truncation error. All finite-difference approximations have an associated truncation error that increases with increasing wavenumber. These errors can be quite severe for the smallest resolved scales, and they will interfere with the dynamics of the small eddies if no corrective action is taken. Years of experience at CTR with a second-order finite-difference scheme for high Reynolds number LES has repeatedly indicated that truncation errors must be minimized in order to obtain acceptable simulation results.

    While the potential advantages of explicit filtering are rather clear, there is a significant cost associated with its implementation. In particular, explicit filtering reduces the effective resolution of the simulation compared with that afforded by the mesh. The resolution requirements for LES are usually set by the need to capture most of the energy-containing eddies, and if explicit filtering is used, the mesh must be enlarged so that these motions are passed by the filter. Given the high cost of explicit filtering, an interesting question arises: since the mesh must be expanded in order to perform the explicit filter, might it be better to take advantage of the increased resolution and simply perform an unfiltered simulation on the larger mesh? The cost of the two approaches is roughly the same, but the philosophy is rather different. In the filtered simulation, resolution is sacrificed in order to minimize the various forms of numerical error. In the unfiltered simulation, the errors are left intact, but they are concentrated at very small scales that could be dynamically unimportant from an LES perspective. Very little is known about this tradeoff, and the objective of this work is to study this relationship in high Reynolds number channel flow simulations using a second-order finite-difference method.
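    The truncation-error argument can be made concrete with a standard modified-wavenumber analysis: applying the second-order central difference to e^{ikx} returns ik' e^{ikx} with k' = sin(kΔ)/Δ, so the effective wavenumber sags far below the true one near the grid cutoff. A short sketch (textbook Fourier analysis, not material from this report):

```python
import numpy as np

# Modified wavenumber of the 2nd-order central difference d/dx:
# applying (f[j+1] - f[j-1]) / (2 dx) to exp(i k x) yields
# i * sin(k dx)/dx * exp(i k x), i.e. k_eff = sin(k dx)/dx.
dx = 1.0
k = np.linspace(0.01, np.pi, 9) / dx   # resolvable range up to pi/dx
k_eff = np.sin(k * dx) / dx
for ki, ke in zip(k, k_eff):
    print(f"k dx = {ki*dx:4.2f}: k_eff/k = {ke/ki:.3f}")
# Near the grid cutoff (k dx -> pi) the derivative is badly underestimated,
# which is why the smallest resolved scales need filtering or extra resolution.
```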

    Photon number discrimination without a photon counter and its application to reconstructing non-Gaussian states

    The non-linearity of a conditional photon-counting measurement can be used to `de-Gaussify' a Gaussian state of light. Here we present and experimentally demonstrate a technique for photon number resolution using only homodyne detection. We then apply this technique to inform a conditional measurement, unambiguously reconstructing the statistics of the non-Gaussian one- and two-photon-subtracted squeezed vacuum states. Although our photon number measurement relies on ensemble averages and cannot be used to prepare non-Gaussian states of light, its high efficiency, photon number resolving capabilities, and compatibility with the telecommunications band make it suitable for quantum information tasks that rely on mean values. Comment: 4 pages, 3 figures. Theory section expanded in response to referee comments
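    The core idea, that ensemble-averaged homodyne statistics carry photon-number information, can be seen from the phase-averaged identity <x^2> = <n> + 1/2 (conventions hbar = 1, x = (a + a^dag)/sqrt(2)). The toy sketch below (far simpler than the paper's actual reconstruction) draws quadrature samples for a single-photon state and recovers <n> ~ 1 from mean values alone:

```python
import numpy as np

rng = np.random.default_rng(42)

def homodyne_samples_fock1(size):
    """Quadrature samples for |1>: p(x) = (2/sqrt(pi)) x^2 exp(-x^2).
    With u = x^2, u follows a Gamma(3/2, 1) distribution, so sample u
    and attach a random sign to sqrt(u)."""
    u = rng.gamma(shape=1.5, scale=1.0, size=size)
    return rng.choice([-1.0, 1.0], size=size) * np.sqrt(u)

x = homodyne_samples_fock1(200_000)
# Phase-averaged second moment: <x^2> = <n> + 1/2 (for a Fock state
# every local-oscillator phase gives the same quadrature statistics).
n_est = np.mean(x**2) - 0.5
print(f"estimated <n> = {n_est:.3f}  (exact: 1)")
```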

    IGR J17254-3257, a new bursting neutron star

    The study of the observational properties of uncommonly long bursts from low-luminosity sources, with extended decay times of up to several tens of minutes, is important when investigating the transition from a hydrogen-rich bursting regime to a pure helium regime, and from helium burning to carbon burning, as predicted by current burst theories. IGR J17254-3257 is a recently discovered X-ray burster from which only two bursts have been recorded: an ordinary short type I X-ray burst, and a 15 min long burst. An upper limit to its distance is estimated at about 14.5 kpc. The broad-band spectrum of the persistent emission in the 0.3-100 keV energy band, obtained using contemporaneous INTEGRAL and XMM-Newton data, indicates a bolometric flux of 1.1x10^-10 erg/cm2/s, corresponding, at the canonical distance of 8 kpc, to a luminosity of about 8.4x10^35 erg/s between 0.1-100 keV, which translates to a mean accretion rate of about 7x10^-11 solar masses per year. The low persistent X-ray luminosity of IGR J17254-3257 seems to indicate that the source may be in a state of low accretion rate, usually associated with a hard spectrum in the X-ray range. The nuclear burning regime may be intermediate between pure He and mixed H/He burning. The long burst is the result of the accumulation of a thick He layer, while the short one is a premature H-triggered He-burning burst at a slightly lower accretion rate. Comment: 4 pages, 4 figures, 1 table; accepted for publication in A&A Letters. 1 reference (Cooper & Narayan, 2007) corrected
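    The quoted numbers are mutually consistent, which is quick to verify: L = 4 pi d^2 F at d = 8 kpc reproduces the 8.4x10^35 erg/s, and dividing by the energy released per accreted gram, GM/R for a canonical neutron star (M = 1.4 M_sun, R = 10 km, assumed values not stated in the abstract), recovers roughly 7x10^-11 M_sun/yr:

```python
import math

# Constants (cgs)
KPC = 3.086e21          # cm
G = 6.674e-8            # cm^3 g^-1 s^-2
MSUN = 1.989e33         # g
YEAR = 3.156e7          # s

F = 1.1e-10             # bolometric flux, erg cm^-2 s^-1 (from the abstract)
d = 8.0 * KPC           # canonical distance assumed in the abstract

L = 4 * math.pi * d**2 * F
print(f"L = {L:.2e} erg/s")          # ~8.4e35, matching the quoted value

# Accretion rate from L = G M mdot / R, for an assumed 1.4 Msun, 10 km star
M, R = 1.4 * MSUN, 1.0e6
mdot = L * R / (G * M)               # g/s
print(f"mdot = {mdot * YEAR / MSUN:.1e} Msun/yr")   # ~7e-11
```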