
    Exact Boson Sampling using Gaussian continuous variable measurements

    BosonSampling is a quantum mechanical task involving Fock-basis state preparation and detection, and evolution using only linear interactions. A classical algorithm for producing samples from this quantum task cannot be efficient unless the polynomial hierarchy of complexity classes collapses, a situation believed to be highly implausible. We present a method for constructing a device that uses Fock state preparations, linear interactions, and Gaussian continuous-variable measurements, for which one can show that exact sampling would be hard for a classical algorithm in the same way as BosonSampling. The detection events from this arrangement do not, however, support a similar conclusion about the classical hardness of approximate sampling. We discuss the details of this result, outlining the specific properties that approximate sampling hardness requires.
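
    The hardness claim traces back to the fact that the amplitude for a Fock input to produce a given detection pattern after a linear interferometer is a matrix permanent, a #P-hard quantity. As a minimal illustration (standard background, not code from the paper), Ryser's inclusion-exclusion formula is the classic exact algorithm, and its exponential cost is the intuition behind the sampling hardness:

    from itertools import combinations

    def permanent(A):
        """Permanent of an n x n matrix via Ryser's inclusion-exclusion
        formula, O(2^n * n^2) as written; the exponential cost is why
        boson-sampling amplitudes are believed classically intractable."""
        n = len(A)
        total = 0.0
        for k in range(1, n + 1):
            for cols in combinations(range(n), k):
                prod = 1.0
                for row in A:
                    prod *= sum(row[c] for c in cols)
                total += (-1) ** (n - k) * prod
        return total

    # Sanity check: the permanent of the all-ones 3x3 matrix is 3! = 6.
    print(permanent([[1.0] * 3 for _ in range(3)]))  # 6.0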

    A zonal computational procedure adapted to the optimization of two-dimensional thrust augmentor inlets

    A viscous-inviscid interaction methodology based on a zonal description of the flowfield is developed as a means of predicting the performance of two-dimensional thrust-augmenting ejectors. An inviscid zone comprising the irrotational flow about the device is patched together with a viscous zone containing the turbulent mixing flow. The inviscid region is computed by a higher-order panel method, while an integral method is used for the description of the viscous part. A non-linear, constrained optimization study is undertaken for the design of the inlet region. In this study, the viscous-inviscid analysis is complemented with a boundary layer calculation to account for flow separation from the walls of the inlet region. The thrust-based Reynolds number as well as the free-stream velocity are shown to be important parameters in the design of a thrust augmentor inlet.
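
    The zonal patching can be pictured as a fixed-point iteration between the two descriptions: the viscous solution supplies a displacement thickness that modifies the effective inviscid geometry, and the inviscid solution supplies an edge velocity back to the viscous calculation. The sketch below uses hypothetical algebraic surrogates for both zones (the paper's actual tools are a higher-order panel method and an integral method), purely to show the coupling loop:

    import numpy as np

    # Hypothetical surrogates for the two zones; the paper couples a
    # higher-order panel method (inviscid) with an integral method (viscous).
    def inviscid_edge_velocity(delta_star, u_inf=1.0):
        # Toy model: the displacement thickness narrows the effective
        # inlet, accelerating the edge flow.
        return u_inf / (1.0 - delta_star)

    def viscous_displacement(u_e, nu=1e-3, x=1.0):
        # Toy flat-plate-like closure: delta* ~ sqrt(nu * x / u_e).
        return 1.72 * np.sqrt(nu * x / u_e)

    u_e, d_star = 1.0, 0.0
    for it in range(50):
        d_new = viscous_displacement(u_e)
        converged = abs(d_new - d_star) < 1e-10
        d_star = d_new
        u_e = inviscid_edge_velocity(d_star)
        if converged:
            break
    print(f"converged after {it} iterations: u_e = {u_e:.6f}, delta* = {d_star:.6f}")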

    Conditional Production of Superpositions of Coherent States with Inefficient Photon Detection

    It is shown that a linear superposition of two macroscopically distinguishable optical coherent states can be generated using a single-photon source and simple all-optical operations. Weak squeezing on a single photon, beam mixing with an auxiliary coherent state, and photon detection with imperfect threshold detectors are enough to generate a coherent state superposition in a freely propagating optical field with a large coherent amplitude (α > 2) and high fidelity (F > 0.99). In contrast to all previous schemes to generate such a state, ours requires neither photon-number-resolving measurements nor Kerr-type nonlinear interactions. Furthermore, it is robust to detection inefficiency and exhibits some resilience to photon production inefficiency. Comment: Some important new results added; to appear in Phys. Rev. A (Rapid Communications).
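
    "Macroscopically distinguishable" is quantifiable: the overlap of the two branches of the superposition is |<α|−α>|² = exp(−4|α|²), a textbook identity (this check is mine, not the paper's):

    import numpy as np

    # Overlap of coherent states: |<alpha|-alpha>|^2 = exp(-4 |alpha|^2).
    for alpha in (0.5, 1.0, 2.0):
        print(alpha, np.exp(-4 * abs(alpha) ** 2))
    # At the alpha > 2 quoted above the overlap is ~1e-7, so the two
    # branches of the superposition are essentially orthogonal.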

    Comparison of LOQC C-sign gates with ancilla inefficiency and an improvement to functionality under these conditions

    We compare three proposals for non-deterministic C-sign gates implemented using linear optics and conditional measurements, with non-ideal ancilla mode production and detection. The simplified KLM gate [Ralph et al., Phys. Rev. A 65, 012314 (2001)] appears to be the most resilient under these conditions. We also find that the operation of this gate can be improved by adjusting the beamsplitter ratios to compensate, to some extent, for the effects of the imperfect ancilla. Comment: to appear in PR
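
    A standard way to model the imperfect threshold detectors in such an analysis (a textbook convention, not necessarily the parametrization used in the paper) is a POVM whose "click" element weights n photons by 1 − (1 − η)^n for efficiency η:

    import numpy as np

    def click_povm_diag(eta, n_max):
        """Diagonal of the 'click' POVM element of a threshold detector
        with efficiency eta: P(click | n photons) = 1 - (1 - eta)^n."""
        n = np.arange(n_max + 1)
        return 1.0 - (1.0 - eta) ** n

    # Click probabilities for 0..4 photons at 50% efficiency.
    print(click_povm_diag(0.5, 4))  # [0. 0.5 0.75 0.875 0.9375]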

    Boson Sampling from Gaussian States

    We pose a generalized Boson Sampling problem. Strong evidence exists that such a problem becomes intractable on a classical computer as a function of the number of bosons. We describe a quantum optical processor that can solve this problem efficiently based on Gaussian input states, a linear optical network, and non-adaptive photon counting measurements. All the elements required to build such a processor currently exist. The demonstration of such a device would provide the first empirical evidence that quantum computers can indeed outperform classical computers and could lead to applications.
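
    As a toy illustration of the Gaussian-input ingredient (my sketch, not the processor described above), the photon-number distribution of a single squeezed-vacuum mode is P(2n) = (2n)! tanh^(2n)(r) / (4^n (n!)^2 cosh r), with all odd counts absent; sampling from it mimics photon counting on one Gaussian mode, whereas the full problem interferes many such modes in a linear network:

    import numpy as np
    from math import factorial

    def squeezed_vacuum_pn(r, n_max):
        """P(2n) = (2n)! / (4^n (n!)^2) * tanh(r)^(2n) / cosh(r);
        odd photon numbers have zero probability."""
        p = np.zeros(n_max + 1)
        t, c = np.tanh(r), np.cosh(r)
        for n in range(n_max // 2 + 1):
            p[2 * n] = factorial(2 * n) / (4 ** n * factorial(n) ** 2) \
                       * t ** (2 * n) / c
        return p

    rng = np.random.default_rng(0)
    p = squeezed_vacuum_pn(r=1.0, n_max=40)
    print("probability captured:", p.sum())              # ~1.0
    print(rng.choice(len(p), size=10, p=p / p.sum()))    # only even counts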

    Dynamic model with scale-dependent coefficients in the viscous range

    The standard dynamic procedure is based on the scale-invariance assumption that the model coefficient C is the same at the grid and test-filter levels. In many applications this condition is not met, e.g. when the filter length Δ approaches the Kolmogorov scale η, and C(Δ → η) → 0. Using a priori tests, we show that the standard dynamic model yields the coefficient corresponding to the test-filter scale (αΔ) instead of the grid scale (Δ). Several approaches to account for scale dependence are examined and/or tested in large eddy simulation of isotropic turbulence: (a) take the limit α → 1; (b) solve for the two unknown coefficients C(Δ) and C(αΔ) in the least-square-error formulation; (c) use the 'bi-dynamic model', in which two test filters (e.g. at scales 2Δ and 4Δ) are employed to gain additional information on the possible scale dependence of the coefficient, with an improved estimate for the grid-level coefficient obtained by extrapolation; (d) use theoretical predictions for the ratio C(αΔ)/C(Δ) and dynamically solve for C(Δ). None of these options is found to be entirely satisfactory, although the last approach appears applicable to the viscous range.
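
    Option (b) is a plain linear least-squares problem: with the Germano-identity residual L and two model tensors M and N (evaluated at the two filter levels), one minimizes ||L − C1·M − C2·N||². A hedged sketch on synthetic data (stand-ins for the test-filtered LES fields):

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-ins for flattened tensor fields in the Germano
    # identity: L ~ C1*M + C2*N + noise.  In an LES code these come
    # from test-filtering the resolved velocity field.
    M = rng.standard_normal(10000)
    N = rng.standard_normal(10000)
    L = 0.16 * M + 0.04 * N + 0.01 * rng.standard_normal(10000)

    # Least-squares solve for the two scale-dependent coefficients.
    A = np.column_stack([M, N])
    (c1, c2), *_ = np.linalg.lstsq(A, L, rcond=None)
    print(f"C(grid) ~ {c1:.3f}, C(test) ~ {c2:.3f}")  # recovers 0.16, 0.04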

    Production of superpositions of coherent states in traveling optical fields with inefficient photon detection

    We develop an all-optical scheme to generate superpositions of macroscopically distinguishable coherent states in traveling optical fields. It non-deterministically distills coherent state superpositions (CSSs) with large amplitudes out of CSSs with small amplitudes using inefficient photon detection. The small CSSs required to produce CSSs with larger amplitudes are extremely well approximated by squeezed single photons. We discuss some remarkable features of this scheme: it effectively purifies mixed initial states emitted from inefficient single-photon sources and boosts the negativity of the Wigner functions of quantum states. Comment: 13 pages, 9 figures, to be published in Phys. Rev.
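
    The claim that small CSSs are extremely well approximated by squeezed single photons is easy to check numerically in a truncated Fock basis; the sketch below (mine, not the authors' code) scans the squeezing parameter and reports the best overlap with an odd cat state:

    import numpy as np
    from scipy.linalg import expm
    from math import factorial

    N = 60                                      # Fock-space truncation
    a = np.diag(np.sqrt(np.arange(1, N)), 1)    # annihilation operator

    def squeezed_single_photon(r):
        S = expm(0.5 * r * (a @ a - a.T @ a.T))  # squeeze operator S(r)
        ket1 = np.zeros(N)
        ket1[1] = 1.0
        return S @ ket1

    def odd_cat(alpha):
        n = np.arange(N)
        c = alpha ** n / np.sqrt([float(factorial(k)) for k in n])
        c[n % 2 == 0] = 0.0          # |alpha> - |-alpha>: odd terms only
        return c / np.linalg.norm(c)

    cat = odd_cat(1.2)
    best = max((abs(cat @ squeezed_single_photon(r)) ** 2, r)
               for r in np.linspace(-1.0, 1.0, 201))
    print(f"max fidelity {best[0]:.4f} at squeezing r = {best[1]:.2f}")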

    Experiments with explicit filtering for LES using a finite-difference method

    The equations for large-eddy simulation (LES) are derived formally by applying a spatial filter to the Navier-Stokes equations. The filter width as well as the details of the filter shape are free parameters in LES, and these can be used both to control the effective resolution of the simulation and to establish the relative importance of different portions of the resolved spectrum. An analogous, but less well justified, approach to filtering is more or less universally used in conjunction with LES using finite-difference methods. In this approach, the finite support provided by the computational mesh as well as the wavenumber-dependent truncation errors associated with the finite-difference operators are assumed to define the filter operation. This approach has the advantage that it is 'automatic' in the sense that no explicit filtering operations need to be performed. While it is certainly convenient to avoid the explicit filtering operation, there are some practical considerations associated with finite-difference methods that favor the use of an explicit filter. Foremost among these considerations is the issue of truncation error. All finite-difference approximations have an associated truncation error that increases with increasing wavenumber. These errors can be quite severe for the smallest resolved scales, and they will interfere with the dynamics of the small eddies if no corrective action is taken. Years of experience at CTR with a second-order finite-difference scheme for high Reynolds number LES have repeatedly indicated that truncation errors must be minimized in order to obtain acceptable simulation results. While the potential advantages of explicit filtering are rather clear, there is a significant cost associated with its implementation. In particular, explicit filtering reduces the effective resolution of the simulation compared with that afforded by the mesh. The resolution requirements for LES are usually set by the need to capture most of the energy-containing eddies, and if explicit filtering is used, the mesh must be enlarged so that these motions are passed by the filter. Given the high cost of explicit filtering, the following interesting question arises. Since the mesh must be expanded in order to perform the explicit filter, might it be better to take advantage of the increased resolution and simply perform an unfiltered simulation on the larger mesh? The cost of the two approaches is roughly the same, but the philosophy is rather different. In the filtered simulation, resolution is sacrificed in order to minimize the various forms of numerical error. In the unfiltered simulation, the errors are left intact, but they are concentrated at very small scales that could be dynamically unimportant from an LES perspective. Very little is known about this tradeoff, and the objective of this work is to study this relationship in high Reynolds number channel flow simulations using a second-order finite-difference method.
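
    The truncation-error point can be made concrete with two textbook formulas (an illustration, not the paper's analysis): a second-order central difference represents d/dx with modified wavenumber k' = sin(kh)/h, while a top-hat filter of width 2h applied by the trapezoidal rule has transfer function G(k) = (1 + cos kh)/2, so the explicit filter damps precisely the scales the difference operator misrepresents:

    import numpy as np

    h = 1.0  # grid spacing

    # k' = sin(k h)/h for the 2nd-order central difference;
    # G(k) = (1 + cos(k h))/2 for the trapezoidal top-hat filter.
    for frac in (0.25, 0.5, 0.75, 1.0):
        k = frac * np.pi / h
        print(f"k = {frac:.2f}*pi/h:  k'/k = {np.sin(k * h) / (k * h):.3f},  "
              f"G(k) = {0.5 * (1 + np.cos(k * h)):.3f}")
    # Near the grid cutoff k = pi/h the derivative is badly misrepresented
    # (k'/k -> 0) and the explicit filter removes exactly those scales.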

    New Approximability Results for the Robust k-Median Problem

    We consider a robust variant of the classical k-median problem, introduced by Anthony et al. [AnthonyGGN10]. In the Robust k-Median problem, we are given an n-vertex metric space (V, d) and m client sets {S_i ⊆ V}_{i=1}^m. The objective is to open a set F ⊆ V of k facilities such that the worst-case connection cost over all client sets is minimized; in other words, minimize max_i Σ_{v ∈ S_i} d(F, v). Anthony et al. showed an O(log m) approximation algorithm for any metric and APX-hardness even in the case of the uniform metric. In this paper, we show that their algorithm is nearly tight by providing Ω(log m / log log m) approximation hardness, unless NP ⊆ ∩_{δ>0} DTIME(2^(n^δ)). This hardness result holds even for uniform and line metrics. To our knowledge, this is one of the rare cases in which a problem on a line metric is hard to approximate to within a logarithmic factor. We complement the hardness result with an experimental evaluation of different heuristics, showing that very simple heuristics achieve good approximations for realistic classes of instances. Comment: 19 pages
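
    The objective is straightforward to state in code. The tiny sketch below (mine, not the paper's experimental setup) evaluates max_i Σ_{v ∈ S_i} d(F, v) on a small line-metric instance and compares the exhaustive optimum with the kind of simple greedy heuristic the abstract alludes to:

    import itertools
    import random

    def robust_cost(F, clients, d):
        """Worst-case connection cost max_i sum_{v in S_i} d(F, v)."""
        return max(sum(min(d(f, v) for f in F) for v in S) for S in clients)

    d = lambda x, y: abs(x - y)        # line metric on integer points
    V = list(range(20))
    random.seed(0)
    clients = [random.sample(V, 6) for _ in range(4)]   # m = 4 client sets
    k = 2

    # Exhaustive optimum (exponential in general; fine on a toy instance).
    best = min(itertools.combinations(V, k),
               key=lambda F: robust_cost(F, clients, d))
    print("optimal:", best, robust_cost(best, clients, d))

    # Greedy: repeatedly add the facility that most lowers the worst case.
    F = []
    for _ in range(k):
        F.append(min(set(V) - set(F),
                     key=lambda f: robust_cost(F + [f], clients, d)))
    print("greedy: ", tuple(F), robust_cost(F, clients, d))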

    Search for subgrid scale parameterization by projection pursuit regression

    The dependence of subgrid-scale stresses on variables of the resolved field is studied using direct numerical simulations of isotropic turbulence, homogeneous shear flow, and channel flow. The projection pursuit algorithm, a promising new regression tool for high-dimensional data, is used to systematically search through a large collection of resolved variables, such as components of the strain rate, vorticity, velocity gradients at neighboring grid points, etc. For the case of isotropic turbulence, the search algorithm recovers the linear dependence on the rate of strain (which is necessary to transfer energy to subgrid scales) but is unable to determine any other more complex relationship. For shear flows, however, new systematic relations beyond eddy viscosity are found. For the homogeneous shear flow, the results suggest that products of the mean rotation rate tensor with both the fluctuating strain rate and fluctuating rotation rate tensors are important quantities in parameterizing the subgrid-scale stresses. A model incorporating these terms is proposed. When evaluated with direct numerical simulation data, this model significantly increases the correlation between the modeled and exact stresses, as compared with the Smagorinsky model. In the case of channel flow, the stresses are found to correlate with products of the fluctuating strain and rotation rate tensors. The mean rates of rotation or strain do not appear to be important in this case, and the model determined for homogeneous shear flow does not perform well when tested with channel flow data. Many questions remain about the physical mechanisms underlying these findings, about possible Reynolds number dependence, and, given the low level of correlations, about their impact on modeling. Nevertheless, the demonstration of causal relations between SGS stresses and large-scale characteristics of turbulent shear flows, beyond those necessary for energy transfer, provides important insight into the relation between scales in turbulent flows.
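
    Projection pursuit regression looks for directions w whose one-dimensional ridge function g(X·w) best explains the response. The sketch below is a crude stand-in for the actual algorithm (random direction search plus a polynomial fit, on synthetic data), meant only to show the idea of searching a high-dimensional space of resolved variables for a single explanatory projection:

    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic data with hidden ridge structure: y depends on the
    # five inputs only through one projection.
    X = rng.standard_normal((2000, 5))
    w_true = np.array([0.8, -0.6, 0.0, 0.0, 0.0])
    y = np.tanh(X @ w_true) + 0.05 * rng.standard_normal(2000)

    def ridge_r2(w, deg=5):
        """Fit a 1-D polynomial ridge function g(X @ w); return R^2."""
        t = X @ w
        g = np.polyval(np.polyfit(t, y, deg), t)
        return 1.0 - np.var(y - g) / np.var(y)

    # Crude projection search: many random unit directions, keep the best.
    best_r2, best_w = -np.inf, None
    for _ in range(500):
        w = rng.standard_normal(5)
        w /= np.linalg.norm(w)
        r2 = ridge_r2(w)
        if r2 > best_r2:
            best_r2, best_w = r2, w

    print(f"best R^2 = {best_r2:.3f}")
    print("direction:", np.round(best_w, 2))   # ~ +/-(0.8, -0.6, 0, 0, 0)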