
    Security of practical private randomness generation

    Measurements on entangled quantum systems necessarily yield outcomes that are intrinsically unpredictable if they violate a Bell inequality. This property can be used to generate certified randomness in a device-independent way, i.e., without making detailed assumptions about the internal working of the quantum devices used to generate the random numbers. Furthermore, these numbers are also private, i.e., they appear random not only to the user, but also to any adversary that might possess a perfect description of the devices. Since this process requires a small initial random seed, one usually speaks of device-independent randomness expansion. The purpose of this paper is twofold. First, we point out that in most real, practical situations, where the concept of device-independence is used as a protection against unintentional flaws or failures of the quantum apparatuses, it is sufficient to show that the generated string is random with respect to an adversary that holds only classical side information, i.e., proving randomness against quantum side information is not necessary. Furthermore, the initial random seed does not need to be private with respect to the adversary, provided that it is generated in a way that is independent of the measured systems. The devices will nevertheless generate cryptographically secure randomness that cannot be predicted by the adversary, and thus one can, given access to free public randomness, talk about private randomness generation. The theoretical tools to quantify the generated randomness according to these criteria were already introduced in [S. Pironio et al., Nature 464, 1021 (2010)], but the final results were improperly formulated. The second aim of this paper is to correct this inaccurate formulation and thereby lay out a precise theoretical framework for practical device-independent randomness expansion. Comment: 18 pages. v3: important changes: the present version focuses on security against classical side information, and a discussion of the significance of these results has been added. v4: minor changes. v5: small typos corrected
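
    As an illustration of how such certified randomness is quantified, the sketch below (assumptions: Python, and the asymptotic CHSH bound f(I) = 1 - log2[1 + sqrt(2 - I^2/4)] reported in S. Pironio et al., Nature 464, 1021 (2010); it is not the corrected finite-statistics analysis developed in this paper) converts an observed CHSH value into a certified randomness rate per measurement run.

        # Minimal sketch, not code from the paper: asymptotic certified randomness per
        # run as a function of the CHSH value I, using the Pironio et al. (2010) bound
        # f(I) = 1 - log2(1 + sqrt(2 - I^2/4)).
        import math

        def min_entropy_rate(chsh_value: float) -> float:
            """Certified random bits per run for 2 <= I <= 2*sqrt(2)."""
            if not 2.0 <= chsh_value <= 2.0 * math.sqrt(2.0):
                raise ValueError("CHSH value must lie in [2, 2*sqrt(2)]")
            # max(...) guards against floating-point round-off at the Tsirelson bound
            return 1.0 - math.log2(1.0 + math.sqrt(max(0.0, 2.0 - chsh_value ** 2 / 4.0)))

        print(min_entropy_rate(2.0))               # classical bound: 0 certified bits
        print(min_entropy_rate(2.8))               # strong violation: about 0.74 bits per run
        print(min_entropy_rate(2 * math.sqrt(2)))  # Tsirelson bound: 1 bit per run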

    Two-dimensional lattice-fluid model with water-like anomalies

    We investigate a lattice-fluid model defined on a two-dimensional triangular lattice, with the aim of reproducing qualitatively some anomalous properties of water. Model molecules are of the "Mercedes Benz" type, i.e., they possess a D3 (equilateral triangle) symmetry, with three bonding arms. Bond formation depends both on orientation and on local density. We work out phase diagrams, response functions, and stability limits for the liquid phase, making use of a generalized first-order approximation on a triangle cluster, whose accuracy is verified, in some cases, by Monte Carlo simulations. The phase diagram displays one ordered (solid) phase, which is less dense than the liquid one. At fixed pressure, the liquid-phase response functions show the typical anomalous behavior observed in liquid water, while, in the supercooled region, a reentrant spinodal is observed. Comment: 9 pages, 1 table, 7 figures
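
    For readers unfamiliar with the simulation side, the toy sketch below illustrates the kind of single-site Metropolis update used in lattice-model Monte Carlo; the bonding rule, coupling, chemical potential, and lattice details here are placeholders and are not the Hamiltonian of this paper.

        # Toy Metropolis update for an orientational lattice gas on a periodic lattice
        # with six neighbors per site (placeholder model, NOT the paper's Hamiltonian).
        import math
        import random

        L = 12                       # linear lattice size
        J, MU, BETA = 1.0, 0.5, 2.0  # placeholder bond energy, field, inverse temperature
        # site state: 0 = empty, 1..3 = occupied with one of three arm orientations
        state = [[random.randint(0, 3) for _ in range(L)] for _ in range(L)]

        NEIGHBORS = ((1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1))

        def local_energy(s, i, j):
            """Placeholder energy terms involving site (i, j): an on-site term when the
            site is occupied, plus a bond for each occupied neighbor whose orientation
            'matches' under an arbitrary illustrative rule."""
            if s[i][j] == 0:
                return 0.0
            e = -MU
            for di, dj in NEIGHBORS:
                n = s[(i + di) % L][(j + dj) % L]
                if n != 0 and (s[i][j] + n) % 3 == 0:
                    e -= J
            return e

        def metropolis_sweep(s):
            for _ in range(L * L):
                i, j = random.randrange(L), random.randrange(L)
                old = s[i][j]
                e_old = local_energy(s, i, j)
                s[i][j] = random.randint(0, 3)      # propose a new occupation/orientation
                de = local_energy(s, i, j) - e_old
                if de > 0.0 and random.random() >= math.exp(-BETA * de):
                    s[i][j] = old                   # reject the move

        for _ in range(100):
            metropolis_sweep(state)
        occupied = sum(1 for row in state for v in row if v != 0)
        print(f"occupied fraction after 100 sweeps: {occupied / (L * L):.3f}")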

    Preparation contextuality powers parity-oblivious multiplexing

    In a noncontextual hidden variable model of quantum theory, hidden variables determine the outcomes of every measurement in a manner that is independent of how the measurement is implemented. Using a generalization of this notion to arbitrary operational theories and to preparation procedures, we demonstrate that a particular two-party information-processing task, "parity-oblivious multiplexing," is powered by contextuality in the sense that there is a limit to how well any theory described by a noncontextual hidden variable model can perform. This bound constitutes a "noncontextuality inequality" that is violated by quantum theory. We report an experimental violation of this inequality in good agreement with the quantum predictions. Our experimental results also demonstrate better-than-classical performance for this task and provide the first demonstration of 2-to-1 and 3-to-1 quantum random access codes. Comment: 7 pages, 2 figures; published version with supplementary material included as appendices
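
    To make the gap concrete, the following sketch prints the values usually quoted for this task: a noncontextual model cannot exceed an average success probability of (1/2)(1 + 1/n) for n-bit parity-oblivious multiplexing, while the quantum n-to-1 random access codes reach (1/2)(1 + 1/sqrt(n)) for n = 2, 3. These expressions are assumed standard results and are stated for orientation rather than excerpted from the paper.

        # Illustrative comparison (assumed standard bounds, not excerpted from the paper):
        # noncontextual limit vs. quantum n-to-1 random access code success probability.
        import math

        def noncontextual_bound(n: int) -> float:
            return 0.5 * (1.0 + 1.0 / n)

        def quantum_rac_value(n: int) -> float:
            return 0.5 * (1.0 + 1.0 / math.sqrt(n))

        for n in (2, 3):
            print(f"n = {n}: noncontextual <= {noncontextual_bound(n):.4f}, "
                  f"quantum RAC = {quantum_rac_value(n):.4f}")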

    An efficient method for the incompressible Navier-Stokes equations on irregular domains with no-slip boundary conditions, high order up to the boundary

    Common efficient schemes for the incompressible Navier-Stokes equations, such as projection or fractional step methods, have limited temporal accuracy as a result of matrix splitting errors, or introduce errors near the domain boundaries (which destroy uniform convergence to the solution). In this paper we recast the incompressible (constant density) Navier-Stokes equations (with the velocity prescribed at the boundary) as an equivalent system for the primary variables, velocity and pressure. We do this in the usual way away from the boundaries, by replacing the incompressibility condition on the velocity with a Poisson equation for the pressure. The key difference from the usual approaches occurs at the boundaries, where we use boundary conditions that unequivocally allow the pressure to be recovered from knowledge of the velocity at any fixed time. This avoids the common difficulty of an apparently over-determined Poisson problem. Since in this alternative formulation the pressure can be accurately and efficiently recovered from the velocity, the recast equations are ideal for numerical marching methods. The new system can be discretized using a variety of methods, in principle to any desired order of accuracy. In this work we illustrate the approach with a 2-D second-order finite difference scheme on a Cartesian grid, and devise an algorithm to solve the equations on domains with curved (non-conforming) boundaries, including a case with a non-trivial topology (a circular obstruction inside the domain). This algorithm achieves second-order accuracy (in the L-infinity norm) for both the velocity and the pressure. The scheme has a natural extension to 3-D. Comment: 50 pages, 14 figures
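
    The reformulation away from the boundary follows a standard manipulation: taking the divergence of the constant-density momentum equation (density scaled to one) and using the incompressibility constraint eliminates both the time derivative and the viscous term, leaving a Poisson equation for the pressure. A sketch of that step (the paper's actual contribution is the boundary treatment, which is not reproduced here):

        % Momentum equation and incompressibility (density scaled to 1):
        %   u_t + (u . grad) u = -grad p + nu * Lap u + f,     div u = 0.
        % Taking the divergence, and using div u_t = 0 and div(Lap u) = Lap(div u) = 0:
        \nabla \cdot \big[\partial_t u + (u\cdot\nabla)u\big]
            = \nabla\cdot\big[-\nabla p + \nu\,\Delta u + f\big]
        \;\Longrightarrow\;
        \Delta p = \nabla\cdot f \;-\; \nabla\cdot\big[(u\cdot\nabla)u\big].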

    Effect of local environment and stellar mass on galaxy quenching and morphology at 0.5 < z < 2.0

    We study galactic star-formation activity as a function of environment and stellar mass over $0.5 < z < 2.0$ using the FourStar Galaxy Evolution (ZFOURGE) survey. We estimate the galaxy environment using a Bayesian-motivated measure of the distance to the third nearest neighbor for galaxies down to the stellar mass completeness of our survey, $\log(M/M_\odot) > 9\,(9.5)$ at $z = 1.3\,(2.0)$. This method, when applied to a mock catalog with the photometric-redshift precision of our survey ($\sigma_z/(1+z) \lesssim 0.02$), recovers galaxies in low- and high-density environments accurately. We quantify the environmental quenching efficiency, and show that at $z > 0.5$ it depends on galaxy stellar mass, demonstrating that the effects of quenching related to (stellar) mass and environment are not separable. In high-density environments, the mass and environmental quenching efficiencies are comparable for massive galaxies ($\log(M/M_\odot) \gtrsim 10.5$) at all redshifts. For lower mass galaxies ($\log(M/M_\odot) \lesssim 10$), the environmental quenching efficiency is very low at $z \gtrsim 1.5$, but increases rapidly with decreasing redshift. Environmental quenching can account for nearly all quiescent lower mass galaxies ($\log(M/M_\odot) \sim 9$-$10$), which appear primarily at $z \lesssim 1.0$. The morphologies of lower mass quiescent galaxies are inconsistent with those expected of recently quenched star-forming galaxies. Some environmental process must transform the morphologies on similar timescales as the environmental quenching itself. The evolution of the environmental quenching favors models that combine gas starvation (as galaxies become satellites) with gas exhaustion through star formation and outflows ("overconsumption"), and additional processes such as galaxy interactions, tidal stripping, and disk fading to account for the morphological differences between the quiescent and star-forming galaxy populations. Comment: 29 pages, 15 figures, accepted for publication in ApJ
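
    For reference, the quenching efficiencies being compared are conventionally built from quiescent fractions; the sketch below uses the standard definitions (an assumed convention, with placeholder numbers rather than ZFOURGE measurements): the environmental efficiency is the excess quiescent fraction in dense relative to sparse environments, normalized by the star-forming fraction available to be quenched, and the mass efficiency is the analogous quantity comparing a given mass bin to a low-mass reference at fixed environment.

        # Standard quenching-efficiency definitions (assumed convention; the input
        # quiescent fractions below are placeholders, not ZFOURGE measurements).
        def env_quenching_efficiency(fq_dense: float, fq_sparse: float) -> float:
            """Fraction of would-be star-forming galaxies quenched by environment."""
            return (fq_dense - fq_sparse) / (1.0 - fq_sparse)

        def mass_quenching_efficiency(fq_mass: float, fq_ref: float) -> float:
            """Same construction, comparing a mass bin to a low-mass reference bin
            at fixed (low-density) environment."""
            return (fq_mass - fq_ref) / (1.0 - fq_ref)

        print(env_quenching_efficiency(fq_dense=0.60, fq_sparse=0.35))   # ~0.38
        print(mass_quenching_efficiency(fq_mass=0.35, fq_ref=0.10))      # ~0.28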

    Generating Equilibrium Dark Matter Halos: Inadequacies of the Local Maxwellian Approximation

    We describe an algorithm for constructing N-body realizations of equilibrium spherical systems. A general form for the mass density rho(r) is used, making it possible to represent most of the popular density profiles found in the literature, including the cuspy density profiles found in high-resolution cosmological simulations. We demonstrate explicitly that our models are in equilibrium. In contrast, many existing N-body realizations of isolated systems have been constructed under the assumption that the local velocity distribution is Maxwellian. We show that a Maxwellian halo with an initial r^{-1} central density cusp immediately develops a constant-density core. Moreover, after just one crossing time the orbital anisotropy has changed over the entire system, and the initially isotropic model becomes radially anisotropic. These effects have important implications for many studies, including the survival of substructure in cold dark matter (CDM) models. Comparing the evolution and mass-loss rate of isotropic Maxwellian and self-consistent Navarro, Frenk, & White (NFW) satellites orbiting inside a static host CDM potential, we find that the former are unrealistically susceptible to tidal disruption. Thus, recent studies of the mass-loss rate and disruption timescales of substructure in CDM models may be compromised by the use of the Maxwellian approximation. We also demonstrate that a radially anisotropic, self-consistent NFW satellite loses mass at a rate several times higher than that of its isotropic counterpart in the same external tidal field and on the same orbit. Comment: Accepted for publication in ApJ, 10 pages, 6 figures, LaTeX (uses emulateapj5.sty)
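
    The self-consistent alternative to the local Maxwellian approximation, in the isotropic case, is to draw velocities from the distribution function obtained by Eddington inversion of the density profile. The formula is quoted below as the standard result this kind of algorithm relies on (with \Psi the relative potential and \mathcal{E} the relative energy); it is stated here for orientation rather than copied from the paper.

        f(\mathcal{E}) \;=\; \frac{1}{\sqrt{8}\,\pi^{2}}
        \left[\int_{0}^{\mathcal{E}} \frac{d^{2}\rho}{d\Psi^{2}}\,
        \frac{d\Psi}{\sqrt{\mathcal{E}-\Psi}}
        \;+\; \frac{1}{\sqrt{\mathcal{E}}}\left(\frac{d\rho}{d\Psi}\right)_{\Psi=0}\right]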

    Intermittent search strategies

    This review examines intermittent target search strategies, which combine phases of slow motion, allowing the searcher to detect the target, and phases of fast motion during which targets cannot be detected. We first show that intermittent search strategies are actually widely observed at various scales. At the macroscopic scale, this is for example the case of animals looking for food; at the microscopic scale, intermittent transport patterns are involved in the reaction pathways of DNA-binding proteins as well as in intracellular transport. Second, we introduce generic stochastic models, which show that intermittent strategies are efficient: they make it possible to minimize the search time. This suggests that the intrinsic efficiency of intermittent search strategies could justify their frequent observation in nature. Last, beyond these modeling aspects, we propose that intermittent strategies could also be used in a broader context to design and accelerate search processes. Comment: 72 pages, review article
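
    As a concrete, deliberately minimal illustration of the generic models discussed, the sketch below simulates a one-dimensional searcher on a ring that alternates between a slow diffusive phase, during which the target is detected on close approach, and a fast ballistic phase that is blind to the target. All parameter values and the dynamics are placeholders chosen for illustration, not a model taken from the review.

        # Toy 1-D intermittent search on a ring (illustrative placeholder model).
        import random

        RING = 200.0            # ring circumference; target sits at position 0
        DETECT = 1.0            # detection radius, active only in the slow phase
        SIGMA = 1.0             # diffusive step scale in the slow phase
        V_FAST = 10.0           # speed of the fast, non-detecting phase
        TAU_SLOW, TAU_FAST = 20.0, 5.0   # mean phase durations
        DT = 0.1

        def search_time(rng: random.Random) -> float:
            x = rng.uniform(0.0, RING)
            t, slow = 0.0, True
            phase_left = rng.expovariate(1.0 / TAU_SLOW)
            direction = rng.choice((-1.0, 1.0))
            while True:
                if slow:
                    x = (x + rng.gauss(0.0, SIGMA) * DT ** 0.5) % RING
                    if min(x, RING - x) < DETECT:      # detection only while scanning
                        return t
                else:
                    x = (x + direction * V_FAST * DT) % RING
                t += DT
                phase_left -= DT
                if phase_left <= 0.0:                  # switch phase
                    slow = not slow
                    phase_left = rng.expovariate(1.0 / (TAU_SLOW if slow else TAU_FAST))
                    direction = rng.choice((-1.0, 1.0))

        rng = random.Random(0)
        times = [search_time(rng) for _ in range(100)]
        print(f"mean search time over 100 runs: {sum(times) / len(times):.1f}")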

    Structure of the Dead Sea Pull-Apart Basin From Gravity Analyses

    Analyses and modeling of gravity data in the Dead Sea pull-apart basin reveal the geometry of the basin and constrain models for its evolution. The basin is located within a valley which defines the Dead Sea transform plate boundary between Africa and Arabia. Three hundred kilometers of continuous marine gravity data, collected in a lake occupying the northern part of the basin, were integrated with land gravity data from Israel and Jordan to provide coverage to 30 km on either side of the basin. Free-air and variable-density Bouguer anomaly maps, a horizontal first-derivative map of the Bouguer anomaly, and gravity models of profiles across and along the basin were used with existing geological and geophysical information to infer the structure of the basin. The basin is a long (132 km), narrow (7-10 km), and deep (≤10 km) full graben which is bounded by subvertical faults along its long sides. The Bouguer anomaly along the axis of the basin decreases gradually from both the northern and southern ends, suggesting that the basin sags toward the center and is not bounded by faults at its narrow ends. The surface expression of the basin is wider at its center (≤16 km) and covers the entire width of the transform valley due to the presence of shallower blocks that dip toward the basin. These blocks are interpreted to represent the widening of the basin by a passive collapse of the valley floor as the full graben deepened. The collapse was probably facilitated by movement along the normal faults that bound the transform valley. We present a model in which the geometry of the Dead Sea basin (i.e., a full graben with relative along-axis symmetry) may be controlled by stretching of the entire (brittle and ductile) crust along its long axis. There is no evidence for the participation of the upper mantle in the deformation of the basin, and the Moho is not significantly elevated. The basin is probably close to being isostatically uncompensated, and thermal effects related to stretching are expected to be minimal. The amount of crustal stretching calculated from this model is 21 km, and the stretching factor is 1.19. If the rate of crustal stretching is similar to the rate of relative plate motion (6 mm/yr), the basin should be ~3.5 m.y. old, in accord with geological evidence.
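
    The closing arithmetic can be checked directly from the quoted numbers: a stretching factor of 1.19 applied to the present 132 km basin length implies roughly 21 km of along-axis extension, and dividing that extension by the 6 mm/yr relative plate motion gives the quoted ~3.5 m.y. age. A quick check (assuming the stretching factor is defined as present length over initial length):

        # Consistency check using only numbers quoted in the abstract.
        length_now_km = 132.0     # present basin length
        beta = 1.19               # stretching factor (assumed: present / initial length)
        rate_mm_per_yr = 6.0      # relative plate motion

        extension_km = length_now_km * (1.0 - 1.0 / beta)   # ~21 km of stretching
        age_myr = extension_km / rate_mm_per_yr             # km / (mm/yr) equals Myr
        print(f"extension ~ {extension_km:.0f} km, implied age ~ {age_myr:.1f} m.y.")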

    Phase encoding schemes for measurement device independent quantum key distribution and basis-dependent flaw

    In this paper, we study the unconditional security of the so-called measurement device independent quantum key distribution (MDIQKD) with the basis-dependent flaw in the context of phase encoding schemes. We propose two schemes for the phase encoding: the first one employs a phase-locking technique with the use of non-phase-randomized coherent pulses, and the second one converts standard BB84 phase-encoding pulses into polarization modes. We prove the unconditional security of these schemes, and we also simulate the key generation rate based on simple device models that accommodate imperfections. Our simulation results show the feasibility of these schemes with current technologies and highlight the importance of the state preparation with good fidelity between the density matrices in the two bases. Since the basis-dependent flaw is a problem not only for MDIQKD but also for standard QKD, our work highlights the importance of an accurate signal source in practical QKD systems. Note: We include the erratum of this paper in Appendix C. The correction does not affect the validity of the main conclusions reported in the paper, which are the importance of the state preparation in MDIQKD and the fact that our schemes can generate the key with the practical channel model that we have assumed. Comment: We include the erratum of this paper in Appendix C. The correction does not affect the validity of the main conclusions reported in the paper
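
    For orientation only: secret-key rates in security proofs of this type are typically assembled from a GLLP-style expression that combines the single-photon gain, the single-photon phase error rate, and the cost of error correction. The sketch below shows that generic shape with placeholder numbers; it is not the specific rate formula or channel model simulated in this paper.

        # Schematic GLLP-style key-rate expression (generic form, placeholder values;
        # not the rate formula derived in this paper).
        import math

        def h2(x: float) -> float:
            """Binary entropy function."""
            if x <= 0.0 or x >= 1.0:
                return 0.0
            return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

        def key_rate(q11: float, e11_phase: float, q_total: float, e_bit: float,
                     f_ec: float = 1.16) -> float:
            """Privacy amplification on the single-photon contribution minus the
            (inefficient) error-correction cost on the whole sifted key."""
            return q11 * (1.0 - h2(e11_phase)) - f_ec * q_total * h2(e_bit)

        print(key_rate(q11=5e-4, e11_phase=0.03, q_total=1e-3, e_bit=0.02))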