
    Two-dimensional Site-Bond Percolation as an Example of Self-Averaging System

    The Harris-Aharony criterion for a statistical model predicts that if the specific-heat exponent α ≥ 0, then the model does not exhibit self-averaging. In the two-dimensional percolation model the exponent is α = −1/2, so, in accordance with the Harris-Aharony criterion, the model can exhibit self-averaging. We study numerically the relative variances R_M and R_χ for the probability M of a site belonging to the "infinite" (maximum) cluster and for the mean finite-cluster size χ. We show that two-dimensional site-bond percolation on the square lattice, where the bonds play the role of the impurity and the sites play the role of the statistical ensemble over which the averaging is performed, exhibits self-averaging. Comment: 15 pages, 5 figures
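    For concreteness, a minimal Monte Carlo sketch of the quantity studied (my illustration, not the paper's code): estimate the relative variance R_M = Var(M)/⟨M⟩² for site-bond percolation on an L × L square lattice, where M is the fraction of sites in the largest cluster. Self-averaging shows up as R_M decreasing with lattice size. The lattice sizes, occupation probabilities, and sample counts below are arbitrary choices well above the percolation threshold.

```python
# Sketch: relative variance R_M = Var(M) / <M>^2 for site-bond percolation
# on an L x L square lattice, estimated by Monte Carlo. Self-averaging is
# visible as R_M shrinking when L grows. Parameters are illustrative.
import random

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

def union(parent, i, j):
    ri, rj = find(parent, i), find(parent, j)
    if ri != rj:
        parent[ri] = rj

def largest_cluster_fraction(L, p_site, p_bond, rng):
    n = L * L
    occ = [rng.random() < p_site for _ in range(n)]
    parent = list(range(n))
    for x in range(L):
        for y in range(L):
            i = x * L + y
            if not occ[i]:
                continue
            # right and down neighbours, each through an independently open bond
            for dx, dy in ((1, 0), (0, 1)):
                nx, ny = x + dx, y + dy
                if nx < L and ny < L:
                    j = nx * L + ny
                    if occ[j] and rng.random() < p_bond:
                        union(parent, i, j)
    sizes = {}
    for i in range(n):
        if occ[i]:
            r = find(parent, i)
            sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values(), default=0) / n

def relative_variance(L, p_site, p_bond, samples=200, seed=1):
    rng = random.Random(seed)
    ms = [largest_cluster_fraction(L, p_site, p_bond, rng) for _ in range(samples)]
    mean = sum(ms) / len(ms)
    var = sum((m - mean) ** 2 for m in ms) / len(ms)
    return var / mean ** 2

# Above the percolation threshold, R_M should decrease as the lattice grows:
r_small = relative_variance(16, 0.8, 0.8)
r_large = relative_variance(48, 0.8, 0.8)
print(r_small, r_large)
```

A decreasing R_M with system size is the numerical signature of self-averaging that the abstract describes.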

    Upper limits on gravitational-wave signals based on loudest events

    Searches for gravitational-wave bursts have often focused on the loudest event(s), both in searching for detections and in determining upper limits on astrophysical populations. Typical upper limits have been reported on event rates and event amplitudes, which can then be translated into constraints on astrophysical populations. We describe the mathematical construction of such upper limits. Comment: 8 pages, 1 figure
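    As a hedged illustration of such a construction (my sketch; the paper's exact formulation may differ): if events arrive as a Poisson process and ε is the detection probability for events louder than the loudest one observed, a rate upper limit at confidence CL over live time T takes the form R_CL = −ln(1 − CL)/(ε T).

```python
# Hedged sketch of a loudest-event rate upper limit, assuming a Poisson
# event process and a known detection efficiency for signals louder than
# the loudest observed event. Illustrative, not the paper's exact method.
import math

def loudest_event_rate_limit(live_time, efficiency, confidence=0.9):
    """Rate upper limit (events per unit of live_time) at the given confidence."""
    if not (0 < efficiency <= 1 and 0 < confidence < 1):
        raise ValueError("efficiency must be in (0,1], confidence in (0,1)")
    return -math.log(1.0 - confidence) / (efficiency * live_time)

# Example: one year of live time, 50% efficiency at the loudest event's amplitude
r90 = loudest_event_rate_limit(live_time=1.0, efficiency=0.5, confidence=0.9)
print(r90)  # about 4.6 events per year
```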

    On quantum error-correction by classical feedback in discrete time

    We consider the problem of correcting the errors incurred in sending quantum information through a noisy quantum environment by using classical information obtained from a measurement on the environment. For discrete-time Markovian evolutions with a fixed measurement on the environment, we give criteria for quantum information to be perfectly correctable and characterize the related feedback. We then analyze the case when perfect correction is not possible and, in the qubit case, find the optimal feedback maximizing the channel fidelity. Comment: 11 pages, 1 figure, RevTeX
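    A toy example of the idea (my illustration, not the paper's formalism): when the environment measurement reveals which Kraus operator acted, and each Kraus operator is proportional to a unitary, classical feedback can undo the error exactly. Here the channel is a bit flip with probability p, and the feedback simply reapplies X whenever the flip occurred.

```python
# Toy sketch of quantum error correction by classical feedback: a bit-flip
# channel whose environment measurement tells us whether X was applied.
# Since X is unitary (and its own inverse), feedback restores the state.
import random

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]

def apply(u, psi):
    return [u[0][0] * psi[0] + u[0][1] * psi[1],
            u[1][0] * psi[0] + u[1][1] * psi[1]]

def fidelity(a, b):
    ov = a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
    return abs(ov) ** 2

rng = random.Random(0)
p = 0.3
psi = [complex(3 / 5), complex(4 / 5)]   # an arbitrary pure qubit state

ok = True
for _ in range(1000):
    flipped = rng.random() < p                     # channel applies X with prob p
    out = apply(X, psi) if flipped else apply(I, psi)
    # classical feedback: the environment measurement reveals 'flipped',
    # so we undo the error (X inverts itself)
    corrected = apply(X, out) if flipped else out
    ok = ok and fidelity(corrected, psi) > 1 - 1e-12
print(ok)  # True: perfect correction in every run
```

When the Kraus operators are not proportional to unitaries, this perfect inversion fails, which is the regime where the paper optimizes the feedback for channel fidelity instead.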

    From Black Strings to Black Holes

    Using recently developed numerical methods, we examine neutral compactified non-uniform black strings which connect to the Gregory-Laflamme critical point. By studying the geometry of the horizon, we give evidence that this branch of solutions may connect to the black hole solutions, as conjectured by Kol. We find that the geometry of the topology-changing solution is likely to be nakedly singular at the point where the horizon radius is zero. We show that these solutions can all be expressed in the coordinate system discussed by Harmark and Obers. Comment: 6 pages, 5 figures, RevTeX

    Caged Black Holes: Black Holes in Compactified Spacetimes II - 5d Numerical Implementation

    We describe the first convergent numerical method to determine static black hole solutions (with S^3 horizon) in 5d compactified spacetime. We obtain a family of solutions parametrized by the ratio of the black hole size to the size of the compact extra dimension. The solutions satisfy the demanding integrated first law. For small black holes our solutions approach the 5d Schwarzschild solution and agree very well with new theoretical predictions for the small corrections to thermodynamics and geometry. The existence of such black holes is thus established. We report on thermodynamic (temperature, entropy, mass, and tension along the compact dimension) and geometric measurements. Most interestingly, for large masses (close to the Gregory-Laflamme critical mass) the scheme destabilizes. We interpret this as evidence for an approach to a physical tachyonic instability. Using extrapolation, we speculate that the system undergoes a first-order phase transition. Comment: 42 pages, 19 eps figures; v2: 3 references added, version to appear in Phys.Rev.

    Adaptive single-shot phase measurements: The full quantum theory

    The phase of a single-mode field can be measured in a single-shot measurement by interfering the field with an effectively classical local oscillator of known phase. The standard technique is to have the local oscillator detuned from the system (heterodyne detection) so that it is sometimes in phase and sometimes in quadrature with the system over the course of the measurement. This enables both quadratures of the system to be measured, from which the phase can be estimated. One of us [H.M. Wiseman, Phys. Rev. Lett. 75, 4587 (1995)] has recently shown that it is possible to make a much better estimate of the phase by using an adaptive technique, in which a resonant local oscillator has its phase adjusted by a feedback loop during the single-shot measurement. In Ref. [H.M. Wiseman and R.B. Killip, Phys. Rev. A 56, 944] we presented a semiclassical analysis of a particular adaptive scheme, which yielded asymptotic results for the phase variance of strong fields. In this paper we present an exact quantum mechanical treatment. This is necessary for calculating the phase variance for fields with small photon numbers, and also for considering figures of merit other than the phase variance. Our results show that an adaptive scheme is always superior to heterodyne detection as far as the variance is concerned. However, the tails of the probability distribution are surprisingly high for this adaptive measurement, so that it does not always result in a smaller probability of error in phase-based optical communication. Comment: 17 pages, LaTeX, 8 figures (concatenated), Submitted to Phys. Rev.

    Static Axisymmetric Vacuum Solutions and Non-Uniform Black Strings

    We describe new numerical methods to solve the static axisymmetric vacuum Einstein equations in more than four dimensions. As an illustration, we study the compactified non-uniform black string phase connected to the uniform strings at the Gregory-Laflamme critical point. We compute solutions with a ratio of maximum to minimum horizon radius up to nine. For a fixed compactification radius, the mass of these solutions is larger than the mass of the classically unstable uniform strings. Thus they cannot be the end state of the instability. Comment: 48 pages, 13 colour figures; v2: references corrected

    In-loop squeezing is real squeezing to an in-loop atom

    Electro-optical feedback can produce an in-loop photocurrent with arbitrarily low noise. This is not regarded as evidence of `real' squeezing because squeezed light cannot be extracted from the loop using a linear beam splitter. Here I show that illuminating an atom (which is a nonlinear optical element) with `in-loop' squeezed light causes line-narrowing of one quadrature of the atom's fluorescence. This has long been regarded as an effect which can only be produced by squeezing. Experiments on atoms using in-loop squeezing should be much easier than those with conventional sources of squeezed light.Comment: 4 pages, 2 figures, submitted to PR

    Multiple-copy state discrimination: Thinking globally, acting locally

    We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, both in the regime of a finite number of copies N and in the asymptotic limit N → ∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here, adaptive measurements are those for which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, adaptive schemes have no advantage over the optimal fixed local scheme, and except for states with less than 2% mixture, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme, for any degree of mixture. Comment: 11 pages, 14 figures
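    As a rough numerical illustration of the gap between collective and local strategies (my sketch for pure states with equal priors; the paper treats mixed states and more refined local schemes): for two states with overlap c, the collective Helstrom error on N copies is (1 − √(1 − c^{2N}))/2, while independently measuring each copy in its optimal basis and taking a majority vote does strictly worse for N > 1.

```python
# Sketch: collective Helstrom error vs. fixed local measurements with
# majority voting, for N copies of one of two pure qubit states with
# overlap c (equal priors). Illustrative only; the paper's states are mixed.
import math

def helstrom_error(overlap, n):
    # Collective measurement on all n copies; the product states have
    # overlap overlap**n.
    return 0.5 * (1.0 - math.sqrt(1.0 - abs(overlap) ** (2 * n)))

def majority_vote_error(overlap, n):
    # Each copy measured in the optimal single-copy basis, then majority vote.
    q = helstrom_error(overlap, 1)            # single-copy error probability
    err = 0.0
    for k in range(n // 2 + 1, n + 1):
        err += math.comb(n, k) * q ** k * (1 - q) ** (n - k)
    if n % 2 == 0:                            # break ties with a coin flip
        k = n // 2
        err += 0.5 * math.comb(n, k) * q ** k * (1 - q) ** (n - k)
    return err

c = math.cos(math.pi / 6)                     # overlap of the two states
for n in (1, 3, 5, 7):
    print(n, helstrom_error(c, n), majority_vote_error(c, n))
# The collective error decays with a faster exponent per copy than the
# error of independent local measurements.
```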

    Using weak values to experimentally determine "negative probabilities" in a two-photon state with Bell correlations

    Bipartite quantum entangled systems can exhibit measurement correlations that violate Bell inequalities, revealing the profoundly counter-intuitive nature of the physical universe. These correlations reflect the impossibility of constructing a joint probability distribution for all values of all the different properties observed in Bell inequality tests. Physically, the impossibility of measuring such a distribution experimentally, as a set of relative frequencies, is due to the quantum back-action of projective measurements. Weakly coupling to a quantum probe, however, produces minimal back-action, and so enables a weak measurement of the projector of one observable, followed by a projective measurement of a non-commuting observable. By this technique it is possible to empirically measure weak-valued probabilities for all of the values of the observables relevant to a Bell test. The marginals of this joint distribution, which we experimentally determine, reproduce all of the observable quantum statistics, including a violation of the Bell inequality, which we independently measure. This is possible because our distribution, like the weak values for projectors on which it is built, is not constrained to the interval [0, 1]. It was first pointed out by Feynman that, for explaining singlet-state correlations within "a [local] hidden variable view of nature ... everything works fine if we permit negative probabilities". However, there are infinitely many such theories. Our method, involving "weak-valued probabilities", singles out a unique set of probabilities, and moreover does so empirically. Comment: 9 pages, 3 figures
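    A minimal single-qubit illustration of a weak-valued probability (my sketch, far simpler than the two-photon experiment): with preselection |ψ⟩ and postselection |f⟩, the weak value of a projector Π is ⟨f|Π|ψ⟩/⟨f|ψ⟩, and its real part need not lie in [0, 1]. The specific states and angles below are arbitrary choices that make the value negative.

```python
# Sketch: the weak value of a projector, conditioned on a later projective
# measurement, is Pw = <f|P|psi> / <f|psi>. Its real part can be negative,
# which is the sense in which weak-valued "probabilities" escape [0, 1].
import math

def ket(theta):
    # real qubit state cos(theta)|0> + sin(theta)|1>
    return [complex(math.cos(theta)), complex(math.sin(theta))]

def inner(a, b):
    return a[0].conjugate() * b[0] + a[1].conjugate() * b[1]

def weak_projector_value(psi, proj_theta, final):
    """Weak value of the projector onto ket(proj_theta), with preselection
    psi and postselection final."""
    p = ket(proj_theta)
    num = inner(final, p) * inner(p, psi)     # <f|P|psi> = <f|p><p|psi>
    den = inner(final, psi)
    return num / den

psi = ket(0.0)                     # prepare |0>
final = ket(math.pi / 2 - 0.2)     # postselect nearly orthogonal to |0>
pw = weak_projector_value(psi, -0.3, final)
print(pw.real)                     # a negative "probability"
```

Because the projectors onto ket(θ) and its orthogonal complement sum to the identity, the corresponding weak values always sum to 1, so a negative value forces its partner above 1, mirroring the unconstrained joint distribution described in the abstract.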