
    Does nonlinear metrology offer improved resolution? Answers from quantum information theory

    A number of authors have suggested that nonlinear interactions can enhance resolution of phase shifts beyond the usual Heisenberg scaling of 1/n, where n is a measure of resources such as the number of subsystems of the probe state or the mean photon number of the probe state. These suggestions are based on calculations of "local precision" for particular nonlinear schemes. However, we show that there is no simple connection between the local precision and the average estimation error for these schemes, leading to a scaling puzzle. This puzzle is partially resolved by a careful analysis of iterative implementations of the suggested nonlinear schemes. However, it is shown that the suggested nonlinear schemes are still limited to an exponential scaling in $\sqrt{n}$. (This scaling may be compared to the exponential scaling in n which is achievable if multiple passes are allowed, even for linear schemes.) The question of whether nonlinear schemes may have a scaling advantage in the presence of loss is left open. Our results are based on a new bound for average estimation error that depends on (i) an entropic measure of the degree to which the probe state can encode a reference phase value, called the G-asymmetry, and (ii) any prior information about the phase shift. This bound is asymptotically stronger than bounds based on the variance of the phase shift generator. The G-asymmetry is also shown to directly bound the average information gained per estimate. Our results hold for any prior distribution of the shift parameter, and generalise to estimates of any shift generated by an operator with discrete eigenvalues. Comment: 8 pages
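
    To see where the claimed super-Heisenberg scalings come from, it helps to recall the standard generator-variance ("local precision") bound from quantum estimation theory. The sketch below is illustrative background only, not the paper's G-asymmetry bound.

```latex
% Local-precision bound for a phase shift generated by G
% (standard generator-variance form, shown for orientation):
\delta\varphi \;\ge\; \frac{1}{2\,\Delta G}
% Linear generator on n subsystems:  \Delta G = O(n)
%   => \delta\varphi = \Omega(1/n)      (Heisenberg scaling)
% Nonlinear generator, e.g. G = (\sum_j g_j)^k:  \Delta G = O(n^k)
%   => \delta\varphi = \Omega(1/n^k)    (the claimed enhancement,
%      which the paper shows does not survive for the average error)
```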

    Robust scaling in fusion science: case study for the L-H power threshold

    In regression analysis for deriving scaling laws in the context of fusion studies, standard regression methods are usually applied, of which ordinary least squares (OLS) is the most popular. However, concerns have been raised with respect to several assumptions underlying OLS in its application to fusion data. More sophisticated statistical techniques are available, but they are not widely used in the fusion community, and, moreover, the predictions of scaling laws may vary significantly depending on the particular regression technique. Therefore, we have developed a new regression method, which we call geodesic least squares regression (GLS), that is robust in the presence of significant uncertainty on both the data and the regression model. The method is based on probabilistic modeling of all variables involved in the scaling expression, using adequate probability distributions and a natural similarity measure between them (the geodesic distance). In this work we revisit the scaling law for the power threshold of the L-to-H transition in tokamaks, using data from the multi-machine ITPA databases. Depending on model assumptions, OLS can yield different predictions of the power threshold for ITER. In contrast, GLS regression delivers consistent results. Consequently, given the ubiquity and importance of scaling laws and parametric dependence studies in fusion research, GLS regression is proposed as a robust and easily implemented alternative to classic regression techniques.
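
    A minimal sketch of the GLS idea, assuming univariate Gaussian models for the observed and predicted response and the closed-form Fisher-Rao geodesic distance between normals reported in the literature; the power-law model, parameter values, and all names below are illustrative, not the authors' implementation.

```python
# Sketch of geodesic least squares (GLS) regression: fit a scaling law by
# minimizing geodesic distances between observed and modeled Gaussians.
# Illustrative assumptions: univariate normals, known noise levels.
import numpy as np
from scipy.optimize import minimize

def rao_distance(mu1, sig1, mu2, sig2):
    """Fisher-Rao geodesic distance between N(mu1, sig1^2) and N(mu2, sig2^2)."""
    num = (mu1 - mu2) ** 2 + 2.0 * (sig1 - sig2) ** 2
    den = (mu1 - mu2) ** 2 + 2.0 * (sig1 + sig2) ** 2
    return 2.0 * np.sqrt(2.0) * np.arctanh(np.sqrt(num / den))

def gls_fit(x, y, sig_obs, sig_model):
    """Fit a power-law scaling y = b0 * x**b1 by minimizing the summed
    squared geodesic distances between data and model distributions."""
    def cost(params):
        b0, b1 = params
        return np.sum(rao_distance(y, sig_obs, b0 * x ** b1, sig_model) ** 2)
    return minimize(cost, x0=[1.0, 1.0], method="Nelder-Mead").x

# Synthetic example: true law y = 2 * x^0.7 with heteroscedastic noise.
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 50)
y = 2.0 * x ** 0.7 + rng.normal(0.0, 0.2 * np.sqrt(x))
print(gls_fit(x, y, sig_obs=0.2 * np.sqrt(x), sig_model=0.1 * np.ones_like(x)))
```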

    Limited benefit of cooperation in distributed relative localization

    Important applications in robotic and sensor networks require distributed algorithms to solve the so-called relative localization problem: a node-indexed vector has to be reconstructed from measurements of differences between neighbor nodes. In a recent note, we have studied the estimation error of a popular gradient descent algorithm, showing that the mean square error has a minimum at a finite time, after which the performance worsens. This paper proposes a suitable modification of this algorithm incorporating more realistic "a priori" information on the position. The new algorithm exhibits a performance that decreases monotonically to the optimal one. Furthermore, we show that the optimal performance is approximated, up to a $1 + \epsilon$ factor, within a time which is independent of the graph and of the number of nodes. This convergence time is closely related to the minimum exhibited by the previous algorithm, and both lead to the following conclusion: in the presence of noisy data, cooperation is only useful up to a certain limit. Comment: 11 pages, 2 figures, submitted to conference
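
    A minimal sketch of the plain gradient-descent estimator described above, under assumed specifics (a ring graph, unit edge weights, a fixed step size); it reconstructs a node vector from noisy difference measurements and lets one watch the mean square error dip before settling. Illustrative only, not the paper's code.

```python
# Gradient descent for relative localization: recover a node vector x
# from noisy edge differences b_e = x_u - x_v + noise, on a ring graph.
import numpy as np

n = 50
rng = np.random.default_rng(1)
x_true = rng.normal(0.0, 1.0, n)
x_true -= x_true.mean()              # only relative values are observable

# Incidence matrix A of a ring: edge e connects node e to node (e+1) mod n.
A = np.zeros((n, n))
for e in range(n):
    A[e, e], A[e, (e + 1) % n] = 1.0, -1.0

b = A @ x_true + rng.normal(0.0, 0.3, n)     # noisy difference measurements

x_hat = np.zeros(n)
alpha = 0.1                                  # step size (stable: < 2/lambda_max)
for t in range(1, 2001):
    x_hat -= alpha * A.T @ (A @ x_hat - b)   # gradient of 0.5 * ||A x - b||^2
    if t in (10, 100, 1000, 2000):
        mse = np.mean((x_hat - x_true) ** 2)
        # the MSE can reach a minimum at finite t before settling at the
        # least-squares error -- the phenomenon the paper analyzes
        print(f"t={t:5d}  MSE={mse:.4f}")
```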

    Quantum metrology with open dynamical systems

    This paper studies quantum limits to dynamical sensors in the presence of decoherence. A modified purification approach is used to obtain tighter quantum detection and estimation error bounds for optical phase sensing and optomechanical force sensing. When optical loss is present, these bounds are found to obey shot-noise scalings for arbitrary quantum states of light under certain realistic conditions, thus ruling out the possibility of asymptotic Heisenberg error scalings with respect to the average photon flux under those conditions. The proposed bounds are expected to be approachable using current quantum optics technology. Comment: v1: submitted to ISIT 2013; v2: updated with new results on detection bounds; v3: minor update, submitted; v4: accepted by New J. Phys.
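
    For orientation, loss-limited phase-sensing bounds of this kind typically take the following shape; this is an illustrative form under assumed notation, not the paper's exact bound.

```latex
% Typical shape of a loss-limited phase-sensing bound
% (\eta = optical power transmissivity, N = mean photon number;
%  notation assumed here for illustration):
\delta\phi \;\gtrsim\; \sqrt{\frac{1-\eta}{\eta\, N}}
% For any fixed loss \eta < 1 this is shot-noise (1/\sqrt{N}) scaling,
% ruling out asymptotic Heisenberg (1/N) scaling in N.
```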

    Photonic polarization gears for ultra-sensitive angular measurements

    Quantum metrology holds great promise for enhancing measurement precision, but is unlikely to become practical in the near future. Its concepts can nevertheless inspire classical or hybrid methods of immediate value. Here, we demonstrate NOON-like photonic states of m quanta of angular momentum up to m=100, in a setup that acts as a "photonic gear", converting, for each photon, a mechanical rotation of an angle $\theta$ into an amplified rotation of the optical polarization by $m\theta$, corresponding to a "super-resolving" Malus' law. We show that this effect leads to single-photon angular measurements with the same precision as polarization-only quantum strategies with m photons, but robust to photon losses. Moreover, we combine the gear effect with the quantum enhancement due to entanglement, thus exploiting the advantages of both approaches. The high "gear ratio" m boosts the current state of the art of optical non-contact angular measurements by almost two orders of magnitude. Comment: 10 pages, 4 figures, + supplementary information (10 pages, 3 figures)
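
    The gear effect can be summarized by the following sketch of the super-resolving Malus' law; this is standard fringe-sensitivity reasoning with the abstract's symbols, not a result copied from the paper.

```latex
% A mechanical rotation \theta maps to a polarization rotation m\theta,
% so the single-photon detection probability oscillates m times faster:
p(\theta) \;=\; \cos^{2}(m\,\theta)
% Steeper fringes improve the angular sensitivity by the gear ratio m:
\delta\theta \;\sim\; \frac{1}{m\sqrt{\nu}},
\quad \nu = \text{number of detected probe photons}
```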

    The Kalman-Levy filter

    The Kalman filter combines forecasts and new observations to obtain an estimate which is optimal in the sense of a minimum average quadratic error. The Kalman filter has two main restrictions: (i) the dynamical system is assumed linear and (ii) forecasting errors and observational noises are taken to be Gaussian. Here, we offer an important generalization to the case where errors and noises have heavy-tailed distributions such as power laws and Lévy laws. The main tool needed to solve this "Kalman-Lévy" filter is the "tail-covariance" matrix, which generalizes the covariance matrix in the case where it is mathematically ill-defined (i.e. for power-law tail exponents $\mu \leq 2$). We present the general solution and discuss its properties on pedagogical examples. The standard Kalman-Gaussian filter is recovered for the case $\mu = 2$. The optimal Kalman-Lévy filter is found to deviate substantially from the standard Kalman-Gaussian filter as $\mu$ deviates from 2. As $\mu$ decreases, novel observations are assimilated with less and less weight, as a small exponent $\mu$ implies large errors with significant probabilities. In terms of implementation, the price to pay for the presence of heavy-tailed noise distributions is that the standard linear formalism valid for the Gaussian case is transformed into a nonlinear matrix equation for the Kalman-Lévy filter. Direct numerical experiments in the univariate case confirm our theoretical predictions. Comment: 41 pages, 9 figures, correction of errors in the general multivariate case
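
    For reference, the sketch below shows the standard univariate Kalman-Gaussian cycle that the Kalman-Lévy filter reduces to at $\mu = 2$; the comments flag the step that becomes nonlinear for $\mu < 2$. The model and parameter values are illustrative.

```python
# Univariate Kalman-Gaussian filter (the mu = 2 limit of the Kalman-Levy
# filter). In the Levy case the variances below are replaced by
# "tail-covariances" and the gain equation becomes nonlinear.
import numpy as np

def kalman_step(x, P, z, a, q, r):
    """One predict/update cycle for x_{t+1} = a*x_t + w, z_t = x_t + v,
    with Var(w) = q and Var(v) = r."""
    # Predict
    x_pred = a * x
    P_pred = a * a * P + q
    # Update (linear gain -- this is the step that turns nonlinear for mu < 2)
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

rng = np.random.default_rng(2)
a, q, r = 0.95, 0.1, 0.5
x_true, x_est, P = 0.0, 0.0, 1.0
for t in range(5):
    x_true = a * x_true + rng.normal(0.0, np.sqrt(q))   # state evolution
    z = x_true + rng.normal(0.0, np.sqrt(r))            # noisy observation
    x_est, P = kalman_step(x_est, P, z, a, q, r)
    print(f"t={t}  truth={x_true:+.3f}  estimate={x_est:+.3f}  P={P:.3f}")
```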

    Fundamentals of Large Sensor Networks: Connectivity, Capacity, Clocks and Computation

    Full text link
    Sensor networks potentially feature large numbers of nodes that can sense their environment over time, communicate with each other over a wireless network, and process information. They differ from data networks in that the network as a whole may be designed for a specific application. We study the theoretical foundations of such large-scale sensor networks, addressing four fundamental issues: connectivity, capacity, clocks, and function computation. To begin with, a sensor network must be connected so that information can indeed be exchanged between nodes. The connectivity graph of an ad-hoc network is modeled as a random graph, and the critical range for asymptotic connectivity is determined, as well as the critical number of neighbors that a node needs to connect to. Next, given connectivity, we address the issue of how much data can be transported over the sensor network. We present fundamental bounds on capacity under several models, as well as architectural implications for how wireless communication should be organized. Temporal information is important both for the applications of sensor networks and for their operation. We present fundamental bounds on the synchronizability of clocks in networks, and also present and analyze algorithms for clock synchronization. Finally, we turn to the issue of gathering the relevant information that sensor networks are designed to collect. One needs to study optimal strategies for in-network aggregation of data in order to reliably compute a composite function of sensor measurements, as well as the complexity of doing so. We address the issue of how such computation can be performed efficiently in a sensor network, and the algorithms for doing so, for some classes of functions. Comment: 10 pages, 3 figures, submitted to the Proceedings of the IEEE
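
    A small simulation sketch of the critical-range phenomenon for connectivity, assuming a Gupta-Kumar-style threshold r(n)^2 = (log n + c)/(pi n) for n nodes placed uniformly on the unit square; the setup is illustrative, not the paper's analysis.

```python
# Empirical look at the critical range for connectivity of a random
# geometric graph on the unit square. With r(n)^2 = (log n + c)/(pi n),
# the network is connected with high probability only as c grows.
import numpy as np
from scipy.sparse.csgraph import connected_components
from scipy.spatial import distance_matrix

def is_connected(points, r):
    """True if the graph linking points within distance r is connected."""
    adjacency = distance_matrix(points, points) <= r
    n_components, _ = connected_components(adjacency, directed=False)
    return n_components == 1

rng = np.random.default_rng(3)
n, trials = 500, 50
for c in (-2.0, 0.0, 2.0, 4.0):
    r = np.sqrt((np.log(n) + c) / (np.pi * n))
    hits = sum(is_connected(rng.random((n, 2)), r) for _ in range(trials))
    print(f"c={c:+.1f}  r={r:.4f}  P(connected) ~ {hits / trials:.2f}")
```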