
    Quantum Theory of Superresolution for Two Incoherent Optical Point Sources

    Rayleigh's criterion for resolving two incoherent point sources has been the most influential measure of optical imaging resolution for over a century. In the context of statistical image processing, violation of the criterion is especially detrimental to the estimation of the separation between the sources, and modern far-field superresolution techniques rely on suppressing the emission of close sources to enhance the localization precision. Using quantum optics, quantum metrology, and statistical analysis, here we show that, even if two close incoherent sources emit simultaneously, measurements with linear optics and photon counting can estimate their separation from the far field almost as precisely as conventional methods do for isolated sources, rendering Rayleigh's criterion irrelevant to the problem. Our results demonstrate that superresolution can be achieved not only for fluorophores but also for stars. Comment: 18 pages, 11 figures. v1: First draft. v2: Improved the presentation and added a section on the issues of unknown centroid and misalignment. v3: published in Physical Review
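
    As a hedged illustration of the comparison at stake (the standard Gaussian point-spread-function case, stated from the well-known treatment of this problem rather than quoted from the abstract): for two equally bright incoherent sources separated by d and imaged with a PSF of width sigma, direct imaging loses essentially all information about d as the sources merge, while the quantum Fisher information per photon does not.

        % Hedged sketch; sigma is the PSF width, d the source separation (symbols assumed).
        \[
          \mathcal{J}_{\mathrm{direct}}(d) \xrightarrow[\, d \to 0 \,]{} 0
          \qquad \text{(``Rayleigh's curse'' for direct imaging)},
        \]
        \[
          \mathcal{K}(d) = \frac{1}{4\sigma^{2}}
          \qquad \text{(quantum Fisher information per photon, independent of } d \text{)}.
        \]

    Measurements built from linear optics and photon counting, as described in the abstract, can approach the second quantity even for separations well below the Rayleigh limit.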

    Identifying the New Keynesian Phillips Curve

    Phillips curves are central to discussions of inflation dynamics and monetary policy. New Keynesian Phillips curves describe how past inflation, expected future inflation, and a measure of real marginal cost or an output gap drive the current inflation rate. This paper studies the (potential) weak identification of these curves under GMM and traces this syndrome to a lack of persistence in either exogenous variables or shocks. We employ analytic methods to understand the identification problem in several statistical environments: under strict exogeneity, in a vector autoregression, and in the canonical three-equation, New Keynesian model. Given U.S., U.K., and Canadian data, we revisit the empirical evidence and construct tests and confidence intervals based on exact and pivotal Anderson-Rubin statistics that are robust to weak identification. These tests find little evidence of forward-looking inflation dynamics. Keywords: Phillips curve, Keynesian, identification, inflation
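
    For reference, a minimal sketch of the object being identified, in its standard hybrid form, together with the GMM orthogonality condition typically used to estimate it (textbook notation assumed here, not taken from the paper):

        % pi_t: inflation; x_t: real marginal cost or output gap; z_t: instruments.
        \[
          \pi_t = \gamma_b\,\pi_{t-1} + \gamma_f\,\mathbb{E}_t[\pi_{t+1}] + \lambda\, x_t + \varepsilon_t,
        \]
        \[
          \mathbb{E}\big[ (\pi_t - \gamma_b\,\pi_{t-1} - \gamma_f\,\pi_{t+1} - \lambda\, x_t)\, z_t \big] = 0 .
        \]

    Identification is weak when the instruments z_t have little predictive power for future inflation and the forcing variable, which is exactly what a lack of persistence in the exogenous variables or shocks produces.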

    Performance Bounds for Parameter Estimation under Misspecified Models: Fundamental findings and applications

    Inferring information from a set of acquired data is the main objective of any signal processing (SP) method. In particular, the common problem of estimating the value of a vector of parameters from a set of noisy measurements is at the core of a plethora of scientific and technological advances in recent decades; for example, wireless communications, radar and sonar, biomedicine, image processing, and seismology, just to name a few. Developing an estimation algorithm often begins by assuming a statistical model for the measured data, i.e., a probability density function (pdf) which, if correct, fully characterizes the behaviour of the collected data/measurements. Experience with real data, however, often exposes the limitations of any assumed data model, since modelling errors at some level are always present. Consequently, the true data model and the model assumed to derive the estimation algorithm could differ. When this happens, the model is said to be mismatched or misspecified. Therefore, understanding the possible performance loss or regret that an estimation algorithm could experience under model misspecification is of crucial importance for any SP practitioner, and understanding the limits on the performance of any estimator subject to model misspecification is of practical interest. Motivated by the widespread and practical need to assess the performance of a mismatched estimator, the first goal of this paper is to bring attention to the main theoretical findings on estimation theory, and in particular on lower bounds under model misspecification, that have been published in the statistical and econometric literature over the last fifty years. The second is to discuss some applications that illustrate the broad range of areas and problems to which this framework extends, and consequently the numerous opportunities available for SP researchers. Comment: To appear in the IEEE Signal Processing Magazine
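
    The central object the abstract alludes to is usually written in a "sandwich" form; here is a hedged sketch in generic notation (not taken from the paper). Let p be the true pdf of the data x, f_theta the assumed model, and theta_0 the pseudo-true parameter:

        % Hedged sketch of a misspecified lower bound (sandwich form); notation assumed.
        \[
          \theta_0 = \arg\min_{\theta} \, \mathrm{KL}\!\left( p \,\|\, f_\theta \right),
          \qquad
          \mathrm{Cov}_p\!\big( \hat{\theta} \big) \succeq A_{\theta_0}^{-1} B_{\theta_0} A_{\theta_0}^{-1},
        \]
        \[
          A_{\theta_0} = \mathbb{E}_p\!\left[ \nabla_\theta^2 \ln f_\theta(x) \right]\Big|_{\theta_0},
          \qquad
          B_{\theta_0} = \mathbb{E}_p\!\left[ \nabla_\theta \ln f_\theta(x)\, \nabla_\theta \ln f_\theta(x)^{\mathsf T} \right]\Big|_{\theta_0}.
        \]

    Under correct specification A equals minus B (minus the Fisher information), and the bound collapses to the familiar Cramer-Rao bound.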

    Optimal waveform estimation for classical and quantum systems via time-symmetric smoothing

    Classical and quantum theories of time-symmetric smoothing, which can be used to optimally estimate waveforms in classical and quantum systems, are derived using a discrete-time approach, and the similarities between the two theories are emphasized. Application of the quantum theory to homodyne phase-locked loop design for phase estimation with narrowband squeezed optical beams is studied. The relation between the proposed theory and Aharonov et al.'s weak value theory is also explored. Comment: 13 pages, 5 figures, v2: changed the title to a more descriptive one, corrected a minor mistake in Sec. IV, accepted by Physical Review
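
    A hedged classical special case conveys the time-symmetric idea (linear-Gaussian two-filter form; notation assumed, not taken from the paper): the smoothed estimate of the state at time t fuses a forward filter conditioned on past observations with a backward, retrodictive filter conditioned on future observations,

        \[
          \Sigma_s^{-1} = \Sigma_f^{-1} + \Sigma_b^{-1},
          \qquad
          \hat{x}_s = \Sigma_s \left( \Sigma_f^{-1} \hat{x}_f + \Sigma_b^{-1} \hat{x}_b \right),
        \]

    where (x_f, Sigma_f) comes from a Kalman filter run forward in time and (x_b, Sigma_b) from an information filter run backward from the final observation.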

    Discussion of: Treelets--An adaptive multi-scale basis for sparse unordered data

    We would like to congratulate Lee, Nadler and Wasserman on their contribution to clustering and data reduction methods for high p and low n situations. A composite of clustering and traditional principal components analysis, treelets is an innovative method for multi-resolution analysis of unordered data. It is an improvement over traditional PCA and an important contribution to clustering methodology. Their paper [arXiv:0707.0481] presents theory and supporting applications addressing the two main goals of the treelet method: (1) uncover the underlying structure of the data and (2) reduce the data prior to statistical learning methods. We will organize our discussion into two main parts to address their methodology in terms of each of these two goals. We will present and discuss treelets as a clustering algorithm and as an improvement over traditional PCA, and we will also discuss the applicability of treelets to more general data, in particular the application of treelets to microarray data. Comment: Published in at http://dx.doi.org/10.1214/08-AOAS137F the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
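
    To make the discussion concrete, here is a minimal Python sketch of the basic treelet step as we read it from Lee, Nadler and Wasserman's construction: a local PCA (Jacobi rotation) applied to the two most correlated active variables, after which the lower-variance coordinate is retired. Function and variable names are ours, and this is an illustrative sketch rather than the authors' implementation.

        import numpy as np

        def treelet_step(X, active):
            """One treelet merge on data matrix X (rows = observations).

            Rotates the two most correlated active columns so that they become
            decorrelated, keeps the higher-variance 'sum' coordinate active,
            and retires the 'difference' coordinate.
            """
            C = np.cov(X, rowvar=False)
            d = np.sqrt(np.diag(C))
            R = C / np.outer(d, d)                      # correlation matrix
            # most correlated pair among the active variables
            pairs = [(i, j) for i in active for j in active if i < j]
            i, j = max(pairs, key=lambda ij: abs(R[ij]))
            # Jacobi rotation angle that zeroes the (i, j) covariance
            theta = 0.5 * np.arctan2(2.0 * C[i, j], C[i, i] - C[j, j])
            c, s = np.cos(theta), np.sin(theta)
            Xi, Xj = X[:, i].copy(), X[:, j].copy()
            X = X.copy()
            X[:, i] = c * Xi + s * Xj                   # local first principal component
            X[:, j] = -s * Xi + c * Xj                  # local second principal component
            keep, drop = (i, j) if X[:, i].var() >= X[:, j].var() else (j, i)
            return X, active - {drop}, (keep, drop, theta)

        # Repeating the step p - 1 times on X with active = set(range(p)) yields a
        # full multi-scale basis; stopping earlier gives a coarser representation.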

    Learning and Designing Stochastic Processes from Logical Constraints

    Stochastic processes offer a flexible mathematical formalism to model and reason about systems. Most analysis tools, however, start from the premise that models are fully specified, so that any parameters controlling the system's dynamics must be known exactly. As this is seldom the case, many methods have been devised over the last decade to infer (learn) such parameters from observations of the state of the system. In this paper, we depart from this approach by assuming that our observations are qualitative properties encoded as satisfaction of linear temporal logic formulae, as opposed to quantitative observations of the state of the system. An important feature of this approach is that it naturally unifies the system identification and system design problems, where the properties, instead of observations, represent requirements to be satisfied. We develop a principled statistical estimation procedure based on maximising the likelihood of the system's parameters, using recent ideas from statistical machine learning. We demonstrate the efficacy and broad applicability of our method on a range of simple but non-trivial examples, including rumour spreading in social networks and hybrid models of gene regulation.
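
    The authors' own machinery (based on recent statistical machine learning ideas) is more sophisticated than this, but a minimal Monte Carlo sketch in Python conveys the likelihood being maximised: observations are Bernoulli outcomes of a qualitative property, and the satisfaction probability at a candidate parameter is estimated by simulation. The toy model, the property, and all names below are our own illustrative assumptions, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_rumour(spread_rate, n_pop=50, t_max=5.0):
            """Gillespie simulation of a toy rumour-spreading chain: one more person
            learns the rumour at rate spread_rate * spreaders * (n_pop - spreaders) / n_pop."""
            t, spreaders = 0.0, 1
            while t < t_max and spreaders < n_pop:
                rate = spread_rate * spreaders * (n_pop - spreaders) / n_pop
                t += rng.exponential(1.0 / rate)
                if t < t_max:
                    spreaders += 1
            return spreaders

        def property_holds(spread_rate):
            """Qualitative observation standing in for a temporal-logic formula:
            'the rumour reaches at least half the population within t_max'."""
            return simulate_rumour(spread_rate) >= 25

        def log_likelihood(spread_rate, n_sat, n_obs, n_sim=2000):
            """Bernoulli log-likelihood of n_sat satisfactions in n_obs observations,
            with the satisfaction probability estimated by Monte Carlo."""
            p = np.mean([property_holds(spread_rate) for _ in range(n_sim)])
            p = min(max(p, 1e-6), 1.0 - 1e-6)           # guard against log(0)
            return n_sat * np.log(p) + (n_obs - n_sat) * np.log(1.0 - p)

        # Suppose 29 of 40 independent runs of the real system satisfied the property;
        # maximise the likelihood of the spread rate over a coarse grid.
        grid = np.linspace(0.2, 3.0, 15)
        best = grid[np.argmax([log_likelihood(k, n_sat=29, n_obs=40) for k in grid])]
        print(f"grid maximum-likelihood spread rate: {best:.2f}")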

    Hybrid Shrinkage Estimators Using Penalty Bases For The Ordinal One-Way Layout

    This paper constructs improved estimators of the means in the Gaussian saturated one-way layout with an ordinal factor. The least squares estimator for the mean vector in this saturated model is usually inadmissible. The hybrid shrinkage estimators of this paper exploit the possibility of slow variation in the dependence of the means on the ordered factor levels but do not assume it and respond well to faster variation if present. To motivate the development, candidate penalized least squares (PLS) estimators for the mean vector of a one-way layout are represented as shrinkage estimators relative to the penalty basis for the regression space. This canonical representation suggests further classes of candidate estimators for the unknown means: monotone shrinkage (MS) estimators or soft-thresholding (ST) estimators or, most generally, hybrid shrinkage (HS) estimators that combine the preceding two strategies. Adaptation selects the estimator within a candidate class that minimizes estimated risk. Under the Gaussian saturated one-way layout model, such adaptive estimators minimize risk asymptotically over the class of candidate estimators as the number of factor levels tends to infinity. Thereby, adaptive HS estimators asymptotically dominate adaptive MS and adaptive ST estimators as well as the least squares estimator. Local annihilators of polynomials, among them difference operators, generate penalty bases suitable for a range of numerical examples. Comment: Published at http://dx.doi.org/10.1214/009053604000000652 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
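
    A hedged sketch of the canonical representation the abstract describes (generic notation, not taken from the paper): with observations y of the mean vector mu and a penalty matrix D, the penalized least squares estimator is

        \[
          \hat{\mu}_\lambda = \arg\min_{\mu} \; \|y - \mu\|^{2} + \lambda \|D\mu\|^{2}
                            = \big( I + \lambda D^{\mathsf T} D \big)^{-1} y ,
        \]
        % Diagonalize the penalty: D^T D = U diag(k_1, ..., k_p) U^T, so U is the
        % penalty basis and z = U^T y the transformed data.
        \[
          U^{\mathsf T} \hat{\mu}_\lambda
            = \Big( \tfrac{z_1}{1 + \lambda k_1}, \; \ldots, \; \tfrac{z_p}{1 + \lambda k_p} \Big) ,
        \]

    i.e. componentwise shrinkage of the coefficients in the penalty basis. The monotone, soft-thresholding, and hybrid candidate classes in the paper replace these particular shrinkage factors with richer families, and adaptation then picks the member with the smallest estimated risk.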