
    Performance analysis of the Least-Squares estimator in Astrometry

    We characterize the performance of the widely used least-squares estimator in astrometry through a comparison with the Cramér-Rao lower variance bound. In this inference context the performance of the least-squares estimator does not admit a closed-form expression, but a new result is presented (Theorem 1) in which both the bias and the mean-square error of the least-squares estimator are bounded and approximated analytically, in the latter case in terms of a nominal value and an interval around it. From the predicted nominal value we analyze how efficient the least-squares estimator is relative to the minimum-variance Cramér-Rao bound. Based on our results, we show that in the high signal-to-noise regime the performance of the least-squares estimator is significantly poorer than the Cramér-Rao bound, and we characterize this gap analytically. On the positive side, we show that in the challenging low signal-to-noise regime (attributable to either a weak astronomical signal or a noise-dominated condition) the least-squares estimator is near optimal, as its performance asymptotically approaches the Cramér-Rao bound. However, we also demonstrate that, in general, no unbiased estimator of the astrometric position can precisely reach the Cramér-Rao bound. We validate our theoretical analysis through simulated digital-detector observations under typical observing conditions. We show that the nominal value for the mean-square error of the least-squares estimator (obtained from our theorem) can be used as a benchmark indicator of the expected statistical performance of the least-squares method under a wide range of conditions. Our results are valid for an idealized linear (one-dimensional) array detector in which intra-pixel response changes are neglected and flat-fielding is achieved with very high accuracy. Comment: 35 pages, 8 figures. Accepted for publication by PAS
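    The comparison the abstract describes can be illustrated numerically. The sketch below is a minimal illustration, not the paper's model: the Gaussian PSF, pixel grid, flux, background, and Monte Carlo size are all assumptions. It computes the Poisson Cramér-Rao bound for a one-dimensional array detector and the empirical mean-square error of the (unweighted) least-squares position estimate:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)

    pixels = np.arange(-20, 21)   # 1-D array detector, pixel centers (assumed)
    sigma  = 2.0                  # PSF width in pixels (assumed Gaussian profile)
    flux   = 2000.0               # total source counts (assumed, high S/N regime)
    bkg    = 10.0                 # background counts per pixel (assumed)
    x_true = 0.3                  # true source position

    def expected_counts(xc):
        """Expected photo-electron counts per pixel for a source at xc."""
        psf = np.exp(-0.5 * ((pixels - xc) / sigma) ** 2)
        return flux * psf / psf.sum() + bkg

    # Cramer-Rao lower bound for Poisson data:
    #   var(x_hat) >= 1 / sum_i (d lambda_i / d xc)^2 / lambda_i
    lam0 = expected_counts(x_true)
    eps  = 1e-5
    dlam = (expected_counts(x_true + eps) - expected_counts(x_true - eps)) / (2 * eps)
    crb  = 1.0 / np.sum(dlam ** 2 / lam0)

    # Empirical MSE of the (unweighted) least-squares position estimate
    errs = []
    for _ in range(500):
        counts = rng.poisson(lam0)
        fit = minimize_scalar(lambda xc: np.sum((counts - expected_counts(xc)) ** 2),
                              bounds=(-5, 5), method="bounded")
        errs.append(fit.x - x_true)
    mse_ls = np.mean(np.square(errs))

    print(f"CR bound: {crb:.5f}  LS MSE: {mse_ls:.5f}  ratio: {mse_ls / crb:.2f}")
    ```

    At high signal-to-noise the ratio should sit at or above one, which is the gap the abstract characterizes analytically.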

    Optimality of the Maximum Likelihood estimator in Astrometry

    The problem of astrometry is revisited from the perspective of analyzing the attainability of well-known performance limits (the Cramér-Rao bound) for the estimation of the relative position of light-emitting (usually point-like) sources on a CCD-like detector using commonly adopted estimators such as weighted least squares and maximum likelihood. Novel technical results are presented to determine the performance of an estimator that corresponds to the solution of an optimization problem in the context of astrometry. Using these results we are able to place stringent bounds on the bias and the variance of the estimators in closed form as a function of the data. We confirm these results through comparisons with numerical simulations under a broad range of realistic observing conditions. The maximum likelihood and the weighted least-squares estimators are analyzed. We confirm the sub-optimality of the weighted least-squares scheme from medium to high signal-to-noise ratios, found in an earlier study for the (unweighted) least-squares method. We find that the maximum likelihood estimator achieves optimal performance limits across a wide range of relevant observational conditions. Furthermore, from our results we provide concrete insights for adopting an adaptive weighted least-squares estimator that can be regarded as a computationally efficient alternative to the optimal maximum likelihood solution. We provide, for the first time, closed-form analytical expressions that bound the bias and the variance of the weighted least-squares and maximum likelihood implicit estimators for astrometry using a Poisson-driven detector. These expressions can be used to formally assess the precision attainable by these estimators in comparison with the minimum variance bound. Comment: 24 pages, 7 figures, 2 tables, 3 appendices. Accepted by Astronomy & Astrophysics
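    As a companion sketch (again with assumed detector parameters, not the paper's setup), the maximum likelihood position estimate can be obtained by minimizing the Poisson negative log-likelihood, and its empirical variance can be checked against the Cramér-Rao bound, which it should nearly attain at high signal-to-noise:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)
    pixels = np.arange(-20, 21)                 # 1-D detector pixel centers (assumed)
    sigma, flux, bkg, x_true = 2.0, 2000.0, 10.0, 0.3   # illustrative parameters

    def expected_counts(xc):
        psf = np.exp(-0.5 * ((pixels - xc) / sigma) ** 2)
        return flux * psf / psf.sum() + bkg

    def neg_log_like(xc, counts):
        lam = expected_counts(xc)
        return np.sum(lam - counts * np.log(lam))   # Poisson NLL up to a constant

    # Cramer-Rao bound via a numerical derivative of the pixel rates
    lam0 = expected_counts(x_true)
    eps  = 1e-5
    dlam = (expected_counts(x_true + eps) - expected_counts(x_true - eps)) / (2 * eps)
    crb  = 1.0 / np.sum(dlam ** 2 / lam0)

    # Empirical variance of the maximum likelihood position estimate
    errs = []
    for _ in range(500):
        counts = rng.poisson(lam0)
        fit = minimize_scalar(neg_log_like, args=(counts,),
                              bounds=(-5, 5), method="bounded")
        errs.append(fit.x - x_true)
    var_ml = np.mean(np.square(errs))

    print(f"CR bound: {crb:.5f}  empirical ML variance: {var_ml:.5f}")
    ```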

    Stock Index Volatility: The Case of the IPSA

    This paper reviews the traditional ways to measure volatility, which are based only on closing prices, and introduces alternative measures that use additional price information from the trading day: the opening, minimum, maximum, and closing prices. Keywords: volatility, binomial model, GARCH, VIX, bias and efficiency
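    One classical range-based estimator of this kind is Parkinson's high-low estimator. The sketch below uses simulated intraday paths with assumed parameters (not the paper's IPSA data) to compare it against the usual close-to-close estimate:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    true_sigma = 0.01            # assumed daily volatility of the simulated log-price
    n_days, n_ticks = 2000, 390  # trading days and intraday steps (assumed)

    # Simulate intraday log-price paths, each starting at the day's open (log price 0)
    steps = rng.normal(0.0, true_sigma / np.sqrt(n_ticks), size=(n_days, n_ticks))
    paths = np.cumsum(steps, axis=1)

    close = paths[:, -1]                        # open-to-close log return
    high  = np.maximum(paths.max(axis=1), 0.0)  # intraday log high (incl. the open)
    low   = np.minimum(paths.min(axis=1), 0.0)  # intraday log low  (incl. the open)

    sigma_cc   = close.std(ddof=1)                                      # close-to-close
    sigma_park = np.sqrt(np.mean((high - low) ** 2) / (4 * np.log(2)))  # Parkinson (1980)

    print(f"true {true_sigma:.4f}  close-to-close {sigma_cc:.4f}  Parkinson {sigma_park:.4f}")
    ```

    Both estimates recover the true volatility, but the range-based one has a markedly smaller sampling error for the same number of days, which illustrates the gain from using intraday price information.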

    Applications of the Binomial Model to Risk Analysis

    In this paper we analyze two risk measures using the binomial model. First, we show that the distance-to-default measure is in fact a Z-statistic, and in an empirical application we estimate the probability of default for Chilean banks. Our second measure is a pseudo implied volatility obtained from a survey question; results from a small survey are consistent with market values. Finally, we consider worst-case scenario analysis applied to Value at Risk and to callable bonds.

    The Impact of Transcriptomics on the Fight against Tuberculosis: Focus on Biomarkers, BCG Vaccination, and Immunotherapy

    In 1882 Robert Koch identified Mycobacterium tuberculosis as the causative agent of tuberculosis (TB), a disease as ancient as humanity. Although more than 125 years of scientific effort have been aimed at understanding the disease, serious problems in TB persist, contributing to the estimated one third of the world's population infected with this pathogen. Nonetheless, during the first decade of the 21st century there were new advances in the fight against TB. The development of high-throughput technologies is one of the major contributors to this advance, because it allows for a global view of the biological phenomenon. This paper analyzes how transcriptomics is supporting the translation of basic research into therapies by resolving three key issues in the fight against TB: (a) the discovery of biomarkers, (b) the explanation of the variability of protection conferred by BCG vaccination, and (c) the development of new immunotherapeutic strategies to treat TB.