13 research outputs found

    Analysis of the Bayesian Cramer-Rao lower bound in astrometry: Studying the impact of prior information in the location of an object

    Context. The best precision that can be achieved to estimate the location of a stellar-like object is a topic of permanent interest in the astrometric community. Aims. We analyse bounds for the best position estimation of a stellar-like object on a CCD detector array in a Bayesian setting where the position is unknown, but where we have access to a prior distribution. In contrast to a parametric setting, where we estimate a parameter from observations, the Bayesian approach estimates a random object (i.e., the position is a random variable) from observations that are statistically dependent on the position. Methods. We characterize the Bayesian Cramer-Rao (CR) bound, which bounds the minimum mean square error (MMSE) of the best estimator of the position of a point source on a linear CCD-like detector, as a function of the properties of the detector, the source, and the background. Results. We quantify and analyse the increase in astrometric performance from the use of a prior distribution of the object position, which is not available in the classical parametric setting. This gain is shown to be significant for various observational regimes, in particular in the case of faint objects or when the observations are taken under poor conditions. Furthermore, we present numerical evidence that the MMSE estimator of this problem tightly achieves the Bayesian CR bound. This is a remarkable result, demonstrating that all the performance gains presented in our analysis can be achieved with the MMSE estimator. Conclusions. The Bayesian CR bound can be used as a benchmark indicator of the expected maximum positional precision of a set of astrometric measurements in which prior information can be incorporated. This bound can be achieved through the conditional mean estimator, in contrast to the parametric case, where no unbiased estimator precisely reaches the CR bound. Comment: 17 pages, 12 figures. Accepted for publication in Astronomy & Astrophysics.
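    The gain described above can be illustrated numerically in a toy setting (this is a sketch, not the paper's actual computation): for Poisson pixel counts with a Gaussian PSF and a Gaussian prior of variance σ_p² on the position, the Bayesian CR bound is 1/(E[I(x)] + 1/σ_p²), which is always tighter than the parametric bound 1/I(x). All fluxes, widths, and pixel counts below are arbitrary illustrative choices.

    ```python
    import numpy as np
    from math import erf

    def pixel_rates(x, flux=500.0, bkg=10.0, sigma=1.2, npix=20):
        """Expected Poisson counts per pixel: Gaussian PSF at position x plus flat background."""
        edges = np.arange(npix + 1, dtype=float)
        cdf = 0.5 * (1 + np.array([erf((e - x) / (np.sqrt(2) * sigma)) for e in edges]))
        return flux * np.diff(cdf) + bkg

    def fisher_info(x, dx=1e-4, **kw):
        """Parametric Fisher information I(x) = sum_i (dlam_i/dx)^2 / lam_i for Poisson data."""
        lam = pixel_rates(x, **kw)
        dlam = (pixel_rates(x + dx, **kw) - pixel_rates(x - dx, **kw)) / (2 * dx)
        return np.sum(dlam**2 / lam)

    # Gaussian prior on the position, centred at pixel 10 with standard deviation sigma_p
    sigma_p = 0.5
    rng = np.random.default_rng(0)
    xs = rng.normal(10.0, sigma_p, 2000)
    mean_info = np.mean([fisher_info(x) for x in xs])   # E[I(x)] under the prior

    crb_parametric = 1.0 / fisher_info(10.0)            # classical CR bound at the prior mean
    crb_bayes = 1.0 / (mean_info + 1.0 / sigma_p**2)    # Bayesian CR bound adds the prior term
    print(crb_bayes < crb_parametric)                   # the prior always tightens the bound
    ```

    The extra term 1/σ_p² is the Fisher information carried by the Gaussian prior itself, which is why the Bayesian bound is strictly below the parametric one.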

    Optimality of the Maximum Likelihood estimator in Astrometry

    The problem of astrometry is revisited from the perspective of analyzing the attainability of well-known performance limits (the Cramer-Rao bound) for the estimation of the relative position of light-emitting (usually point-like) sources on a CCD-like detector, using commonly adopted estimators such as weighted least squares and maximum likelihood. Novel technical results are presented to determine the performance of an estimator that corresponds to the solution of an optimization problem in the context of astrometry. Using these results, we are able to place stringent bounds on the bias and the variance of the estimators in closed form as a function of the data. We confirm these results through comparisons to numerical simulations under a broad range of realistic observing conditions. The maximum likelihood and the weighted least squares estimators are analyzed. We confirm the sub-optimality of the weighted least squares scheme at medium to high signal-to-noise ratios, found in an earlier study for the (unweighted) least squares method. We find that the maximum likelihood estimator achieves optimal performance limits across a wide range of relevant observational conditions. Furthermore, from our results, we provide concrete insights for adopting an adaptive weighted least squares estimator that can be regarded as a computationally efficient alternative to the optimal maximum likelihood solution. We provide, for the first time, closed-form analytical expressions that bound the bias and the variance of the weighted least squares and maximum likelihood implicit estimators for astrometry using a Poisson-driven detector. These expressions can be used to formally assess the precision attainable by these estimators in comparison with the minimum variance bound. Comment: 24 pages, 7 figures, 2 tables, 3 appendices. Accepted by Astronomy & Astrophysics.
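    A minimal sketch of this comparison, under a hypothetical one-dimensional Gaussian-PSF, Poisson-count model (not the paper's actual setup): both the maximum likelihood and a weighted least squares position estimate reduce to one-dimensional optimisation problems, here solved by a simple grid search. All model parameters are invented for illustration.

    ```python
    import numpy as np
    from math import erf

    def rates(x, flux=2000.0, bkg=5.0, sigma=1.5, npix=30):
        """Expected Poisson counts per pixel for a Gaussian PSF centred at x."""
        edges = np.arange(npix + 1, dtype=float)
        cdf = 0.5 * (1 + np.array([erf((e - x) / (np.sqrt(2) * sigma)) for e in edges]))
        return flux * np.diff(cdf) + bkg

    rng = np.random.default_rng(1)
    x_true = 14.3
    counts = rng.poisson(rates(x_true))          # one simulated observation

    grid = np.linspace(12.0, 17.0, 2001)

    def nll(x):
        """Poisson negative log-likelihood (dropping the x-independent log k! term)."""
        lam = rates(x)
        return np.sum(lam - counts * np.log(lam))

    def wls(x):
        """Weighted least squares with data-driven weights 1/counts."""
        lam = rates(x)
        w = 1.0 / np.maximum(counts, 1)
        return np.sum(w * (counts - lam) ** 2)

    x_ml = grid[np.argmin([nll(x) for x in grid])]
    x_wls = grid[np.argmin([wls(x) for x in grid])]
    print(abs(x_ml - x_true), abs(x_wls - x_true))   # both land close to the true position
    ```

    At this high flux both estimators perform well; the paper's point is that across regimes the maximum likelihood estimator stays essentially on the Cramer-Rao bound while the weighted least squares variant does not.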

    Searches for Neutrinos from Supernovae Using Cherenkov In-Ice Detectors

    Supernovae mark the violent death of massive stars. They are among the most energetic processes known to exist in the Universe. Neutrinos play crucial roles in supernova processes. Besides the low-energy neutrinos emitted during the core-collapse process of the supernova, there may be neutrinos of much higher energies that are generated after the core-collapse. In this work, a new detector embedded in Antarctic glacier ice is studied, with sensitivity to extra-galactic supernova low-energy neutrino bursts. It is demonstrated that the development of optical sensors with large effective area and low noise rate is a requirement. For the proposed detector, several extra-galactic supernova neutrino detections per year are feasible. In addition, a multi-messenger data analysis program is carried out, which registers high-energy neutrino bursts with the IceCube detector and triggers follow-up observations with optical telescopes. No significant excess of neutrino bursts is found. Therefore, upper limits on the jet supernova model are derived. For model values of the jet Lorentz factor Γ_jet = 10 and the jet kinetic energy E_jet = 3 × 10^51 erg, only about 8% of all core-collapse supernovae hosting a jet are consistent with the data.

    PACOME: Optimal multi-epoch combination of direct imaging observations for joint exoplanet detection and orbit estimation

    Exoplanet detections and characterizations via direct imaging require high contrast and high angular resolution. These requirements are typically met by (i) cutting-edge instrumental facilities, (ii) optimized differential imaging to introduce a diversity in the signals of the sought-for objects, and (iii) dedicated processing algorithms to further eliminate the residual stellar leakages. Substantial efforts have been undertaken on the design of more efficient post-processing algorithms, but their performance remains upper-bounded at shorter angular separations due to the lack of diversity induced by processing each epoch of observations individually. We propose a new algorithm that is able to combine several observations of the same star by accounting for the Keplerian orbital motion of the sought-for sources across epochs, in order to constructively co-add their weak signals. The proposed algorithm, PACOME, integrates an exploration of the plausible orbits within a statistical detection and estimation formalism. It extends to a multi-epoch combination the maximum likelihood framework of PACO, a mono-epoch post-processing algorithm. We derive a reliable multi-epoch detection criterion, interpretable both in terms of probability of detection and of false alarm. We tested the proposed algorithm on several datasets obtained from the VLT/SPHERE instrument with IRDIS and IFS. By resorting to injections of synthetic exoplanets, we show that PACOME is able to detect sources remaining undetectable in mono-epoch frameworks. The gain in detection sensitivity scales up to the square root of the number of epochs. We also applied PACOME to a set of observations of the star HR 8799, which hosts four known exoplanets; they are detected with very high signal-to-noise ratios. In addition, its implementation is efficient, fast, and fully automatized. Comment: Accepted for publication in A&A.
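    The square-root-of-epochs sensitivity gain follows from combining independent per-epoch detection statistics in quadrature along a candidate orbit. A deterministic toy illustration (the epoch count and per-epoch S/N are arbitrary, and PACOME's actual criterion is richer than this):

    ```python
    import numpy as np

    # Per-epoch S/N of a faint companion at its orbit-predicted position: each epoch
    # alone sits well below a typical 5-sigma detection threshold.
    n_epochs, per_epoch_snr = 9, 2.0
    snr_t = np.full(n_epochs, per_epoch_snr)

    # Multi-epoch criterion: co-add the per-epoch statistics in quadrature along the
    # candidate orbit (valid when the per-epoch statistics are independent).
    snr_multi = np.sqrt(np.sum(snr_t**2))

    print(snr_multi)   # 6.0 = per_epoch_snr * sqrt(n_epochs): jointly detectable
    ```

    The source is invisible at any single epoch (S/N = 2) yet crosses a 5-sigma joint threshold once nine epochs are combined, which is the behaviour the injections in the paper demonstrate.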

    New contributions to algorithms and tools for the analysis of photometric and spectroscopic time-series in exoplanet searches

    The current trend in exoplanet research focuses on the detection and characterisation of Earth-sized planets, and the study of their potentially subtle and tenuous atmospheres. The aim of this thesis is the development of tools and simulation codes for the detection and characterisation of exoplanets by means of the indirect methods of radial velocities and transits. The structure of the thesis is two-fold. Firstly, we present a multidimensional extension to the well-known period search code GLS, which we dub MGLS (Multidimensional Generalized Lomb-Scargle periodogram). The analysis of a time-series periodogram of radial velocity data is the usual starting point in the search for periodic signals, which can then be associated with the reflex Keplerian motion of a star caused by orbiting exoplanets. In the case of multiplanetary systems, such analysis is usually carried out in an iterative fashion known as prewhitening. This approach can diminish the significance and distort the parameters of periodic signals, and we aim to overcome those limitations by introducing a multidimensional approach. Additionally, a robust criterion to determine the number of signals (dimensionality) in a time-series is presented. The new approach is more flexible, enhances the significance of multisignal detections and their multiplicity, is better able to pinpoint the fit parameters, and can compare models of different dimensionality. The MGLS code has been tested on real multiplanetary systems, showing excellent detection performance. The code is publicly available to the community. The second part addresses the effects of rotationally-induced stellar activity on photometric and spectroscopic observables. The properties, distribution, and evolution of inhomogeneities on the surface of active stars, such as dark spots and bright faculae, significantly influence the determination of the parameters of an orbiting exoplanet.
    The chromatic effect they have on transmission spectroscopy, for example, could affect the analysis of data from future space missions such as the James Webb Space Telescope (JWST) and Ariel. To quantify and mitigate the effects of those surface phenomena, we developed a fast modelling approach to derive the surface distribution and properties of active regions by modelling simultaneous multi-wavelength time-series observables. We present an upgraded version of the StarSim code, now featuring the capability to solve the inverse problem and derive the properties of the stars and their active regions by modelling time-series data. The multiband photometric inverse problem is discussed both analytically and numerically, along with a broad analysis of the degeneracies found in the inversion process. As a test case, we analyse a BVRI multiband ground photometry dataset of the exoplanet host star WASP-52. From the results, we further simulated the chromatic contribution of surface phenomena to the observables of its transiting planet. We demonstrate that by using contemporaneous ground-based multiband photometry of an active star, it is possible to reconstruct the parameters and distribution of active regions over time, thus making it feasible to quantify the chromatic effects on the planetary radii measured with transit spectroscopy and mitigate them by about an order of magnitude. The obtained results show it is possible to accurately characterise the heterogeneous stellar surface up to a precision of a few parts in 10^5 and validate the scientific case of space missions like Ariel, designed for exoplanetary transmission spectroscopy.
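    The advantage of a joint fit over iterative prewhitening can be sketched with a simultaneous linear least-squares fit of several sinusoids at fixed trial frequencies, which is the building block an MGLS-style method evaluates over a frequency grid. The frequencies, amplitudes, and noise level below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.sort(rng.uniform(0, 200, 150))        # irregular sampling, as in RV campaigns
    f1, f2 = 1 / 7.3, 1 / 31.0                   # two hypothetical planetary frequencies (1/day)
    rv = 3.0 * np.sin(2 * np.pi * f1 * t) + 2.0 * np.cos(2 * np.pi * f2 * t)
    rv += rng.normal(0, 0.5, t.size)             # measurement noise (m/s)

    def joint_chi2(freqs):
        """Fit an offset plus all sinusoids simultaneously by linear least squares
        (the joint, MGLS-style model) and return the residual chi-square."""
        cols = [np.ones_like(t)]
        for f in freqs:
            cols += [np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, rv, rcond=None)
        return np.sum((rv - A @ coef) ** 2)

    chi2_joint = joint_chi2([f1, f2])            # simultaneous two-signal model
    chi2_single = joint_chi2([f1])               # prewhitening-style one-signal model
    print(chi2_joint < chi2_single)              # the joint model always fits at least as well
    ```

    Because the amplitudes and phases enter linearly, each trial frequency tuple costs one least-squares solve; the iterative alternative fixes the first signal before fitting the second, which is what distorts parameters in the prewhitening approach.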

    Investigation of Non-coherent Discrete Target Range Estimation Techniques for High-precision Location

    Ranging is an essential and crucial task for radar systems. Solving the range-detection problem effectively and precisely is therefore of great importance, and unambiguity and high resolution are key concerns as well. Both coherent and non-coherent techniques can be applied to range estimation, and each has advantages and disadvantages. Coherent estimates offer higher precision but are more vulnerable to noise, clutter, and phase-wrap errors, particularly in complex or harsh environments, while non-coherent approaches are simpler but provide lower precision. With the purpose of mitigating inaccuracy and perturbation in range estimation, miscellaneous techniques are employed to achieve optimally precise detection. Numerous elegant processing solutions stemming from non-coherent estimation have been introduced into the coherent realm, and vice versa. This thesis describes two non-coherent ranging estimation techniques with novel algorithms that mitigate the intrinsic deficiencies of non-coherent ranging approaches. One technique is based on peak detection and realised by Kth-order Polynomial Interpolation (KPI), while the other is based on the Z-transform and realised by a maximum-likelihood Chirp Z-transform. Both algorithms apply a two-stage approach to fine range estimation in the Discrete Fourier transform domain: an N-point Discrete Fourier transform is implemented to attain a coarse estimate, and an accurate search is then conducted around the point of interest determined in the first stage. The KPI technique interpolates around the peak of the Discrete Fourier transform profile of the chirp signal to achieve optimum precision. In the maximum-likelihood Chirp Z-transform technique, the Chirp Z-transform accurately implements the periodogram where only a narrow-band spectrum is processed, and the maximum-likelihood estimator is combined with the Chirp Z-transform to acquire better ranging performance. 
    The Cramer-Rao lower bound is presented to evaluate the performance of these two techniques from the perspective of statistical signal processing. Mathematical derivation, simulation modelling, theoretical analysis and experimental validation are conducted to assess the techniques' performance. Further research will address algorithm optimisation and the development of a location system using non-coherent techniques, with a comparison to a coherent approach.
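    The two-stage idea can be sketched as follows: a coarse N-point DFT peak search, followed by a fine narrow-band search around that peak, here done by direct evaluation of the DTFT on a dense grid in the spirit of the Chirp Z-transform zoom. The signal parameters are invented, and the thesis's actual algorithms are more elaborate than this sketch.

    ```python
    import numpy as np

    # Complex-baseband beat tone from a dechirped return; its frequency encodes range.
    fs, n = 1000.0, 256
    f_true = 123.7                                # Hz, deliberately off the FFT bin grid
    t = np.arange(n) / fs
    x = np.exp(2j * np.pi * f_true * t)

    # Stage 1: coarse estimate from the N-point DFT peak (bin resolution fs/n ~ 3.9 Hz)
    spec = np.abs(np.fft.fft(x))
    k = int(np.argmax(spec))
    f_coarse = k * fs / n

    # Stage 2: evaluate the DTFT on a dense grid spanning one bin either side of the
    # coarse peak -- the narrow-band zoom that a Chirp Z-transform computes efficiently.
    f_grid = np.linspace((k - 1) * fs / n, (k + 1) * fs / n, 2001)
    dtft = np.abs(np.exp(-2j * np.pi * np.outer(f_grid, t)) @ x)
    f_fine = f_grid[np.argmax(dtft)]

    print(f_coarse, round(f_fine, 2))   # coarse estimate is ~1.3 Hz off; fine ~123.70 Hz
    ```

    The brute-force matrix evaluation here costs O(MN) for M grid points; the Chirp Z-transform achieves the same narrow-band spectrum in O((M+N) log(M+N)), which is why it is the practical choice in the thesis's setting.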

    Performant astronomical image processing with Python

    Image processing is fundamental to observational astronomy workflows: astronomers acquire imaging data and process the imagery to extract useful information. This thesis introduces two new image processing algorithms. The first, PyTorchDIA, is a GPU-accelerated approach to Difference Image Analysis (DIA). The approach is fast without sacrificing modelling flexibility. It makes use of the PyTorch machine learning framework to accelerate convolution computations on the GPU and to compute gradients of user-specified objective functions with automatic differentiation, fitting DIA models quickly and accurately. The second algorithm, The Thresher, was designed as a new tool for extracting information from Lucky Imaging (LI) data sets. We adopt a modelling approach that optimises a justifiable, physically motivated likelihood function to return the best estimate of the observed astronomical scene. It does this using all available data, and the more data the model is fit to, the better the signal-to-noise and resolution of the scene estimate. This fundamentally differs from conventional shift-and-add procedures, which typically reject the vast majority of the acquired LI data because, in those approaches, the signal-to-noise of the final coadd is inversely related to its resolution. With an eye to accessibility, integration into workflows, and open science, the code for these two algorithms has been open sourced. Lastly, we show how Python image processing applications can meet time-critical, computationally demanding challenges in a chapter outlining the results of a novel pilot study to detect the occultations of background stars by small, outer solar system objects with high frame-rate sCMOS cameras.
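    The core of DIA, fitting a convolution kernel that matches a reference image to a target image, is linear in the kernel, so the basic problem can be sketched as ordinary least squares. The 1D toy below uses NumPy rather than PyTorch (PyTorchDIA's GPU and autodiff machinery is not reproduced here), and all image and kernel values are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy 1D "images": a reference R, and a target T = R convolved with an unknown
    # kernel (e.g. a seeing change between epochs) plus noise -- the DIA model.
    R = rng.normal(100.0, 5.0, 200)
    k_true = np.array([0.2, 0.6, 0.2])
    T = np.convolve(R, k_true, mode="same") + rng.normal(0, 0.1, R.size)

    # Convolution is linear in the kernel coefficients, so kernel fitting is a
    # linear least-squares problem whose design matrix holds shifted copies of R.
    half = len(k_true) // 2
    A = np.column_stack([np.roll(R, s) for s in range(half, -half - 1, -1)])
    k_fit, *_ = np.linalg.lstsq(A[half:-half], T[half:-half], rcond=None)

    print(np.round(k_fit, 2))   # close to the true kernel [0.2, 0.6, 0.2]
    ```

    Trimming `half` samples at each end discards the rows corrupted by `np.roll` wrap-around. In 2D the design matrix is built from shifted image stamps (often with a spatially varying kernel basis), but the linear-in-kernel structure is the same.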

    Nonparametric inference with directional and linear data

    The term directional data refers to data whose support is a circle, a sphere or, more generally, a hypersphere of arbitrary dimension. This kind of data appears naturally in several applied disciplines: proteomics, environmental sciences, biology, astronomy, image analysis and text mining. The aim of this thesis is to provide new methodological tools for nonparametric inference with directional and linear data (i.e., usual Euclidean data). Nonparametric methods are obtained for both estimation and testing, for the density and the regression curves, in situations where directional random variables are present, that is, for directional, directional-linear and directional-directional random variables. The main contributions of the thesis are collected in six papers, briefly described in what follows. In García-Portugués et al. (2013a), different ways of estimating circular-linear and circular-circular densities via copulas are explored for an environmental application. A new directional-linear kernel density estimator is introduced in García-Portugués et al. (2013b), together with its basic properties. Three new bandwidth selectors for the kernel density estimator with directional data are given in García-Portugués (2013) and compared with the available ones. The directional-linear estimator is used in García-Portugués et al. (2014a) for constructing an independence test for directional and linear variables that is applied to study the dependence between wildfire orientation and size. In García-Portugués et al. (2014b), a central limit theorem for the integrated squared error of the directional-linear estimator is presented. This result is used to derive the asymptotic distribution of the independence test and of a goodness-of-fit test for parametric directional-linear and directional-directional densities. Finally, a local linear estimator with directional predictor and linear response is given in García-Portugués et al. (2014), together with a goodness-of-fit test for parametric regression functions.
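    A basic building block of this setting, kernel density estimation with circular data, can be sketched with a von Mises kernel. The concentration value below is an arbitrary bandwidth choice, not one of the thesis's proposed selectors, and the data are simulated for illustration.

    ```python
    import numpy as np

    def vm_kde(grid, data, kappa=20.0):
        """Kernel density estimate on the circle using a von Mises kernel; the
        concentration kappa acts as an inverse bandwidth (larger = less smoothing)."""
        K = np.exp(kappa * np.cos(grid[:, None] - data[None, :]))
        return K.mean(axis=1) / (2 * np.pi * np.i0(kappa))   # np.i0: Bessel normaliser

    rng = np.random.default_rng(5)
    data = rng.vonmises(np.pi / 2, 4.0, 500)        # angles clustered around pi/2
    grid = np.linspace(-np.pi, np.pi, 360, endpoint=False)
    dens = vm_kde(grid, data)

    mass = dens.mean() * 2 * np.pi                  # periodic trapezoid rule on the circle
    print(round(mass, 3))                           # integrates to ~1, as a density must
    ```

    Because the kernel depends on angles only through cos(θ - θ_i), the estimate is automatically 2π-periodic, which is exactly what a Euclidean Gaussian kernel fails to guarantee on circular supports.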

    Abstracts on Radio Direction Finding (1899 - 1995)

    The files on this record represent the various databases that originally composed the CD-ROM issue of the "Abstracts on Radio Direction Finding" database, which is now part of the Dudley Knox Library's Abstracts and Selected Full Text Documents on Radio Direction Finding (1899 - 1995) Collection. (See Calhoun record https://calhoun.nps.edu/handle/10945/57364 for further information on this collection and the bibliography). Due to issues of technological obsolescence preventing current and future audiences from accessing the bibliography, DKL exported and converted the various databases contained in the CD-ROM into the three files on this record. The contents of these files are: 1) RDFA_CompleteBibliography_xls.zip [RDFA_CompleteBibliography.xls: Metadata for the complete bibliography, in Excel 97-2003 Workbook format; RDFA_Glossary.xls: Glossary of terms, in Excel 97-2003 Workbook format; RDFA_Biographies.xls: Biographies of leading figures, in Excel 97-2003 Workbook format]; 2) RDFA_CompleteBibliography_csv.zip [RDFA_CompleteBibliography.TXT: Metadata for the complete bibliography, in CSV format; RDFA_Glossary.TXT: Glossary of terms, in CSV format; RDFA_Biographies.TXT: Biographies of leading figures, in CSV format]; 3) RDFA_CompleteBibliography.pdf: A human-readable display of the bibliographic data, as a means of double-checking any possible deviations due to conversion.