
    Adaptive Langevin Sampler for Separation of t-Distribution Modelled Astrophysical Maps

    We propose to model the image differentials of astrophysical source maps by Student's t-distribution and to use them as priors in a Bayesian source separation method. We introduce an efficient Markov Chain Monte Carlo (MCMC) sampling scheme to unmix the astrophysical sources and describe the derivation details. In this scheme, we use the Langevin stochastic equation for transitions, which enables parallel drawing of random samples from the posterior and reduces the computation time significantly (by two orders of magnitude). In addition, the Student's t-distribution parameters are updated throughout the iterations. The results on astrophysical source separation are assessed with two performance criteria defined in the pixel and the frequency domains.
    Comment: 12 pages, 6 figures
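
    As a rough illustration of the kind of transition described above (not the authors' exact scheme), the sketch below assumes a Student's-t prior on pixel differences and an unadjusted Langevin update; the names and the values of `nu`, `sigma` and the step size `eps` are illustrative. In a full separation scheme, the posterior gradient would combine this prior term with the likelihood of the mixed observations.

```python
import numpy as np

def grad_log_t_prior(s, nu=4.0, sigma=1.0):
    """Gradient of a log Student's-t prior placed on the pixel differences of map s."""
    grad = np.zeros_like(s)
    for axis in (0, 1):
        d = np.diff(s, axis=axis)                       # image differentials along one axis
        g = -(nu + 1.0) * d / (nu * sigma**2 + d**2)    # d log t(d) / dd
        lead = [slice(None), slice(None)]
        trail = [slice(None), slice(None)]
        lead[axis], trail[axis] = slice(0, -1), slice(1, None)
        grad[tuple(trail)] += g                         # dd/ds_trailing = +1
        grad[tuple(lead)] -= g                          # dd/ds_leading  = -1
    return grad

def langevin_step(s, grad_log_posterior, eps=1e-3, rng=None):
    """One (unadjusted) Langevin transition: drift along the posterior gradient plus noise.
    Every pixel is updated at once, which is what allows parallel sampling."""
    rng = rng or np.random.default_rng()
    return s + 0.5 * eps * grad_log_posterior(s) + np.sqrt(eps) * rng.standard_normal(s.shape)
```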

    Blind source separation from multi-channel observations with channel-variant spatial resolutions

    We propose a Bayesian method for the separation and reconstruction of multiple source images from multi-channel observations with different resolutions and sizes. We reconstruct the sources by exploiting each observation channel at its exact resolution and size. The source maps are estimated by sampling the posteriors through a Monte Carlo scheme driven by an adaptive Langevin sampler. We use the t-distribution as the prior image model. All the parameters of the posterior distribution are estimated iteratively as the algorithm proceeds. We tested the proposed technique on simulated astrophysical observations. These data are typically characterized by channel-variant spatial resolution. Unlike most of the spatial-domain separation methods proposed so far, our strategy allows us to exploit each channel map at its exact resolution and size. The authors would like to thank Anna Bonaldi (INAF, Padova, Italy) and Bulent Sankur (Bogazici University, Turkey) for their valuable discussions. The simulated maps are courtesy of the Planck working group on diffuse component separation (WG2.1).
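
    A minimal sketch of the channel-variant forward model implied by the abstract: each channel is a linear mixture of the source maps blurred by that channel's own beam at its own resolution. Function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def observe_channel(sources, mix_row, beam_fwhm_pix, noise_sigma=0.0, rng=None):
    """Simulate one channel: mix the source maps, then blur with that channel's own beam.
    sources: array (n_src, H, W); mix_row: per-source mixing coefficients for this channel."""
    mixed = np.tensordot(mix_row, sources, axes=1)          # linear mixture A_c . s
    H, W = mixed.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    sigma_pix = beam_fwhm_pix / 2.3548                      # FWHM -> Gaussian sigma
    beam_ft = np.exp(-2.0 * (np.pi * sigma_pix) ** 2 * (fy ** 2 + fx ** 2))
    blurred = np.fft.ifft2(np.fft.fft2(mixed) * beam_ft).real
    if noise_sigma > 0:
        rng = rng or np.random.default_rng()
        blurred = blurred + noise_sigma * rng.standard_normal(blurred.shape)
    return blurred
```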

    A Bayesian approach to discrete object detection in astronomical datasets

    A Bayesian approach is presented for detecting and characterising the signal from discrete objects embedded in a diffuse background. The approach centres around the evaluation of the posterior distribution for the parameters of the discrete objects, given the observed data, and defines the theoretically-optimal procedure for parametrised object detection. Two alternative strategies are investigated: the simultaneous detection of all the discrete objects in the dataset, and the iterative detection of objects. In both cases, the parameter space characterising the object(s) is explored using Markov-Chain Monte-Carlo sampling. For the iterative detection of objects, another approach is to locate the global maximum of the posterior at each iteration using a simulated annealing downhill simplex algorithm. The techniques are applied to a two-dimensional toy problem consisting of Gaussian objects embedded in uncorrelated pixel noise. A cosmological illustration of the iterative approach is also presented, in which the thermal and kinetic Sunyaev-Zel'dovich effects from clusters of galaxies are detected in microwave maps dominated by emission from primordial cosmic microwave background anisotropies.
    Comment: 20 pages, 12 figures, accepted by MNRAS; contains some additional material in response to referee's comments
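
    For the two-dimensional toy problem, the posterior that the MCMC sampler (or the simulated-annealing simplex search) explores can be written down in a few lines. The sketch below assumes a single circular Gaussian object, a Gaussian pixel-noise likelihood and flat priors with hard bounds; it illustrates the structure rather than reproducing the paper's implementation.

```python
import numpy as np

def log_posterior(params, data, noise_sigma, prior_bounds):
    """Unnormalised log-posterior for one circular Gaussian object (x0, y0, amplitude, width)
    embedded in uncorrelated Gaussian pixel noise, with flat priors inside hard bounds."""
    params = np.asarray(params, dtype=float)
    lo, hi = prior_bounds
    if np.any(params < lo) or np.any(params > hi):
        return -np.inf                                   # outside the flat prior support
    x0, y0, amp, width = params
    H, W = data.shape
    yy, xx = np.mgrid[0:H, 0:W]
    model = amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * width ** 2))
    chi2 = np.sum((data - model) ** 2) / noise_sigma ** 2
    return -0.5 * chi2                                   # Gaussian likelihood, flat prior
```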

    The Effect of Scintillation on Ground-Based Exoplanet Transit Photometry

    In this thesis, the effect of scintillation arising from atmospheric optical turbulence on exoplanet transit and secondary eclipse photometry is examined. Atmospheric scintillation arises from the propagation of phase aberrations resulting from wavefront perturbations due to optical turbulence high in the atmosphere. Scintillation causes intensity variations of astronomical targets, which is a problem in exoplanet transit photometry, where the measurement of a decrease in brightness of 1% or less is required. For this reason, ground-based telescopes have inferior photometric precision compared to their space-based counterparts, despite having the advantage of a reduced cost. In contrast with previous work on the detection limits of fast photometry, which assumes an atmosphere averaged over time, the actual scintillation noise can vary considerably from night to night depending on the magnitude of the high-altitude turbulence. From simulation of turbulent layers, the regimes where scintillation is the dominant source of noise in photometry are presented. These are shown to be in good agreement with the analytical, layer-based equations for scintillation. Through Bayesian analysis, the relationship between the errors on the light curve and the uncertainties on the astrophysical parameters is examined. The errors on the light curve arising from scintillation linearly increase the scatter on the astrophysical parameters, with a gradient in the range 0.68-0.80. The noise due to the photometry aperture is investigated. It is found that for short exposure times in good seeing, speckle noise contributes to the photometric noise for aperture sizes of up to approximately 2.3xFWHM. The results from simultaneous turbulence profiling and time-series photometry are presented. It is found that turbulence profiling can be used to accurately predict the amount of scintillation noise present in photometric observations. An investigation of the secondary eclipse of WASP-12b with the William Herschel Telescope (WHT) is performed, resulting in a high-quality z’-band light curve consistent with a carbon-rich model and with no evidence for a strong thermal inversion.
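
    For orientation only, the commonly quoted Young-style scaling gives a rough estimate of the fractional scintillation noise for a given telescope, exposure time and airmass; the thesis's layer-based and turbulence-profiling treatment is considerably more detailed, and the constants below are the textbook ones, not results from this work.

```python
import numpy as np

def young_scintillation_sigma(diameter_m, t_exp_s, airmass, altitude_m, scale_height_m=8000.0):
    """Approximate fractional scintillation noise per exposure using the commonly quoted
    Young-style scaling: sigma^2 ~ 1e-5 * D^(-4/3) * t^(-1) * airmass^3 * exp(-2 h / H)."""
    var = (1e-5 * diameter_m ** (-4.0 / 3.0) / t_exp_s * airmass ** 3
           * np.exp(-2.0 * altitude_m / scale_height_m))
    return np.sqrt(var)

# Illustrative numbers only: a 4.2 m telescope, 10 s exposures, airmass 1.2, site at 2350 m
print(young_scintillation_sigma(4.2, 10.0, 1.2, 2350.0))
```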

    Algorithms for approximate Bayesian inference with applications to astronomical data analysis

    Bayesian inference is a theoretically well-founded and conceptually simple approach to data analysis. The computations in practical problems are anything but simple, though, and thus approximations are almost always a necessity. The topic of this thesis is approximate Bayesian inference and its applications in three intertwined problem domains. Variational Bayesian learning is one type of approximate inference. Its main advantage is its computational efficiency compared to the widely applied sampling-based methods. Its main disadvantage, on the other hand, is the large amount of analytical work required to derive the necessary components for the algorithm. One part of this thesis reports on an effort to automate variational Bayesian learning of a certain class of models. The second part of the thesis is concerned with heteroscedastic modelling, which is synonymous with variance modelling. Heteroscedastic models are particularly suitable for Bayesian treatment, as many of the traditional estimation methods do not produce satisfactory results for them. In the thesis, variance models and algorithms for estimating them are studied in two different contexts: in source separation and in regression. Astronomical applications constitute the third part of the thesis. Two problems are posed: one is concerned with the separation of stellar subpopulation spectra from observed galaxy spectra; the other with estimating the time delays in gravitational lensing. Solutions to both of these problems are presented, which rely heavily on the machinery of approximate inference.
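
    As a small illustration of what heteroscedastic modelling means in practice (not the thesis's variational algorithms), the sketch below writes down the likelihood of a linear model whose noise variance also depends on the input; making the variance itself a modelled quantity is the defining ingredient. All names and the synthetic data are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def heteroscedastic_nll(params, x, y):
    """Negative log-likelihood of y ~ N(a*x + b, exp(c*x + d)): both the mean and the
    log-variance are linear in x, so the noise level is learned rather than assumed."""
    a, b, c, d = params
    mean = a * x + b
    log_var = c * x + d
    return 0.5 * np.sum(log_var + (y - mean) ** 2 / np.exp(log_var))

# Illustrative fit on synthetic data with input-dependent scatter
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = 2.0 * x + 1.0 + np.exp(0.5 * (3.0 * x - 2.0)) * rng.standard_normal(200)
fit = minimize(heteroscedastic_nll, x0=[0.0, 0.0, 0.0, 0.0], args=(x, y))
print(fit.x)   # roughly recovers (a, b, c, d) = (2, 1, 3, -2)
```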

    The environment and host haloes of the brightest z~6 Lyman-break galaxies

    By studying the large-scale structure of the bright high-redshift Lyman-break galaxy (LBG) population it is possible to gain insight into the role of environment in galaxy formation physics in the early Universe. We measure the clustering of a sample of bright (-22.7<M_UV<-21.125) LBGs at z~6 and use a halo occupation distribution (HOD) model to measure their typical halo masses. We find that the clustering amplitude and corresponding HOD fits suggest that these sources are highly biased (b~8) objects in the densest regions of the high-redshift Universe. Coupled with the observed rapid evolution of the number density of these objects, our results suggest that the shape of the high-luminosity end of the luminosity function is related to feedback processes or dust obscuration in the early Universe - as opposed to a scenario where these sources are predominantly rare instances of the much more numerous M_UV ~ -19 population of galaxies caught in a particularly vigorous period of star formation. There is a slight tension between the number densities and clustering measurements, which we interpret as a signal that a refinement of the model halo bias relation at high redshifts or the incorporation of quasi-linear effects may be needed for future attempts at modelling the clustering and number counts. Finally, the difference in number density between the fields (UltraVISTA has a surface density ~1.8 times greater than UDS) is shown to be consistent with the cosmic variance implied by the clustering measurements.
    Comment: 19 pages, 8 figures, accepted MNRAS 23rd March 201
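
    HOD modelling of the kind mentioned above typically uses the standard five-parameter occupation function (the Zheng et al. 2005 form). The sketch below shows that parametrisation with placeholder values, not the best-fit values from this paper.

```python
import numpy as np
from scipy.special import erf

def mean_occupation(log10_M, log10_Mmin=11.8, sigma_logM=0.3,
                    log10_M0=11.8, log10_M1=13.0, alpha=1.0):
    """Mean number of central and satellite galaxies per halo of mass M (five-parameter HOD).
    All parameter values here are placeholders for illustration."""
    n_cen = 0.5 * (1.0 + erf((log10_M - log10_Mmin) / sigma_logM))
    M = 10.0 ** np.asarray(log10_M)
    M0, M1 = 10.0 ** log10_M0, 10.0 ** log10_M1
    frac = np.clip(M - M0, 0.0, None) / M1               # satellites only above the cutoff mass
    n_sat = n_cen * frac ** alpha
    return n_cen, n_sat

print(mean_occupation(np.array([11.5, 12.5, 13.5])))
```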

    Approximate inference in astronomy

    This thesis utilizes the rules of probability theory and Bayesian reasoning to perform inference about astrophysical quantities from observational data, with a main focus on the inference of dynamical systems extended in space and time. The assumptions necessary to successfully solve such inference problems in practice are discussed and the resulting methods are applied to real-world data. These assumptions range from the simplifying prior assumptions that enter the inference process up to the development of a novel approximation method for the resulting posterior distributions. The prior models developed in this work follow a maximum entropy principle by solely constraining those physical properties of a system that appear most relevant to inference, while remaining uninformative regarding all other properties. To this end, prior models are developed that only constrain the statistically homogeneous space-time correlation structure of a physical observable. The constraints placed on these correlations are based on generic physical principles, which makes the resulting models quite flexible and allows for a wide range of applications. This flexibility is verified and explored using multiple numerical examples, as well as an application to Event Horizon Telescope data on the center of the galaxy M87. Furthermore, as an advanced and extended form of application, a variant of these priors is utilized within the context of simulating partial differential equations. Here, the prior is used to quantify the physical plausibility of an associated numerical solution, which in turn improves the accuracy of the simulation. The applicability and implications of this probabilistic approach to simulation are discussed and studied using numerical examples. Finally, pairing such prior models with the vast amount of observational data provided by modern telescopes results in Bayesian inference problems that are typically too complex to be fully solvable analytically. Specifically, most resulting posterior probability distributions become too complex and therefore require a numerical approximation via a simplified distribution. To improve upon existing methods, this work proposes a novel approximation method for posterior probability distributions: the geometric Variational Inference (geoVI) method. The approximation capabilities of geoVI are established theoretically and demonstrated using numerous numerical examples. These results suggest a broad range of applicability, as the method provides a decrease in approximation errors compared to state-of-the-art methods at a moderate level of computational cost.
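
    A toy version of the kind of statistically homogeneous prior described above (the thesis's actual models learn the correlation structure rather than fixing it): a zero-mean Gaussian field whose only constraint is an isotropic power-law power spectrum, sampled via the FFT. The spectral index and field size are arbitrary illustrative choices.

```python
import numpy as np

def sample_homogeneous_field(shape, power_index=-3.0, rng=None):
    """Draw one sample from a zero-mean Gaussian prior whose only constraint is a
    statistically homogeneous correlation structure, encoded as an isotropic
    power spectrum P(k) ~ k**power_index."""
    rng = rng or np.random.default_rng()
    ky = np.fft.fftfreq(shape[0])[:, None]
    kx = np.fft.fftfreq(shape[1])[None, :]
    k = np.hypot(ky, kx)
    k[0, 0] = np.inf                                  # no power in the constant mode
    amplitude = k ** (power_index / 2.0)
    white = rng.standard_normal(shape)
    return np.fft.ifft2(np.fft.fft2(white) * amplitude).real

field = sample_homogeneous_field((128, 128))          # one prior sample on a 128x128 grid
```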

    Accelerating inference in cosmology and seismology with generative models

    Statistical analyses in many physical sciences require running simulations of the system under examination. Such simulations provide information complementary to theoretical analytic models and represent an invaluable tool for investigating the dynamics of complex systems. However, running simulations is often computationally expensive, and the large number of mock realisations required to obtain sufficient statistical precision often makes the problem intractable. In recent years, machine learning has emerged as a possible solution to speed up the generation of scientific simulations. Machine-learning generative models usually rely on iteratively feeding true simulations to the algorithm until it learns the important common features and is capable of producing accurate simulations in a fraction of the time. In this thesis, advanced machine learning algorithms are explored and applied to the challenge of accelerating physical simulations. Various techniques are applied to problems in cosmology and seismology, and the benefits and limitations of such an approach are shown through a critical analysis. The algorithms are applied to compelling problems in these fields, including surrogate models for the seismic wave equation, the emulation of cosmological summary statistics, and the fast generation of large simulations of the Universe. These problems are formulated within a relevant statistical framework and tied to real data analysis pipelines. In the conclusions, a critical overview of the results is provided, together with an outlook on possible future expansions of the work presented in the thesis.
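
    A toy sketch of the "train once, predict fast" emulation pattern described above, using a stand-in simulator and a small neural network; the simulator, parameter ranges and network size are invented for illustration and bear no relation to the actual pipelines in the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_simulator(theta):
    """Placeholder 'simulation': maps two parameters to a 10-bin summary statistic."""
    bins = np.linspace(0.1, 1.0, 10)
    return theta[0] * np.exp(-bins / theta[1])

# Build a training set from a modest number of (expensive) simulator calls
rng = np.random.default_rng(0)
train_theta = rng.uniform([0.5, 0.2], [2.0, 1.0], size=(500, 2))
train_stats = np.array([slow_simulator(t) for t in train_theta])

# The emulator learns the parameter -> statistic mapping; afterwards a prediction
# costs a fraction of a simulation run, which is what accelerates the inference.
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
emulator.fit(train_theta, train_stats)
print(emulator.predict([[1.0, 0.5]]))
```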

    Planetary Candidates Observed by Kepler. VIII. A Fully Automated Catalog With Measured Completeness and Reliability Based on Data Release 25

    We present the Kepler Object of Interest (KOI) catalog of transiting exoplanets based on searching four years of Kepler time series photometry (Data Release 25, Q1-Q17). The catalog contains 8054 KOIs, of which 4034 are planet candidates with periods between 0.25 and 632 days. Of these candidates, 219 are new and include two in multi-planet systems (KOI-82.06 and KOI-2926.05) and ten high-reliability, terrestrial-size, habitable zone candidates. This catalog was created using a tool called the Robovetter, which automatically vets the DR25 Threshold Crossing Events (TCEs, Twicken et al. 2016). The Robovetter also vetted simulated data sets and measured how well it was able to separate TCEs caused by noise from those caused by low signal-to-noise transits. We discuss the Robovetter and the metrics it uses to sort TCEs. For orbital periods less than 100 days the Robovetter completeness (the fraction of simulated transits that are determined to be planet candidates) across all observed stars is greater than 85%. For the same period range, the catalog reliability (the fraction of candidates that are not due to instrumental or stellar noise) is greater than 98%. However, for low signal-to-noise candidates between 200 and 500 days around FGK dwarf stars, the Robovetter is 76.7% complete and the catalog is 50.5% reliable. The KOI catalog, the transit fits and all of the simulated data used to characterize this catalog are available at the NASA Exoplanet Archive.
    Comment: 61 pages, 23 Figures, 9 Tables, Accepted to The Astrophysical Journal Supplement Series
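
    The completeness and reliability figures quoted above come from vetting simulated data. The sketch below shows, under simplified assumptions, how such numbers can be computed from counts of injected transits, observed candidates and simulated false alarms; the reliability expression follows the general approach of the DR25 catalog paper, and the counts are invented for illustration.

```python
def completeness(n_injected_recovered, n_injected_total):
    """Fraction of injected (simulated) transits that the vetter labels planet candidates."""
    return n_injected_recovered / n_injected_total

def reliability(n_obs_candidates, n_obs_false_positives, effectiveness):
    """Estimated fraction of observed candidates not caused by instrumental/stellar noise.
    'effectiveness' is the fraction of simulated false alarms the vetter correctly rejects,
    measured on inverted/scrambled light curves."""
    return 1.0 - (n_obs_false_positives / n_obs_candidates) * (1.0 - effectiveness) / effectiveness

# Hypothetical counts, purely for illustration
print(completeness(850, 1000))          # -> 0.85
print(reliability(200, 1800, 0.99))     # -> about 0.91
```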