CIGALEMC: Galaxy Parameter Estimation using a Markov Chain Monte Carlo Approach with Cigale
We introduce a fast Markov Chain Monte Carlo (MCMC) exploration of the
astrophysical parameter space using a modified version of the publicly
available code CIGALE (Code Investigating GALaxy emission). The original CIGALE
builds a grid of theoretical Spectral Energy Distribution (SED) models and fits
to photometric fluxes from Ultraviolet (UV) to Infrared (IR) to put constraints
on parameters related to both formation and evolution of galaxies. Such a
grid-based method can lead to a long and challenging parameter extraction since
the computation time increases exponentially with the number of parameters
considered and results can be dependent on the density of sampling points,
which must be chosen in advance for each parameter. Markov Chain Monte Carlo
methods, on the other hand, scale approximately linearly with the number of
parameters, allowing a faster and more accurate exploration of the parameter
space by using a smaller number of efficiently chosen samples. We test our MCMC
version of the code CIGALE (called CIGALEMC) with simulated data. After
checking the ability of the code to retrieve the input parameters used to build
the mock sample, we fit theoretical SEDs to real data from the well known and
studied SINGS sample. We discuss constraints on the parameters and show the
advantages of our MCMC sampling method in terms of accuracy of the results and
optimization of CPU time.
Comment: 12 pages, 8 figures, 4 tables, updated to match the version accepted for publication in ApJ; code available at http://www.oamp.fr/cigale
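The scaling argument above can be sketched with a minimal Metropolis-Hastings sampler. The three-parameter Gaussian "posterior" standing in for an SED likelihood, the step size, and the chain length are all illustrative assumptions, not CIGALEMC's actual model:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps=5000, step=0.1, seed=0):
    """Random-walk Metropolis sampler: cost grows with chain length,
    not exponentially with the number of parameters as a grid does."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy 3-parameter "fit": posterior is Gaussian around the true parameters.
true_params = np.array([1.0, -0.5, 2.0])
log_post = lambda p: -0.5 * np.sum((p - true_params) ** 2 / 0.1 ** 2)

chain = metropolis_hastings(log_post, x0=np.zeros(3))
print(chain[1000:].mean(axis=0))  # posterior means, close to true_params
```

Adding a fourth parameter only adds one dimension to the proposal, whereas a grid with k sampling points per axis would grow by another factor of k.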
Delayed Sampling and Automatic Rao-Blackwellization of Probabilistic Programs
We introduce a dynamic mechanism for the solution of analytically-tractable
substructure in probabilistic programs, using conjugate priors and affine
transformations to reduce variance in Monte Carlo estimators. For inference
with Sequential Monte Carlo, this automatically yields improvements such as
locally-optimal proposals and Rao-Blackwellization. The mechanism maintains a
directed graph alongside the running program that evolves dynamically as
operations are triggered upon it. Nodes of the graph represent random
variables, edges the analytically-tractable relationships between them. Random
variables remain in the graph for as long as possible, to be sampled only when
they are used by the program in a way that cannot be resolved analytically. In
the meantime, they are conditioned on as many observations as possible. We
demonstrate the mechanism with a few pedagogical examples, as well as a
linear-nonlinear state-space model with simulated data, and an epidemiological
model with real data of a dengue outbreak in Micronesia. In all cases one or
more variables are automatically marginalized out to significantly reduce
variance in estimates of the marginal likelihood, in the final case
facilitating a random-weight or pseudo-marginal-type importance sampler for
parameter estimation. We have implemented the approach in Anglican and a new
probabilistic programming language called Birch.
Comment: 13 pages, 4 figures
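The variance reduction that motivates delayed sampling can be seen in the simplest conjugate Gaussian model. Below, a naive Monte Carlo estimator of the marginal likelihood samples the latent variable, while the Rao-Blackwellized estimator keeps it symbolic and uses the closed form; the model and observation are illustrative, not taken from the paper:

```python
import math
import numpy as np

# Conjugate model:  x ~ N(0, 1),  y | x ~ N(x, 1)  =>  y ~ N(0, 2) exactly.
rng = np.random.default_rng(1)
y_obs = 1.3

def normal_pdf(v, mu, var):
    return math.exp(-0.5 * (v - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

# Naive estimator: sample x from the prior, average the likelihood N(y | x, 1).
xs = rng.standard_normal(10_000)
naive = np.mean([normal_pdf(y_obs, x, 1.0) for x in xs])

# Rao-Blackwellized estimator: x marginalized analytically, zero MC variance.
exact = normal_pdf(y_obs, 0.0, 2.0)

print(naive, exact)  # naive fluctuates around the exact value
```

Delayed sampling automates exactly this substitution wherever the graph of conjugate relationships allows it, instead of requiring the modeller to derive the closed form by hand.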
Constraints On The Topology Of The Universe From The WMAP First-Year Sky Maps
We compute the covariance expected between the spherical harmonic
coefficients of the cosmic microwave temperature anisotropy if the
universe had a compact topology. For a fundamental cell size smaller than the
distance to the decoupling surface, off-diagonal components carry more
information than the diagonal components (the power spectrum). We use a maximum
likelihood analysis to compare the Wilkinson Microwave Anisotropy Probe
first-year data to models with a cubic topology. The data are compatible with
finite flat topologies with fundamental domain times the distance to
the decoupling surface at 95% confidence. The WMAP data show reduced power at
the quadrupole and octopole, but do not show the correlations expected for a
compact topology and are indistinguishable from infinite models.
Comment: 16 pages, 5 figures
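The claim that off-diagonal covariance carries the discriminating information can be illustrated with a toy Gaussian likelihood comparison. The two covariance models below (a diagonal "infinite universe" power spectrum versus a correlated "compact topology" model) use made-up numbers, not WMAP values:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50  # length of the toy data vector of harmonic coefficients

def gauss_loglike(d, C):
    """Gaussian log-likelihood of data d under covariance model C."""
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (d @ np.linalg.solve(C, d) + logdet + n * np.log(2 * np.pi))

C_diag = np.eye(n)  # infinite model: power spectrum only, no correlations
# Compact model: neighbouring coefficients correlated (illustrative kernel).
C_compact = 0.4 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

# Average log-likelihood difference over skies simulated from the compact model.
diffs = []
for _ in range(200):
    d = rng.multivariate_normal(np.zeros(n), C_compact)
    diffs.append(gauss_loglike(d, C_compact) - gauss_loglike(d, C_diag))
print(np.mean(diffs))  # positive on average: the correlations are detectable
```

Both models have identical diagonals here, so any preference for the compact model comes entirely from the off-diagonal structure, mirroring the maximum-likelihood comparison described in the abstract.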
Parameter estimation on gravitational waves from multiple coalescing binaries
Future ground-based and space-borne interferometric gravitational-wave
detectors may capture between tens and thousands of binary coalescence events
per year. There is a significant and growing body of work on the estimation of
astrophysically relevant parameters, such as masses and spins, from the
gravitational-wave signature of a single event. This paper introduces a robust
Bayesian framework for combining the parameter estimates for multiple events
into a parameter distribution of the underlying event population. The framework
can be readily deployed as a rapid post-processing tool.
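A common way to realize this kind of hierarchical combination is to recycle each event's posterior samples: with flat per-event priors, the population likelihood is the product over events of the average population density at those samples. The Gaussian population model, the mock "chirp masses", and the grid search below are illustrative assumptions, not the paper's framework:

```python
import numpy as np

rng = np.random.default_rng(3)

# Mock population: 30 events with true masses drawn from N(mu=10, sigma=2);
# each event yields posterior samples scattered around its true value.
true_masses = rng.normal(10.0, 2.0, size=30)
event_samples = [rng.normal(m, 0.5, size=500) for m in true_masses]

def log_pop_like(mu, sigma):
    """Population log-likelihood: product over events of the mean
    population density evaluated at that event's posterior samples."""
    total = 0.0
    for s in event_samples:
        dens = np.exp(-0.5 * ((s - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
        total += np.log(np.mean(dens))
    return total

# Grid over the population mean: the combined events peak near the truth.
mus = np.linspace(8, 12, 81)
best_mu = mus[np.argmax([log_pop_like(m, 2.0) for m in mus])]
print(best_mu)
```

Because only stored posterior samples are needed, this step runs in seconds after the per-event analyses finish, which is what makes it usable as rapid post-processing.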
Constraining the luminosity function parameters and population size of radio pulsars in globular clusters
Studies of the Galactic population of radio pulsars have shown that their
luminosity distribution appears to be log-normal in form. We investigate some
of the consequences that occur when one applies this functional form to
populations of pulsars in globular clusters. We use Bayesian methods to explore
constraints on the mean and standard deviation of the luminosity function, as
well as the total number of pulsars, given an observed sample of pulsars down
to some limiting flux density, accounting for measurements of flux densities of
individual pulsars as well as diffuse emission from the direction of the
cluster. We apply our analysis to Terzan 5, 47 Tucanae and M 28, and
demonstrate, under reasonable assumptions, that the number of potentially
observable pulsars should be within 95% credible intervals of
, and , respectively.
Beaming considerations would increase the true population size by approximately
a factor of two. Using non-informative priors, however, the constraints are not
tight due to the paucity and quality of flux density measurements. Future
cluster pulsar discoveries and improved flux density measurements would allow
this method to be used to more accurately constrain the luminosity function,
and to compare the luminosity function between different clusters.
Comment: 9 pages, 4 figures, accepted for publication in MNRAS
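The core of the inference on the total number of pulsars can be sketched as follows: a log-normal luminosity function gives a detection probability above the flux limit, and the observed count is then binomial in the unknown total N. All numbers below (mean, width, limit, observed count) are illustrative, not the Terzan 5, 47 Tucanae, or M 28 values:

```python
import math
import numpy as np

mu, sigma = -1.1, 0.9   # mean and width of log10(L) (illustrative)
log_L_lim = 0.0         # detection limit in log10 luminosity (illustrative)
n_obs = 25              # number of pulsars actually detected (illustrative)

# Detection probability: log-normal tail fraction above the flux limit.
p_det = 0.5 * math.erfc((log_L_lim - mu) / (sigma * math.sqrt(2)))

def log_binom_like(N):
    """log P(n_obs detections | N total pulsars, detection prob p_det)."""
    if N < n_obs:
        return -math.inf
    return (math.lgamma(N + 1) - math.lgamma(n_obs + 1)
            - math.lgamma(N - n_obs + 1)
            + n_obs * math.log(p_det) + (N - n_obs) * math.log1p(-p_det))

# Flat prior on N: the posterior is just the normalized likelihood.
Ns = np.arange(n_obs, 2000)
post = np.exp([log_binom_like(N) for N in Ns])
post /= post.sum()
N_mode = Ns[np.argmax(post)]
print(N_mode)  # posterior mode near n_obs / p_det
```

The paper's full analysis additionally marginalizes over the luminosity-function parameters and folds in the diffuse-emission constraint, but the binomial backbone above is what ties the observed sample to the total population size.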
Reconstructing the massive black hole cosmic history through gravitational waves
The massive black holes we observe in galaxies today are the natural
end-product of a complex evolutionary path, in which black holes seeded in
proto-galaxies at high redshift grow through cosmic history via a sequence of
mergers and accretion episodes. Electromagnetic observations probe a small
subset of the population of massive black holes (namely, those that are active
or those that are very close to us), but planned space-based gravitational-wave
observatories such as the Laser Interferometer Space Antenna (LISA) can measure
the parameters of ``electromagnetically invisible'' massive black holes out to
high redshift. In this paper we introduce a Bayesian framework to analyze the
information that can be gathered from a set of such measurements. Our goal is
to connect a set of massive black hole binary merger observations to the
underlying model of massive black hole formation. In other words, given a set
of observed massive black hole coalescences, we assess what information can be
extracted about the underlying massive black hole population model. For
concreteness we consider ten specific models of massive black hole formation,
chosen to probe four important (and largely unconstrained) aspects of the input
physics used in structure formation simulations: seed formation, metallicity
``feedback'', accretion efficiency and accretion geometry. For the first time
we allow for the possibility of ``model mixing'', by drawing the observed
population from some combination of the ``pure'' models that have been
simulated. A Bayesian analysis allows us to recover a posterior probability
distribution for the ``mixing parameters'' that characterize the fractions of
each model represented in the observed distribution. Our work shows that LISA
has enormous potential to probe the underlying physics of structure formation.
Comment: 24 pages, 16 figures, submitted to Phys. Rev.
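The "model mixing" idea admits a compact sketch: observed events are drawn from a mixture f·A + (1 − f)·B of two pure formation models, and the posterior on the mixing fraction f is recovered from the data. The two Gaussian "model predictions" for an observable and the 70/30 mix are illustrative assumptions, not any of the ten simulated models:

```python
import numpy as np

rng = np.random.default_rng(4)

def model_a(x):  # pure model A's predicted density for the observable
    return np.exp(-0.5 * ((x - 5.0) / 1.0) ** 2) / np.sqrt(2 * np.pi)

def model_b(x):  # pure model B's predicted density for the observable
    return np.exp(-0.5 * ((x - 8.0) / 1.0) ** 2) / np.sqrt(2 * np.pi)

# Simulate 100 detections from a 70/30 mixture of the two pure models.
true_f = 0.7
x = np.where(rng.uniform(size=100) < true_f,
             rng.normal(5.0, 1.0, 100), rng.normal(8.0, 1.0, 100))

# Posterior on the mixing fraction f (flat prior), evaluated on a grid.
fs = np.linspace(0.01, 0.99, 99)
logpost = np.array([np.sum(np.log(f * model_a(x) + (1 - f) * model_b(x)))
                    for f in fs])
post = np.exp(logpost - logpost.max())
post /= post.sum()
best_f = fs[np.argmax(post)]
print(best_f)  # peaks near the true mixing fraction
```

With more pure models the scalar f becomes a vector of mixing parameters on the simplex, but the likelihood retains the same mixture-density form.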
Basic Parameter Estimation of Binary Neutron Star Systems by the Advanced LIGO/Virgo Network
Within the next five years, it is expected that the Advanced LIGO/Virgo
network will have reached a sensitivity sufficient to enable the routine
detection of gravitational waves. Beyond the initial detection, the scientific
promise of these instruments relies on the effectiveness of our physical
parameter estimation capabilities. The majority of this effort has been towards
the detection and characterization of gravitational waves from compact binary
coalescence, e.g. the coalescence of binary neutron stars. While several
previous studies have investigated the accuracy of parameter estimation with
advanced detectors, the majority have relied on approximation techniques such
as the Fisher Matrix. Here we report the statistical uncertainties that will be
achievable for optimal detection candidates (SNR = 20) using the full parameter
estimation machinery developed by the LIGO/Virgo Collaboration via Markov-Chain
Monte Carlo methods. We find the recovery of the individual masses to be
fractionally within 9% (15%) at the 68% (95%) credible intervals for equal-mass
systems, and within 1.9% (3.7%) for unequal-mass systems. We also find that the
Advanced LIGO/Virgo network will constrain the locations of binary neutron star
mergers to a median uncertainty of 5.1 deg^2 (13.5 deg^2) on the sky. This
region is improved to 2.3 deg^2 (6 deg^2) with the addition of the proposed
LIGO India detector to the network. We also report the average uncertainties on
the luminosity distances and orbital inclinations of ideal detection candidates
that can be achieved by different network configurations.
Comment: Second version: 15 pages, 9 figures, accepted in ApJ
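Fractional credible intervals of the kind quoted above are read directly off the posterior samples. The mock Gaussian "chain" below stands in for an actual MCMC run; the mass and posterior width are illustrative, not the paper's results:

```python
import numpy as np

rng = np.random.default_rng(5)
true_mass = 1.4                                 # solar masses (illustrative)
samples = rng.normal(true_mass, 0.05, 20_000)   # mock posterior chain

def fractional_ci(samples, level):
    """Half-width of the central credible interval at the given level,
    expressed as a fraction of the posterior median."""
    lo, hi = np.percentile(samples, [50 * (1 - level), 50 * (1 + level)])
    return (hi - lo) / (2 * np.median(samples))

print(fractional_ci(samples, 0.68), fractional_ci(samples, 0.95))
```

For a mass posterior this directly yields the "fractionally within X% at the 68% (95%) credible interval" numbers; sky areas are obtained analogously by integrating the two-dimensional posterior down to the chosen probability mass.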