Weighing simulated galaxy clusters using lensing and X-ray
We aim to investigate potential biases in lensing and X-ray methods for
measuring cluster mass profiles. We do so by performing realistic simulations
of lensing and X-ray observations that are subsequently analyzed using
observational techniques. The resulting mass estimates are compared with one
another and with the input models. Three clusters obtained from state-of-the-art
hydrodynamical simulations, each of which has been projected along three
independent lines-of-sight, are used for this analysis. We find that strong
lensing models can be trusted over a limited region around the cluster core.
Extrapolating the strong lensing mass models beyond the Einstein ring can
lead to significant biases in the mass estimates, for example if the BCG is
not modeled properly. Weak lensing mass measurements can be strongly affected by
substructures, depending on the method implemented to convert the shear into a
mass estimate. Using non-parametric methods which combine weak and strong
lensing data, the projected masses within R200 can be constrained with a
precision of ~10%. De-projection of lensing masses increases the scatter around
the true masses by more than a factor of two due to cluster triaxiality. X-ray
mass measurements have much smaller scatter (about a factor of two smaller than
the lensing masses) but they are generally biased low by 5-20%. This bias is
ascribable to bulk motions in the gas of our simulated clusters. Using the
lensing and the X-ray masses as proxies for the true and the hydrostatic
equilibrium masses of the simulated clusters and averaging over the cluster
sample we are able to measure the lack of hydrostatic equilibrium in the
systems we have investigated.
Comment: 27 pages, 21 figures, accepted for publication in A&A. A version with
full-resolution images can be found at
http://pico.bo.astro.it/~massimo/Public/Papers/massComp.pd
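The bias-and-scatter comparison between mass estimates and the input models can be sketched in a few lines. This is a toy illustration with hypothetical numbers, not the paper's data or pipeline; `bias_and_scatter` is an invented helper:

```python
import statistics

def bias_and_scatter(estimates, true_masses):
    """Median bias and spread of mass estimates relative to the true (input)
    masses, expressed through the ratio M_est / M_true."""
    ratios = [m_est / m_true for m_est, m_true in zip(estimates, true_masses)]
    bias = statistics.median(ratios) - 1.0   # e.g. -0.10 means biased low by 10%
    scatter = statistics.pstdev(ratios)      # spread around the median ratio
    return bias, scatter

# Hypothetical X-ray masses biased low by ~10% (units of 1e14 Msun)
true_m = [5.0, 7.5, 10.0]
xray_m = [4.5, 6.8, 9.0]
bias, scatter = bias_and_scatter(xray_m, true_m)
print(f"bias = {bias:+.2f}, scatter = {scatter:.3f}")  # -> bias = -0.10, scatter = 0.003
```

The same statistic applied to the lensing masses would show the opposite trade-off the abstract describes: a smaller bias but roughly twice the scatter, driven by triaxiality.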
Inferring the photometric and size evolution of galaxies from image simulations
Current constraints on models of galaxy evolution rely on morphometric
catalogs extracted from multi-band photometric surveys. However, these catalogs
are altered by selection effects that are difficult to model, that correlate in
non-trivial ways, and that can lead to contradictory predictions if not taken
into account carefully. To address this issue, we have developed a new approach
combining parametric Bayesian indirect likelihood (pBIL) techniques and
empirical modeling with realistic image simulations that reproduce a large
fraction of these selection effects. This allows us to perform a direct
comparison between observed and simulated images and to infer robust
constraints on model parameters. We use a semi-empirical forward model to
generate a distribution of mock galaxies from a set of physical parameters.
These galaxies are passed through an image simulator reproducing the
instrumental characteristics of any survey and are then extracted in the same
way as the observed data. The discrepancy between the simulated and observed
data is quantified and minimized with a custom sampling process based on
adaptive Markov chain Monte Carlo (MCMC) methods. Using synthetic data matching most
of the properties of a CFHTLS Deep field, we demonstrate the robustness and
internal consistency of our approach by inferring the parameters governing the
size and luminosity functions and their evolutions for different realistic
populations of galaxies. We also compare the results of our approach with those
obtained from the classical spectral energy distribution fitting and
photometric redshift approach. Our pipeline efficiently infers the luminosity
and size distribution and evolution parameters from a very limited number of
observables (3 photometric bands). When compared to SED fitting based on the
same set of observables, our method yields results that are more accurate and
free from systematic biases.
Comment: 24 pages, 12 figures, accepted for publication in A&
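The simulate-compare-accept loop at the heart of this approach can be caricatured as follows. A hedge is in order: pBIL proper builds an auxiliary likelihood from binned simulated data, whereas this sketch uses a simpler ABC-style tolerance acceptance, and a one-parameter Gaussian toy stands in for the full image-simulation pipeline; all names are hypothetical.

```python
import random

def simulate_summary(theta, n=200, rng=random):
    """Stand-in for the image-simulation pipeline: draw mock 'observables'
    from a model with parameter theta and reduce them to a summary statistic."""
    sample = [rng.gauss(theta, 1.0) for _ in range(n)]
    return sum(sample) / n  # summary statistic: the sample mean

def abc_posterior(observed_summary, prior_lo, prior_hi, eps, n_draws, seed=1):
    """Keep parameter draws whose simulated summary lies within eps of the
    observed one -- the discrepancy-minimization idea in its crudest form."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_draws):
        theta = rng.uniform(prior_lo, prior_hi)
        if abs(simulate_summary(theta, rng=rng) - observed_summary) < eps:
            kept.append(theta)
    return kept

post = abc_posterior(observed_summary=2.0, prior_lo=0.0, prior_hi=4.0,
                     eps=0.2, n_draws=2000)
print(len(post), sum(post) / len(post))  # accepted draws cluster near theta ~ 2
```

In the real pipeline, "summary" would be the binned distribution of extracted source properties, and the uniform prior draws would be replaced by the adaptive MCMC walk mentioned in the abstract.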
HST Scattered Light Imaging and Modeling of the Edge-on Protoplanetary Disk ESO-Hα 569
We present new HST ACS observations and detailed models for a recently
discovered edge-on protoplanetary disk around ESO-Hα 569 (a low-mass T
Tauri star in the Cha I star forming region). Using radiative transfer models
we probe the distribution of the grains and overall shape of the disk
(inclination, scale height, dust mass, flaring exponent and surface/volume
density exponent) by model fitting to multiwavelength (F606W and F814W) HST
observations together with a literature compiled spectral energy distribution.
A new tool set was developed for finding optimal fits of MCFOST radiative
transfer models using the MCMC code emcee to efficiently explore the high
dimensional parameter space. It is able to self-consistently and simultaneously
fit a wide variety of observables in order to place constraints on the physical
properties of a given disk, while also rigorously assessing the uncertainties
in those derived properties. We confirm that ESO-Hα 569 is an optically
thick nearly edge-on protoplanetary disk. The shape of the disk is well
described by a flared disk model with an exponentially tapered outer edge,
consistent with models previously advocated on theoretical grounds and
supported by millimeter interferometry. The scattered light images and spectral
energy distribution are best fit by an unusually high total disk mass (gas +
dust, assuming a gas-to-dust ratio of 100:1) with a disk-to-star mass ratio of 0.16.
Comment: Accepted for publication in Ap
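The abstract's MCMC exploration rests on the affine-invariant "stretch move" that emcee implements. Below is a minimal pure-Python, one-dimensional rendition of that move (Goodman & Weare 2010), with a toy Gaussian log-posterior standing in for the expensive MCFOST radiative-transfer likelihood; it is a sketch of the algorithm, not the paper's tool set.

```python
import math
import random

def log_prob(x):
    """Toy log-posterior: standard normal. In the real pipeline this would
    run a radiative-transfer model and compare to the images and SED."""
    return -0.5 * x * x

def stretch_move_sampler(n_walkers=20, n_steps=500, a=2.0, seed=3):
    """Minimal 1-D version of the affine-invariant stretch move: each walker
    proposes a point along the line through itself and a random companion."""
    rng = random.Random(seed)
    walkers = [rng.uniform(-1.0, 1.0) for _ in range(n_walkers)]
    chain = []
    for _ in range(n_steps):
        for k in range(n_walkers):
            j = rng.randrange(n_walkers - 1)
            if j >= k:
                j += 1                      # companion walker, j != k
            # z drawn from g(z) ~ 1/sqrt(z) on [1/a, a]
            z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a
            proposal = walkers[j] + z * (walkers[k] - walkers[j])
            # acceptance ratio z**(dim-1) * p(new)/p(old); dim = 1 so the
            # z factor is exactly 1 here
            log_ratio = log_prob(proposal) - log_prob(walkers[k])
            if rng.random() < math.exp(min(0.0, log_ratio)):
                walkers[k] = proposal
        chain.extend(walkers)
    return chain

samples = stretch_move_sampler()
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"mean ~ {mean:.2f}, var ~ {var:.2f}")  # should approach 0 and 1
```

The appeal of this move for high-dimensional disk fits is that it has a single tuning parameter (`a`) and is invariant under linear rescalings of the parameters, so strongly correlated quantities like scale height and flaring exponent do not require hand-tuned proposal widths.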
Panchromatic observations and modeling of the HV Tau C edge-on disk
We present new high spatial resolution (<~ 0.1") 1-5 micron adaptive optics
images, interferometric 1.3 mm continuum and 12CO 2-1 maps, and 350 micron,
2.8 mm and 3.3 mm flux measurements of the HV Tau system. Our adaptive optics images
reveal an unusually slow orbital motion within the tight HV Tau AB pair that
suggests a highly eccentric orbit and/or a large deprojected physical
separation. Scattered light images of the HV Tau C edge-on protoplanetary disk
suggest that the anisotropy of the dust scattering phase function is almost
independent of wavelength from 0.8 to 5 micron, whereas the dust opacity
decreases significantly over the same range. The images further reveal a marked
lateral asymmetry in the disk that does not vary over a timescale of 2 years.
We further detect a radial velocity gradient in the disk in our 12CO map that
lies along the same position angle as the elongation of the continuum emission,
which is consistent with Keplerian rotation around a 0.5-1 Msun central star,
suggesting that it could be the most massive component in the triple system. We
use a powerful radiative transfer model to compute synthetic disk observations
and use a Bayesian inference method to extract constraints on the disk
properties. Each individual image, as well as the spectral energy distribution,
of HV Tau C can be well reproduced by our models with fully mixed dust provided
grain growth has already produced larger-than-interstellar dust grains.
However, no single model can satisfactorily simultaneously account for all
observations. We suggest that future attempts to model this source include more
complex dust properties and possibly vertical stratification. (Abridged)
Comment: 26 pages, 11 figures, editorially accepted for publication in Ap
Anomalously Weak Solar Convection
Convection in the solar interior is thought to comprise structures on a
spectrum of scales. This conclusion emerges from phenomenological studies and
numerical simulations, though neither covers the proper range of dynamical
parameters of solar convection. Here, we analyze observations of the wavefield
in the solar photosphere using techniques of time-distance helioseismology to
image flows in the solar interior. We downsample and synthesize 900 billion
wavefield observations to produce 3 billion cross-correlations, which we
average and fit, measuring 5 million wave travel times. Using these travel
times, we deduce the underlying flow systems and study their statistics to
bound convective velocity magnitudes in the solar interior as a function of
depth and spherical-harmonic degree ℓ. Within the measured wavenumber band,
convective velocities are 20-100 times weaker than current
theoretical estimates. This suggests the prevalence of a different paradigm of
turbulence from that predicted by existing models, prompting the question: what
mechanism transports the heat flux of a solar luminosity outwards? Advection is
dominated by Coriolis forces at low wavenumbers, with correspondingly small
Rossby numbers, suggesting that the Sun may be
a much faster rotator than previously thought, and that large-scale convection
may be quasi-geostrophic. The fact that iso-rotation contours in the Sun are
not co-aligned with the axis of rotation suggests the presence of a latitudinal
entropy gradient.
Comment: PNAS; 5 figures, 5 page
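The travel-time measurement underlying the analysis reduces, at its core, to locating the peak of a cross-correlation between the wavefield observed at two points. The actual pipeline fits the correlations rather than taking an integer-lag maximum, so the following is only a crude sketch with a synthetic wave packet:

```python
import math

def travel_time_lag(sig_a, sig_b, max_lag):
    """Return the lag (in samples) by which sig_b must be delayed to best
    match sig_a -- the peak of their cross-correlation, i.e. a crude
    travel-time estimate."""
    n = len(sig_a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        val = sum(sig_a[i] * sig_b[i - lag]
                  for i in range(n) if 0 <= i - lag < n)
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

# A synthetic wave packet, and a copy of it delayed by 7 samples
n, delay = 256, 7
packet = [math.exp(-((i - 60) / 10.0) ** 2) * math.sin(0.5 * i) for i in range(n)]
delayed = [packet[i - delay] if i >= delay else 0.0 for i in range(n)]
print(travel_time_lag(delayed, packet, max_lag=20))  # -> 7
```

Flows in the solar interior perturb such travel times asymmetrically (shorter along the flow, longer against it), which is what lets the averaged correlations be inverted for subsurface velocities.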
Neural Networks for Modeling and Control of Particle Accelerators
We describe some of the challenges of particle accelerator control, highlight
recent advances in neural network techniques, discuss some promising avenues
for incorporating neural networks into particle accelerator control systems,
and describe a neural network-based control system that is being developed for
resonance control of an RF electron gun at the Fermilab Accelerator Science and
Technology (FAST) facility, including initial experimental results from a
benchmark controller.
Comment: 21 p
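A conventional benchmark controller of the kind a neural-network scheme would be compared against can be sketched as a PI feedback loop. This is a generic toy (first-order plant, made-up gains), not the actual FAST resonance-control system:

```python
def pi_controller_run(setpoint, kp, ki, n_steps, plant_gain=0.5, dt=1.0):
    """Minimal PI feedback loop acting on a toy first-order plant -- the
    kind of conventional baseline a learned controller is benchmarked
    against."""
    state, integral, history = 0.0, 0.0, []
    for _ in range(n_steps):
        error = setpoint - state            # e.g. detuning from the target
        integral += error * dt              # accumulate error for the I term
        control = kp * error + ki * integral
        # toy plant: the state relaxes toward the applied control signal
        state += plant_gain * (control - state) * dt
        history.append(state)
    return history

traj = pi_controller_run(setpoint=1.0, kp=1.0, ki=0.1, n_steps=100)
print(f"final state = {traj[-1]:.3f}")  # settles near the setpoint of 1.0
```

The appeal of a neural-network controller over such a baseline is the ability to anticipate nonlinear, multi-input dynamics (e.g. thermal transients) that fixed PI gains handle poorly.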
The Dark Energy Survey
We describe the Dark Energy Survey (DES), a proposed optical-near infrared
survey of 5000 sq. deg of the South Galactic Cap to ~24th magnitude in SDSS
griz, that would use a new 3 sq. deg CCD camera to be mounted on the Blanco 4-m
telescope at Cerro Tololo Inter-American Observatory (CTIO). The survey data
will allow us to measure the dark energy and dark matter densities and the dark
energy equation of state through four independent methods: galaxy clusters,
weak gravitational lensing tomography, galaxy angular clustering, and supernova
distances. These methods are doubly complementary: they constrain different
combinations of cosmological model parameters and are subject to different
systematic errors. By deriving the four sets of measurements from the same data
set with a common analysis framework, we will obtain important cross checks of
the systematic errors and thereby make a substantial and robust advance in the
precision of dark energy measurements.
Comment: White Paper submitted to the Dark Energy Task Force, 42 page
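The payoff of four complementary probes can be illustrated in the simplest limiting case: independent Gaussian constraints on a single parameter combine by inverse-variance weighting. The numbers below are purely hypothetical, and a real analysis combines full likelihoods over correlated cosmological parameters rather than one number per probe:

```python
def combine_constraints(measurements):
    """Inverse-variance combination of independent Gaussian constraints,
    e.g. on the dark-energy equation of state w; measurements is a list of
    (value, 1-sigma error) pairs."""
    weights = [1.0 / sigma ** 2 for _, sigma in measurements]
    total = sum(weights)
    mean = sum(w * value for w, (value, _) in zip(weights, measurements)) / total
    sigma = total ** -0.5   # combined error is smaller than any single probe's
    return mean, sigma

# Hypothetical per-probe constraints on w -- illustration only
probes = [(-1.00, 0.10),  # galaxy clusters
          (-0.95, 0.08),  # weak-lensing tomography
          (-1.05, 0.12),  # angular clustering
          (-1.00, 0.10)]  # supernova distances
mean, sigma = combine_constraints(probes)
print(f"w = {mean:.3f} +/- {sigma:.3f}")
```

The cross-check aspect is visible here too: a probe whose value sat far from the others relative to its error bar would signal an unmodeled systematic rather than new physics.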