Some statistical and computational challenges, and opportunities in astronomy
The data complexity and volume of astronomical findings have increased in recent decades due to major technological improvements in instrumentation and data collection methods. The contemporary astronomer is flooded with terabytes of raw data that produce enormous multidimensional catalogs of objects (stars, galaxies, quasars, etc.) numbering in the billions, with hundreds of measured numbers for each object. The astronomical community thus faces a key task: to enable efficient and objective scientific exploitation of enormous multifaceted data sets and the complex links between data and astrophysical theory. In recognition of this task, the National Virtual Observatory (NVO) initiative recently emerged to federate numerous large digital sky archives and to develop tools to explore and understand these vast volumes of data. The effective use of such integrated massive data sets presents a variety of new and challenging statistical and algorithmic problems that require methodological advances. An interdisciplinary team of statisticians, astronomers and computer scientists from The Pennsylvania State University, California Institute of Technology and Carnegie Mellon University is developing statistical methodology for the NVO. A brief glimpse into the Virtual Observatory and the work of the Penn State-led team is provided here.
A new method for the determination of the growth rate from galaxy redshift surveys
Given a redshift survey of galaxies with measurements of apparent magnitudes,
we present a novel method for measuring the growth rate of
cosmological linear perturbations. We use the galaxy distribution within the
survey to solve for the peculiar velocity field, which in linear
perturbation theory depends on β = f/b, where f is the growth rate and b is
the bias factor of the galaxy distribution. The recovered line-of-sight
peculiar velocities are subtracted from the redshifts to derive the
distances, which thus allows an estimate of the absolute magnitude of each
galaxy. A constraint on β is then found by minimizing the spread of the
estimated magnitudes about their distribution function. We apply the method
to the all-sky Two Micron All Sky Survey Redshift Survey (2MRS) and derive a
constraint on β at low redshift that is remarkably consistent with our
previous estimate from the velocity-velocity comparison.
The method could easily be applied to subvolumes extracted from the SDSS
survey to derive the growth rate at intermediate redshifts. Further, it
should also be applicable to ongoing and future spectroscopic redshift
surveys to trace the evolution of β with redshift. Constraints obtained from
this method are entirely independent of those obtained from the
two-dimensional distortion of clustering in redshift space, and provide an
important independent check, as alternative gravity models predict
observable differences. Comment: 9 pages, 1 figure. Typos corrected. A
slight change in the Discussion and Acknowledgements.
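The magnitude-scatter idea described in this abstract can be illustrated with a short numerical sketch. The following toy Python example is an assumption-laden illustration, not the authors' code: it uses mock data, a low-redshift distance approximation (d ≈ cz/H0), H0 = 100 km/s/Mpc, a Gaussian luminosity function, and a hypothetical predicted-velocity field; the true method works with the full luminosity distribution function of the survey.

```python
import numpy as np
from scipy.optimize import minimize_scalar

H0 = 100.0  # km/s/Mpc (h = 1 units, an assumption of this sketch)

def absolute_magnitudes(m_app, cz, v_pred, beta):
    """Estimate absolute magnitudes after removing beta-scaled predicted
    line-of-sight peculiar velocities (low-redshift approximation)."""
    d = (cz - beta * v_pred) / H0            # comoving distance in Mpc
    return m_app - 5.0 * np.log10(d) - 25.0  # distance modulus with d in Mpc

def magnitude_scatter(beta, m_app, cz, v_pred):
    """Objective: spread of the estimated magnitudes for a trial beta."""
    return np.var(absolute_magnitudes(m_app, cz, v_pred, beta))

# Mock catalog: the wrong beta leaves residual velocity contamination in
# the distances, inflating the magnitude scatter above its intrinsic value.
rng = np.random.default_rng(0)
n = 5000
beta_true = 0.35
d_true = rng.uniform(20.0, 120.0, n)        # true distances, Mpc
v_pred = rng.normal(0.0, 300.0, n)          # predicted velocities (b = 1), km/s
cz = H0 * d_true + beta_true * v_pred       # observed redshift velocity
M_true = rng.normal(-20.0, 0.3, n)          # Gaussian luminosity function
m_app = M_true + 5.0 * np.log10(d_true) + 25.0

res = minimize_scalar(magnitude_scatter, bounds=(0.0, 1.0),
                      args=(m_app, cz, v_pred), method="bounded")
print(f"recovered beta = {res.x:.3f}")
```

At the true β the residual velocities vanish and the magnitude spread reduces to the intrinsic luminosity-function width, which is why minimizing the scatter constrains β.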
Massive Science with VO and Grids
There is a growing need for massive computational resources for the analysis
of new astronomical datasets. To tackle this problem, we present here our first
steps towards marrying two new and emerging technologies: the Virtual
Observatory (e.g. AstroGrid) and the computational grid (e.g. TeraGrid,
COSMOS, etc.). We discuss the construction of the VOTechBroker, a modular software
tool designed to abstract the tasks of submission and management of a large
number of computational jobs to a distributed computer system. The broker will
also interact with the AstroGrid workflow and MySpace environments. We discuss
our planned uses of the VOTechBroker in computing a huge number of n-point
correlation functions from the SDSS data and massive model-fitting of millions
of CMBfast models to WMAP data. We also discuss other applications including
the determination of the XMM Cluster Survey selection function and the
construction of new WMAP maps. Comment: Invited talk at the ADASS XV
conference, published as ASP Conference Series, Vol. XXX, 2005, C. Gabriel,
C. Arviset, D. Ponz and E. Solano, eds. 9 pages.
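The core broker idea, abstracting the submission and management of many independent compute jobs away from the underlying execution system, can be sketched as follows. This is a hypothetical toy interface, not the actual VOTechBroker API, and the chi-squared worker is a placeholder for a real task such as fitting one CMBfast model to WMAP data.

```python
from concurrent.futures import ThreadPoolExecutor

class JobBroker:
    """Toy stand-in for a broker like the VOTechBroker: callers submit
    named jobs and later collect results, without managing the pool
    themselves (hypothetical interface for illustration only)."""

    def __init__(self, max_workers=4):
        self._pool = ThreadPoolExecutor(max_workers=max_workers)
        self._jobs = {}  # job_id -> Future

    def submit(self, job_id, fn, *args):
        """Queue one job; the broker tracks it by its identifier."""
        self._jobs[job_id] = self._pool.submit(fn, *args)

    def collect(self):
        """Block until every job finishes; return {job_id: result}."""
        results = {job_id: fut.result() for job_id, fut in self._jobs.items()}
        self._pool.shutdown()
        return results

def model_chi2(model_id):
    # Placeholder for real work, e.g. scoring one model against data.
    return model_id * 0.5

broker = JobBroker(max_workers=2)
for i in range(8):
    broker.submit(f"model-{i}", model_chi2, i)
results = broker.collect()
print(len(results), results["model-7"])
```

In the grid setting described above, `submit` would hand jobs to a remote scheduler rather than a local pool, but the caller-facing abstraction is the same.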