Quasars as Probes of Late Reionization and Early Structure Formation
Observations of QSOs at z ~ 5.7 - 6.4 show the appearance of Gunn-Peterson
troughs around z ~ 6, and a change in the slope of the IGM optical depth tau(z)
near z ~ 5.5. These results are interpreted as a signature of the end of the
reionization era, which probably started at considerably higher redshifts.
However, there also appears to be a substantial cosmic variance in the
transmission of the IGM, both along some lines of sight, and among different
lines of sight, in this intriguing redshift regime. We suggest that this is
indicative of a spatially uneven reionization, possibly caused by the
bias-driven primordial clustering of the reionization sources. There is also
some independent evidence for a strong clustering of QSOs at z ~ 4 - 5 and
galaxies around them, supporting the idea of the strong biasing of the first
luminous sources at these redshifts. Larger samples of high-z QSOs are needed
in order to provide improved, statistically significant constraints for the
models of these phenomena. We expect that the Palomar-Quest (PQ) survey will
soon provide a new set of QSOs to be used as cosmological probes in this
redshift regime.
Comment: To appear in proceedings of the UC Irvine May 2005 workshop on "First
Light & Reionization", eds. E. Barton & A. Cooray, New Astronomy Reviews, in
press.
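As a back-of-the-envelope illustration of why Gunn-Peterson troughs appear so abruptly: the transmitted flux scales as e^(-tau), so even a modest rise in the effective optical depth drives the transmission toward zero. The sketch below uses the rest wavelength of Ly-alpha (1215.67 A); the tau values are purely illustrative, not measurements from the survey.

```python
import math

LYA_REST_ANGSTROM = 1215.67  # rest-frame wavelength of the Ly-alpha line

def observed_lya_wavelength(z: float) -> float:
    """Observed wavelength (Angstroms) of Ly-alpha emitted at redshift z."""
    return LYA_REST_ANGSTROM * (1.0 + z)

def transmission(tau_eff: float) -> float:
    """Mean IGM transmission for a given effective optical depth."""
    return math.exp(-tau_eff)

# Where the Ly-alpha forest/trough falls for the survey's redshift range:
for z in (5.5, 6.0, 6.4):
    print(f"z = {z}: Ly-alpha observed at {observed_lya_wavelength(z):.0f} A")

# Illustrative optical depths: a factor-of-a-few increase in tau_eff
# suppresses the transmitted flux by orders of magnitude.
for tau in (2.0, 5.0, 10.0):
    print(f"tau_eff = {tau}: transmission = {transmission(tau):.5f}")
```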
Exploring the Time Domain With Synoptic Sky Surveys
Synoptic sky surveys are becoming the largest data generators in astronomy,
and they are opening a new research frontier, that touches essentially every
field of astronomy. Opening of the time domain to a systematic exploration will
strengthen our understanding of a number of interesting known phenomena, and
may lead to the discoveries of as yet unknown ones. We describe some lessons
learned over the past decade, and offer some ideas that may guide strategic
considerations in planning and execution of future synoptic sky surveys.
Comment: Invited talk, to appear in Proc. IAU Symp. 285, "New Horizons in Time
Domain Astronomy", eds. E. Griffin et al., Cambridge Univ. Press (2012).
LaTeX file, 6 pages, style files included.
Multivariate statistical analysis software technologies for astrophysical research involving large data bases
The existing and forthcoming data bases from NASA missions contain an abundance of information whose complexity cannot be efficiently tapped with simple statistical techniques. Powerful multivariate statistical methods already exist which can be used to harness much of the richness of these data. Automatic classification techniques have been developed to solve the problem of identifying known types of objects in multi-parameter data sets, in addition to leading to the discovery of new physical phenomena and classes of objects. We propose an exploratory study and integration of promising techniques in the development of a general and modular classification/analysis system for very large data bases, which would enhance and optimize data management and the use of human research resources.
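The automatic classification idea above can be sketched in its simplest supervised form: represent each known object type by the centroid of its training examples in the measured-parameter space, then assign new objects to the nearest centroid. The numbers below are synthetic stand-ins, not real survey measurements, and the two class names are only illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training set: two known object types in a 3-parameter space
# (e.g. two colors and a variability index; all values are made up).
stars = rng.normal([0.0, 0.0, 0.0], 0.4, size=(100, 3))
quasars = rng.normal([1.5, 1.5, 1.5], 0.4, size=(100, 3))
train = np.vstack([stars, quasars])
train_labels = np.array([0] * 100 + [1] * 100)

# Nearest-centroid classifier: each known class is its mean parameter vector.
centroids = np.array([train[train_labels == c].mean(axis=0) for c in (0, 1)])

def classify(obj: np.ndarray) -> int:
    """Assign an object to the class with the nearest centroid."""
    return int(np.linalg.norm(centroids - obj, axis=1).argmin())

print(classify(np.array([0.1, -0.2, 0.0])))  # lands near the "star" centroid
print(classify(np.array([1.4, 1.6, 1.3])))   # lands near the "quasar" centroid
```

Real pipelines would use richer models (decision trees, neural networks, Bayesian classifiers), but the structure is the same: a labeled training set defines regions of parameter space, and new catalog entries are mapped into them.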
Some statistical and computational challenges, and opportunities in astronomy
The data complexity and volume of astronomical findings have increased in recent decades due to major technological improvements in instrumentation and data collection methods. The contemporary astronomer is flooded with terabytes of raw data that produce enormous multidimensional catalogs of objects (stars, galaxies, quasars, etc.) numbering in the billions, with hundreds of measured numbers for each object. The astronomical community thus faces a key task: to enable efficient and objective scientific exploitation of enormous multifaceted data sets and the complex links between data and astrophysical theory. In recognition of this task, the National Virtual Observatory (NVO) initiative recently emerged to federate numerous large digital sky archives, and to develop tools to explore and understand these vast volumes of data. The effective use of such integrated massive data sets presents a variety of new challenging statistical and algorithmic problems that require methodological advances. An interdisciplinary team of statisticians, astronomers and computer scientists from The Pennsylvania State University, California Institute of Technology and Carnegie Mellon University is developing statistical methodology for the NVO. A brief glimpse into the Virtual Observatory and the work of the Penn State-led team is provided here.
Two searches for primeval galaxies
A number of active galaxies are now known at very large redshifts, and some of them even have properties suggestive of galaxies in the process of formation. They commonly show strong Ly-alpha emission, at least some of which appears to be ionized by young stars. Inferred star formation rates are in the range of ~100-500 solar masses per year. An important question is: are there radio-quiet, field counterparts of these systems at comparable redshifts? While we are probably already observing some evolutionary and formative processes of distant radio galaxies, the ultimate goal is to observe normal galaxies at the epoch when most of their stars form. We have thus started a search for emission-line objects at large redshifts, ostensibly young and forming galaxies. Our method is to search for strong line emission (hopefully Ly-alpha) employing two techniques: a direct, narrow-band imaging search using a Fabry-Perot interferometer, and a serendipitous long-slit spectroscopic search.
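The narrow-band technique selects a thin redshift slice: a filter of central wavelength lambda_c and bandwidth delta-lambda picks up Ly-alpha emitters at z = lambda_c / 1215.67 - 1, within a window delta_z = delta-lambda / 1215.67. A minimal sketch follows; the filter parameters are hypothetical, not those of the actual search.

```python
LYA_REST = 1215.67  # Ly-alpha rest wavelength in Angstroms

def lya_redshift_window(center_angstrom: float, width_angstrom: float):
    """Redshift slice of Ly-alpha emitters selected by a narrow-band filter."""
    z_center = center_angstrom / LYA_REST - 1.0
    dz = width_angstrom / LYA_REST
    return z_center - dz / 2.0, z_center + dz / 2.0

# Hypothetical 70 A wide filter centered at 6730 A:
z_lo, z_hi = lya_redshift_window(6730.0, 70.0)
print(f"selects Ly-alpha emitters at z = {z_lo:.3f} - {z_hi:.3f}")
```

The narrowness of the window is both the strength and the cost of the method: contamination from foreground line emitters is suppressed, but each filter setting probes only a thin shell of the survey volume.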
Virtual Astronomy, Information Technology, and the New Scientific Methodology
All sciences, including astronomy, are now entering the era of information abundance. The exponentially increasing volume and complexity of modern data sets promises to transform the scientific practice, but also poses a number of common technological challenges. The Virtual Observatory concept is the astronomical community's response to these challenges: it aims to harness the progress in information technology in the service of astronomy, and at the same time provide a valuable testbed for information technology and applied computer science. Challenges broadly fall into two categories: data handling (or "data farming"), including issues such as archives, intelligent storage, databases, interoperability, fast networks, etc., and data mining, data understanding, and knowledge discovery, which include issues such as automated clustering and classification, multivariate correlation searches, pattern recognition, visualization in highly hyperdimensional parameter spaces, etc., as well as various applications of machine learning in these contexts. Such techniques are forming a methodological foundation for science with massive and complex data sets in general, and are likely to have a much broader impact on modern society, commerce, the information economy, security, etc. There is a powerful emerging synergy between computationally enabled science and science-driven computing, which will drive progress in science, scholarship, and many other venues in the 21st century.
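The automated clustering mentioned above can be illustrated with the simplest possible case: k-means on a toy two-class "catalog". This is a minimal NumPy sketch on synthetic data, not any Virtual Observatory pipeline; the two Gaussian clouds stand in for object populations that separate in a color-color-like parameter space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a multi-parameter catalog: two object populations
# in a 2-D parameter space (all numbers synthetic, purely illustrative).
cloud_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(200, 2))
cloud_b = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(200, 2))
catalog = np.vstack([cloud_a, cloud_b])

def kmeans(data: np.ndarray, k: int, n_iter: int = 50):
    """Minimal k-means: alternate nearest-centroid assignment and update."""
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each object to its nearest centroid.
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned members.
        centroids = np.array([data[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(catalog, k=2)
print("recovered cluster centers:\n", np.round(centroids, 2))
```

At survey scale the interesting problems are exactly the ones this sketch dodges: choosing k, coping with hundreds of dimensions, non-Gaussian populations, and rare outliers (which may be the discoveries).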
