
    Wide Area X-ray Surveys for AGN and Starburst Galaxies

    While the point sources detected in X-ray surveys are often dominated by AGN, the high sensitivity of modern X-ray telescopes such as Chandra and XMM-Newton means that normal/starburst galaxies are also being detected in large numbers. We have made use of Bayesian statistics both for the selection of galaxies from deep X-ray surveys and for the analysis of their luminosity functions. These techniques can similarly be used to select galaxies from wide-area X-ray surveys and to analyze their luminosity function. The prospects for detecting galaxies and AGN from a proposed "wide-deep" XMM-Newton survey and from future wide-area X-ray survey missions (such as WFXT and eRosita) are also discussed. Comment: 7 pages, 5 figures. Conference proceedings in "Classification and Discovery in Large Astronomical Surveys", 2008, C.A.L. Bailer-Jones (ed.)
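
    As a rough illustration of the kind of Bayesian selection described above, the sketch below assigns a posterior galaxy probability to an X-ray point source from its X-ray-to-optical flux ratio. The population means, widths, and prior fractions are illustrative assumptions for the sketch, not values taken from the paper.

        import numpy as np
        from scipy.stats import norm

        # Bayesian classification sketch: P(galaxy | data) from the measured
        # log(f_X / f_opt), assuming Gaussian population likelihoods.
        # All numbers below are assumed for illustration only.
        GAL = dict(mean=-2.0, sigma=0.6, prior=0.2)   # normal/starburst galaxies
        AGN = dict(mean=0.0,  sigma=0.5, prior=0.8)   # AGN dominate the point sources

        def p_galaxy(log_xo_ratio):
            """Posterior probability that a source is a galaxy given log(f_X/f_opt)."""
            like_gal = norm.pdf(log_xo_ratio, GAL["mean"], GAL["sigma"]) * GAL["prior"]
            like_agn = norm.pdf(log_xo_ratio, AGN["mean"], AGN["sigma"]) * AGN["prior"]
            return like_gal / (like_gal + like_agn)

        if __name__ == "__main__":
            for r in (-2.5, -1.0, 0.3):
                print(f"log(fx/fopt) = {r:+.1f} -> P(galaxy) = {p_galaxy(r):.2f}")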

    Self-Organizing Maps. An application to the OGLE data and the Gaia Science Alerts

    The Self-Organizing Map (SOM) is a promising tool for exploring large multi-dimensional data sets. It is quick and convenient to train in an unsupervised fashion and, as an outcome, it produces natural clusters of data patterns. An example application of SOM to the new OGLE-III data set is presented along with some preliminary results. Once tested on OGLE data, the SOM technique will also be implemented within the Gaia mission's photometry and spectrometry analysis, in particular in the so-called classification-based Science Alerts. SOM will be used as the basis of this system, since changes in the brightness and spectral behaviour of a star can be easily and quickly traced on a map trained in advance with simulated and/or real data from other surveys. Comment: Presented as a poster at the "Classification and Discovery in Large Astronomical Surveys" meeting, Ringberg Castle, 14-17 October 2008
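
    For readers unfamiliar with the technique, the following is a minimal from-scratch SOM sketch: multi-dimensional patterns (standing in for normalised light-curve features) are mapped onto a 2-D grid of prototype vectors. The grid size, learning schedule, and random input features are illustrative assumptions, not the configuration used for the OGLE or Gaia work.

        import numpy as np

        # Minimal SOM: each grid node holds a prototype vector; training pulls the
        # best-matching node and its neighbours towards each presented sample.
        rng = np.random.default_rng(0)

        def train_som(data, grid=(10, 10), n_iter=2000, lr0=0.5, sigma0=3.0):
            n_rows, n_cols = grid
            weights = rng.random((n_rows, n_cols, data.shape[1]))
            coords = np.stack(np.meshgrid(np.arange(n_rows), np.arange(n_cols),
                                          indexing="ij"), axis=-1)
            for t in range(n_iter):
                frac = t / n_iter
                lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
                x = data[rng.integers(len(data))]
                # best-matching unit: node whose prototype is closest to the sample
                bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), grid)
                # Gaussian neighbourhood pulls nearby nodes towards the sample
                dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
                h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
                weights += lr * h * (x - weights)
            return weights

        def map_sample(weights, x):
            """Return the (row, col) of the best-matching unit for a new pattern."""
            return np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)),
                                    weights.shape[:2])

        features = rng.random((500, 4))   # stand-in for normalised survey features
        som = train_som(features)
        print(map_sample(som, features[0]))

    Tracing a star's changing brightness or spectral behaviour then amounts to watching its best-matching unit move across the pre-trained map.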

    A Review of Automated Stellar Spectral Classification and Surveys

    Modern spectroscopic surveys and automated classifiers are becoming so inextricably linked that it is difficult even to summarize one without discussing the other. Some of the automated classifiers are being built because of current analysis needs, though with a clear anticipation of future, larger surveys. Other automated classifiers are being designed specifically for future surveys. Automated classifiers may be applied to databases already in hand, to real-time analysis at the telescope, or one day to on-board satellite analysis where the raw data are too bulky to save and transmit. In addition, many current spectroscopic surveys target galaxies. These surveys may contain stars either by accident or by a purposeful, but minority, assignment of input slits or fibers to stars. Nonetheless, these surveys still represent vast sources of stellar spectral data. Our review begins by discussing current work, both on automated stellar classification and on surveys, and then finishes with plans and portents for the future.

    Finding Stellar Streams in Photometric Surveys


    Learning about Galactic structure with Gaia astrometry

    The Gaia mission is reviewed together with the expected contents of the final catalogue. It is then argued that the ultimate goal of Galactic structure studies with Gaia astrometry should be to build a dynamical model of our Galaxy that is capable of explaining the contents of the Gaia catalogue. This will be possible only by comparing predicted catalogue data to Gaia's actual measurements. To complement this approach, the Gaia catalogue should be used to recalibrate photometric distance and abundance indicators across the HR diagram in order to overcome the lack of precise parallax data at the faint end of the astrometric survey. Using complementary photometric and spectroscopic data from other surveys will be essential in this respect. Comment: Presented at the "Classification and Discovery in Large Astronomical Surveys" meeting, Ringberg Castle, 14-17 October 2008
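
    A minimal sketch of the recalibration idea, under assumed synthetic data and an assumed linear absolute-magnitude vs. colour relation: stars with precise parallaxes anchor the relation, which is then used to assign photometric distances to faint stars whose parallaxes are too noisy.

        import numpy as np

        # Illustrative recalibration of a photometric distance indicator.
        # The data, errors, and M(colour) relation below are assumptions.
        rng = np.random.default_rng(1)

        n = 1000
        colour = rng.uniform(0.5, 1.5, n)
        M_true = 3.0 + 4.0 * colour                      # assumed true relation
        dist_pc = rng.uniform(50, 500, n)
        plx_mas = 1000.0 / dist_pc + rng.normal(0, 0.02, n)
        m_app = M_true + 5 * np.log10(dist_pc) - 5

        # Recalibrate: absolute magnitudes from parallax, then fit M(colour)
        M_plx = m_app + 5 * np.log10(plx_mas / 1000.0) + 5
        slope, intercept = np.polyfit(colour, M_plx, 1)

        # Apply to a faint star with no useful parallax: distance from m - M
        m_faint, c_faint = 18.2, 1.1
        M_phot = intercept + slope * c_faint
        d_phot = 10 ** ((m_faint - M_phot + 5) / 5)
        print(f"fitted relation: M = {intercept:.2f} + {slope:.2f} * colour")
        print(f"photometric distance of faint star: {d_phot:.0f} pc")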

    Parameter Estimation from an Optimal Projection in a Local Environment

    The parameter fit from a model grid is limited by our capability to reduce the number of models, given the number of parameters and the non-linear variation of the models with those parameters. Local MultiLinear Regression (LMLR) algorithms allow one to fit the data linearly within a local environment. The MATISSE algorithm, developed in the context of the estimation of stellar parameters from the Gaia RVS spectra, belongs to this class of estimators. A two-step procedure is introduced: a raw parameter estimation is first performed in order to localize the parameter environment, and the parameters are then estimated by projection onto specific vectors computed for an optimal estimation. The MATISSE method is compared to estimation using objective analysis. In this framework the kernel choice plays an important role: it can define the environment needed for the parameter estimation, and the determination of a first parameter set can also be avoided in this analysis. These procedures, based on a local projection, can be fruitfully applied to non-linear parameter estimation if the number of data sets to be fitted is greater than the number of models.
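
    The sketch below illustrates the two-step, locally linear idea in the simplest possible setting: a coarse nearest-model estimate localises the parameter environment, and a linear projection fitted on the local grid models refines the parameter. The toy one-parameter "spectra" are an assumption for illustration and do not reproduce the MATISSE vectors themselves.

        import numpy as np

        # Two-step local multi-linear regression sketch (toy, single parameter).
        rng = np.random.default_rng(2)

        wave = np.linspace(0, 1, 200)
        def model_spectrum(theta):
            # assumed non-linear toy model of a "spectrum" as a function of theta
            return np.exp(-((wave - 0.3) ** 2) / (0.01 + 0.02 * theta)) + 0.5 * theta * wave

        theta_grid = np.linspace(0.1, 2.0, 100)
        grid_spectra = np.array([model_spectrum(t) for t in theta_grid])

        def estimate(observed, k=7):
            # Step 1: raw estimate from the nearest grid model (chi-square-like distance)
            raw = np.argmin(((grid_spectra - observed) ** 2).sum(axis=1))
            # Step 2: fit theta ~ B . spectrum on the k nearest grid models; the
            # least-squares solution plays the role of the local projection vector
            lo, hi = max(0, raw - k // 2), min(len(theta_grid), raw + k // 2 + 1)
            X = np.hstack([grid_spectra[lo:hi], np.ones((hi - lo, 1))])
            coef, *_ = np.linalg.lstsq(X, theta_grid[lo:hi], rcond=None)
            return np.append(observed, 1.0) @ coef

        truth = 1.37
        obs = model_spectrum(truth) + rng.normal(0, 0.01, wave.size)
        print(f"true theta = {truth:.2f}, estimated = {estimate(obs):.2f}")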

    Astronomical imaging: The theory of everything

    We are developing automated systems to provide homogeneous calibration meta-data for heterogeneous imaging data, using the pixel content of the image alone where necessary. Standardized and complete calibration meta-data permit generative modeling: a good model of the sky through wavelength and time--that is, a model of the positions, motions, spectra, and variability of all stellar sources, plus an intensity map of all cosmological sources--could synthesize or generate any astronomical image ever taken at any time with any equipment in any configuration. We argue that the best-fit or highest-likelihood model of the data is also the best possible astronomical catalog constructed from those data. A generative model or catalog of this form is the best possible platform for automated discovery, because it is capable of identifying informative failures of the model in new data at the pixel level, or as statistical anomalies in the joint distribution of residuals from many images. It is also, in some sense, an astronomer's "theory of everything". Comment: a talk given at "Classification and Discovery in Large Astronomical Surveys", Ringberg Castle, 14-17 October 2008
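
    A minimal sketch of the generative-model idea described above: render a model image from a source catalogue (positions and fluxes under an assumed Gaussian PSF), compare it to the observed image, and flag statistically significant residuals as candidate "informative failures" of the model. The catalogue, PSF width, and noise level are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        NY, NX, PSF_SIGMA, NOISE = 64, 64, 1.5, 1.0

        def render(catalogue):
            """Synthesise an image from a list of (x, y, flux) point sources."""
            yy, xx = np.mgrid[0:NY, 0:NX]
            img = np.zeros((NY, NX))
            for x, y, flux in catalogue:
                img += flux * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * PSF_SIGMA ** 2))
            return img

        catalogue = [(20.0, 30.0, 50.0), (45.0, 12.0, 80.0)]
        observed = render(catalogue) + rng.normal(0, NOISE, (NY, NX))
        observed += render([(50.0, 50.0, 40.0)])   # an uncatalogued transient

        # Residuals in units of the noise; pixels far from zero point to model failures
        residual = (observed - render(catalogue)) / NOISE
        ys, xs = np.where(residual > 5)
        print(f"{len(ys)} anomalous pixels, e.g. around x~{xs.mean():.0f}, y~{ys.mean():.0f}")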

    Time Variability of Quasars: the Structure Function Variance

    Significant progress in the description of quasar variability has recently been made by employing SDSS and POSS data. Common to most studies is the fundamental assumption that photometric observations at two epochs for a large number of quasars will reveal the same statistical properties as well-sampled light curves for individual objects. We critically test this assumption using light curves for a sample of ~2,600 spectroscopically confirmed quasars observed about 50 times on average over 8 years by the SDSS Stripe 82 survey. We find that the dependence of the mean structure function computed for individual quasars on luminosity, rest-frame wavelength and time is qualitatively and quantitatively similar to the behavior of the structure function derived from two-epoch observations of a much larger sample. We also reproduce the result that the variability properties of radio and X-ray selected subsamples are different. However, the scatter of the variability structure function for fixed values of luminosity, rest-frame wavelength and time is similar to the scatter induced by the variance of these quantities in the analyzed sample. Hence, our results suggest that, although the statistical properties of quasar variability inferred from two-epoch data capture some underlying physics, there is significant additional information that can be extracted from well-sampled light curves for individual objects. Comment: Presented at the "Classification and Discovery in Large Astronomical Surveys" meeting, Ringberg Castle, 14-17 October 2008
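
    For concreteness, the sketch below computes a first-order structure function from a single light curve: SF(dt) is the root-mean-square magnitude difference over all epoch pairs whose time separation falls in a given bin, with the photometric error subtracted in quadrature. The synthetic light curve and noise correction are illustrative assumptions, not Stripe 82 data.

        import numpy as np

        rng = np.random.default_rng(4)

        t = np.sort(rng.uniform(0, 8 * 365, 60))   # ~60 epochs over 8 years [days]
        sigma_phot = 0.02
        mag = 19.0 + np.cumsum(rng.normal(0, 0.05, t.size)) \
                   + rng.normal(0, sigma_phot, t.size)

        def structure_function(t, mag, bins):
            i, j = np.triu_indices(t.size, k=1)
            dt, dm2 = np.abs(t[j] - t[i]), (mag[j] - mag[i]) ** 2
            sf = []
            for lo, hi in zip(bins[:-1], bins[1:]):
                sel = (dt >= lo) & (dt < hi)
                # noise-corrected SF; clipped at zero if noise dominates the bin
                sf.append(np.sqrt(max(dm2[sel].mean() - 2 * sigma_phot ** 2, 0.0))
                          if sel.any() else np.nan)
            return np.array(sf)

        bins = np.logspace(0.5, 3.3, 8)            # time-lag bins in days
        print(np.round(structure_function(t, mag, bins), 3))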

    Broad Absorption Line Quasar catalogues with Supervised Neural Networks

    We have applied a Learning Vector Quantization (LVQ) algorithm to SDSS DR5 quasar spectra in order to create a large catalogue of broad absorption line quasars (BALQSOs). We first discuss the problems with BALQSO catalogues constructed using the conventional balnicity and/or absorption indices (BI and AI), and then describe the supervised LVQ network we have trained to recognise BALQSOs. The resulting BALQSO catalogue should be substantially more robust and complete than BI- or AI-based ones. Comment: 5 pages, 3 figures, to appear in the proceedings of "Classification and Discovery in Large Astronomical Surveys", Ringberg Castle, 14-17 October 2008
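
    As background, the following is a minimal LVQ1 sketch, the supervised counterpart of the SOM: labelled prototype vectors are pulled towards training spectra of the same class and pushed away from spectra of the other class. The random "spectra" with an injected absorption trough and the two-class setup are illustrative assumptions, not the network or training set used for the catalogue.

        import numpy as np

        rng = np.random.default_rng(5)

        def train_lvq(X, y, n_proto_per_class=5, n_iter=5000, lr0=0.3):
            protos, labels = [], []
            for c in np.unique(y):                 # initialise prototypes from class members
                idx = rng.choice(np.where(y == c)[0], n_proto_per_class, replace=False)
                protos.append(X[idx]); labels.append(np.full(n_proto_per_class, c))
            protos, labels = np.vstack(protos), np.concatenate(labels)
            for t in range(n_iter):
                lr = lr0 * (1 - t / n_iter)
                i = rng.integers(len(X))
                w = np.argmin(((protos - X[i]) ** 2).sum(axis=1))   # nearest prototype
                sign = 1.0 if labels[w] == y[i] else -1.0           # attract or repel
                protos[w] += sign * lr * (X[i] - protos[w])
            return protos, labels

        def classify(protos, labels, x):
            return labels[np.argmin(((protos - x) ** 2).sum(axis=1))]

        # toy data: 200-pixel "spectra" where class 1 has a broad absorption trough
        X = rng.normal(1.0, 0.05, (400, 200))
        y = rng.integers(0, 2, 400)
        X[y == 1, 60:90] -= 0.4
        protos, labels = train_lvq(X, y)
        print("predicted:", classify(protos, labels, X[0]), "true:", y[0])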

    The LSST Data Mining Research Agenda

    We describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; design of a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; indexing of multi-attribute, multi-dimensional astronomical databases (beyond spatial indexing) for rapid querying of petabyte databases; and more. Comment: 5 pages. Presented at the "Classification and Discovery in Large Astronomical Surveys" meeting, Ringberg Castle, 14-17 October 2008
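
    One small building block behind the multi-attribute indexing and outlier items on this agenda is a tree index over a mixed feature vector. The sketch below, using an assumed random catalogue rather than LSST data, builds a k-d tree over sky position, colours, and a variability amplitude, and scores objects by the distance to their k-th nearest neighbour as a simple isolation-based anomaly measure.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(6)

        n = 100_000
        catalogue = np.column_stack([
            rng.uniform(0, 360, n),        # RA  [deg]
            rng.uniform(-90, 90, n),       # Dec [deg]
            rng.normal(0.8, 0.3, n),       # g-r colour
            rng.normal(0.3, 0.2, n),       # r-i colour
            rng.exponential(0.05, n),      # variability amplitude [mag]
        ])

        # Scale each attribute so the Euclidean metric weighs them comparably
        scale = catalogue.std(axis=0)
        tree = cKDTree(catalogue / scale)

        # Anomaly score: distance to the 10th nearest neighbour (large = isolated)
        dist, _ = tree.query(catalogue / scale, k=11)   # k=11 -> 10 neighbours + self
        score = dist[:, -1]
        print("most isolated objects:", np.argsort(score)[-5:])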