
    The Merging History of Massive Black Holes

    We investigate a hierarchical structure formation scenario describing the evolution of a supermassive black hole (SMBH) population. The seeds of the local SMBHs are assumed to be 'pregalactic' black holes, remnants of the first Population III stars. As these pregalactic holes become incorporated through a series of mergers into larger and larger halos, they sink to the center owing to dynamical friction, accrete a fraction of the gas in the merger remnant to become supermassive, form a binary system, and eventually coalesce. A simple model in which the damage done to stellar cusps by decaying BH pairs is cumulative is able to reproduce the observed scaling relation between galaxy luminosity and core size. An accretion model connecting quasar activity with major mergers and the observed BH mass-velocity dispersion correlation reproduces remarkably well the observed luminosity function of optically-selected quasars in the redshift range 1<z<5. We finally assess the potential observability of the gravitational wave background generated by the cosmic evolution of SMBH binaries with the planned space-borne interferometer LISA. Comment: 4 pages, 2 figures, contribution to "Multiwavelength Cosmology", Mykonos, Greece, June 17-20, 200

    A Non-Sequential Representation of Sequential Data for Churn Prediction

    We investigate the length of event sequence that gives the best predictions when using a continuous HMM approach to churn prediction from sequential data. Motivated by observations that predictions based on only the few most recent events seem to be the most accurate, a non-sequential dataset is constructed from customer event histories by averaging features of the last few events. A simple K-nearest neighbor algorithm on this dataset is found to give significantly improved performance. It is quite intuitive to think that most people will react only to events in the fairly recent past. Events related to telecommunications occurring months or years ago are unlikely to have a large impact on a customer's future behaviour, and these results bear this out. Methods that deal with sequential data also tend to be much more complex than those dealing with simple non-temporal data, giving an added benefit to expressing the recent information in a non-sequential manner.
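
    A minimal sketch of the construction described above, assuming each customer's history is stored as an array of per-event feature vectors; the window size k = 3, the five synthetic features, and the use of scikit-learn's KNeighborsClassifier are illustrative assumptions rather than the paper's exact setup.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def last_k_average(histories, k=3):
            # Collapse each event history (rows = events, oldest first) into one
            # vector by averaging the features of the last k events.
            return np.array([h[-k:].mean(axis=0) for h in histories])

        # Toy data: 200 customers with 4-11 events of 5 features each, plus churn labels.
        rng = np.random.default_rng(0)
        histories = [rng.random((rng.integers(4, 12), 5)) for _ in range(200)]
        labels = rng.integers(0, 2, size=200)          # 1 = churned, 0 = stayed

        X = last_k_average(histories, k=3)             # the non-sequential dataset
        clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
        print(clf.predict(X[:5]))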

    A Hybrid N-body--Coagulation Code for Planet Formation

    We describe a hybrid algorithm to calculate the formation of planets from an initial ensemble of planetesimals. The algorithm uses a coagulation code to treat the growth of planetesimals into oligarchs and explicit N-body calculations to follow the evolution of oligarchs into planets. To validate the N-body portion of the algorithm, we use a battery of tests in planetary dynamics. Several complete calculations of terrestrial planet formation with the hybrid code yield good agreement with previously published calculations. These results demonstrate that the hybrid code provides an accurate treatment of the evolution of planetesimals into planets. Comment: Astronomical Journal, accepted; 33 pages + 11 figures
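
    A heavily simplified sketch of the hybrid control flow only, assuming a toy statistical step that randomly merges planetesimal pairs and a mass threshold above which bodies are promoted to a direct N-body list; all masses, rates and the threshold are invented for illustration and do not reflect the paper's coagulation physics or its N-body integrator.

        import numpy as np

        rng = np.random.default_rng(0)

        def coagulation_step(masses, n_mergers=50):
            # Toy statistical step: randomly merge pairs of planetesimals.
            for _ in range(n_mergers):
                i, j = rng.choice(len(masses), size=2, replace=False)
                masses[i] += masses[j]
                masses[j] = 0.0
            return masses[masses > 0]

        def promote_oligarchs(masses, m_threshold):
            # Hand bodies above the mass threshold over to the explicit N-body list.
            return masses[masses < m_threshold], masses[masses >= m_threshold]

        masses = np.full(5000, 1e-8)      # planetesimal swarm (arbitrary mass units)
        oligarchs = np.array([])          # bodies followed by direct N-body integration

        for step in range(80):
            if len(masses) < 2:
                break
            masses = coagulation_step(masses)
            masses, new = promote_oligarchs(masses, m_threshold=1e-7)
            oligarchs = np.concatenate([oligarchs, new])
            # ...a direct N-body integrator would advance `oligarchs` here

        print(len(masses), "planetesimals remain;", len(oligarchs), "oligarchs promoted")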

    The Backstroke framework for source level reverse computation applied to parallel discrete event simulation

    This report introduces Backstroke, a new open-source framework for the automatic generation of reverse code for functions written in C++. Backstroke enables reverse computation for optimistic parallel discrete event simulations. It is built on the ROSE open-source compiler infrastructure and handles complex C++ features including pointers and pointer types, arrays, function and method calls, class types, inheritance, polymorphism, virtual functions, abstract classes, templated classes and containers. Backstroke also introduces new program inversion techniques based on advanced compiler analysis tools built into ROSE. We explore and illustrate some of the complex language and semantic issues that arise in generating correct reverse code for C++ functions.
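
    Backstroke itself generates reverse C++ at the source level; the hand-written Python analogue below only illustrates the forward/reverse pairing that optimistic PDES rollback relies on (save what a destructive assignment overwrites, undo invertible arithmetic directly). The state and event fields are invented for the sketch.

        # Toy logical process with two state variables.
        class ToyLP:
            def __init__(self):
                self.count = 0
                self.last_arrival = None

        def forward_event(lp, arrival_time):
            saved = lp.last_arrival      # destructive assignment: save the old value
            lp.count += 1                # incremental update: invertible, nothing saved
            lp.last_arrival = arrival_time
            return saved                 # "tape" handed to the rollback machinery

        def reverse_event(lp, saved):
            lp.last_arrival = saved      # restore the saved value
            lp.count -= 1                # undo the increment arithmetically

        lp = ToyLP()
        tape = forward_event(lp, arrival_time=3.5)
        reverse_event(lp, tape)
        assert lp.count == 0 and lp.last_arrival is None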

    An intelligent assistant for exploratory data analysis

    In this paper we present an account of the main features of SNOUT, an intelligent assistant for exploratory data analysis (EDA) of social science survey data that incorporates a range of data mining techniques. EDA has much in common with existing data mining techniques: its main objective is to help an investigator reach an understanding of the important relationships in a data set rather than simply develop predictive models for selected variables. Brief descriptions of a number of novel techniques developed for use in SNOUT are presented. These include heuristic variable level inference and classification, automatic category formation, the use of similarity trees to identify groups of related variables, interactive decision tree construction and model selection using a genetic algorithm.
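
    The abstract does not spell out how SNOUT's similarity trees are built; the sketch below shows one generic way to group related survey variables into a tree, by hierarchically clustering them on a 1 - |correlation| dissimilarity. The synthetic data and the 'average' linkage choice are assumptions, not SNOUT's actual method.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, dendrogram
        from scipy.spatial.distance import squareform

        # Synthetic survey responses: 300 respondents, 6 numeric-coded variables,
        # with two deliberately correlated pairs (0,1) and (2,3).
        rng = np.random.default_rng(1)
        base = rng.normal(size=(300, 2))
        data = np.column_stack([base[:, 0], base[:, 0] + 0.1 * rng.normal(size=300),
                                base[:, 1], base[:, 1] + 0.1 * rng.normal(size=300),
                                rng.normal(size=300), rng.normal(size=300)])

        corr = np.corrcoef(data, rowvar=False)
        dist = 1.0 - np.abs(corr)                     # dissimilarity between variables
        np.fill_diagonal(dist, 0.0)
        tree = linkage(squareform(dist, checks=False), method="average")
        print(dendrogram(tree, no_plot=True)["ivl"])  # variable grouping order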

    Sub-femtosecond absolute timing precision with a 10 GHz hybrid photonic-microwave oscillator

    We present an optical-electronic approach to generating microwave signals with high spectral purity. By circumventing shot noise and operating near fundamental thermal limits, we demonstrate 10 GHz signals with an absolute timing jitter for a single hybrid oscillator of 420 attoseconds (1 Hz to 5 GHz).
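
    As a numerical aside, absolute RMS timing jitter follows from integrating the single-sideband phase noise L(f) over the offset band and dividing by the angular carrier frequency: sigma_t = sqrt(2 * integral of 10^(L(f)/10) df) / (2*pi*f_c). The phase-noise values below are invented placeholders rather than the measured spectrum, so the printed number will not reproduce the quoted 420 as.

        import numpy as np

        f_c = 10e9                                   # carrier frequency, 10 GHz

        # Hypothetical single-sideband phase noise L(f) in dBc/Hz on a log-spaced grid
        # over the quoted 1 Hz to 5 GHz offset band (values invented for illustration).
        f = np.logspace(0.0, np.log10(5e9), 2000)
        L_dBc = np.interp(np.log10(f), [0.0, 4.0, 7.0, np.log10(5e9)],
                          [-97.0, -140.0, -165.0, -165.0])

        S_phi = 2.0 * 10.0 ** (L_dBc / 10.0)         # double-sideband phase noise, rad^2/Hz
        phase_var = np.sum(0.5 * (S_phi[1:] + S_phi[:-1]) * np.diff(f))   # trapezoid rule
        jitter = np.sqrt(phase_var) / (2.0 * np.pi * f_c)                 # seconds
        print(f"RMS timing jitter: {jitter * 1e18:.0f} as")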

    Robust Machine Learning Applied to Astronomical Datasets I: Star-Galaxy Classification of the SDSS DR3 Using Decision Trees

    We provide classifications for all 143 million non-repeat photometric objects in the Third Data Release of the Sloan Digital Sky Survey (SDSS) using decision trees trained on 477,068 objects with SDSS spectroscopic data. We demonstrate that these star/galaxy classifications are expected to be reliable for approximately 22 million objects with r < ~20. The general machine learning environment Data-to-Knowledge and supercomputing resources enabled extensive investigation of the decision tree parameter space. This work presents the first public release of objects classified in this way for an entire SDSS data release. The objects are classified as either galaxy, star or nsng (neither star nor galaxy), with an associated probability for each class. To demonstrate how to effectively make use of these classifications, we perform several important tests. First, we detail selection criteria within the probability space defined by the three classes to extract samples of stars and galaxies to a given completeness and efficiency. Second, we investigate the efficacy of the classifications and the effect of extrapolating from the spectroscopic regime by performing blind tests on objects in the SDSS, 2dF Galaxy Redshift and 2dF QSO Redshift (2QZ) surveys. Given the photometric limits of our spectroscopic training data, we effectively begin to extrapolate past our star-galaxy training set at r ~ 18. By comparing the number counts of our training sample with the classified sources, however, we find that our efficiencies appear to remain robust to r ~ 20. As a result, we expect our classifications to be accurate for 900,000 galaxies and 6.7 million stars, and remain robust via extrapolation for a total of 8.0 million galaxies and 13.9 million stars. [Abridged] Comment: 27 pages, 12 figures, to be published in ApJ, uses emulateapj.cl
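
    A generic illustration of the probability-threshold selection described above, assuming a scikit-learn decision tree in place of the Data-to-Knowledge environment and entirely synthetic "photometric" features; the threshold of 0.8 and the class coding (0 = galaxy, 1 = star, 2 = nsng) are arbitrary choices for the sketch.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(2)

        # Invented photometric features (stand-ins for colours and a size measure)
        # and true labels: 0 = galaxy, 1 = star, 2 = nsng.
        X = rng.normal(size=(5000, 4))
        y = rng.integers(0, 3, size=5000)
        X[y == 0, 0] += 1.5                       # give galaxies a separable offset

        clf = DecisionTreeClassifier(max_depth=8).fit(X[:4000], y[:4000])
        proba = clf.predict_proba(X[4000:])       # per-object class probabilities
        truth = y[4000:]

        # Select a galaxy sample by thresholding the galaxy probability, then measure
        # completeness (fraction of true galaxies recovered) and efficiency (purity).
        selected = proba[:, 0] > 0.8
        completeness = (selected & (truth == 0)).sum() / max((truth == 0).sum(), 1)
        efficiency = (selected & (truth == 0)).sum() / max(selected.sum(), 1)
        print(f"completeness = {completeness:.2f}, efficiency = {efficiency:.2f}")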

    Inducing safer oblique trees without costs

    Decision tree induction has been widely studied and applied. In safety applications, such as determining whether a chemical process is safe or whether a person has a medical condition, the cost of misclassification in one of the classes is significantly higher than in the other class. Several authors have tackled this problem by developing cost-sensitive decision tree learning algorithms or have suggested ways of changing the distribution of training examples to bias the decision tree learning process so as to take account of costs. A prerequisite for applying such algorithms is the availability of costs of misclassification. Although this may be possible for some applications, obtaining reasonable estimates of costs of misclassification is not easy in the area of safety. This paper presents a new algorithm for applications where the cost of misclassifications cannot be quantified, although the cost of misclassification in one class is known to be significantly higher than in another class. The algorithm utilizes linear discriminant analysis to identify oblique relationships between continuous attributes and then carries out an appropriate modification to ensure that the resulting tree errs on the side of safety. The algorithm is evaluated with respect to one of the best-known cost-sensitive algorithms (ICET), a well-known oblique decision tree algorithm (OC1) and an algorithm that utilizes robust linear programming.
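
    A rough sketch of the idea, assuming (as an illustration only, not the paper's algorithm) that the oblique direction comes from scikit-learn's LinearDiscriminantAnalysis and that "erring on the side of safety" is implemented by shifting the split threshold so that nearly all costly-class training cases land on the unsafe side; the data, class coding and percentile margin are invented.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(3)

        # Invented two-class safety data: class 1 = "unsafe" (the costly class to miss).
        X = np.vstack([rng.normal(0.0, 1.0, size=(300, 2)),
                       rng.normal(2.0, 1.0, size=(300, 2))])
        y = np.array([0] * 300 + [1] * 300)

        # LDA gives an oblique direction w and offset b; the usual split is w.x + b > 0.
        lda = LinearDiscriminantAnalysis().fit(X, y)
        w, b = lda.coef_[0], lda.intercept_[0]
        scores = X @ w + b

        # Err on the side of safety: move the threshold so that (nearly) all unsafe
        # training cases fall on the unsafe side, at the price of more false alarms.
        safe_threshold = np.percentile(scores[y == 1], 1)
        pred_unsafe = scores > safe_threshold
        missed_unsafe = ((~pred_unsafe) & (y == 1)).sum()
        print("unsafe cases misclassified as safe:", missed_unsafe)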

    Collisional Dark Matter and the Origin of Massive Black Holes

    If the cosmological dark matter is primarily in the form of an elementary particle whose self-interaction cross section to mass ratio is similar to that of ordinary nuclear matter, then seed black holes (formed in stellar collapse) will grow in a Hubble time, due to accretion of the dark matter, to a mass range 10^6 - 10^9 solar masses. Furthermore, the dependence of the final black hole mass on the galaxy velocity dispersion will be approximately as observed and the growth rate will show a time dependence consistent with observations. Other astrophysical consequences of collisional dark matter and tests of the idea are noted. Comment: 7 pages, no figures, LaTeX2e, accepted for publication in Phys. Rev. Lett. Changed content

    Infrared conductivity of a d_{x^2-y^2}-wave superconductor with impurity and spin-fluctuation scattering

    Calculations are presented of the in-plane far-infrared conductivity of a d_{x^2-y^2}-wave superconductor, incorporating elastic scattering due to impurities and inelastic scattering due to spin fluctuations. The impurity scattering is modeled by short-range potential scattering with arbitrary phase shift, while scattering due to spin fluctuations is calculated within a weak-coupling Hubbard model picture. The conductivity is characterized by a low-temperature residual Drude feature whose height and weight are controlled by impurity scattering, as well as a broad peak centered at 4 Delta_0 arising from clean-limit inelastic processes. Results are in qualitative agreement with experiment despite missing spectral weight at high energies. Comment: 29 pages (11 tar-compressed-uuencoded Postscript figures), REVTeX 3.0 with epsf macro