
    Unifying Parsimonious Tree Reconciliation

    Evolution is a process influenced by various environmental factors, e.g. the interactions between different species, genes, and biogeographical properties. Hence, it is interesting to study the combined evolutionary history of multiple species, their genes, and the environment they live in. A common approach to this research problem is to describe each individual evolution as a phylogenetic tree and to construct a tree reconciliation that is parsimonious with respect to a given event model. Unfortunately, most previous approaches are designed only for host-parasite systems, for gene tree/species tree reconciliation, or for biogeography. A method is therefore desirable that addresses the general problem of mapping phylogenetic trees and covers all varieties of coevolving systems, including, e.g., predator-prey and symbiotic relationships. To close this gap, we introduce a generalized cophylogenetic event model considering the combinatorially complete set of local coevolutionary events. We give a dynamic programming based heuristic for solving the maximum parsimony reconciliation problem in time O(n^2) for two phylogenies, each with at most n leaves. Furthermore, we present an exact branch-and-bound algorithm which uses the results from the dynamic programming heuristic to discard partial reconciliations. The approach has been implemented as a Java application which is freely available from http://pacosy.informatik.uni-leipzig.de/coresym. Comment: Peer-reviewed and presented as part of the 13th Workshop on Algorithms in Bioinformatics (WABI 2013)
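    The kind of dynamic program described above can be sketched as a table over host/parasite node pairs. The event set below (cospeciation, duplication, host switch) and its costs are illustrative assumptions, not the paper's generalized event model, which is richer (it also handles losses/sortings, among other events).

```python
import math

# Minimal sketch of a parsimony cophylogeny DP over host/parasite node
# pairs. The event set (cospeciation, duplication, host switch) and the
# costs are illustrative assumptions, not the paper's event model.
COSPEC, DUP, SWITCH = 0, 2, 3          # hypothetical event costs

def postorder(children, root):
    for c in children.get(root, ()):
        yield from postorder(children, c)
    yield root

def reconcile(p_kids, p_root, h_kids, h_root, phi):
    """phi maps each parasite leaf to its observed host leaf; returns the
    minimum total event cost over all mappings of parasite nodes to hosts."""
    INF = math.inf
    h_nodes = list(postorder(h_kids, h_root))   # children precede parents
    cost, best = {}, {}    # best[p, h]: cheapest placement of p in subtree(h)
    for p in postorder(p_kids, p_root):
        kids = p_kids.get(p, ())
        if kids:
            p1, p2 = kids
            g1 = min(cost[p1, x] for x in h_nodes)   # switch target: anywhere
            g2 = min(cost[p2, x] for x in h_nodes)
        for h in h_nodes:
            if not kids:                  # leaf: fixed by the association phi
                c = 0 if phi[p] == h else INF
            else:
                opts = [DUP + best[p1, h] + best[p2, h]]
                hch = h_kids.get(h, ())
                if len(hch) == 2:         # cospeciation needs a host split
                    a, b = hch
                    opts += [COSPEC + best[p1, a] + best[p2, b],
                             COSPEC + best[p1, b] + best[p2, a]]
                opts += [SWITCH + best[p1, h] + g2,
                         SWITCH + best[p2, h] + g1]
                c = min(opts)
            cost[p, h] = c
            best[p, h] = min([c] + [best[p, x] for x in h_kids.get(h, ())])
    return min(cost[p_root, h] for h in h_nodes)
```

    With two-leaf trees {'H': ('A', 'B')} and {'P': ('a', 'b')} and matching leaf associations, the cheapest history is a single cospeciation at cost 0; mapping both parasite leaves to the same host forces a duplication instead.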

    A NuSTAR observation of the fast symbiotic nova V745 Sco in outburst

    The fast recurrent nova V745 Sco was observed in the 3-79 keV X-ray band with NuSTAR 10 days after the optical discovery. The measured X-ray emission is consistent with a collisionally ionized, optically thin plasma at a temperature of about 2.7 keV. A prominent iron line observed at 6.7 keV does not require enhanced iron in the ejecta. We attribute the X-ray flux to shocked circumstellar material. No X-ray emission was observed at energies above 20 keV, and the flux in the 3-20 keV range was about 1.6 × 10^-11 erg cm^-2 s^-1. The emission measure indicates an average electron density of order 10^7 cm^-3. The X-ray flux in the 0.3-10 keV band almost simultaneously measured with Swift was about 40 times larger, mainly due to the luminous central supersoft source emitting at energies below 1 keV. The fact that the NuSTAR spectrum cannot be fitted with a power law, together with the lack of hard X-ray emission, allows us to rule out Comptonized gamma rays and to place an upper limit of order 10^-11 erg cm^-2 s^-1 on the gamma-ray flux of the nova on the tenth day of the outburst. Comment: in press in Monthly Notices of the Royal Astronomical Society, 201
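    The density estimate follows from the definition of the emission measure, EM = n_e^2 V (assuming a filling factor of unity). The emission measure, shock velocity, and spherical geometry below are illustrative assumptions chosen to show the arithmetic, not the fitted values from the NuSTAR spectrum.

```python
import math

# Back-of-the-envelope density from an emission measure, EM = n_e^2 * V.
# All numbers here are illustrative assumptions, not fitted values.
EM = 2e58            # assumed emission measure, cm^-3
v_shock = 4.0e8      # assumed ejecta velocity, cm/s (4000 km/s)
t = 10 * 86400       # 10 days after discovery, in seconds

R = v_shock * t                      # shell radius, cm
V = 4.0 / 3.0 * math.pi * R**3       # emitting volume, cm^3
n_e = math.sqrt(EM / V)              # average electron density, cm^-3
```

    With these assumed inputs the result lands near 10^7 cm^-3, the order of magnitude quoted above; the estimate scales only as the square root of EM/V, so it is fairly insensitive to the exact assumptions.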

    An approach to software cost estimation

    A general procedure for software cost estimation in any environment is outlined. The basic concepts of work and effort estimation are explained, some popular resource estimation models are reviewed, and the accuracy of resource estimates is discussed. A software cost prediction procedure based on the experiences of the Software Engineering Laboratory in the flight dynamics area, incorporating management expertise, cost models, and historical data, is described. The sources of information and relevant parameters available during each phase of the software life cycle are identified. The methodology suggested incorporates these elements into a customized management tool for software cost prediction. Detailed guidelines for estimation in the flight dynamics environment, developed using this methodology, are presented.
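    Parametric resource models of the kind reviewed here typically relate effort to estimated size through a power law. As a minimal sketch, the Basic COCOMO "organic mode" constants stand in for whatever model a given environment would calibrate on its own historical data, as the procedure above recommends.

```python
# Minimal parametric cost model sketch, using the Basic COCOMO
# organic-mode constants (Boehm 1981): effort = 2.4 * KLOC^1.05
# person-months, schedule = 2.5 * effort^0.38 months. A real procedure
# would recalibrate these constants on local historical data.
def effort_person_months(kloc, a=2.4, b=1.05):
    """Estimated effort in person-months for a project of `kloc` KLOC."""
    return a * kloc ** b

def schedule_months(effort_pm, c=2.5, d=0.38):
    """Estimated calendar schedule from effort, in months."""
    return c * effort_pm ** d
```

    For a 32 KLOC project this yields roughly 91 person-months over about 14 months; the exponents, not the sizes, dominate how estimates scale.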

    Submm-bright QSOs at z~2: signposts of co-evolution at high z

    We have assembled a sample of 5 X-ray- and submm-luminous z~2 QSOs, which are therefore both growing their central black holes through accretion and forming stars copiously at a critical epoch. Hence, they are good laboratories in which to investigate the co-evolution of star formation and AGN. We have performed a preliminary analysis of the AGN and SF contributions to their UV-to-FIR SEDs, fitting them with simple direct (disk), reprocessed (torus), and star formation components. All three are required by the data, and hence we confirm that these objects are undergoing strong star formation in their host galaxies at rates of 500-2000 Msun/yr. Estimates of their covering factors are between about 30 and 90%. In the future, we will assess the dependence of these results on the particular models used for the components and relate their observed properties to the intrinsic properties of the central engine and the star-forming material, as well as their relevance for AGN-galaxy coevolution. Comment: 6 pages, 2 figures, contributed talk to "Nuclei of Seyfert galaxies and QSOs - Central engine & conditions of star formation", November 6-8, 2012, MPIfR, Bonn, Germany. Po
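    A three-component SED fit of this kind can be viewed schematically as a linear decomposition of the observed fluxes into template shapes. The template functions and wavelength grid below are toy placeholders invented for illustration, not the disk, torus, and star-formation models actually fitted to the sample.

```python
import numpy as np

# Toy SED decomposition: observed fluxes modelled as a linear combination
# of disk, torus and star-formation templates, solved by least squares.
# The template shapes are invented for illustration; a real analysis
# would use physical models and enforce non-negative amplitudes.
def decompose(wavelengths, flux, templates):
    """Return best-fit amplitudes for each template."""
    A = np.column_stack([t(wavelengths) for t in templates])
    amps, *_ = np.linalg.lstsq(A, flux, rcond=None)
    return amps

disk  = lambda w: w ** -1.5                        # toy UV/optical power law
torus = lambda w: np.exp(-np.log(w / 10.0) ** 2)   # toy mid-IR bump
sf    = lambda w: np.exp(-np.log(w / 100.0) ** 2)  # toy far-IR bump
```

    Because the three toy templates peak in different wavelength regimes, a synthetic SED built from them is recovered exactly, which is the basic leverage that lets UV-to-FIR data separate the AGN and SF contributions.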

    X-ray absorbed QSOs and the QSO evolutionary sequence

    Unexpected in the AGN unified scheme, there exists a population of broad-line z~2 QSOs which have heavily absorbed X-ray spectra. These objects constitute 10% of the population at the luminosities and redshifts characteristic of the main producers of QSO luminosity in the Universe. Our follow-up observations in the submm show that these QSOs are often embedded in ultraluminous starburst galaxies, unlike most QSOs at the same redshifts and luminosities. The radically different star formation properties of the absorbed and unabsorbed QSOs imply that the X-ray absorption is unrelated to the torus invoked in AGN unification schemes. Instead, these results suggest that the objects represent a transitional phase in an evolutionary sequence relating the growth of massive black holes to the formation of galaxies. The most puzzling question about these objects has always been the nature of the X-ray absorber. We present our study of the X-ray absorbers based on deep (50-100 ks) XMM-Newton spectroscopy. We show that the absorption is most likely due to a dense ionised wind driven by the QSO. This wind could be the mechanism by which the QSO terminates star formation in the host galaxy and ends the supply of accretion material, producing the present-day black hole/spheroid mass ratio. Comment: 4 pages, to appear in conference proceedings "Studying Galaxy Evolution with Spitzer and Herschel"
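    The state of such an ionised wind is commonly characterised by the ionisation parameter ξ = L_ion / (n R²), which relates the ionising luminosity to the absorber's density and distance from the source. The numbers below are purely illustrative assumptions to show the scaling, not values derived from the XMM-Newton spectra discussed above.

```python
import math

# Toy estimate of the ionisation parameter xi = L_ion / (n * R^2) for a
# QSO-driven ionised wind. All three inputs are illustrative assumptions,
# not measurements from the spectra discussed above.
def ionisation_parameter(L_ion, n, R):
    """L_ion in erg/s, n in cm^-3, R in cm; returns xi in erg cm s^-1."""
    return L_ion / (n * R ** 2)

xi = ionisation_parameter(L_ion=1e46, n=1e9, R=1e18)  # hypothetical wind
```

    The inverse dependence on n R² is why a dense wind close to the nucleus can remain only partially ionised and hence imprint strong absorption despite the QSO's luminosity.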

    An improved method of constructing binned luminosity functions

    We show that binned differential luminosity functions constructed using the 1/Va method have a significant systematic error for objects close to their parent sample's flux limit(s). This is particularly noticeable when luminosity functions are produced for a number of different redshift ranges, as is common in the study of AGN or galaxy evolution. We present a simple method of constructing a binned luminosity function which overcomes this problem and has a number of other advantages over the traditional 1/Va method. We also describe a practical method for comparing binned and model luminosity functions, by calculating the expectation values of the binned luminosity function from the model. Binned luminosity functions produced by the two methods are compared for simulated data and for the Large Bright QSO Survey (LBQS). It is shown that the 1/Va method produces a very misleading picture of evolution in the LBQS. The binned luminosity function of the LBQS is then compared to a two-power-law model luminosity function undergoing pure luminosity evolution from Boyle et al. (1991). The comparison is made using a model luminosity function averaged over each redshift shell, and using the expectation values for the binned luminosity function calculated from the model. The luminosity function averaged in each redshift shell gives the misleading impression that the model overpredicts the number of QSOs at low luminosity even when model and data are consistent. The expectation values show that there are significant differences between model and data: the model overpredicts the number of low luminosity sources at both low and high redshift. The luminosity function does not appear to steepen relative to the model as redshift increases.
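    The two kinds of estimator can be sketched schematically: the classical 1/Vmax sum, and a bin-integrated estimator that divides the number of objects in a bin by the luminosity-volume integral accessible to the survey. The toy volume function and flux-limit mapping below are assumptions for illustration, not the paper's exact prescription or cosmology.

```python
import numpy as np

# Sketch of two binned luminosity-function estimators. vmax_lf is the
# classical 1/Vmax sum criticised above; binned_lf divides the bin count
# by the accessible volume-luminosity integral, in the spirit of the
# improved method (schematic; toy volume function, no real cosmology).
def vmax_lf(L, z, Lbins, zlo, zhi, volume, zmax_of_L):
    """phi_j = (1/dL) * sum_i 1/Vmax_i over objects in luminosity bin j."""
    phi = np.zeros(len(Lbins) - 1)
    for j, (lo, hi) in enumerate(zip(Lbins[:-1], Lbins[1:])):
        sel = (L >= lo) & (L < hi) & (z >= zlo) & (z < zhi)
        vmax = volume(np.minimum(zmax_of_L(L[sel]), zhi)) - volume(zlo)
        phi[j] = np.sum(1.0 / vmax) / (hi - lo)
    return phi

def binned_lf(L, z, Lbins, zlo, zhi, volume, zmax_of_L, n_grid=256):
    """phi_j = N_j / (integral over the bin of accessible volume dL)."""
    phi = np.zeros(len(Lbins) - 1)
    for j, (lo, hi) in enumerate(zip(Lbins[:-1], Lbins[1:])):
        sel = (L >= lo) & (L < hi) & (z >= zlo) & (z < zhi)
        grid = np.linspace(lo, hi, n_grid)
        vol = np.clip(volume(np.minimum(zmax_of_L(grid), zhi))
                      - volume(zlo), 0.0, None)
        phi[j] = sel.sum() / (vol.mean() * (hi - lo))
    return phi
```

    Far from the flux limit both estimators agree; near the limit, where faint objects are visible over only part of the bin, the bin-integrated estimator averages the accessible volume across the bin instead of weighting each detected object by its own Vmax, which is where the systematic difference arises.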