
    K+A Galaxies as the Aftermath of Gas-Rich Mergers: Simulating the Evolution of Galaxies as Seen by Spectroscopic Surveys

    Models of poststarburst (or "K+A") galaxies are constructed by combining fully three-dimensional hydrodynamic simulations of galaxy mergers with radiative transfer calculations of dust attenuation. Spectral line catalogs are generated automatically from moderate-resolution optical spectra calculated as a function of merger progress in each of a large suite of simulations. The mass, gas fraction, orbital parameters, and mass ratio of the merging galaxies are varied systematically, showing that the lifetime and properties of the K+A phase are strong functions of merger scenario. Owing to the wide variation in star formation histories produced by different orbital and progenitor configurations, K+A durations are generally less than ~0.1-0.3 Gyr, significantly shorter than the commonly assumed 1 Gyr, which is obtained only in rare cases. Combined with empirical merger rates, the model lifetimes predict rapidly rising K+A fractions as a function of redshift that are consistent with the results of large spectroscopic surveys, resolving the tension between the observed K+A abundance and that predicted when the K+A duration is assumed to equal the lifetime of A stars (~1 Gyr). The effects of dust attenuation, viewing angle, and aperture bias on the models are analyzed. In some cases, the K+A features are longer-lived and more pronounced when AGN feedback removes dust from the center, uncovering the young stars formed during the burst. In this picture, the K+A phase begins during or shortly after the bright starburst/AGN phase in violent mergers, and thus offers a unique opportunity to study the effects of quasar and star formation feedback on the gas reservoir and evolution of the remnant. Analytic fitting formulae are provided for the estimates of K+A incidence as a function of merger scenario.
    Comment: 26 pages, 13 figures; ApJ; minor changes to reflect accepted version

    Quasars Are Not Light-Bulbs: Testing Models of Quasar Lifetimes with the Observed Eddington Ratio Distribution

    We use the observed distribution of Eddington ratios as a function of supermassive black hole (BH) mass to constrain models of AGN lifetimes and lightcurves. Given the observed AGN luminosity function, a model for AGN lifetimes (the time spent above a given luminosity) translates directly into a predicted Eddington ratio distribution. Models of self-regulated BH growth, in which feedback produces a 'blowout' decay phase after some peak luminosity (shutting down accretion), make specific predictions for the lifetimes that are distinct from those expected if AGN are simply gas starved (without feedback), and very different from simple phenomenological 'light bulb' models. Present observations of the Eddington ratio distribution, spanning 5 decades in Eddington ratio, 3 in BH mass, and redshifts z=0-1, agree with the predictions of self-regulated models, and rule out 'light bulb', pure exponential, and gas starvation models at high significance. We compare the Eddington ratio distributions at fixed BH mass and at fixed luminosity (both are consistent, but the latter are much less constraining). We present empirical fits to the lifetime distribution and show how the Eddington ratio distributions place tight limits on AGN lifetimes at various luminosities. We use this to constrain the shape of the typical AGN lightcurve, and provide simple analytic fits. Given independent constraints on episodic lifetimes, most local BHs must have gained their mass in no more than a couple of bright episodes, in agreement with merger-driven fueling models.
    Comment: 21 pages, 13 figures, accepted to ApJ (revised to match accepted version; modeling and tests of redshift evolution added)
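The translation from a lightcurve model to a predicted Eddington ratio distribution can be illustrated with a toy calculation (a sketch with assumed lightcurve forms, not the paper's actual fits): at fixed BH mass, the time a lightcurve L(t) spends in each decade of luminosity is proportional to the predicted number of objects observed at the corresponding Eddington ratio.

```python
import numpy as np

# Toy lightcurves (illustrative assumptions, not the paper's fitted models),
# in units of L_peak, sampled on a fine time grid in units of a decay timescale.
dt = 1e-3
t = np.arange(dt, 120.0, dt)

curves = {
    "light bulb":  np.where(t < 1.0, 1.0, 0.0),  # on at L_peak for one timescale, then off
    "exponential": np.exp(-t),                   # L = L_peak * exp(-t/tau)
    "blowout":     (1.0 + t) ** -2.0,            # toy power-law 'feedback' decay phase
}

# Time spent per decade of luminosity ~ predicted Eddington ratio distribution
# at fixed BH mass (up to the normalization set by the luminosity function).
edges = 10.0 ** np.arange(-4.0, 2.0)   # decade edges: 1e-4 ... 10
results = {}
for name, L in curves.items():
    results[name] = [dt * np.count_nonzero((L >= lo) & (L < hi))
                     for lo, hi in zip(edges[:-1], edges[1:])]
    print(name, np.round(results[name], 2))
```

The qualitative behavior matches the distinctions drawn in the abstract: the light bulb spends all its time at peak (a spike at high Eddington ratio), exponential decay spends equal time in every decade (a flat distribution in log Eddington ratio), and a power-law blowout spends progressively more time at lower luminosities (a distribution rising toward low Eddington ratio).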

    September 11: Symbolism And Responses To 9/11

    Professors Hopkins and Hopkins review the impact of 9/11 as a symbol in American politics. Following the terrorist attacks, 9/11 became a simple reference condensing wide-ranging events and emotions, and various interpretations emerged about what caused and enabled the attacks. The authors claim that 9/11 allowed US leaders to pursue certain policy prescriptions that would otherwise have been blocked. Among four possible prescriptions for responding to the attacks, the Bush administration chose a praetorian policy of preventive war, with Iraq as its first example. In the authors' view, by pursuing an expansive but highly militarized response, the US has overlooked the need to alleviate the conditions that made 9/11 possible. The authors recommend that the US, as part of a multilateral effort, allocate major resources to expanding global public goods, including measures that strengthen barriers to proliferation, improve the fight against global crime, and reduce incentives for terrorism, especially those arising in failing states where distorted education and weak protection of human rights encourage organized terrorism.

    Five Versions of Nature's Locomotion


    Marriage in Shakespeare: a community affair


    Let's Call Them Glimpses


    Sculpture and Space

    What is distinctive about sculpture as an artform? I argue that it is related to the space around it in a way that painting and the other pictorial arts are not. I expound and develop Langer's suggestive comments on this issue, before asking what the major strengths and weaknesses of that position might be.

    Wittgenstein and the Life of Signs


    Is the Quality of Numerical Subroutine Code Improving?

    We begin by using a software metric tool to generate a number of software complexity measures, and we investigate how these values may be used to identify subroutines that are likely to be of substandard quality. Following this, we look at how these metric values have changed over the years. First we consider a number of freely available Fortran libraries (Eispack, Linpack, and Lapack) which have been constructed by teams; to ensure a fair comparison, we use a restructuring tool to transform the original Fortran 66 code into Fortran 77. We then consider the Fortran codes from the Collected Algorithms from the ACM (CALGO) to see whether we can detect the same trends in software written by the general numerical community. Our measurements show that, although the standard of code in the freely available libraries does appear to have improved over time, these libraries still contain routines which are effectively unmaintainable and untestable. Applied to the CALGO codes, the metrics indicate a very conservative approach to software engineering, and there is no evidence of improvement, during the last twenty years, in the qualities under discussion.
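As a concrete illustration of the kind of measure such a metric tool reports, here is a minimal sketch (with deliberately simplified keyword handling; this is not the actual tool used in the study) of a crude McCabe-style cyclomatic complexity for fixed-form Fortran: one count per IF or DO statement, plus one.

```python
import re

# Count IF and DO as branch points, but not END IF / END DO closers.
# This keyword scan is a simplification: it ignores computed GOTOs,
# arithmetic IFs, and logical operators inside predicates.
BRANCH = re.compile(r"\b(?<!END )(IF|DO)\b", re.IGNORECASE)

def cyclomatic(lines):
    """Crude cyclomatic complexity of a Fortran routine: branches + 1."""
    # Fixed-form comment lines carry C, c, or * in column 1; drop them.
    code = [ln for ln in lines if ln[:1] not in ("C", "c", "*")]
    return 1 + sum(len(BRANCH.findall(ln)) for ln in code)

routine = [
    "      SUBROUTINE ABSVAL(N, X)",
    "C     Replace each element of X by its absolute value.",
    "      DO 10 I = 1, N",
    "         IF (X(I) .LT. 0.0) X(I) = -X(I)",
    "   10 CONTINUE",
    "      END",
]
print(cyclomatic(routine))  # -> 3 (one DO, one IF, plus one)
```

A tool of this kind, run over every routine in a library, yields the per-subroutine values whose distribution over time the study examines; routines scoring far above a chosen threshold are the candidates for "unmaintainable and untestable".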