
    Abiotic O2 Levels on Planets around F, G, K, and M Stars: Possible False Positives for Life?

    In the search for life on Earth-like planets around other stars, the first (and likely only) information will come from the spectroscopic characterization of the planet's atmosphere. Of the countless number of chemical species terrestrial life produces, only a few have the distinct spectral features and the necessary atmospheric abundance to be detectable. The easiest of these species to observe in Earth's atmosphere is O2 (and its photochemical byproduct, O3). But O2 can also be produced abiotically by photolysis of CO2, followed by recombination of O atoms with each other. CO is produced in stoichiometric proportions. Whether O2 and CO can accumulate to appreciable concentrations depends on the ratio of far-UV to near-UV radiation coming from the planet's parent star and on what happens to these gases when they dissolve in a planet's oceans. Using a one-dimensional photochemical model, we demonstrate that O2 derived from CO2 photolysis should not accumulate to measurable concentrations on planets around F- and G-type stars. K-star, and especially M-star planets, however, may build up O2 because of the low near-UV flux from their parent stars, in agreement with some previous studies. On such planets, a 'false positive' for life is possible if recombination of dissolved CO and O2 in the oceans is slow and if other O2 sinks (e.g., reduced volcanic gases or dissolved ferrous iron) are small. O3, on the other hand, could be detectable at UV wavelengths (λ < 300 nm) for a much broader range of boundary conditions and stellar types. Comment: 20 pages text, 9 figures.
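The abiotic O2 route described above can be summarized in two elementary steps plus their net reaction. This is the standard CO2 photolysis and O-atom recombination scheme (M denotes any third-body molecule), spelled out here only to make the stated CO:O2 stoichiometry explicit, not a reproduction of the paper's reaction network.

```latex
% CO2 photolysis followed by three-body recombination of O atoms;
% M is any background (third-body) molecule. The net reaction shows
% why CO and O2 build up in a 2:1 stoichiometric ratio.
\begin{align*}
  \mathrm{CO_2} + h\nu &\rightarrow \mathrm{CO} + \mathrm{O} \\
  \mathrm{O} + \mathrm{O} + \mathrm{M} &\rightarrow \mathrm{O_2} + \mathrm{M} \\
  \text{net:}\qquad 2\,\mathrm{CO_2} &\rightarrow 2\,\mathrm{CO} + \mathrm{O_2}
\end{align*}
```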

    Thing Theory

    This article is an extended review of Graham Harman's Heidegger Explained: From Phenomenon to Thing. The paper explains Harman's argument that Heidegger's famous broken tool incident - the account that introduces a critique of presence based on the withdrawn dimensions of things - has a much greater relevance than is usually imagined. It explores Harman's extrapolations from Heidegger to rethink the very nature of objects - or things in themselves, their relations to each other, and their own unfathomable inner being. The paper goes on to note the implications of this argument for thinking more generally about relationality, space, and the more-than-human.

    Nuclear-resonant electron scattering

    We investigate nuclear-resonant electron scattering as occurring in the two-step process of nuclear excitation by electron capture (NEEC) followed by internal conversion. The nuclear excitation and decay are treated by a phenomenological collective model in which nuclear states and transition probabilities are described by experimental parameters. We present capture rates and resonance strengths for a number of heavy ion collision systems, considering various scenarios for the resonant electron scattering process. The results show that for certain cases resonant electron scattering can have significantly larger resonance strengths than NEEC followed by the radiative decay of the nucleus. We discuss the impact of our findings on the possible experimental observation of NEEC. Comment: 24 pages, 2 plots, 5 tables.
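As a rough, hedged illustration of why the exit channel matters (generic isolated-resonance reasoning, not the formula used in the paper): the strength of a two-step resonance factorizes into the capture rate into the nuclear resonance and the branching ratio of the chosen decay channel, so the electron-scattering channel dominates whenever internal conversion outcompetes radiative decay.

```latex
% Illustrative isolated-resonance factorization (generic Breit-Wigner
% argument, not the paper's derivation): Y_NEEC is the capture rate,
% Gamma_IC and Gamma_rad the internal-conversion and radiative widths.
S_{x} \;\propto\; Y_{\mathrm{NEEC}}\,
  \frac{\Gamma_{x}}{\Gamma_{\mathrm{IC}} + \Gamma_{\mathrm{rad}}},
\qquad x \in \{\mathrm{IC},\,\mathrm{rad}\},
\qquad\Rightarrow\qquad
\frac{S_{\mathrm{IC}}}{S_{\mathrm{rad}}} \approx
  \frac{\Gamma_{\mathrm{IC}}}{\Gamma_{\mathrm{rad}}}.
```

In this simplified picture the ratio of the two resonance strengths reduces to the internal-conversion coefficient of the nuclear transition for the electronic configuration at hand, which can be much larger than one for low-energy transitions in heavy ions.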

    Specialising Software for Different Downstream Applications Using Genetic Improvement and Code Transplantation

    Genetic improvement uses computational search to improve existing software while retaining its partial functionality. Genetic improvement has previously been concerned with improving a system with respect to all possible usage scenarios. In this paper, we show how genetic improvement can also be used to achieve specialisation to a specific set of usage scenarios. We use genetic improvement to evolve faster versions of a C++ program, a Boolean satisfiability solver called MiniSAT, specialising it for three applications. Our specialised solvers achieve between 4% and 36% execution time improvement, which is commensurate with efficiency gains achievable using human expert optimisation for the general solver. We also use genetic improvement to evolve faster versions of an image processing tool called ImageMagick, utilising code from GraphicsMagick, another image processing tool which was forked from it. We specialise the format conversion functionality to black & white images and colour images only. Our specialised versions achieve up to 3% execution time improvement.
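To make the specialisation idea concrete, here is a minimal, hypothetical sketch of a genetic-improvement loop in Python; it is not the authors' tooling, and the `fitness` callable (assumed to apply a patch, compile, run the test suite, and return runtime on the target workload) is an assumption of this sketch.

```python
import random
from typing import Callable, List, Sequence

Edit = str  # placeholder: an edit is an opaque token in this sketch

def genetic_improvement(edit_pool: Sequence[Edit],
                        fitness: Callable[[List[Edit]], float],
                        generations: int = 50,
                        pop_size: int = 20) -> List[Edit]:
    """Evolve lists of edits ("patches") and keep those whose patched
    program runs fastest on the specialisation workload.

    `fitness` is assumed to apply the patch, compile, run the test
    suite (returning float('inf') on any failure), and return the
    measured runtime in seconds on the target usage scenario.
    """
    population: List[List[Edit]] = [[] for _ in range(pop_size)]
    best: List[Edit] = []
    for _ in range(generations):
        # rank patches by runtime on the specialisation workload
        ranked = sorted(population, key=fitness)
        best = ranked[0]
        survivors = ranked[: max(1, pop_size // 2)]
        # refill the population with mutated copies of the survivors
        population = [patch + [random.choice(edit_pool)]
                      for patch in survivors
                      for _ in range(2)][:pop_size]
    return best
```

In a MiniSAT-style setup, `fitness` would time the patched solver on instances drawn from the application being specialised for, so patches that help only that workload are still rewarded.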

    Sensitivity of Ag/Al Interface Specific Resistances to Interfacial Intermixing

    We have measured an Ag/Al interface specific resistance, 2AR(Ag/Al)(111) = 1.4 fOhm-m^2, that is twice that predicted for a perfect interface, 50% larger than for a 2 ML 50%-50% alloy, and even larger than our newly predicted 1.3 fOhm-m^2 for a 4 ML 50%-50% alloy. Such a large value of 2AR(Ag/Al)(111) confirms a predicted sensitivity to interfacial disorder and suggests an interface greater than or equal to 4 ML thick. From our calculations, a predicted anisotropy ratio, 2AR(Ag/Al)(001)/2AR(Ag/Al)(111), of more than 4 for a perfect interface should be reduced to less than 2 for a 4 ML interface, making it harder to detect any such anisotropy. Comment: 3 pages, 2 figures, 1 table. In press: Journal of Applied Physics.

    Theory of nuclear excitation by electron capture for heavy ions

    We investigate the resonant process of nuclear excitation by electron capture, in which a continuum electron is captured into a bound state of an ion with the simultaneous excitation of the nucleus. In order to derive the cross section, a Feshbach projection operator formalism is introduced. Nuclear states and transitions are described by a nuclear collective model, making use of experimental data. Transition rates and total cross sections for NEEC followed by the radiative decay of the excited nucleus are calculated for various heavy ion collision systems.

    Modeling canopy-induced turbulence in the Earth system: a unified parameterization of turbulent exchange within plant canopies and the roughness sublayer (CLM-ml v0)

    Land surface models used in climate models neglect the roughness sublayer and parameterize within-canopy turbulence in an ad hoc manner. We implemented a roughness sublayer turbulence parameterization in a multilayer canopy model (CLM-ml v0) to test whether this theory provides a tractable parameterization extending from the ground through the canopy and the roughness sublayer. We compared the canopy model with the Community Land Model (CLM4.5) at seven forest, two grassland, and three cropland AmeriFlux sites over a range of canopy heights, leaf area indexes, and climates. CLM4.5 has pronounced biases during summer months at forest sites in midday latent heat flux, sensible heat flux, gross primary production, nighttime friction velocity, and the radiative temperature diurnal range. The new canopy model reduces these biases by introducing new physics. Advances in modeling stomatal conductance and canopy physiology beyond what is in CLM4.5 substantially improve model performance at the forest sites. The signature of the roughness sublayer is most evident in nighttime friction velocity and the diurnal cycle of radiative temperature, but is also seen in sensible heat flux. Within-canopy temperature profiles are markedly different from profiles obtained using Monin–Obukhov similarity theory, and the roughness sublayer produces cooler daytime and warmer nighttime temperatures. The herbaceous sites also show model improvements, but these improvements are related less systematically to the roughness sublayer parameterization in those canopies. The multilayer canopy with roughness sublayer turbulence improves simulations compared with CLM4.5 while also advancing the theoretical basis for surface flux parameterizations.
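For context on the baseline being replaced (an illustration only, not the CLM-ml scheme): above the roughness sublayer, similarity theory in neutral conditions reduces to the familiar log-law wind profile. The sketch below evaluates that profile with rule-of-thumb canopy parameters; the displacement and roughness fractions are common approximations, not values from the paper.

```python
import math

def log_wind_profile(z, ustar, canopy_height,
                     displacement_frac=0.67, z0_frac=0.1, von_karman=0.4):
    """Neutral-stability Monin-Obukhov (log-law) wind speed at height z (m).

    d  = displacement_frac * canopy_height  (zero-plane displacement)
    z0 = z0_frac * canopy_height            (roughness length)
    Both fractions are common rules of thumb, used here only for
    illustration.
    """
    d = displacement_frac * canopy_height
    z0 = z0_frac * canopy_height
    if z <= d + z0:
        raise ValueError("log law is undefined at or below d + z0")
    return (ustar / von_karman) * math.log((z - d) / z0)

# example: 20 m forest canopy, friction velocity 0.5 m/s
for height in (25.0, 30.0, 40.0):
    print(height, round(log_wind_profile(height, 0.5, 20.0), 2))
```

A roughness-sublayer parameterization modifies exactly this kind of profile within roughly the first couple of canopy heights above the vegetation, which is where the within-canopy temperature and friction-velocity differences described in the abstract originate.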

    Genetic Improvement of Software: a Comprehensive Survey

    Genetic improvement (GI) uses automated search to find improved versions of existing software. We present a comprehensive survey of this nascent field of research with a focus on the core papers in the area published between 1995 and 2015. We identified core publications, including empirical studies, 96% of which use evolutionary algorithms (genetic programming in particular). Although we can trace the foundations of GI back to the origins of computer science itself, our analysis reveals a significant upsurge in activity since 2012. GI has resulted in dramatic performance improvements for a diverse set of properties such as execution time, energy, and memory consumption, as well as results for fixing and extending existing system functionality. Moreover, we present examples of research work that lies on the boundary between GI and other areas, such as program transformation, approximate computing, and software repair, with the intention of encouraging further exchange of ideas between researchers in these fields.

    Evaluating implicit feedback models using searcher simulations

    In this article we describe an evaluation of relevance feedback (RF) algorithms using searcher simulations. Since these algorithms select additional terms for query modification based on inferences made from searcher interaction, not on relevance information searchers explicitly provide (as in traditional RF), we refer to them as implicit feedback models. We introduce six different models that base their decisions on the interactions of searchers and use different approaches to rank query modification terms. The aim of this article is to determine which of these models should be used to assist searchers in the systems we develop. To evaluate these models we used searcher simulations that afforded us more control over the experimental conditions than experiments with human subjects and allowed complex interaction to be modeled without the need for costly human experimentation. The simulation-based evaluation methodology measures how well the models learn the distribution of terms across relevant documents (i.e., learn what information is relevant) and how well they improve search effectiveness (i.e., create effective search queries). Our findings show that an implicit feedback model based on Jeffrey's rule of conditioning outperformed the other models under investigation.
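The abstract names Jeffrey's rule of conditioning without spelling it out, so the snippet below shows only the general rule, applied to a toy term-relevance update under uncertain implicit evidence; the probabilities are invented for illustration and the model details are not those of the article.

```python
def jeffrey_update(p_a_given_e, new_evidence_probs):
    """Jeffrey's rule of conditioning: revise P(A) when the probabilities
    of a partition of evidence events {E_i} change to new values, without
    any E_i being observed with certainty.

        P_new(A) = sum_i P(A | E_i) * P_new(E_i)

    Both arguments are dicts keyed by the same evidence labels; the new
    evidence probabilities must sum to 1.
    """
    assert abs(sum(new_evidence_probs.values()) - 1.0) < 1e-9
    return sum(p_a_given_e[e] * q for e, q in new_evidence_probs.items())

# Toy example (all numbers invented): A = "term t is relevant";
# evidence partition = whether the viewed document was relevant or not.
p_relevant_term = jeffrey_update(
    p_a_given_e={"doc_relevant": 0.6, "doc_not_relevant": 0.1},
    new_evidence_probs={"doc_relevant": 0.8, "doc_not_relevant": 0.2},
)
print(round(p_relevant_term, 3))  # 0.6*0.8 + 0.1*0.2 = 0.5
```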

    The localization transition at finite temperatures: electric and thermal transport

    The Anderson localization transition is considered at finite temperatures. This includes the electrical conductivity as well as the electronic thermal conductivity and the thermoelectric coefficients. An interesting critical behavior of the latter is found. A method for characterizing the conductivity critical exponent, an important signature of the transition, using conductivity and thermopower measurements is outlined. Comment: Article for the book "50 Years of Anderson Localization", edited by E. Abrahams (World Scientific, Singapore, 2010).
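For background on how conductivity and thermopower can jointly constrain the critical exponent (a textbook-level illustration, not the paper's derivation): if the zero-temperature conductivity vanishes at the mobility edge as a power law, the Mott (Sommerfeld-expansion) thermopower inherits the same exponent through the logarithmic energy derivative of the conductivity.

```latex
% Critical scaling of the conductivity near the mobility edge E_c on the
% metallic side (x is the conductivity critical exponent), combined with
% the leading Mott/Sommerfeld expression for the thermopower S of
% degenerate carriers (overall sign depends on carrier-charge convention).
\sigma(E_F) \propto (E_F - E_c)^{x}, \quad E_F > E_c,
\qquad
S \simeq -\frac{\pi^{2} k_B^{2} T}{3|e|}
  \left.\frac{\partial \ln \sigma(E)}{\partial E}\right|_{E_F}
  = -\frac{\pi^{2} k_B^{2} T}{3|e|}\,\frac{x}{E_F - E_c}.
```

Combining measurements of σ and S then isolates the exponent x, which is presumably the idea behind the method outlined in the abstract.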