    The Abundance of Molecular Hydrogen and its Correlation with Midplane Pressure in Galaxies: Non-Equilibrium, Turbulent, Chemical Models

    Observations of spiral galaxies show a strong linear correlation between the ratio of molecular to atomic hydrogen surface density R_mol and midplane pressure. To explain this, we simulate three-dimensional, magnetized turbulence, including simplified treatments of non-equilibrium chemistry and the propagation of dissociating radiation, to follow the formation of H_2 from cold atomic gas. The formation time scale for H_2 is sufficiently long that equilibrium is not reached within the 20-30 Myr lifetimes of molecular clouds. The equilibrium balance between radiative dissociation and H_2 formation on dust grains fails to predict the time-dependent molecular fractions we find. A simple, time-dependent model of H_2 formation can reproduce the gross behavior, although turbulent density perturbations increase molecular fractions by a factor of a few above it. In contradiction to equilibrium models, radiative dissociation of molecules plays little role in our model for diffuse radiation fields with strengths less than ten times that of the solar neighborhood, because of the effective self-shielding of H_2. The observed correlation of R_mol with pressure corresponds to a correlation with local gas density if the effective temperature in the cold neutral medium of galactic disks is roughly constant. We indeed find such a correlation of R_mol with density. If we examine the value of R_mol in our local models after a free-fall time at their average density, as expected for models of molecular cloud formation by large-scale gravitational instability, our models reproduce the observed correlation over more than an order of magnitude range in density.
    Comment: 24 pages, 4 figures, accepted for publication in Astrophys. J.; changes include the addition of models with higher radiation fields and substantial clarification of the narrative.
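
    A minimal worked form of the kind of time-dependent model the abstract describes, assuming the commonly quoted grain-catalysis rate coefficient R ~ 3x10^-17 cm^3 s^-1 for solar-neighborhood dust (a standard literature value, not a number taken from this paper) and neglecting dissociation, which the simulations find to be suppressed by self-shielding:

        \frac{d f_{\mathrm{H_2}}}{dt} = R\, n\, \left(1 - f_{\mathrm{H_2}}\right)
        \quad\Longrightarrow\quad
        f_{\mathrm{H_2}}(t) = 1 - e^{-t/\tau},
        \qquad
        \tau = \frac{1}{R\, n} \approx \frac{1\ \mathrm{Gyr}}{n / (1\ \mathrm{cm^{-3}})}

    At n ~ 100 cm^-3 this gives tau ~ 10 Myr, comparable to the quoted 20-30 Myr cloud lifetimes, which is why equilibrium molecular fractions are not reached.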

    The Influence of Metallicity on Star Formation in Protogalaxies

    In cold dark matter cosmological models, the first stars to form are believed to do so within small protogalaxies. We wish to understand how the evolution of these early protogalaxies changes once the gas forming them has been enriched with small quantities of heavy elements, which are produced and dispersed into the intergalactic medium by the first supernovae. Our initial conditions represent protogalaxies forming within a fossil H II region, a previously ionized region that has not yet had time to cool and recombine. We study the influence of low levels of metal enrichment on the cooling and collapse of ionized gas in small protogalactic halos using three-dimensional, smoothed particle hydrodynamics (SPH) simulations that incorporate the effects of the appropriate chemical and thermal processes. Our previous simulations demonstrated that for metallicities Z < 0.001 Z_sun, metal line cooling alters the density and temperature evolution of the gas by less than 1% compared to the metal-free case at densities below 1 cm^-3 and temperatures above 2000 K. Here, we present the results of high-resolution simulations using particle splitting to improve resolution in regions of interest. These simulations allow us to address the question of whether there is a critical metallicity above which fine structure cooling from metals allows efficient fragmentation to occur, producing an initial mass function (IMF) resembling the local Salpeter IMF, rather than only high-mass stars.
    Comment: 3 pages, 2 figures, First Stars III conference proceedings.
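
    A hedged sketch of the standard criterion behind such critical-metallicity arguments (textbook reasoning, not a result claimed by this abstract): efficient fragmentation requires the fine-structure cooling time to fall below the free-fall time,

        t_{\mathrm{cool}}(Z) \simeq \frac{\tfrac{3}{2}\, n\, k_B T}{\Lambda_{\mathrm{FS}}(n, T)\, \left(Z / Z_\odot\right)}
        \;\lesssim\;
        t_{\mathrm{ff}} = \sqrt{\frac{3\pi}{32\, G\, \rho}}

    where Lambda_FS is the volumetric [C II] and [O I] fine-structure cooling rate at solar metallicity. Because this cooling rate scales roughly linearly with Z in the relevant regime, the inequality picks out a threshold Z_crit below which the gas cannot cool and fragment efficiently.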

    Focus on the Positives: Self-Supervised Learning for Biodiversity Monitoring

    We address the problem of learning self-supervised representations from unlabeled image collections. Unlike existing approaches that attempt to learn useful features by maximizing similarity between augmented versions of each input image or by heuristically picking negative samples, we also make use of the natural variation that occurs in image collections captured by static monitoring cameras. To achieve this, we exploit readily available context data that encodes information such as the spatial and temporal relationships between the input images. By first identifying high-probability positive pairs at training time, i.e. those images that are likely to depict the same visual concept, we are able to learn representations that are surprisingly effective for downstream supervised classification. For the critical task of global biodiversity monitoring, this results in image features that can be adapted to challenging visual species classification tasks with limited human supervision. We present results on four different camera trap image collections, across three different families of self-supervised learning methods, and show that careful image selection at training time results in superior performance compared to existing baselines such as conventional self-supervised training and transfer learning.
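
    To make the context-positive idea concrete, here is a minimal PyTorch sketch; the function names, the 300-second pairing window, and the plain InfoNCE loss are illustrative assumptions, not the authors' implementation:

        # Hypothetical sketch: mine "context positives" from static-camera
        # metadata, then apply a standard contrastive (InfoNCE) loss.
        import torch
        import torch.nn.functional as F

        def context_positive_pairs(camera_ids, timestamps, max_dt=300.0):
            """Index pairs (i, j) of images from the same camera taken within
            max_dt seconds of each other: likely the same visual concept."""
            pairs = []
            n = len(camera_ids)
            for i in range(n):
                for j in range(i + 1, n):
                    if camera_ids[i] == camera_ids[j] and \
                       abs(timestamps[i] - timestamps[j]) <= max_dt:
                        pairs.append((i, j))
            return pairs

        def info_nce(z, pairs, temperature=0.1):
            """InfoNCE over L2-normalized embeddings z of shape (n, d), treating
            each context pair as a positive and all other images as negatives."""
            z = F.normalize(z, dim=1)
            sim = z @ z.t() / temperature          # (n, n) similarity matrix
            loss = 0.0
            for i, j in pairs:
                mask = torch.ones(z.size(0), dtype=torch.bool)
                mask[i] = False                    # exclude self-similarity
                loss = loss - (sim[i, j] - torch.logsumexp(sim[i][mask], dim=0))
            return loss / max(len(pairs), 1)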

    The EU and Critical Crisis Transformation: The Evolution of a Policy Concept

    While often caused by conflict, crises are treated by the EU as a phenomenon of their own. Contemporary EU crisis management represents a watering down of normative EU approaches to peacebuilding, reduced to a technical exercise with the limited ambition to contain spillover effects of crises. In theoretical terms this is a reversal, which tilts intervention towards EU security interests and avoids engagement with the root causes of the crises. This paper develops a novel crisis response typology derived from conflict theory, which ranges from crisis management to crisis resolution and (critical) crisis transformation. By drawing on EU interventions in Libya, Mali and Ukraine, the paper demonstrates that basic crisis management approaches are pre-eminent in practice. More promising innovations remain largely confined to the realms of discourse and policy documentation.

    Guarantees on Robot System Performance Using Stochastic Simulation Rollouts

    We provide finite-sample performance guarantees for control policies executed on stochastic robotic systems. Given an open- or closed-loop policy and a finite set of trajectory rollouts under the policy, we bound the expected value, value-at-risk, and conditional-value-at-risk of the trajectory cost, and the probability of failure in a sparse rewards setting. The bounds hold, with user-specified probability, for any policy synthesis technique and can be seen as a post-design safety certification. Generating the bounds only requires sampling simulation rollouts, without assumptions on the distribution or complexity of the underlying stochastic system. We adapt these bounds to also give a constraint satisfaction test to verify safety of the robot system. Furthermore, we extend our method to apply when selecting the best policy from a set of candidates, which requires a multi-hypothesis correction. We show the statistical validity of our bounds in the Ant, HalfCheetah, and Swimmer MuJoCo environments and demonstrate our constraint satisfaction test with the Ant. Finally, using the 20 degree-of-freedom MuJoCo Shadow Hand, we show the necessity of the multi-hypothesis correction.
    Comment: Submitted to IEEE-TR
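
    As a concrete illustration of distribution-free bounds of this flavor, here is a sketch using one-sided Hoeffding inequalities and a Bonferroni correction; the paper's exact constructions, including its value-at-risk and conditional-value-at-risk bounds, may differ:

        # Illustrative post-hoc bounds from i.i.d. rollout costs; a minimal
        # sketch, not the paper's exact method. Assumes costs lie in [a, b].
        import numpy as np

        def hoeffding_upper_bound(costs, a, b, delta):
            """With probability >= 1 - delta, the true expected cost is
            below this one-sided Hoeffding bound."""
            n = len(costs)
            return np.mean(costs) + (b - a) * np.sqrt(np.log(1.0 / delta) / (2 * n))

        def failure_prob_upper_bound(failures, delta):
            """Hoeffding bound on a Bernoulli mean: failures is a 0/1 array
            of per-rollout failure indicators (sparse-rewards setting)."""
            n = len(failures)
            return np.mean(failures) + np.sqrt(np.log(1.0 / delta) / (2 * n))

        def bonferroni_bounds(costs_per_policy, a, b, delta):
            """Selecting among k candidate policies requires a multi-hypothesis
            correction; the simplest is Bonferroni: certify each candidate at
            level delta / k so all k bounds hold jointly with prob >= 1 - delta."""
            k = len(costs_per_policy)
            return [hoeffding_upper_bound(c, a, b, delta / k) for c in costs_per_policy]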

    Patch based synthesis for single depth image super-resolution

    We present an algorithm to synthetically increase the resolution of a single depth image using only a generic database of local patches. Modern range sensors measure depths with non-Gaussian noise and at lower starting resolutions than typical visible-light cameras. While patch based approaches for upsampling intensity images continue to improve, this is the first exploration of patch based synthesis for depth images. We match against the height field of each low resolution input depth patch, and search our database for a list of appropriate high resolution candidate patches. Selecting the right candidate at each location in the depth image is then posed as a Markov random field labeling problem. Our experiments also show how further depth-specific processing, such as noise removal and correct patch normalization, dramatically improves our results. Perhaps surprisingly, even better results are achieved on a variety of real test scenes by providing our algorithm with only synthetic training depth data.
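
    A minimal sketch of the candidate-lookup step, assuming a pre-built database of (low-resolution, high-resolution) depth patch pairs; the names are hypothetical, and the subsequent Markov random field selection over candidates is omitted:

        # Hypothetical candidate retrieval for depth patch synthesis.
        import numpy as np

        def normalize(patch):
            """Depth-specific normalization: remove the patch's mean height so
            matching is invariant to absolute depth."""
            return patch - patch.mean()

        def candidate_patches(lr_patch, db_lr, db_hr, k=10):
            """Return the k high-resolution patches whose low-resolution height
            fields best match the normalized input patch."""
            q = normalize(lr_patch).ravel()
            dists = np.array([np.sum((normalize(p).ravel() - q) ** 2) for p in db_lr])
            idx = np.argsort(dists)[:k]
            return [db_hr[i] for i in idx]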

    Financing Early Stage Cleantech Firms

    The report by the Intergovernmental Panel on Climate Change [47] highlighted the need to reduce greenhouse gas emissions and strive for decarbonization in order to restrict global warming. The Paris Agreement, a legally binding international treaty on climate change, has a vision of accelerating technology development and transfer [92] in order to reduce harmful carbon emissions. The development of new and innovative disruptive technologies to ameliorate and reverse the harmful effects of carbon emissions is emphasized by governments and international agencies [6], [53], [59], [102]. Large incumbent firms are well resourced to conduct this research and development (R&D), although small early stage ventures also play a significant role in innovation and invention [67], [76]. New enterprises have the advantages of agility, testing and implementing new business models quickly [73], although they typically lack sufficient resources to develop and scale their business successfully.

    Rating by Ranking: an Improved Scale for Judgement-based Labels

    Labels representing value judgements are commonly elicited using an interval scale of absolute values. Data collected in such a manner is not always reliable. Psychologists have long recognized a number of biases to which many human raters are prone, and which result in disagreement among raters as to the true gold standard rating of any particular object. We hypothesize that the issues arising from rater bias may be mitigated by treating the data received as an ordered set of preferences rather than a collection of absolute values. We experiment on real-world and artificially generated data, finding that treating label ratings as ordinal, rather than interval, data results in increased inter-rater reliability. This finding has the potential to improve the efficiency of data collection for applications such as Top-N recommender systems, where we are primarily interested in the ranked order of items rather than the absolute scores they have been assigned.
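
    One simple way to probe the interval-versus-ordinal hypothesis (an illustrative sketch, not the paper's exact reliability measure) is to compare mean pairwise inter-rater correlation on raw scores against the same quantity computed on ranks:

        # Compare inter-rater agreement under interval vs. ordinal treatment.
        import numpy as np
        from itertools import combinations
        from scipy.stats import pearsonr, spearmanr

        def mean_pairwise_agreement(ratings, method="spearman"):
            """ratings: (n_raters, n_items) matrix of scores. Spearman rank
            correlation treats each rater's scores as an ordered set of
            preferences; Pearson takes the absolute values at face value."""
            corr = spearmanr if method == "spearman" else pearsonr
            vals = [corr(ratings[i], ratings[j])[0]
                    for i, j in combinations(range(len(ratings)), 2)]
            return np.mean(vals)

    If the ordinal (Spearman) agreement consistently exceeds the interval (Pearson) agreement, the rank treatment is absorbing rater-specific scale biases, which is the effect the abstract reports.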