
    An Empirical Approach to Cosmological Galaxy Survey Simulation: Application to SPHEREx Low-Resolution Spectroscopy

    Highly accurate models of the galaxy population over cosmological volumes are necessary in order to predict the performance of upcoming cosmological missions. We present a data-driven model of the galaxy population constrained by deep 0.1-8 μm imaging and spectroscopic data in the COSMOS survey, with the immediate goal of simulating the spectroscopic redshift performance of the proposed SPHEREx mission. SPHEREx will obtain R ∼ 41 spectrophotometry over the full sky at moderate spatial resolution (∼6″) over the wavelength range 0.75-4.18 μm, and R ∼ 135 over the wavelength range 4.18-5 μm. We show that our simulation accurately reproduces a range of known galaxy properties, encapsulates the full complexity of the galaxy population, and enables realistic, full end-to-end simulations to predict mission performance. Finally, we discuss potential applications of the simulation framework to future cosmology missions and give a description of released data products.
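
    To make the quoted resolving powers concrete, the short sketch below (illustrative Python, not part of the paper) computes the channel width Δλ = λ/R implied at the band edges.

        # Channel width implied by a resolving power R = lambda / d_lambda.
        # The R values and band edges below come from the abstract; the code is illustrative.
        def channel_width(wavelength_um, resolving_power):
            """Return the spectral channel width (in microns) at a given wavelength."""
            return wavelength_um / resolving_power

        for lam, r in [(0.75, 41), (4.18, 41), (4.18, 135), (5.0, 135)]:
            print(f"lambda = {lam:.2f} um, R = {r}: d_lambda ~ {channel_width(lam, r):.3f} um")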

    A Neural Model for Generating Natural Language Summaries of Program Subroutines

    Source code summarization -- creating natural language descriptions of source code behavior -- is a rapidly growing research topic with applications to automatic documentation generation, program comprehension, and software maintenance. Traditional techniques relied on heuristics and templates built manually by human experts. Recently, data-driven approaches based on neural machine translation have largely overtaken template-based systems, but nearly all of these techniques rely almost entirely on programs having good internal documentation; without clear identifier names, the models fail to create good summaries. In this paper, we present a neural model that combines words from code with code structure from an AST. Unlike previous approaches, our model processes each data source as a separate input, which allows the model to learn code structure independently of the text in code. This helps our approach provide coherent summaries in many cases even when zero internal documentation is provided. We evaluate our technique on a dataset we created from 2.1 million Java methods and find improvement over two baseline techniques from the SE literature and one from the NLP literature.
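
    A minimal sketch of the separate-input idea described above: a plain GRU encoder-decoder with assumed layer sizes stands in for the authors' actual architecture, which is not reproduced here. Code tokens and a flattened AST sequence are encoded independently, and the decoder conditions on both.

        # Sketch of a two-input summarizer: code tokens and a flattened AST
        # sequence are encoded separately, so structure can be learned
        # independently of identifier text. Layer sizes and the plain GRU
        # encoder-decoder are assumptions, not the authors' exact model.
        import torch
        import torch.nn as nn

        class DualEncoderSummarizer(nn.Module):
            def __init__(self, code_vocab, ast_vocab, out_vocab, dim=128):
                super().__init__()
                self.code_emb = nn.Embedding(code_vocab, dim)
                self.ast_emb = nn.Embedding(ast_vocab, dim)
                self.code_rnn = nn.GRU(dim, dim, batch_first=True)
                self.ast_rnn = nn.GRU(dim, dim, batch_first=True)
                self.out_emb = nn.Embedding(out_vocab, dim)
                self.decoder = nn.GRU(dim, 2 * dim, batch_first=True)
                self.proj = nn.Linear(2 * dim, out_vocab)

            def forward(self, code_ids, ast_ids, summary_ids):
                _, h_code = self.code_rnn(self.code_emb(code_ids))  # (1, B, dim)
                _, h_ast = self.ast_rnn(self.ast_emb(ast_ids))      # (1, B, dim)
                h0 = torch.cat([h_code, h_ast], dim=-1)             # joint initial state
                out, _ = self.decoder(self.out_emb(summary_ids), h0)
                return self.proj(out)                               # per-step vocabulary logits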

    Generating collaborative systems for digital libraries: A model-driven approach

    The design and development of a digital library involves different stakeholders, such as information architects, librarians, and domain experts, who need to agree on a common language to describe, discuss, and negotiate the services the library has to offer. To this end, high-level, language-neutral models have to be devised. Metamodeling techniques favor the definition of domain-specific visual languages through which stakeholders can share their views and directly manipulate representations of the domain entities. This paper describes CRADLE (Cooperative-Relational Approach to Digital Library Environments), a metamodel-based framework and visual language for the definition of notions and services related to the development of digital libraries. A collection of tools allows the automatic generation of several services defined with the CRADLE visual language, as well as of the graphical user interfaces that give end users access to them. The effectiveness of the approach is illustrated by presenting digital libraries generated with CRADLE, and the CRADLE environment has been evaluated using the cognitive dimensions framework.
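
    The generation step can be pictured with a toy sketch (the metamodel format below is hypothetical; the actual CRADLE visual language and toolchain are not reproduced): a declarative entity description is expanded into a service skeleton automatically.

        # Toy model-driven generation in the spirit of CRADLE: a declarative
        # entity description (hypothetical format) is expanded into a service
        # skeleton. The real CRADLE metamodel and toolchain are not shown.
        metamodel = {
            "entity": "Document",
            "fields": ["title", "author", "year"],
            "services": ["search", "browse"],
        }

        def generate_service(model):
            """Emit a minimal Python service class from an entity description."""
            lines = [f"class {model['entity']}Service:"]
            for svc in model["services"]:
                args = ", ".join(f"{f}=None" for f in model["fields"])
                lines.append(f"    def {svc}(self, {args}):")
                lines.append(f"        ...  # generated stub for '{svc}'")
            return "\n".join(lines)

        print(generate_service(metamodel))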

    EAD - enabling armchair delivery: approaches to encoding finding aids at the University of Liverpool

    EAD is increasingly being selected as the primary data format for constructing archival finding aids in the British archive community, as the new technologies and know-how required to encode lists are embraced in many repositories. One major problem facing archivists, though, is how to convert finding aids held in a variety of formats (including databases, word-processed documents, and paper lists with no machine-readable form) into EAD. This article discusses the methods used in Special Collections and Archives at the University of Liverpool Library to convert finding aids into EAD. Two main examples are discussed: first, designing database output styles that automatically generate EAD tags to wrap around database fields, using the ProCite bibliographic database; and second, offshore keying of paper lists with the addition of basic EAD tags, following a rigorous template designed by Special Collections and Archives staff. Both methods have proved effective and have facilitated the generation of EAD-encoded lists for a number of our largest collections. Finally, there is a brief discussion of our use of native EAD generation with the AdeptEdit software and of our continuing use of conversion methods.
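
    The first route, output styles that wrap database fields in EAD tags, can be sketched as follows. The field names and values are placeholders, and the tag set is a minimal subset of EAD; ProCite's own output-style mechanism is not shown.

        # Wrapping flat database fields in a minimal EAD <did> element.
        # Field names and values are placeholders; real finding aids use a
        # much fuller EAD structure than this subset.
        import xml.etree.ElementTree as ET

        record = {"unitid": "GB 141 ABC", "unittitle": "Example papers", "unitdate": "1900-1950"}

        def record_to_ead(fields):
            """Return an EAD <did> fragment wrapping each field in its tag."""
            did = ET.Element("did")
            for tag, text in fields.items():
                ET.SubElement(did, tag).text = text
            return ET.tostring(did, encoding="unicode")

        print(record_to_ead(record))
        # <did><unitid>GB 141 ABC</unitid><unittitle>Example papers</unittitle>...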

    The pre-launch Planck Sky Model: a model of sky emission at submillimetre to centimetre wavelengths

    We present the Planck Sky Model (PSM), a parametric model for the generation of all-sky, few-arcminute-resolution maps of sky emission at submillimetre to centimetre wavelengths, in both intensity and polarisation. Several options are implemented to model the cosmic microwave background; Galactic diffuse emission (synchrotron, free-free, thermal and spinning dust, CO lines); Galactic H II regions; extragalactic radio sources; dusty galaxies; and thermal and kinetic Sunyaev-Zeldovich signals from clusters of galaxies. Each component is simulated by means of educated interpolations/extrapolations of data sets available at the time of the launch of the Planck mission, complemented by state-of-the-art models of the emission. Distinctive features of the simulations are: spatially varying spectral properties of synchrotron and dust; different spectral parameters for each point source; and modeling of the clustering properties of extragalactic sources and of the power spectrum of fluctuations in the cosmic infrared background. The PSM enables the production of random realizations of the sky emission, constrained to match observational data within their uncertainties, and is implemented in a software package that is regularly updated with incoming information from observations. The model is expected to serve as a useful tool for optimizing planned microwave and submillimetre surveys and for testing data processing and analysis pipelines. It is, in particular, used for the development and validation of data analysis pipelines within the Planck Collaboration. A version of the software that can be used to simulate the observations of a variety of experiments is made available on a dedicated website.
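
    As a toy illustration of one distinctive feature named above, spatially varying spectral properties, the sketch below scales a synchrotron template with a per-pixel spectral index. All numbers are assumed for illustration; this is not the PSM implementation.

        # Synchrotron template scaled in antenna temperature with a per-pixel
        # spectral index, T(nu) = T(nu_ref) * (nu / nu_ref) ** beta.
        # Reference frequency, template values, and indices are assumptions.
        import numpy as np

        nu_ref = 23.0                              # reference frequency in GHz (assumed)
        template = np.array([30.0, 12.0, 55.0])    # synchrotron amplitude per pixel
        beta = np.array([-3.1, -2.9, -3.0])        # spectral index per pixel

        def synchrotron_at(nu_ghz):
            """Scale the template map to the requested frequency."""
            return template * (nu_ghz / nu_ref) ** beta

        print(synchrotron_at(100.0))               # predicted amplitudes at 100 GHz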