
    Monitoring land use changes using geo-information : possibilities, methods and adapted techniques

    Monitoring land use with geographical databases is widely used in decision-making. This report presents the possibilities, methods and adapted techniques for using geo-information to monitor land use changes. The municipality of Soest was chosen as the study area, and three national land use databases, viz. Top10Vector, the CBS land use statistics and LGN, were used. The restrictions of geo-information for monitoring land use changes are indicated. New methods and adapted techniques improve the monitoring result considerably. Providers of geo-information, however, should coordinate on update frequencies, semantic content and spatial resolution to enable better monitoring of land use by combining data sets.

    A format for phylogenetic placements

    We have developed a unified format for phylogenetic placements, that is, mappings of environmental sequence data (e.g. short reads) into a phylogenetic tree. We are motivated by the growing number of tools for computing and post-processing phylogenetic placements, and the lack of an established standard for storing them. The format is lightweight, versatile and extensible, and is based on JSON, which can be parsed by most modern programming languages. Our format is already implemented in several tools for computing and post-processing parsimony- and likelihood-based phylogenetic placements, and has worked well in practice. We believe that establishing a standard format for analyzing read placements at this early stage will lead to more efficient development of powerful and portable post-analysis tools for the growing applications of phylogenetic placement.
    Comment: Documents version 3 of the format
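    Because the format is JSON-based, a placement file can be consumed with a stock JSON parser. A minimal sketch of the structure and a consumer, with a made-up tree and made-up placement values (the field names follow the published version 3 layout, but verify against the specification before relying on them):

```python
import json

# Toy jplace-style document: edge numbers appear in curly braces in the
# Newick tree, and each placement row is interpreted via "fields".
doc = """
{
  "version": 3,
  "tree": "((A:0.1{0},B:0.2{1}):0.05{2},C:0.3{3}){4};",
  "fields": ["edge_num", "likelihood", "like_weight_ratio",
             "distal_length", "pendant_length"],
  "placements": [
    {"p": [[1, -1234.5, 0.9, 0.01, 0.02],
           [2, -1236.0, 0.1, 0.00, 0.03]],
     "n": ["read_0001"]}
  ],
  "metadata": {"invocation": "toy example"}
}
"""

data = json.loads(doc)
fields = data["fields"]
lwr = fields.index("like_weight_ratio")

# Report the best placement (highest like-weight ratio) for each read.
for placement in data["placements"]:
    best = max(placement["p"], key=lambda row: row[lwr])
    record = dict(zip(fields, best))
    for name in placement["n"]:
        print(name, "-> edge", record["edge_num"],
              "lwr", record["like_weight_ratio"])
```

    The indirection through "fields" is what makes the format extensible: a tool can add columns without breaking consumers that look fields up by name.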

    THE ROLE OF DATA ARCHITECTURE AS A PART OF ENTERPRISE ARCHITECTURE

    In the early days of computing, technology simply automated manual processes with greater efficiency. The new organizational context provides input into the data architecture, which is the primary tool for the management and sharing of enterprise data. It enables architects, data modelers, and stakeholders to identify, classify, and analyze information requirements across the enterprise, allowing the right priorities for data sharing initiatives. Data architecture states how data are persisted, managed, and utilized within an organization. It comprises the structure of all corporate data and its relationships to itself and to external systems. In far too many situations, the business community has to enlist the assistance of IT to retrieve information due to the data's inconsistency, lack of intuitiveness, or other factors. The goal of any architecture should be to illustrate how its components fit together and how the system will adapt and evolve over time.
    Keywords: data architecture, enterprise architecture, business process planning, databases, business objects.

    Information Extraction, Data Integration, and Uncertain Data Management: The State of The Art

    Information extraction, data integration, and uncertain data management are different areas of research that have received considerable attention over the last two decades. Much research has tackled these areas individually. However, information extraction systems should be integrated with data integration methods to make use of the extracted information. Handling uncertainty in the extraction and integration processes is an important issue for enhancing the quality of the data in such integrated systems. This article presents the state of the art of these areas of research, shows their common ground, and discusses how to integrate information extraction and data integration under the umbrella of uncertainty management.

    ColabFit Exchange: open-access datasets for data-driven interatomic potentials

    Data-driven (DD) interatomic potentials (IPs) trained on large collections of first principles calculations are rapidly becoming essential tools in the fields of computational materials science and chemistry for performing atomic-scale simulations. Despite this, apart from a few notable exceptions, there is a distinct lack of well-organized, public datasets in common formats available for use in IP development. This deficiency precludes the research community from implementing widespread benchmarking, which is essential for gaining insight into model performance and transferability, while also limiting the development of more general, or even universal, IPs. To address this issue, we introduce the ColabFit Exchange, the first database providing open access to a large collection of systematically organized datasets from multiple domains that is especially designed for IP development. The ColabFit Exchange is publicly available at \url{https://colabfit.org/}, providing a web-based interface for exploring, downloading, and contributing datasets. Composed of data collected from the literature or provided by community researchers, the ColabFit Exchange consists of 106 datasets spanning nearly 70,000 unique chemistries, and is intended to grow continuously. In addition to outlining the software framework used for constructing and accessing the ColabFit Exchange, we provide analyses of the data, quantifying its diversity and proposing metrics for assessing the relative quality and atomic environment coverage of different datasets. Finally, we demonstrate an end-to-end IP development pipeline, utilizing datasets from the ColabFit Exchange, fitting tools from the KLIFF software package, and validation tests provided by the OpenKIM framework.
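    A "unique chemistry" in the sense used above is a distinct combination of chemical elements, independent of stoichiometry or system size. A minimal sketch of counting chemistries across configurations; the data and function names here are illustrative, not the actual ColabFit schema or its proposed quality metrics:

```python
from collections import Counter

# Toy stand-ins for dataset configurations, each described only by its
# list of chemical symbols.
configurations = [
    ["Si", "Si", "O", "O", "O", "O"],  # SiO2-like cell
    ["Si", "O", "O"],                  # same chemistry, different size
    ["Al", "O", "O", "O"],             # Al-O system
    ["Si", "Si"],                      # elemental Si
]

def chemistry(symbols):
    """Reduce a configuration to its element combination, e.g. ('O', 'Si')."""
    return tuple(sorted(set(symbols)))

chemistries = Counter(chemistry(c) for c in configurations)
print(f"{len(chemistries)} unique chemistries across "
      f"{len(configurations)} configurations")
for chem, count in sorted(chemistries.items()):
    print("-".join(chem), count)
```

    Counting at the chemistry level rather than the configuration level is what makes the "nearly 70,000 unique chemistries" figure a measure of breadth rather than sheer dataset size.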

    Full-depth Coadds of the WISE and First-year NEOWISE-Reactivation Images

    The Near Earth Object Wide-field Infrared Survey Explorer (NEOWISE) Reactivation mission released data from its first full year of observations in 2015. This data set includes ~2.5 million exposures in each of W1 and W2, effectively doubling the amount of WISE imaging available at 3.4 and 4.6 microns relative to the AllWISE release. We have created the first ever full-sky set of coadds combining all publicly available W1 and W2 exposures from both the AllWISE and NEOWISE-Reactivation (NEOWISER) mission phases. We employ an adaptation of the unWISE image coaddition framework (Lang 2014), which preserves the native WISE angular resolution and is optimized for forced photometry. By incorporating two additional scans of the entire sky, we not only improve the W1/W2 depths, but also largely eliminate time-dependent artifacts such as off-axis scattered moonlight. We anticipate that our new coadds will have a broad range of applications, including target selection for upcoming spectroscopic cosmology surveys, identification of distant/massive galaxy clusters, and discovery of high-redshift quasars. In particular, our full-depth AllWISE+NEOWISER coadds will be an important input for the Dark Energy Spectroscopic Instrument (DESI) selection of luminous red galaxy and quasar targets. Our full-depth W1/W2 coadds are already in use within the DECam Legacy Survey (DECaLS) and Mayall z-band Legacy Survey (MzLS) reduction pipelines. Much more work still remains in order to fully leverage NEOWISER imaging for astrophysical applications beyond the solar system.
    Comment: coadds available at http://unwise.me, zoomable full-sky rendering at http://legacysurvey.org/viewe
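    The depth gain from stacking more scans comes from inverse-variance weighting of overlapping exposures. A toy sketch of that core idea with synthetic images; the actual unWISE pipeline (Lang 2014) adds resampling to a common grid, outlier rejection, and artifact masking on top of this, so treat the numbers below as illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((8, 8), 10.0)            # constant "sky + source" flux

# Four simulated exposures of the same patch with different noise levels.
sigmas = np.array([1.0, 2.0, 1.0, 0.5])
exposures = [truth + rng.normal(0.0, s, truth.shape) for s in sigmas]

# Inverse-variance weights: low-noise exposures count for more.
weights = 1.0 / sigmas**2
coadd = sum(w * img for w, img in zip(weights, exposures)) / weights.sum()

# The coadd's per-pixel noise, 1/sqrt(sum of weights), beats even the
# best single exposure -- this is why extra sky passes improve depth.
coadd_sigma = 1.0 / np.sqrt(weights.sum())
print(f"coadd noise {coadd_sigma:.3f} vs best single exposure {sigmas.min():.3f}")
```

    Preserving the native angular resolution (no smoothing during coaddition) is what keeps such stacks suitable for forced photometry.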