
    Extraction and parsing of herbarium specimen data: Exploring the use of the Dublin core application profile framework

    Herbaria around the world house millions of plant specimens; botanists and other researchers value these resources as ingredients in biodiversity research. Even when the specimen sheets are digitized and made available online, the critical information stored on each sheet about the specimen is not in a usable (i.e., machine-processible) form. This paper describes a current research and development project that is designing and testing high-throughput workflows combining machine and human processes to extract and parse specimen label data. The primary focus of the paper is the metadata needs of the workflow and the creation of structured metadata records describing the plant specimens. In the project, we are exploring the use of the new Dublin Core Metadata Initiative framework for application profiles. First articulated as the Singapore Framework for Dublin Core Application Profiles in 2007, this framework is still in its infancy. Its promise of maximum interoperability, of documented metadata use for maximum reusability, and of metadata applications that conform to Web architectural principles provides the incentive to explore it and to contribute implementation experience regarding this new framework.
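    As an illustration of the kind of structured record such a label-parsing workflow might produce, the sketch below maps a transcribed herbarium label to a few Dublin Core / Darwin Core style terms. The term names follow the public vocabularies, but the `parse_label` helper, its regular expressions, and the sample label are hypothetical, not part of the project described.

```python
# Hedged sketch: turning a transcribed specimen label into a small
# structured metadata record. Field names use dc:/dwc: prefixes in the
# style of Dublin Core and Darwin Core; the parser itself is invented.

import re

def parse_label(label_text):
    """Very naive label parser: pulls an ISO date and a collector line."""
    record = {
        "dc:type": "PhysicalObject",
        "dwc:basisOfRecord": "PreservedSpecimen",
    }
    date = re.search(r"\b(\d{4}-\d{2}-\d{2})\b", label_text)
    if date:
        record["dwc:eventDate"] = date.group(1)
    collector = re.search(r"Coll\.\s*(.+)", label_text)
    if collector:
        record["dwc:recordedBy"] = collector.group(1).strip()
    return record

label = "Quercus alba L.\n1987-06-14\nColl. J. Smith"
print(parse_label(label))
```

    A real workflow would of course combine OCR output, human review, and a much richer set of terms; the point is only that the result is a machine-processible record rather than an image of a sheet.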

    Transformation Report: The missing Standard for Data Exchange

    Data exchange with STEP (ISO 10303) is state of the art, but it remains a fundamental problem to guarantee a given quality of service to integrated operational and informational applications. STEP defines descriptive methods, data specifications, implementation resources, and conformance testing, but nothing documents how the data is actually processed: a success report on the data mapped from the source to the target tool is missing. In this paper we introduce a Transformation Report for documenting the data transformation from the source to the target tool. With this report, the trustworthiness of the received data can be significantly improved by documenting data loss as well as semantic and syntactic errors. With the information in the report it should be possible to infer the proper values needed to define rules that repair data after it has been determined to be incorrect, or to find a suitable data integration strategy for a target tool or repository. The intention of the paper is to propose a standardised Transformation Report that can be processed automatically and that contains all the information needed for an automated reconciliation process.
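    A minimal sketch of what such a machine-processable report might look like. All field names here are assumptions for illustration; the paper proposes the standard, and none of these identifiers come from ISO 10303 itself.

```python
# Illustrative Transformation Report structure: it records how many
# entities survived the source-to-target mapping and collects the
# semantic and syntactic errors encountered along the way.

from dataclasses import dataclass, field

@dataclass
class TransformationReport:
    source_tool: str
    target_tool: str
    entities_read: int = 0
    entities_written: int = 0
    syntactic_errors: list = field(default_factory=list)
    semantic_errors: list = field(default_factory=list)

    @property
    def data_loss(self):
        """Entities that did not survive the mapping."""
        return self.entities_read - self.entities_written

report = TransformationReport("CAD-A", "CAD-B",
                              entities_read=120, entities_written=117)
report.semantic_errors.append("unit mismatch on length attribute")
print(report.data_loss)  # 3
```

    Because the report is plain structured data, a receiving tool could apply repair rules or choose an integration strategy automatically, which is exactly the reconciliation use case the abstract describes.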

    Improving NRM Investment through a policy performance lens

    Choosing a mechanism to encourage landholders to change their land management in order to deliver environmental outcomes is a complicated process. Careful instrument selection may count for little if uptake and adoption are insufficient to meet performance targets. Similarly, investors may require assurance that the proposed investment will deliver the stated goals. In order to reduce the uptake uncertainty facing policy makers, we evaluate and describe several possible methods to guide and frame adoption targets. We conclude that referring to past adoption experience across a wide range of mechanisms offers the best approach to setting feasible adoption targets for future mechanisms. We call this adoption points of reference. This approach is tested by application to mechanisms focused on delivering water quality improvements in Great Barrier Reef (GBR) catchments. We conclude that the points-of-reference approach is appropriate and useful but should be supported by processes designed to incorporate the impact of heterogeneity and local knowledge, and by an emphasis on improving the accuracy of future data.

    Keywords: adoption targets, NRM investment, reasonable assurance, water quality

    Documenting numerical experiments in support of the Coupled Model Intercomparison Project Phase 6 (CMIP6)

    Numerical simulation, and in particular simulation of the earth system, relies on contributions from diverse communities, from those who develop models to those involved in devising, executing, and analysing numerical experiments. Often these people work in different institutions and may be working with significant separation in time (particularly analysts, who may be working on data produced years earlier), and they typically communicate via published information (whether journal papers, technical notes, or websites). The complexity of the models, experiments, and methodologies, along with the diversity (and sometimes inexact nature) of information sources, can easily lead to misinterpretation of what was actually intended or done. In this paper we introduce a taxonomy of terms for more clearly defining numerical experiments, put it in the context of previous work on experimental ontologies, and describe how we have used it to document the experiments of the sixth phase of the Coupled Model Intercomparison Project (CMIP6). We describe how, through iteration with a range of CMIP6 stakeholders, we rationalized multiple sources of information and improved the clarity of experimental definitions. We demonstrate how this process has added value to CMIP6 itself by (a) helping those devising experiments to be clear about their goals and their implementation, (b) making it easier for those executing experiments to know what is intended, (c) exposing interrelationships between experiments, and (d) making it clearer for third parties (data users) to understand the CMIP6 experiments. We conclude with some lessons learnt and how these may be applied to future CMIP phases as well as other modelling campaigns.
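    The idea of a machine-readable experiment definition can be sketched as below. The dictionary keys and the `check_experiment` helper are illustrative assumptions, not the project's actual controlled vocabulary, though `historical` and `piControl` are real CMIP6 experiment names.

```python
# Hedged sketch: an experiment-definition record in the spirit of the
# taxonomy described above, plus a check that flags missing fields so
# experiment definitions stay unambiguous for executors and data users.

experiment = {
    "name": "historical",                  # real CMIP6 experiment name
    "description": "Simulation of the recent past (1850-2014).",
    "required_forcings": ["GHG concentrations", "volcanic aerosols"],
    "parent_experiment": "piControl",      # interrelationship to another experiment
    "duration_years": 165,
}

def check_experiment(exp, required_keys=("name", "description", "parent_experiment")):
    """Return the list of required fields that are missing from a definition."""
    return [k for k in required_keys if k not in exp]

print(check_experiment(experiment))  # []
```

    Encoding definitions this way is one route to the benefits the abstract lists: goals are stated explicitly, executors can validate against them, and relationships between experiments (here, `parent_experiment`) become queryable.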

    Software Testing and Documenting Automation

    This article describes some approaches to the problem of automating testing and documentation in information systems with graphical user interfaces. A combination of data mining methods and the theory of finite state machines is used for test automation. Automated creation of software documentation is based on metadata in the documented system; the metadata is built on a graph model. The described approaches improve the performance and quality of the testing and documentation processes.
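    A minimal sketch of the finite-state-machine view of a GUI that such test automation relies on: the screens and events below are invented for illustration, transitions form the model, and a test run is simply an event sequence replayed against it.

```python
# Illustrative GUI model as a finite state machine: states are screens,
# events are user actions, and the transition table defines the expected
# behaviour a test harness would check against the real application.

transitions = {
    ("login", "submit_ok"): "dashboard",
    ("login", "submit_bad"): "login",
    ("dashboard", "open_settings"): "settings",
    ("settings", "back"): "dashboard",
}

def run_sequence(events, start="login"):
    """Replay an event sequence against the model; return the final state."""
    state = start
    for event in events:
        state = transitions[(state, event)]
    return state

print(run_sequence(["submit_ok", "open_settings", "back"]))  # dashboard
```

    In a real system the transition table would be mined from recorded interactions rather than written by hand, which is where the data mining methods the abstract mentions come in.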
