
    Assessment techniques, database design and software facilities for thermodynamics and diffusion

    The purpose of this article is to give a set of recommendations to producers of assessed thermodynamic data, who may be involved either in the critical evaluation of limited chemical systems or in the creation and dissemination of larger thermodynamic databases. It is also hoped that reviewers and editors of scientific publications in this field will find some of the information useful. Good practice in the assessment process is essential, particularly as datasets from many different sources may be combined into a single database. With this in mind, we highlight some problems that can arise during the assessment process and propose a quality assurance procedure. It is worth mentioning at this point that the provision of reliable assessed thermodynamic data relies heavily on the availability of high-quality experimental information. The different software packages for thermodynamics and diffusion are described here only briefly.
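    To make the kind of quality-assurance screen mentioned above concrete, here is a minimal Python sketch: an assessed description in the common SGTE/CALPHAD polynomial form is compared against experimental heat-capacity points, and outliers are flagged. The coefficients, tolerance, and measurements are hypothetical, for illustration only.

```python
import math

def gibbs_energy(T, a, b, c, d):
    """SGTE-style Gibbs energy polynomial: G(T) = a + b*T + c*T*ln(T) + d*T**2."""
    return a + b * T + c * T * math.log(T) + d * T**2

def heat_capacity(T, c, d):
    """Cp = -T * d2G/dT2 for the polynomial above, i.e. Cp(T) = -c - 2*d*T."""
    return -c - 2.0 * d * T

def flag_inconsistent(points, c, d, tol=2.0):
    """Flag experimental (T, Cp) points deviating from the assessed
    description by more than `tol` J/(mol K) -- a crude QA screen."""
    return [(T, cp_exp) for T, cp_exp in points
            if abs(cp_exp - heat_capacity(T, c, d)) > tol]

# Hypothetical coefficients and measurements.
c, d = -25.0, -0.002
experimental = [(300.0, 26.1), (500.0, 27.4), (800.0, 35.0)]
print(flag_inconsistent(experimental, c, d))  # [(800.0, 35.0)] stands out
```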

    The Distributed Ontology Language (DOL): Use Cases, Syntax, and Extensibility

    The Distributed Ontology Language (DOL) is currently being standardized within the OntoIOp (Ontology Integration and Interoperability) activity of ISO/TC 37/SC 3. It aims to provide a unified framework for (1) ontologies formalized in heterogeneous logics, (2) modular ontologies, (3) links between ontologies, and (4) annotation of ontologies. This paper presents the current state of DOL's standardization. It focuses on use cases in which distributed ontologies enable interoperability and reusability. We demonstrate relevant features of the DOL syntax and semantics and explain how these integrate into existing knowledge engineering environments. Comment: Terminology and Knowledge Engineering Conference (TKE), 2012-06-20 to 2012-06-21, Madrid, Spain.
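    The following Python sketch is not DOL syntax; it merely illustrates, in plain OWL triples via rdflib (an assumed dependency), two of the features DOL standardizes: module reuse through imports and links (alignments) between ontologies. All URIs are hypothetical.

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import OWL, RDF

A = Namespace("http://example.org/ontoA#")  # hypothetical source ontologies
B = Namespace("http://example.org/ontoB#")

g = Graph()
combined = URIRef("http://example.org/combined")
g.add((combined, RDF.type, OWL.Ontology))
# Modular reuse: the combined ontology imports both sources.
g.add((combined, OWL.imports, URIRef("http://example.org/ontoA")))
g.add((combined, OWL.imports, URIRef("http://example.org/ontoB")))
# A link between ontologies: two classes assert the same concept.
g.add((A.Person, OWL.equivalentClass, B.Human))

print(g.serialize(format="turtle"))
```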

    Using Google Analytics, Voyant and Other Tools to Better Understand Use of Manuscript Collections at L. Tom Perry Special Collections

    [Excerpt] Developing strategies for making data-driven, objective decisions about digitization and value-added processing based on patron usage has been an important effort in the L. Tom Perry Special Collections (hereafter Perry Special Collections). In a previous study, the authors looked at how creating a matrix using both Web analytics and in-house use statistics could provide a solid basis for deciding which collections to digitize and which merited deeper description. Along with providing this basis for decision making, the study also revealed some intriguing insights into how our collections were being used and raised some important questions about the impact of description on both digital and physical usage. We have continued analyzing the data from our first study, and that data forms the basis of the current study. It is helpful to review the major outcomes of our previous study before looking at what we have learned in this deeper analysis. In the first study, we utilized three sources of statistical data to compare two distinct data points (in-house use and online finding aid use) and to determine whether there were any patterns or other information that would help curators in the department make better decisions about the items or collections selected for digitization or value-added processing. To obtain our data points, we combined two data sources related to the in-person use of manuscript collections in the Perry Special Collections reading room and one related to the use of finding aids for manuscript collections made available online through the department's Finding Aid database (http://findingaid.lib.byu.edu/). We mapped the resulting data points into a four-quadrant graph (see figure 1).
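    A minimal pandas sketch of the two-metric matrix described above: each collection is placed into a quadrant by comparing its in-house use and online finding-aid use against the medians. The collection names and counts are hypothetical, and the quadrant labels are one plausible reading of the decision rules, not the authors' exact criteria.

```python
import pandas as pd

df = pd.DataFrame({
    "collection": ["MSS 001", "MSS 002", "MSS 003", "MSS 004"],  # hypothetical
    "in_house_uses": [42, 3, 18, 1],        # reading-room retrievals
    "finding_aid_views": [310, 220, 15, 9], # online finding-aid page views
})

in_house_med = df["in_house_uses"].median()
online_med = df["finding_aid_views"].median()

def quadrant(row):
    """Classify a collection by whether each metric is above its median."""
    high_in = row["in_house_uses"] >= in_house_med
    high_on = row["finding_aid_views"] >= online_med
    if high_in and high_on: return "high/high: digitize first"
    if high_on:             return "online interest: candidate for digitization"
    if high_in:             return "in-house interest: enhance description"
    return "low/low: deprioritize"

df["quadrant"] = df.apply(quadrant, axis=1)
print(df)
```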

    Weaving creativity into the Semantic Web: a language-processing approach

    This paper describes a novel language-processing approach to the analysis of creativity and the development of a machine-readable ontology of creativity. The ontology provides a conceptualisation of creativity in terms of a set of fourteen key components or building blocks and has application to research into the nature of creativity in general and to the evaluation of creative practice in particular. We further argue that the provision of a machine-readable conceptualisation of creativity provides a small but important step towards addressing the problem of automated evaluation, 'the Achilles' heel of AI research on creativity' (Boden 1999).
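    A minimal sketch of how such a machine-readable ontology might be encoded with rdflib: the components become OWL classes under a common parent. The component names below are illustrative stand-ins; the paper's actual fourteen components are not listed in this excerpt.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

CREA = Namespace("http://example.org/creativity#")  # hypothetical namespace
g = Graph()

g.add((CREA.CreativityComponent, RDF.type, OWL.Class))
for name in ["Originality", "Value"]:  # stand-ins, not the paper's list
    cls = CREA[name]
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.subClassOf, CREA.CreativityComponent))
    g.add((cls, RDFS.label, Literal(name)))

print(g.serialize(format="turtle"))
```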

    From Artifacts to Aggregations: Modeling Scientific Life Cycles on the Semantic Web

    In the process of scientific research, many information objects are generated, all of which may remain valuable indefinitely. However, artifacts such as instrument data and associated calibration information may have little value in isolation; their meaning is derived from their relationships to each other. Individual artifacts are best represented as components of a life cycle that is specific to a scientific research domain or project. Current cataloging practices do not describe objects at a sufficient level of granularity, nor do they offer the globally persistent identifiers necessary to discover and manage scholarly products with World Wide Web standards. The Open Archives Initiative's Object Reuse and Exchange data model (OAI-ORE) meets these requirements. We demonstrate a conceptual implementation of OAI-ORE to represent the scientific life cycles of embedded networked sensor applications in seismology and environmental sciences. By establishing relationships between publications, data, and contextual research information, we illustrate how to obtain a richer and more realistic view of scientific practices. That view can facilitate new forms of scientific research and learning. Our analysis is framed by studies of scientific practices in a large, multi-disciplinary, multi-university science and engineering research center, the Center for Embedded Networked Sensing (CENS). Comment: 28 pages. To appear in the Journal of the American Society for Information Science and Technology (JASIST).
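    A minimal rdflib sketch of the OAI-ORE pattern described above: a resource map describes an aggregation that ties a publication, a dataset, and its calibration record into one life-cycle object. The identifiers are hypothetical; only the ORE vocabulary terms (ResourceMap, describes, Aggregation, aggregates) come from the standard.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

ORE = Namespace("http://www.openarchives.org/ore/terms/")
EX = Namespace("http://example.org/cens/")  # hypothetical identifiers

g = Graph()
rem = EX["resource-map"]
agg = EX["deployment-2008"]

g.add((rem, RDF.type, ORE.ResourceMap))
g.add((rem, ORE.describes, agg))
g.add((agg, RDF.type, ORE.Aggregation))
# The aggregation relates artifacts whose meaning depends on each other.
for artifact in ("article", "sensor-data", "calibration-record"):
    g.add((agg, ORE.aggregates, EX[artifact]))

print(g.serialize(format="turtle"))
```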

    A survey of statistical network models

    Networks are ubiquitous in science and have become a focal point for discussion in everyday life. Formal statistical models for the analysis of network data have emerged as a major topic of interest in diverse areas of study, and most of these involve a form of graphical representation. Probability models on graphs date back to 1959. Along with empirical studies in social psychology and sociology from the 1960s, these early works generated an active network community and a substantial literature in the 1970s. This effort moved into the statistical literature in the late 1970s and 1980s, and the past decade has seen a burgeoning network literature in statistical physics and computer science. The growth of the World Wide Web and the emergence of online networking communities such as Facebook, MySpace, and LinkedIn, along with a host of more specialized professional network communities, have intensified interest in the study of networks and network data. Our goal in this review is to provide the reader with an entry point to this burgeoning literature. We begin with an overview of the historical development of statistical network modeling and then introduce a number of examples that have been studied in the network literature. Our subsequent discussion focuses on a number of prominent static and dynamic network models and their interconnections. We emphasize formal model descriptions, and pay special attention to the interpretation of parameters and their estimation. We end with a description of some open problems and challenges for machine learning and statistics. Comment: 96 pages, 14 figures, 333 references.
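    The 1959-era probability models on graphs mentioned above are the Erdős–Rényi random graphs. A minimal sketch of the model and of the parameter-estimation theme of the survey, assuming networkx is available: sample G(n, p), then recover p by maximum likelihood (observed edges over possible edges).

```python
import networkx as nx

n, p = 200, 0.05
G = nx.erdos_renyi_graph(n, p, seed=42)  # each edge present independently w.p. p

possible_edges = n * (n - 1) // 2
p_hat = G.number_of_edges() / possible_edges  # MLE of p
print(f"true p = {p}, estimated p = {p_hat:.4f}")
```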

    Citation and peer review of data: moving towards formal data publication

    This paper discusses many of the issues associated with formally publishing data in academia, focusing primarily on the structures that need to be put in place for peer review and formal citation of datasets. Data publication is becoming increasingly important to the scientific community, as it will provide a mechanism for those who create data to receive academic credit for their work and will allow the conclusions arising from an analysis to be more readily verifiable, thus promoting transparency in the scientific process. Peer review of data will also provide a mechanism for ensuring the quality of datasets, and we provide suggestions on the types of activities one expects to see in the peer review of data. A simple taxonomy of data publication methodologies is presented and evaluated, and the paper concludes with a discussion of dataset granularity, transience, and semantics, along with a recommended human-readable citation syntax.
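    The paper's recommended citation syntax is not reproduced in this abstract; the Python sketch below is an illustrative stand-in assembled from the elements dataset citations conventionally carry (creators, year, title, version, publisher, identifier). All names and identifiers are hypothetical.

```python
def cite_dataset(creators, year, title, publisher, doi, version=None):
    """Assemble a human-readable dataset citation string."""
    names = "; ".join(creators)
    ver = f", Version {version}" if version else ""
    return f"{names} ({year}): {title}{ver}. {publisher}. https://doi.org/{doi}"

print(cite_dataset(["Doe, J.", "Roe, R."], 2011,
                   "Example Surface Temperature Records",  # hypothetical dataset
                   "Example Data Centre", "10.0000/example-doi", version="2.1"))
```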