24 research outputs found

    Building a Global Ecosystem Research Infrastructure to Address Global Grand Challenges for Macrosystem Ecology

    The development of several large, continental-scale ecosystem research infrastructures over recent decades has provided a unique opportunity in the history of ecological science. The Global Ecosystem Research Infrastructure (GERI) is an integrated network of analogous but independent, site-based ecosystem research infrastructures (ERIs) dedicated to better understanding the function and change of indicator ecosystems across global biomes. Bringing these ERIs together, harmonizing their respective data, and reducing uncertainties enables broader cross-continental ecological research. It will also enhance the research community's capabilities to address current, and anticipate future, global-scale ecological challenges. Moreover, increasing the international capabilities of these ERIs goes beyond their original design intent and is an unexpected added value of these large national investments. Here, we identify specific global grand challenge areas and research trends to advance the ecological frontiers across continents that can be addressed through the federation of these cross-continental-scale ERIs. Peer reviewed.

    Symposium & Panel Discussion: Data Citation and Attribution for Reproducible Research in Linguistics

    Slides from the symposium and panel discussion at the event "Data Citation and Attribution for Reproducible Research in Linguistics," Annual Meeting of the Linguistic Society of America, Austin, TX, 5 January 2017. This material is based upon work supported by the National Science Foundation under grant SMA-1447886.

    Ocean FAIR Data Services

    Well-founded data management systems are of vital importance for ocean observing systems as they ensure that essential data are not only collected but also retained and made accessible for analysis and application by current and future users. Effective data management requires collaboration across activities including observations, metadata and data assembly, quality assurance and control (QA/QC), and data publication that enables local and interoperable discovery and access and secures archiving that guarantees long-term preservation. To achieve this, data should be findable, accessible, interoperable, and reusable (FAIR). Here, we outline how these principles apply to ocean data and illustrate them with a few examples. In recent decades, ocean data managers, in close collaboration with international organizations, have played an active role in the improvement of environmental data standardization, accessibility, and interoperability through different projects, enhancing access to observation data at all stages of the data life cycle and fostering the development of integrated services targeted to research, regulatory, and operational users. As ocean observing systems evolve and an increasing number of autonomous platforms and sensors are deployed, the volume and variety of data increase dramatically. For instance, there are more than 70 data catalogs that contain metadata records for the polar oceans, a situation that makes comprehensive data discovery beyond the capacity of most researchers. To better serve research, operational, and commercial users, more efficient turnaround of quality data in known formats and made available through Web services is necessary. In particular, automation of data workflows will be critical to reduce friction throughout the data value chain. 
Adhering to the FAIR principles with free, timely, and unrestricted access to ocean observation data is beneficial for the originators, has obvious benefits for users, and is an essential foundation for the development of new services made possible with big data technologies.

    Recap of Workshop 1

    A recap of the first workshop on Developing Standards for Data Citation and Attribution for Reproducible Research in Linguistics, which was held at the University of Colorado in September 2015. Presented at the second workshop on Developing Standards for Data Citation and Attribution for Reproducible Research in Linguistics, held at the University of Texas, April 8-10, 2016. National Science Foundation (NSF-SMA 1447886).

    Collecting and Preserving Local and Traditional Climate Knowledge

    No full text
    The Exchange for Local Observations and Knowledge of the Arctic (ELOKA) facilitates the collection, preservation, exchange, and use of local observations and knowledge of the Arctic. Local and Traditional Knowledge (LTK) provides rich information about the Arctic that complements data acquired via conventional quantitative data collection methods. ELOKA seeks to make LTK and community observations discoverable and accessible to community members, scientists, educators, policy makers, and the general public.

    01 - Welcome

    No full text
    Outlines the goals of this NSF project, explores the issue of reproducibility in science and linguistics, and explains the distinction between, and importance of, citation and attribution. Presented at the first workshop on Developing Standards for Data Citation and Attribution for Reproducible Research in Linguistics, held at the University of Colorado at Boulder, September 18-20, 2015. This material is based upon work supported by the National Science Foundation under grant SMA-1447886.

    Today's Data are Part of Tomorrow's Research: Archival Issues in the Sciences

    No full text
    Scientific data are essential for training in science and for informed decision-making regarding health, the environment, and the economy. Cumulative data sets assist with understanding trends, frequencies, and patterns, and can form a baseline upon which we can develop predictions. This paper discusses the preservation of scientific data, providing an overview of the characteristics of scientific data and scientific-data portals from a variety of fields, with a focus on data quality, particularly accuracy, reliability, and authenticity, and how these are captured in metadata. These concepts are broadly defined from both scientific and archival perspectives. Based on an extensive literature review of publications from national and international scientific organizations, government and research funding bodies, and empirical evidence from a selection of InterPARES 2 Case Studies and General Study 10, which investigated thirty-two scientific-data portals, the paper includes a brief examination of machine-based "knowledge representation" (KR) and the potential implications for the preservation of scientific data, with a particular focus on formal ontologies. The paper also discusses the concept of the record in the context of Web 2.0 environments, the paucity of scientific-data archives, and the lack of funding priorities in this area. It is argued that archivists will have to work closely with scientific-data creators to understand their practices; that data portals are mechanisms that archivists can use to extend their preservation practices; and that it is not technology that is impeding progress on the preservation of scientific data, but rather a lack of funding, policy, prioritization, and vision that is allowing our national scientific resources to be lost.