
    From citizen science to policy development on the coral reefs of Jamaica

    This paper explores the application of citizen science to the generation of scientific data and to capacity-building, and so to underpinning scientific ideas and policy development in the area of coral reef management, on the coral reefs of Jamaica. From 2000 to 2008, ninety Earthwatch volunteers were trained in coral reef data acquisition and analysis and made over 6,000 measurements on fringing reef sites along the north coast of Jamaica. Their work showed that while recruitment of small corals has been returning after the major bleaching event of 2005, larger corals are not necessarily so resilient and therefore need careful management if the reefs are to survive such extreme events. These findings were used in the development of an action plan for Jamaican coral reefs, presented to the Jamaican National Environmental Protection Agency. It was agreed that a number of themes and tactics need to be implemented in order to facilitate coral reef conservation in the Caribbean. The use of volunteers and citizen scientists from both developed and developing countries can help in forging links which can assist in data collection and analysis and, ultimately, in ecosystem management and policy development.

    The Colour of Ocean Data: International Symposium on oceanographic data and information management, with special attention to biological data. Brussels, Belgium, 25-27 November 2002: book of abstracts

    Ocean data management plays a crucial role in global as well as local matters. The Intergovernmental Oceanographic Commission – with its network of National Oceanographic Data Centres – and the International Council of Scientific Unions – with its World Data Centres – have played a major catalysing role in establishing the existing ocean data management practices. No one can think of data management without thinking of information technology. New developments in computer hardware and software force us to continually rethink the way we manage ocean data. One of the major challenges in this is to try to close the gap between the haves and the have-nots, and to assist scientists in less fortunate countries to manage oceanographic data flows in a suitable and timely fashion. So far, major emphasis has been on the standardisation and exchange of physical oceanographic data in open ocean conditions. But the colour of ocean data is changing. The ‘blue’ ocean sciences are increasingly interested in including geological, chemical and biological data. Moreover, shallow sea areas are receiving more and more attention as highly productive biological areas that need to be seen in close association with the deep seas. How to fill the gap in widely accepted standards for data structures that can serve both deep ‘blue’ and shallow ‘green’ biological data management is a major issue that has to be addressed. And there is more: data have to be turned into information. In the context of ocean data management, scientists, data managers and decision makers are all very much dependent on each other. Decision makers will stimulate research topics with policy priority and hence guide researchers. Scientists need to provide data managers with reliable, quality-controlled data in such a way that the latter can translate them and make them available for the decision makers. But do they speak the same ‘language’? Are they happy with the access they have to the data? And if not, can they learn from each other’s expectations and experience? The objective of this symposium is to harmonize ocean colours and languages and create a forum for data managers, scientists and decision makers with a major interest in oceanography, open to everyone interested in ocean data management.

    FAIR Data Pipeline: provenance-driven data management for traceable scientific workflows

    Modern epidemiological analyses to understand and combat the spread of disease depend critically on access to, and use of, data. Rapidly evolving data, such as data streams changing during a disease outbreak, are particularly challenging. Data management is further complicated by data being imprecisely identified when used. Public trust in policy decisions resulting from such analyses is easily damaged and is often low, with cynicism arising where claims of "following the science" are made without accompanying evidence. Tracing the provenance of such decisions back through open software to primary data would clarify this evidence, enhancing the transparency of the decision-making process. Here, we demonstrate a Findable, Accessible, Interoperable and Reusable (FAIR) data pipeline developed during the COVID-19 pandemic that allows easy annotation of data as they are consumed by analyses, while tracing the provenance of scientific outputs back through the analytical source code to data sources. Such a tool provides a mechanism for the public, and fellow scientists, to better assess the trust that should be placed in scientific evidence, while allowing scientists to support policy-makers in openly justifying their decisions. We believe that tools such as this should be promoted for use across all areas of policy-facing research.
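
    The core idea, tracing an output back to the exact data it consumed, can be illustrated with a minimal generic sketch: record a content hash of every input alongside each output. This is an illustration of the provenance concept only, not the FAIR Data Pipeline's actual API; all function and file names here are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def file_digest(path: Path) -> str:
    """Content hash of a data file, streamed in 1 MB chunks for large inputs."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record_provenance(inputs: list[Path], output: Path, script: str) -> None:
    """Write a JSON provenance record naming the exact inputs behind an output,
    so the result can later be traced back to the data versions it consumed."""
    record = {
        "output": str(output),
        "script": script,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": [{"path": str(p), "sha256": file_digest(p)} for p in inputs],
    }
    output.with_suffix(output.suffix + ".prov.json").write_text(
        json.dumps(record, indent=2))
```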

    Extracting Signal from the Noisy Environment of an Ecosystem

    The collection and storage of environmental and ecological data by researchers, government agencies and stewardship groups over the last decade has been remarkable. The proportionate challenge of this data accretion lies in capitalizing on these resources for significant gain for both stewards and stakeholders. These trends highlight the role of data science as a critical component of the future of data-driven environmental management. Most critical are models of how data scientists can collaborate with policy makers and stewards to offer tools that leverage data and facilitate decisions. Our project shows how a successful collaboration between a management group, the Susquehanna River Basin Commission (SRBC), and an academic group of data scientists yielded such clarifying insight. The mandate of the SRBC is to manage stakeholder requirements while sustaining a healthy ecosystem. The challenge was to differentiate signal events in water quality measurement data from the noisy dynamics of a monitored complex system, in a manner that could be applied to other ecosystems. Through the application of generalized additive models (GAMs), we were able to clarify the relationship between environmental dynamics and two critical biological communities (macroinvertebrates and fish) that live within the watershed. The GAM was sensitive enough to identify signal amid the noise, and flexible enough to operate across the spatial extent of the ecosystem. By identifying signal events, environmental stewards and policy makers will be able to define thresholds that need to be monitored to reduce pollution and raise diversity in the ecosystem.
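
    As a rough illustration of the modelling step, the sketch below fits a GAM with one smooth term per environmental covariate and flags large residuals as candidate signal events. It uses the pygam library on synthetic data; the covariates, response, and threshold are hypothetical stand-ins, not the SRBC analysis itself.

```python
import numpy as np
from pygam import LinearGAM, s  # pip install pygam

# Hypothetical design matrix: columns stand in for water-quality covariates
# (e.g. flow, temperature, conductivity); y stands in for a biological
# response such as a macroinvertebrate diversity index.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=500)

# One smooth term per covariate; gridsearch tunes the smoothing penalties,
# which is what lets the fitted surface absorb slow environmental dynamics.
gam = LinearGAM(s(0) + s(1) + s(2)).gridsearch(X, y)

# Observations the smooth trend cannot explain are candidate signal events.
residuals = y - gam.predict(X)
events = np.abs(residuals) > 2 * residuals.std()
print(f"{events.sum()} candidate signal events out of {len(y)} observations")
```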

    Policy-based SLA storage management model for distributed data storage services

    There is high demand for storage-related services supporting scientists in their research activities. Those services are expected to provide not only capacity but also features allowing for more flexible and cost-efficient usage. Such features include easy multi-platform data access, long-term data retention, and support for differentiating the performance and cost of SLA-restricted data access. The paper presents a policy-based SLA storage management model for distributed data storage services. The model allows for automated management of distributed data aimed at QoS provisioning without strict resource reservation. The problem of providing users with the required QoS is complex, and the model therefore implements a heuristic approach to solving it. The corresponding system architecture, metrics and methods for SLA-focused storage management are developed and tested in a real, nationwide environment.
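
    A policy-based placement decision of this kind can be sketched as a simple greedy heuristic: among backends whose measured latency and free capacity satisfy the SLA, pick the cheapest. This toy sketch is one plausible reading of such an approach, not the paper's actual model; all classes and figures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    latency_ms: float   # measured access latency
    cost_per_gb: float  # monthly storage cost
    free_gb: float      # remaining capacity

@dataclass
class SLA:
    max_latency_ms: float
    size_gb: float

def place(sla: SLA, backends: list[Backend]) -> Backend | None:
    """Greedy heuristic: cheapest backend that currently satisfies the SLA,
    with no strict reservation of the capacity it will consume."""
    feasible = [b for b in backends
                if b.latency_ms <= sla.max_latency_ms and b.free_gb >= sla.size_gb]
    return min(feasible, key=lambda b: b.cost_per_gb, default=None)

backends = [Backend("ssd-pool", 5, 0.10, 200),
            Backend("hdd-pool", 25, 0.03, 5000),
            Backend("tape", 60000, 0.01, 50000)]
print(place(SLA(max_latency_ms=30, size_gb=100), backends).name)  # -> hdd-pool
```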

    Southern African Treatment Resistance Network (SATuRN) RegaDB HIV drug resistance and clinical management database: supporting patient management, surveillance and research in southern Africa

    Substantial amounts of data have been generated from patient management and academic exercises designed to better understand the human immunodeficiency virus (HIV) epidemic and design interventions to control it. A number of specialized databases have been designed to manage huge data sets from HIV cohort, vaccine, host genomic and drug resistance studies. Besides databases from cohort studies, most of the online databases contain limited curated data and are thus sequence repositories. HIV drug resistance has been shown to have great potential to derail the progress made thus far through antiretroviral therapy. Thus, substantial resources have been invested in generating drug resistance data for patient management and surveillance purposes. Unfortunately, most of the data currently available relate to subtype B, even though >60% of the epidemic is caused by HIV-1 subtype C. A consortium of clinicians, scientists, public health experts and policy makers working in southern Africa came together and formed a network, the Southern African Treatment and Resistance Network (SATuRN), with the aim of increasing curated HIV-1 subtype C and tuberculosis drug resistance data. This article describes the HIV-1 data curation process using the SATuRN Rega database. Data curation is a manual and time-consuming process done by clinical, laboratory and data curation specialists. Access to the highly curated data sets is through applications that are reviewed by the SATuRN executive committee. Examples of research outputs from analysis of the curated data include trends in the level of transmitted drug resistance in South Africa, analysis of the levels of acquired resistance among patients failing therapy, and factors associated with the absence of genotypic evidence of drug resistance among patients failing therapy. All these studies have been important for informing first- and second-line therapy. The database is a free, password-protected, open-source database available at www.bioafrica.net.

    How is data science involved in policy analysis?: A bibliometric perspective

    What are the implications of big data in terms of big impacts? Our research focuses on the question: how are data analytics involved in policy analysis to create complementary value? We address this from the perspective of bibliometrics. We initially investigate a set of articles published in Nature and Science, seeking cutting-edge knowledge to sharpen research hypotheses on what data science offers policy analysis. Based on a set of bibliometric models (e.g., topic analysis, scientific evolutionary pathways, and social network analysis), we follow up with studies addressing two aspects: (1) we examine the engagement of data science (including statistical, econometric, and computing approaches) in current policy analyses by analyzing articles published in top-level journals in the areas of political science and public administration; and (2) we examine the development of policy analysis-oriented data analytic models in top-level journals associated with computer science (including both artificial intelligence and information systems). Observations indicate that data science's contribution to policy analysis is still an emerging area. Data scientists are moving further ahead than policy analysts, owing to the technical difficulty of exploiting data analytic models. Integrating artificial intelligence with econometrics is identified as a particularly promising direction.
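
    Of the bibliometric models named above, social network analysis is the easiest to sketch: build a keyword co-occurrence network from article records and rank terms by centrality. The records and keywords below are made up for illustration; only networkx's standard graph API is assumed.

```python
import itertools
import networkx as nx

# Made-up records: author keywords from policy-analysis articles.
articles = [
    ["big data", "public policy", "machine learning"],
    ["machine learning", "econometrics", "causal inference"],
    ["big data", "econometrics", "public policy"],
]

# Co-occurrence network: edge weight counts how often two terms appear together.
G = nx.Graph()
for keywords in articles:
    for u, v in itertools.combinations(sorted(set(keywords)), 2):
        weight = G.get_edge_data(u, v, default={"weight": 0})["weight"]
        G.add_edge(u, v, weight=weight + 1)

# Degree centrality highlights terms that bridge data science and policy analysis.
for term, score in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:3]:
    print(f"{term}: {score:.2f}")
```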

    Towards better integration of environmental science in society: lessons from BONUS, the joint Baltic Sea environmental research and development programme

    Integration of environmental science in society is impeded by the large gap between science and policy that is characterised by weaknesses in societal relevance and dissemination of science and its practical implementation in policy. We analyse experiences from BONUS, the policy-driven joint Baltic Sea research and development programme (2007–2020), which is part of the European Research Area (ERA) and involves combined research funding by eight EU member states. The ERA process decreased fragmentation of Baltic Sea science, and BONUS funding increased the scientific quality and societal relevance of Baltic Sea science and strengthened the science-policy interface. Acknowledging the different drivers for science producers (academic career, need for funding, peer review) and science users (fast results fitting policy windows), and realising that most scientists aim at building conceptual understanding rather than instrumental use, bridges can be built through strategic planning, coordination and integration. This requires strong programme governance stretching far beyond selecting projects for funding, such as coaching, facilitating the sharing of infrastructure and data, and iterative networking within and between science producer and user groups in all programme phases. Instruments of critical importance for successful science-society integration were identified as: (1) coordinating a strategic research agenda with strong inputs from science, policy and management, (2) providing platforms where science and policy can meet, (3) requiring cooperation between scientists to decrease fragmentation, increase quality, clarify uncertainties and increase consensus about environmental problems, (4) encouraging and supporting scientists in disseminating their results through audience-tailored channels, and (5) funding not only primary research but also synthesis projects that evaluate the scientific findings and their practical use in society – in close cooperation with science users – to enhance relevance, credibility and legitimacy of environmental science and expand its practical implementation.

    We're Working On It: Transferring the Sloan Digital Sky Survey from Laboratory to Library

    This article reports on the transfer of a massive scientific dataset from a national laboratory to a university library, and from one kind of workforce to another. We use the transfer of the Sloan Digital Sky Survey (SDSS) archive to examine the emergence of a new workforce for scientific research data management. Many individuals with diverse educational backgrounds and domain experience are involved in SDSS data management: domain scientists, computer scientists, software and systems engineers, programmers, and librarians. These types of positions have been described using terms such as research technologist, data scientist, e-science professional, data curator, and more. The findings reported here are based on semi-structured interviews, ethnographic participant observation, and archival studies from 2011–2013. The library staff conducting the data storage and archiving of the SDSS archive faced two performance problems: slow data transfer and slow verification. The preservation specialist and the system administrator worked together closely to discover and implement solutions, overcoming these slow-downs through problem solving, teamwork, and writing code. The library team lacked the astronomy domain knowledge necessary to meet some of their preservation and curation goals. The case study reveals the variety of expertise, experience, and individuals essential to the SDSS data management process. A variety of backgrounds and educational histories emerge among the data managers studied. Teamwork is necessary to bring disparate expertise together, especially between those with technical and domain education. The findings have implications for data management education and policy and for relevant stakeholders. This article is part of continuing research on Knowledge Infrastructures.
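
    The verification side of such a transfer is typically a fixity check: recompute each file's checksum at the destination and compare it against a manifest from the source. The sketch below shows one generic way to speed this up with parallel hashing; it is an assumption-laden illustration, not the library team's actual code.

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def sha256_of(path: Path) -> tuple[str, str]:
    """Stream a file in 8 MB chunks so multi-gigabyte survey files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8 << 20), b""):
            h.update(chunk)
    return path.name, h.hexdigest()

def verify(manifest: dict[str, str], root: Path) -> list[str]:
    """Hash transferred files in parallel and return the names of any files
    whose digests disagree with the source manifest."""
    paths = [root / name for name in manifest]
    with ProcessPoolExecutor() as pool:
        return [name for name, digest in pool.map(sha256_of, paths)
                if digest != manifest[name]]
```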