106 research outputs found

    Building on Progress - Expanding the Research Infrastructure for the Social, Economic, and Behavioral Sciences. Vol. 1

    The publication provides a comprehensive compendium of the current state of Germany's research infrastructure in the social, economic, and behavioral sciences. In addition, the book presents detailed discussions of the current needs of empirical researchers in these fields and opportunities for future development. The book contains 68 advisory reports by more than 100 internationally recognized authors from a wide range of fields, together with recommendations by the German Data Forum (RatSWD) on how to improve the research infrastructure so as to create ideal conditions for making Germany's social, economic, and behavioral sciences more innovative and internationally competitive. The German Data Forum (RatSWD) has discussed the broad spectrum of issues covered by these advisory reports extensively and has developed general recommendations on how to expand the research infrastructure to meet the needs of scholars in the social and economic sciences.

    Planning for the Lifecycle Management and Long-Term Preservation of Research Data: A Federated Approach

    Outcomes of the grant are archived here. The "data deluge" is a recent but increasingly well-understood phenomenon of scientific and social inquiry. Large-scale research instruments extend our observational power by many orders of magnitude but at the same time generate massive amounts of data. Researchers work feverishly to document and preserve changing or disappearing habitats, cultures, languages, and artifacts, producing volumes of media in various formats. New software tools mine a growing universe of historical and modern texts and connect the dots in our semantic environment. Libraries, archives, and museums undertake digitization programs, creating broad access to unique cultural heritage resources for research. Global-scale research collaborations with hundreds or thousands of participants drive the creation of massive amounts of data, most of which cannot be recreated if lost. The University of Kansas (KU) Libraries, in collaboration with two partners, the Greater Western Library Alliance (GWLA) and the Great Plains Network (GPN), received an IMLS National Leadership Grant designed to leverage collective strengths and create a proposal for a scalable and federated approach to the lifecycle management of research data based on the needs of GPN and GWLA member institutions. (Institute of Museum and Library Services grant LG-51-12-0695-1)

    Provenance of "after the fact" harmonised community-based demographic and HIV surveillance data from ALPHA cohorts

    Background: Metadata, that is, data about data, for describing Health and Demographic Surveillance System (HDSS) data has often received insufficient attention. This thesis studied how to develop provenance metadata within the context of HDSS data harmonisation in the network for Analysing Longitudinal Population-based HIV/AIDS data on Africa (ALPHA). Technologies from the data documentation community were customised, among them a process model, the Generic Longitudinal Business Process Model (GLBPM); two metadata standards, the Data Documentation Initiative (DDI) and the Statistical Data and Metadata eXchange (SDMX); and a data transformation description language, the Structured Data Transform Language (SDTL). Methods: A framework with three complementary facets was used: creating a recipe for annotating primary HDSS data using the GLBPM and DDI; approaches for documenting data transformations, prospectively and retrospectively at the business level using the GLBPM and DDI, and retrospectively recovering the more granular details using SDMX and SDTL; and a requirements analysis for a user-friendly provenance metadata browser. Results: A recipe for the annotation of HDSS data was created, outlining considerations to guide HDSS sites on metadata entry, staff training, and software costs. Regarding data transformations, at the business level a specialised process model for the HDSS domain was created, with algorithm steps for each data transformation sub-process and their data inputs and outputs. At the lower level, SDMX and SDTL captured about 80% (17/21) of the variable-level transformations. The requirements elicitation study yielded requirements to guide developers of a provenance metadata browser. Conclusions: This is a first attempt at creating detailed metadata for this resource or similar resources in this field. HDSS sites can implement these recipes to document their data. This will increase transparency and facilitate reuse, potentially reducing the costs of data management, and it will arguably promote the longevity and wide, accurate use of these data.
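    As a rough illustration of the kind of variable-level transformation record such retrospective documentation aims to capture, the sketch below builds a minimal provenance entry in plain Python. The field names, variable names, and expression are hypothetical and do not follow actual SDTL syntax; this is only a sketch of the general idea.

    ```python
    # Illustrative sketch (not real SDTL): a minimal record documenting how
    # one derived variable was produced. All names here are hypothetical.

    def transform_record(output_var, operation, inputs, expression):
        """Build a simple provenance entry for one derived variable."""
        return {
            "output": output_var,
            "operation": operation,
            "inputs": inputs,
            "expression": expression,
        }

    # Example: documenting the derivation of a hypothetical age-group variable.
    record = transform_record(
        output_var="age_group",
        operation="recode",
        inputs=["age_at_visit"],
        expression="age_group = floor(age_at_visit / 5)",
    )
    print(record["output"], "<-", ", ".join(record["inputs"]))
    ```

    A browser over a collection of such records could then answer questions like "which source variables feed this derived variable?", which is the kind of query the requirements study envisages.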

    Validation Framework for RDF-based Constraint Languages

    In this thesis, a validation framework is introduced that enables RDF-based constraint languages to be executed consistently on RDF data and constraints of any type to be formulated. The framework reduces the representation of constraints to the absolute minimum, is grounded in formal logic, consists of a small, lightweight vocabulary, ensures consistent validation results, and enables constraint transformations for each constraint type across RDF-based constraint languages.
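    To make the general idea concrete, here is a minimal sketch of executing one common constraint type, a cardinality constraint, on RDF data represented as plain (subject, predicate, object) tuples. This illustrates constraint validation in general, not the thesis's framework; the prefixes, data, and function names are invented for the example.

    ```python
    # RDF data modeled as a set of (subject, predicate, object) triples.
    # All identifiers below are hypothetical.
    triples = {
        ("ex:Alice", "ex:name", "Alice"),
        ("ex:Bob", "ex:knows", "ex:Alice"),
    }

    def check_min_count(triples, subject, predicate, min_count):
        """A cardinality constraint: subject must have at least
        min_count values for the given predicate."""
        count = sum(1 for s, p, _ in triples if s == subject and p == predicate)
        return count >= min_count

    # ex:Alice has a name, ex:Bob does not, so the constraint flags ex:Bob.
    print(check_min_count(triples, "ex:Alice", "ex:name", 1))  # True
    print(check_min_count(triples, "ex:Bob", "ex:name", 1))    # False
    ```

    Real constraint languages (e.g. SHACL or ShEx) express such checks declaratively; a framework of the kind described would map each of them onto a common logical core so that the same data yields the same validation result regardless of the language used.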

    Education alignment

    This essay reviews recent developments in embedding data management and curation skills into information technology, library and information science, and research-based postgraduate courses in various national contexts. It also investigates how formal education can be joined up more coherently with professional development training opportunities, and explores the potential of professional internships as a means of improving communication and understanding between disciplines. A key aim of the essay is to identify the level of complementarity needed across disciplines to support the entire data curation lifecycle as effectively and efficiently as possible.

    The Elements of Big Data Value

    This open access book presents the foundations of the Big Data research and innovation ecosystem and the associated enablers that facilitate delivering value from data for business and society. It provides insights into the key elements for research and innovation, technical architectures, business models, skills, and best practices to support the creation of data-driven solutions and organizations. The book is a compilation of selected high-quality chapters covering best practices, technologies, experiences, and practical recommendations on research and innovation for big data. The contributions are grouped into four parts:
    · Part I: Ecosystem Elements of Big Data Value focuses on establishing the big data value ecosystem using a holistic approach to make it attractive and valuable to all stakeholders.
    · Part II: Research and Innovation Elements of Big Data Value details the key technical and capability challenges to be addressed in delivering big data value.
    · Part III: Business, Policy, and Societal Elements of Big Data Value investigates the need to make more efficient use of big data, understanding that data is an asset with significant potential for the economy and society.
    · Part IV: Emerging Elements of Big Data Value explores the elements critical to maximizing the future potential of big data value.
    Overall, readers are provided with insights that can support them in creating data-driven solutions, organizations, and productive data ecosystems. The material represents the results of a collective effort undertaken by the European data community as part of the Big Data Value Public-Private Partnership (PPP) between the European Commission and the Big Data Value Association (BDVA) to boost data-driven digital transformation.

    Sept. 2000
