3 research outputs found

    Biodiversity Databases

    Computing and database management have shifted from cottage industry-style methods — the small independent researcher keeping records for a particular project — to state-of-the-art file storage systems, presentation, and distribution over the Internet. New and emerging techniques for recognition, compilation, and data management have made managing data a discipline in its own right. Covering all aspects of this data management, Biodiversity Databases: Techniques, Politics, and Applications brings together input from social scientists, programmers, database designers, and information specialists to delineate the political setting and give institutions platforms for the dissemination of taxonomic information. A practical and logical guide to complex issues, the book explores the changes and challenges of the information age. It discusses projects developed to provide better access to all available biodiversity information. The chapters make the case for the representation of concepts in taxonomic databases. They explore the issues involved in connecting databases with different user interfaces, the technical demands of linking databases that are not entirely uniform in structure, and the problems of user access and the control of data quality. The book highlights different approaches to addressing concerns associated with the taxonomic impediment and the low reproducibility of taxonomic data. It provides an in-depth examination of the challenge of making taxonomic information more widely available to users in the wider scientific community, in government, and in the general population.

    Design and Implementation of a Research Data Management System: The CRC/TR32 Project Database (TR32DB)

    Research data management (RDM) includes all processes and measures which ensure that research data are well organised, documented, preserved, stored, backed up, accessible, available, and re-usable. Corresponding RDM systems or repositories form the technical framework that supports the collection, accurate documentation, storage, back-up, sharing, and provision of research data created in a specific environment, such as a research group or institution. The measures required to implement an RDM system vary according to the discipline or the purpose of data (re-)use. In the context of RDM, the documentation of research data is an essential duty. It has to be carried out with accurate, standardized, and interoperable metadata to ensure the interpretability, understandability, shareability, and long-lasting usability of the data. RDM is gaining importance as the amount of digital information grows. New technologies make it possible to create ever more digital data, including automatically; consequently, the volume of digital data, spanning big data and small data, is expected to roughly double in size every two years. With regard to e-science, this increase was predicted and termed the data deluge, and the associated paradigm shift in science has led to data-intensive science. Scientific data financed by public funding, in particular, are increasingly required by policy makers, funding agencies, journals, and other institutions to be archived, documented, provided, or even made openly accessible. RDM can prevent the loss of data: without it, around 80-90 % of the research data generated disappear and are not available for re-use or further studies, leading to empty archives or RDM systems. The reasons for this are well known and are of a technical, socio-cultural, and ethical nature, such as missing user participation and data-sharing knowledge, as well as a lack of time or resources. 
In addition, the fear of exploitation and the missing or limited reward for publishing and sharing data play an important role. This thesis presents an approach to handling the research data of the collaborative, multidisciplinary, long-term DFG-funded research project Collaborative Research Centre/Transregio 32 (CRC/TR32) “Patterns in Soil-Vegetation-Atmosphere Systems: Monitoring, Modelling, and Data Assimilation”. In this context, an RDM system, the so-called CRC/TR32 project database (TR32DB), was designed and implemented. The TR32DB considers the demands of the project participants (e.g. heterogeneous data from different disciplines with various file sizes) and the requirements of the DFG, as well as general challenges in RDM. For this purpose, an RDM system was established that comprises a well-described, self-designed metadata schema, file-based data storage, a well-elaborated metadata database, and a corresponding user-friendly web interface. The whole system was developed in close cooperation with the Regional Computing Centre of the University of Cologne (RRZK), where it is also hosted. The documentation of the research data with accurate metadata is of key importance. For this purpose, a specific TR32DB Metadata Schema was designed, consisting of multi-level metadata properties. It distinguishes between general and data-type-specific (e.g. data, publication, report) properties and was developed according to the project background, the demands of the various data types, and recent associated metadata standards and principles. Consequently, it is interoperable with recent metadata standards, such as Dublin Core, the DataCite Metadata Schema, and core elements of the ISO 19115:2003 Metadata Standard and the INSPIRE Directive. Furthermore, the schema supports optional, mandatory, and automatically generated metadata properties, and provides predefined, obligatory, and self-established controlled vocabulary lists. 
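A schema of this kind — mandatory, optional, and controlled-vocabulary properties — can be illustrated with a minimal validation sketch. The property names and vocabulary values below are simplified assumptions for illustration; the actual TR32DB Metadata Schema is far more extensive.

```python
# Hypothetical sketch of validating a record against a simplified metadata
# schema with mandatory, optional, and controlled-vocabulary properties.
# Property names are illustrative, not the real TR32DB schema.

MANDATORY = {"title", "creator", "dataType"}
OPTIONAL = {"description", "keywords"}
CONTROLLED_VOCAB = {"dataType": {"data", "publication", "report"}}

def validate(record: dict) -> list:
    """Return a list of validation errors for a metadata record."""
    errors = []
    for prop in sorted(MANDATORY):
        if prop not in record:
            errors.append("missing mandatory property: " + prop)
    for prop, allowed in CONTROLLED_VOCAB.items():
        if prop in record and record[prop] not in allowed:
            errors.append("invalid value for " + prop + ": " + record[prop])
    for prop in sorted(set(record) - MANDATORY - OPTIONAL):
        errors.append("unknown property: " + prop)
    return errors
```

A complete record with an allowed `dataType` passes with no errors; a record missing `creator` or using an out-of-vocabulary value is flagged.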
The integrated mapping to the DataCite Metadata Schema facilitates the straightforward assignment of a Digital Object Identifier (DOI) to a dataset. The file-based data storage is organized in a folder system that corresponds to the structure of the CRC/TR32 and additionally distinguishes between several data types (e.g. data, publication, report). It is embedded in the Andrew File System hosted by the RRZK. The file system is capable of storing and backing up all data, is highly scalable, supports location independence, and enables easy administration through Access Control Lists. In addition, the relational database management system MySQL stores the metadata according to the previously mentioned TR32DB Metadata Schema, as well as further necessary administrative data. A user-friendly web-based graphical user interface provides access to the TR32DB system. The web interface supports metadata input, search, and download of data, while the visualization of important geodata is handled by an internal WebGIS. This web interface, like the entire RDM system, is self-developed and adjusted to the specific demands. Overall, the TR32DB system was developed according to the needs and requirements of the CRC/TR32 scientists, fits the demands of the DFG, and also considers general problems and challenges of RDM. With regard to the changing demands of the CRC/TR32 and to technological advances, the system is continuously being developed further. The established TR32DB approach has already been successfully applied to another interdisciplinary research project. Thus, the approach is transferable and generally capable of archiving all data generated by the CRC/TR32 with accurate, interoperable metadata, ensuring the re-use of the data beyond the end of the project.
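The kind of schema-to-schema mapping described above can be sketched as a simple key translation. The internal field names below are hypothetical; `publicationYear` and `resourceTypeGeneral` are actual DataCite property names, but this is an illustrative sketch, not the TR32DB implementation.

```python
# Hedged sketch: translating a simplified internal metadata record into
# DataCite-style keys, e.g. as a step before DOI registration. The internal
# key names ("title", "creator", ...) are assumptions for illustration.

def to_datacite(record: dict) -> dict:
    """Map internal metadata keys to DataCite-style keys."""
    mapping = {
        "title": "titles",
        "creator": "creators",
        "year": "publicationYear",
        "dataType": "resourceTypeGeneral",
    }
    # Only keys present in the record are carried over.
    return {external: record[internal]
            for internal, external in mapping.items()
            if internal in record}
```

Such a static mapping table is what makes the DOI workflow "simple" in the sense the abstract describes: once metadata pass schema validation, the external representation is a mechanical translation.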

    From Data to Knowledge in Secondary Health Care Databases

    The advent of big data in health care is a topic receiving increasing attention worldwide. In the UK, over the last decade, the National Health Service (NHS) Programme for Information Technology has boosted big data by introducing electronic infrastructures in hospitals and GP practices across the country. This ever-growing amount of data promises to expand our understanding of services, processes, and research. Potential benefits include reduced costs, optimisation of services, knowledge discovery, and patient-centred predictive modelling. This thesis explores the above by studying over ten years' worth of electronic data and systems in a hospital treating over 750 thousand patients a year. The hospital's information systems store routinely collected data, used primarily by health practitioners to support and improve patient care. This raw data is recorded on several different systems but rarely linked or analysed. This thesis explores the secondary uses of such data by undertaking two case studies, one on prostate cancer and another on stroke. The journey from data to knowledge is made in each of the studies by traversing critical steps: data retrieval, linkage, integration, preparation, mining, and analysis. Throughout, novel methods and computational techniques are introduced and the value of routinely collected data is assessed. In particular, this thesis discusses in detail the methodological aspects of developing clinical data warehouses from routine heterogeneous data, and it introduces methods to model, visualise, and analyse the journeys that patients take through care. This work has provided lessons in hospital IT provision, integration, visualisation, and analytics of complex electronic patient records and databases, and has enabled the use of raw routine data for management decision-making and clinical research in both case studies.
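The linkage step — merging events recorded on separate hospital systems into a per-patient, time-ordered journey — can be sketched minimally. The system names, field names, and events below are entirely hypothetical, not the thesis's actual data model.

```python
# Illustrative sketch of linking records from several systems by a shared
# patient identifier and ordering them into per-patient journeys.
# Source systems, fields, and events here are invented for illustration.
from collections import defaultdict
from datetime import date

def build_journeys(*event_sources):
    """Merge event lists from several systems into date-ordered journeys."""
    journeys = defaultdict(list)
    for source in event_sources:
        for event in source:
            journeys[event["patient_id"]].append(event)
    for events in journeys.values():
        events.sort(key=lambda e: e["date"])  # chronological order
    return dict(journeys)

# Two hypothetical source systems sharing a patient identifier.
pathology = [{"patient_id": 1, "date": date(2010, 3, 1), "event": "biopsy"}]
admissions = [{"patient_id": 1, "date": date(2010, 2, 10), "event": "referral"},
              {"patient_id": 2, "date": date(2011, 5, 4), "event": "admission"}]
```

In practice, linkage across real hospital systems also has to cope with inconsistent identifiers, duplicates, and missing timestamps, which is where the data preparation step described above does most of its work.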