9 research outputs found

    Visible evidence of invisible quality dimensions and the role of data management

    Past research has shown that data reusers are concerned with data quality and with identifying its attributes. While data reusers find evidence of these attributes when assessing data for reuse, there may be other dimensions of data quality that reusers care about but that are not always visible to them. This study explores these invisible dimensions of data quality as identified by data reusers. The findings indicate that data reusers are concerned with two kinds of invisible characteristics when assessing data: the effort put into the data, and the ethics behind the data. While these quality dimensions cannot easily be measured at face value, data reusers find proxy evidence that indicates their presence. This finding underscores the role of data management in making these invisible data qualities visible.

    Metadata quality issues in learning repositories

    Metadata lies at the heart of every digital repository project: it defines and drives the description of the digital content stored in repositories, allowing content to be successfully stored, managed, and retrieved, but also preserved in the long term. Despite the widely recognized importance of metadata in digital repositories, studies indicate that metadata quality is relatively low in most of them. Metadata quality is loosely defined as "fitness for purpose," meaning that low-quality metadata cannot fulfill its purpose, which is to allow the successful storage, management, and retrieval of resources. In practice, low metadata quality leads to ineffective searches for content that recall the wrong resources or, even worse, no resources at all, making them invisible to the intended user, the "client" of each digital repository.

    The present dissertation approaches this problem by proposing a comprehensive metadata quality assurance method, the Metadata Quality Assurance Certification Process (MQACP): a set of methods that can be deployed throughout the lifecycle of a repository to ensure that metadata generated by content providers are of high quality. These methods have to be straightforward and simple to apply, with measurable results, and adaptable with minimum effort so that they can easily be used in different contexts. The set of methods is described analytically, taking into account the actors needed to apply them, the tools required, and the anticipated outcomes.

    To test the proposal, we applied it to a Learning Federation of repositories from day one of its existence until it reached maturity and regular operation. We supported the metadata creation process throughout the different phases of the repositories involved by setting up specific experiments using the methods and tools of the MQACP. In each phase, we measured the resulting metadata quality to certify that the anticipated improvement actually took place, and we measured the cost of applying the MQACP to provide a comparison basis for future applications. Based on the success of this first application, we validated the MQACP by applying it to two further cases, a Cultural and a Research Federation of repositories, to demonstrate the transferability of the approach to cases that present some similarities to the initial one but also significant differences. The results showed that the MQACP was successfully adapted to the new contexts with minimal adaptation, producing similar results at comparable costs. In addition, looking closely at the common experiments carried out in each phase of each use case, we identified interesting patterns in the behavior of content providers that merit further research. The dissertation closes with a set of future research directions arising from the cases examined, concerning the methods deployed, the tools used to assess metadata quality, and the cost analysis of the MQACP methods.
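    Measured metadata quality of the kind described above is often operationalised through simple field-level metrics. As a minimal sketch (not the MQACP's actual instrument; the field names and weights are illustrative assumptions), a weighted completeness score over Dublin Core-style fields might look like this:

    ```python
    # Illustrative sketch only: a weighted completeness score for one metadata
    # record, in the spirit of "fitness for purpose" metrics. Field names and
    # weights are hypothetical, not the MQACP's actual instrument.

    # Hypothetical weights: discovery-critical fields count more.
    FIELD_WEIGHTS = {
        "title": 3.0,
        "description": 3.0,
        "subject": 2.0,
        "creator": 2.0,
        "language": 1.0,
        "rights": 1.0,
    }

    def completeness(record: dict) -> float:
        """Return a 0..1 weighted completeness score for one metadata record."""
        total = sum(FIELD_WEIGHTS.values())
        filled = sum(
            weight
            for field, weight in FIELD_WEIGHTS.items()
            if str(record.get(field, "")).strip()  # field present and non-empty
        )
        return filled / total

    # Example: a record missing 'subject' and 'rights'.
    record = {
        "title": "Soil moisture measurements, 2014-2016",
        "description": "Hourly soil moisture from 12 field stations.",
        "creator": "Example Research Group",
        "language": "en",
    }
    print(f"completeness = {completeness(record):.2f}")  # 0.75 with these weights
    ```

    A score like this can be recorded per repository phase, which is one simple way the improvement an intervention is supposed to deliver can be made visible over time.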

    How is encyclopaedia authority established?

    I embarked on this research because I wanted to explore the basis of textual authority. Such an understanding is particularly important in a world with such an overload of information that it is a challenge for the public to identify which publications to choose when looking for specific information. I decided to look at the case of encyclopaedias because of the widespread belief that encyclopaedias are the ultimate authorities, and because, beyond the research on Wikipedia, the scientific community seems to overlook encyclopaedias despite the role they play as key sources of information for the general public.

    Two theories are combined to serve as a framework for the thesis: the theory of cognitive authority as defined by Józef Maria Bocheński, Richard De George, and Patrick Wilson, and the theory of quality as defined in the various frameworks recommended by librarians and information scientists for assessing the quality of reference works. These two theoretical frameworks are used to deconstruct the concept of authority and to highlight aspects of authority that may be particularly worthy of investigation. The thesis comprises the following studies: (1) a literature review of the origin and evolution of encyclopaedia authority throughout the history of the encyclopaedia, (2) a review of previous research on the quality and authority of Wikipedia, (3) an analysis of the publishing and dissemination of science and technology encyclopaedias published in the 21st century across worldwide libraries, (4) a survey of encyclopaedia authors' perspectives on the role of encyclopaedias in society and on the communication of scientific uncertainties and controversies, and (5) an analysis of book reviews towards a general assessment of encyclopaedia quality.

    The thesis illustrates how a concept such as authority, typically taken for granted, can be more complex and more problematic than it appears, thereby challenging widespread beliefs in society. In particular, it pinpoints potential contradictions regarding the importance of authors and publishers in ensuring encyclopaedia authority. On a theoretical level, the thesis revisits the concept of cognitive authority and initiates a discussion on the complex interaction between authority and quality. On a more pragmatic level, it contributes towards the creation of guidelines for encyclopaedia development. As an exploratory study, it also identifies a range of areas that should be priorities for future research.

    Überlegungen zu einem Bewertungssystem für Forschungsdatenpublikationen unter Einbezug der FAIR-Prinzipien [Considerations for an evaluation system for research data publications incorporating the FAIR principles]

    Research data are an essential component of the cycle of science. They are one of the most fundamental foundations of research and thus the starting point for new insights and innovation, and thereby, not least, a basis for economic progress. Equally, research data can, by virtue of their uniqueness (e.g. observational data), serve as windows into the past and help us anticipate the future. Data are accordingly described as one of the raw materials of the 21st century and are increasingly coming into the focus of various stakeholders. Particularly in publicly funded science and research, researchers face growing demands to make the research data arising from funded work openly available under the paradigm of Open Science. This creates major challenges for data producers, because preparing research data to publication quality, so that third parties can find and meaningfully reuse them, involves considerable additional effort under severely limited time resources. Recognising this additional work in the evaluation of scholarly output could provide an important incentive to undertake it. This thesis investigates what an evaluation system for research data publications could look like, taking a critical view of currently used publication-related metrics and indicators. In particular, it discusses whether the FAIR principles can be operationalised in developing a framework for the quality of data publications. Finally, the idea of a "Data Score" is presented. The thesis focuses on data publications in the geosciences and accordingly draws on the expertise of a segment of the German geoscience community through a survey.
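    To give a concrete flavour of how the FAIR principles might be operationalised into a "Data Score", here is a toy sketch in which each principle is reduced to a few binary checks and the score is their mean. The specific checks, field names, and weighting are illustrative assumptions, not the scheme developed in the thesis:

    ```python
    # Hypothetical sketch of a FAIR-based "Data Score": each principle is
    # reduced to a few binary checks and the score is the mean pass rate.
    # The checks are illustrative assumptions, not the thesis's scheme.

    def data_score(pub: dict) -> float:
        """Return a 0..1 score from simple FAIR-inspired checks on a publication."""
        checks = {
            # Findable: persistent identifier and descriptive keywords.
            "F": [bool(pub.get("doi")), bool(pub.get("keywords"))],
            # Accessible: retrievable via a standard protocol.
            "A": [str(pub.get("access_url", "")).startswith("https://")],
            # Interoperable: open, documented format.
            "I": [pub.get("format") in {"csv", "netcdf", "json"}],
            # Reusable: licence and provenance stated.
            "R": [bool(pub.get("license")), bool(pub.get("provenance"))],
        }
        per_principle = [sum(c) / len(c) for c in checks.values()]
        return sum(per_principle) / len(per_principle)

    pub = {
        "doi": "10.0000/example",          # hypothetical DOI
        "keywords": ["geoscience", "soil"],
        "access_url": "https://repo.example.org/ds/42",
        "format": "csv",
        "license": "CC-BY-4.0",
        # no provenance recorded -> R scores 0.5
    }
    print(f"data score = {data_score(pub):.2f}")  # (1 + 1 + 1 + 0.5) / 4 = 0.88
    ```

    Real proposals would need discipline-specific checks and community-agreed weights; the point of the sketch is only that FAIR lends itself to this kind of checklist-style quantification.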

    Research Data Curation and Management Bibliography

    This e-book includes over 800 selected English-language articles and books that are useful in understanding the curation of digital research data in academic and other research institutions. It covers topics such as research data creation, acquisition, metadata, provenance, repositories, management, policies, support services, funding agency requirements, open access, peer review, publication, citation, sharing, reuse, and preservation. It has live links to included works. Abstracts are included in this bibliography if a work is under certain Creative Commons Licenses. This book is licensed under a Creative Commons Attribution 4.0 International License. Cite as: Bailey, Charles W., Jr. Research Data Curation and Management Bibliography. Houston: Digital Scholarship, 2021.

    Acquiring High Quality Research Data

    No full text
    At present, data publication is one of the most dynamic topics in e-Research. While the fundamental problems of electronic text publication have been solved in the past decade, standards for the external and internal organisation of data repositories are advanced in some research disciplines but underdeveloped in others. We discuss the differences between an electronic text publication and a data publication, and the challenges these differences pose for the data publication process. We place the data publication process in the context of the human knowledge spiral and discuss key factors for the successful acquisition of research data from the point of view of a data repository. For the relevant activities of the publication process, we list some of the measures and best practices of successful data repositories.
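    To make the contrast between a text publication and a data publication concrete, the sketch below lists descriptive elements a data publication record typically adds on top of a classic bibliographic record (format, size, versioning, provenance). The field set is an illustrative assumption loosely modelled on common repository schemas, not a scheme prescribed by the paper:

    ```python
    # Illustrative sketch: fields a data publication record typically needs
    # beyond a classic text publication. The field set is an assumption
    # loosely modelled on common repository schemas, not the paper's schema.
    from dataclasses import dataclass

    @dataclass
    class TextPublication:
        title: str
        authors: list[str]
        year: int
        doi: str

    @dataclass
    class DataPublication(TextPublication):
        # Extra organisation that data repositories must standardise:
        file_format: str = "csv"          # open, documented format
        size_bytes: int = 0               # supports access planning
        version: str = "1.0"              # datasets evolve; texts rarely do
        provenance: str = ""              # how the data were produced
        related_paper_doi: str = ""       # link back to the text publication

    ds = DataPublication(
        title="Hourly soil moisture, 12 stations",
        authors=["Example Research Group"],
        year=2016,
        doi="10.0000/example-data",       # hypothetical DOI
        file_format="netcdf",
        version="2.1",
        provenance="Sensor network, calibrated against lab samples.",
    )
    print(ds.version, ds.file_format)
    ```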