    Experiences in deploying metadata analysis tools for institutional repositories

    Current institutional repository software provides few tools to help metadata librarians understand and analyze their collections. In this article, we compare and contrast metadata analysis tools that were developed simultaneously, but independently, at two New Zealand institutions during a period of national investment in research repositories: the Metadata Analysis Tool (MAT) at The University of Waikato, and the Kiwi Research Information Service (KRIS) at the National Library of New Zealand. The tools have many similarities: they are convenient, online, on-demand services that harvest metadata using OAI-PMH; they were developed in response to feedback from repository administrators; and they both help pinpoint specific metadata errors as well as generate summary statistics. They also have significant differences: one is a dedicated tool whereas the other is part of a wider access tool; one gives a holistic view of the metadata whereas the other looks for specific problems; one seeks patterns in the data values whereas the other checks that those values conform to metadata standards. Both tools work in a complementary manner to existing Web-based administration tools. We have observed that discovery and correction of metadata errors can be quickly achieved by switching Web browser views from the analysis tool to the repository interface, and back. We summarize the findings from both tools' deployment into a checklist of requirements for metadata analysis tools.
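
    The abstract does not spell out implementation detail, but both tools depend on the same first step: harvesting unqualified Dublin Core records over OAI-PMH before any analysis can happen. The sketch below only illustrates that step, assuming a repository that answers ListRecords requests with the standard oai_dc metadata prefix; the endpoint URL and function name are placeholders, not part of either MAT or KRIS.

```python
# Minimal OAI-PMH harvesting sketch (illustrative only).
# Assumptions: the repository answers ListRecords with the standard
# oai_dc prefix; the endpoint URL below is a placeholder.
import requests
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest_dc_records(base_url):
    """Yield each record as a dict mapping DC element name -> list of values."""
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    while True:
        root = ET.fromstring(requests.get(base_url, params=params).content)
        for record in root.iter(OAI + "record"):
            fields = {}
            for elem in record.iter():
                if elem.tag.startswith(DC) and elem.text:
                    name = elem.tag[len(DC):]           # e.g. "title", "creator"
                    fields.setdefault(name, []).append(elem.text.strip())
            yield fields
        # Follow the resumption token until the repository stops issuing one.
        token = root.find(f"{OAI}ListRecords/{OAI}resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

if __name__ == "__main__":
    for rec in harvest_dc_records("https://repository.example.org/oai"):  # placeholder URL
        print(rec.get("title", ["<no title>"])[0])
```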

    A tool for metadata analysis

    We describe a Web-based metadata quality tool that provides statistical descriptions and visualisations of Dublin Core metadata harvested via the OAI protocol. The lightweight nature of the development allows the tool to be used to gather contextualised requirements, and some initial user feedback is discussed.
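
    As a rough illustration of the kind of statistical description such a tool can produce, the sketch below computes per-element completeness and distinct-value counts over harvested Dublin Core records. It assumes records shaped like the dictionaries yielded by the harvesting sketch earlier in this list; the element names in the sample data are illustrative, not taken from the paper.

```python
# Sketch of per-element summary statistics over harvested Dublin Core records.
# Assumption: `records` is an iterable of dicts mapping element name -> list of
# values, like those yielded by the harvesting sketch earlier in this list.
from collections import Counter

def dc_field_summary(records):
    """Return (record count, per-element occurrence counts, distinct-value counts)."""
    total = 0
    present = Counter()   # records containing each element at least once
    values = {}           # element -> set of distinct values seen
    for rec in records:
        total += 1
        for name, vals in rec.items():
            present[name] += 1
            values.setdefault(name, set()).update(vals)
    return total, present, {name: len(vs) for name, vs in values.items()}

if __name__ == "__main__":
    sample = [  # illustrative records, not data from the paper
        {"title": ["A study"], "creator": ["Smith, J."]},
        {"title": ["Another study"], "date": ["2009"]},
    ]
    total, present, distinct = dc_field_summary(sample)
    for name in sorted(present):
        pct = 100.0 * present[name] / total
        print(f"{name:10s} {pct:5.1f}% of records, {distinct[name]} distinct values")
```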

    Mapping the Metadata challenges in Libraries: A systematic review

    Background. In the information and knowledge world, libraries have always played an important role and have been early adopters of new techniques and technologies for the dissemination of information. Purpose. From a researcher's perspective, metadata is exploratory in nature: it provides guidance to the further data, which is explanatory. Many metadata challenges affect the execution and accessibility of relevant data. These challenges must be brought together in one place so that LIS professionals with an interest in metadata can understand the challenges and hurdles facing libraries. This study was therefore conducted to identify the challenges of metadata and to synthesise them from the scattered literature for readers. Design/methodology/approach. A systematic literature review approach was followed. Thirteen papers were selected to identify the challenges faced by libraries concerning metadata. Findings. In this systematic review, 85 challenges were found in the scholarly published literature and categorised into 19 categories according to their nature and similarity. Further, general challenges and project-based challenges are presented separately. Practical implications. Through this study, the scattered metadata challenges faced by libraries are grouped together to address a gap in the available information. This paper adds to the existing literature through its comprehensiveness.

    From Idea to Functional ETD: Experiences from the University of Novi Sad, Serbia

    This paper reviews the different phases of introducing and using Electronic Theses and Dissertations (ETD) at the University of Novi Sad, with special emphasis on specific requirements, challenges, and further directions for the development and use of ETD systems at the University.

    Selecting Link Resolver and Knowledge Base Software: Implications of Interoperability

    Link resolver software and their associated knowledge bases are essential technologies for modern academic libraries. However, because of the increasing number of possible integrations involving link resolver software and knowledge bases, a library’s vendor relationships, product choices, and consortial arrangements may have the most dramatic effects on the user experience and back-end maintenance workloads. A project team at a large comprehensive university recently investigated link resolver products in an attempt to increase the efficiency of back-end workflows while maintaining or improving the patron experience. The methodology used for product comparison may be useful for other libraries.
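
    The abstract stays at the level of product comparison, but the underlying mechanism is worth recalling: link resolvers are typically driven by OpenURL (NISO Z39.88) requests that carry citation metadata as key-encoded values. The sketch below only illustrates how such a request is assembled; the resolver base URL and the sample citation are placeholders, not systems or sources evaluated by the project team.

```python
# Sketch of assembling an OpenURL 1.0 (KEV) request for a journal article.
# The resolver base URL and the citation values are placeholders; the key
# names follow the info:ofi/fmt:kev:mtx:journal metadata format.
from urllib.parse import urlencode

def build_openurl(resolver_base, **citation):
    params = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    }
    # Map plain keys (atitle, jtitle, volume, spage, date, issn, ...) to rft.* keys.
    params.update({f"rft.{key}": value for key, value in citation.items()})
    return resolver_base + "?" + urlencode(params)

if __name__ == "__main__":
    print(build_openurl(
        "https://resolver.example.edu/openurl",   # hypothetical resolver endpoint
        atitle="An example article",
        jtitle="An Example Journal",
        date="2009",
    ))
```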

    Dynamics of Typographical Error Reproduction: Optimising Formal Correctness in Three Specialised Bilingual Dictionaries

    It is only through an extreme concern for accuracy and the understanding of typographical errors that authors can turn specialised dictionaries into high quality reference works. This paper describes patterns of typographical error reproduction in three specialised English-Spanish dictionaries. We approach intratextual error reproduction (within a particular dictionary), either through related subentries or through non-related subentries. In addition, we compare the frequency of errors between dictionaries written by institutional lexicographers and works written by freelance professionals. The purpose is to provide a model for typographical error detection and analysis that may contribute to formal correctness in reference works. The reason is twofold: a) dictionaries are expected to be high-standard primary tools for language professionals; b) data quality is essential for a wide variety of utilities, ranging from dictionary writing systems and writing assistants to corpus tools.
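
    The abstract promises a model for typographical error detection but does not describe it; as a purely illustrative stand-in, the sketch below flags rare tokens that closely resemble frequent ones, one simple heuristic for surfacing candidate typos in subentry text. The frequency threshold, similarity cutoff, and sample data are assumptions and do not reflect the paper's actual method.

```python
# Illustrative heuristic for surfacing candidate typographical errors:
# flag rare tokens that closely resemble frequent ones. This is NOT the
# detection model from the paper; thresholds and sample data are assumptions.
from collections import Counter
from difflib import get_close_matches

def candidate_typos(entries, min_freq=5, cutoff=0.85):
    """entries: iterable of subentry strings. Returns {suspect: likely intended}."""
    counts = Counter(word.lower() for text in entries for word in text.split())
    frequent = [word for word, count in counts.items() if count >= min_freq]
    suspects = {}
    for word, count in counts.items():
        if count < min_freq:
            close = get_close_matches(word, frequent, n=1, cutoff=cutoff)
            if close and close[0] != word:
                suspects[word] = close[0]
    return suspects

if __name__ == "__main__":
    sample = ["electrical engineering"] * 6 + ["electrcal engineering"]
    print(candidate_typos(sample))   # {'electrcal': 'electrical'}
```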

    Institutional Repositories in Scholarly Communication: a literature review on models, issues and current trends

    This work reports on relevant sources about institutional repositories (IRs) and on some references about the environment they emerged from. It gives an overview of the causes, consequences, and impact of IR adoption in the scholarly communication channel and tries to understand current trends in changing scholarly communication models through IRs. It provides a critical overview of the benefits, as well as the obstacles, problems, and issues that must be faced in developing IRs, and offers a deeper understanding of the role librarians play in the implementation, management, and advocacy of IRs.

    Metadata Quality for Digital Libraries

    The quality of metadata in a digital library is an important factor in ensuring access for end-users. Several studies have tried to define quality frameworks and assess metadata, but there is little user feedback about these in the literature. As collections grow in size, maintaining quality through manual methods becomes increasingly difficult for repository managers. This research presents the design and implementation of a web-based metadata analysis tool for digital repositories. The tool is built as an extension to the Greenstone3 digital library software. We present examples of the tool in use on real-world data and provide feedback from repository managers. The evidence from our studies shows that automated quality analysis tools are a useful and valued service for digital libraries.
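
    The abstract does not enumerate the individual checks the Greenstone3 extension performs, so the sketch below shows only the general shape of a record-level quality check: a list of required elements and a simple date-format rule, both assumed here for illustration rather than taken from the tool.

```python
# Minimal sketch of record-level quality checks a repository manager might run.
# The required-element list and the date rule are assumptions for illustration,
# not the checks implemented by the Greenstone3 extension described above.
import re

REQUIRED = ("title", "creator", "date")                  # assumed local policy
W3CDTF = re.compile(r"^\d{4}(-\d{2}(-\d{2})?)?$")        # YYYY, YYYY-MM or YYYY-MM-DD

def check_record(identifier, fields):
    """fields: dict of DC element -> list of values. Returns problem descriptions."""
    problems = [f"missing {name}" for name in REQUIRED if not fields.get(name)]
    problems += [f"malformed date: {value!r}"
                 for value in fields.get("date", []) if not W3CDTF.match(value)]
    return [f"{identifier}: {p}" for p in problems]

if __name__ == "__main__":
    print(check_record("oai:example:1",   # placeholder identifier and record
                       {"title": ["A study"], "date": ["12/03/2009"]}))
```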