Metadata quality issues in learning repositories
Metadata lies at the heart of every digital repository project: it defines and drives the description of the digital content stored in the repositories. Metadata allows content to be successfully stored, managed and retrieved, but also preserved in the long term. Despite the widely recognized importance of metadata in digital repositories, studies indicate that metadata quality is relatively low in most digital repositories. Metadata quality is loosely defined as "fitness for purpose", meaning that low-quality metadata cannot fulfill its purpose, which is to allow for the successful storage, management and retrieval of resources. In practice, low metadata quality leads to ineffective searches that recall the wrong resources or, even worse, no resources at all, rendering content invisible to the intended user, the "client" of each digital repository. The present dissertation approaches this problem by proposing a comprehensive metadata quality assurance method, the Metadata Quality Assurance Certification Process (MQACP). The basic idea of this dissertation is to propose a set of methods that can be deployed throughout the lifecycle of a repository to ensure that the metadata generated by content providers is of high quality. These methods have to be straightforward and simple to apply, with measurable results. They also have to be adaptable with minimum effort, so that they can easily be used in different contexts. This set of methods is described analytically, taking into account the actors needed to apply them, the tools required and the anticipated outcomes. To test our proposal, we applied it to a Learning Federation of repositories, from the first day of its existence until it reached maturity and regular operation.
We supported the metadata creation process throughout the different phases of the repositories involved by setting up specific experiments using the methods and tools of the MQACP. In each phase, we measured the resulting metadata quality to certify that the anticipated improvement actually took place. Across these phases, the cost of applying the MQACP was also measured, to provide a comparison basis for future applications. Based on the success of this first application, we validated the MQACP approach by applying it to two further cases: a Cultural Federation and a Research Federation of repositories. This allowed us to demonstrate the transferability of the approach to cases that share some similarities with the initial one but differ from it in significant ways. The results showed that the MQACP was successfully adapted to the new contexts with minimal adaptation, producing similar results at comparable costs. In addition, by looking more closely at the common experiments carried out in each phase of each use case, we identified interesting patterns in the behavior of content providers that can be researched further. The dissertation concludes with a set of future research directions arising from the cases examined. These directions can be explored to support the next version of the MQACP in terms of the methods deployed, the tools used to assess metadata quality, and the cost analysis of the MQACP methods.
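The abstract does not spell out how metadata quality is scored, but a common "fitness for purpose" proxy in repository work is field completeness. The sketch below is purely illustrative, assuming a flat record with hypothetical Dublin-Core-like element names; it is not the actual MQACP metric or schema.

```python
# Illustrative completeness score over a metadata record.
# Field names are hypothetical Dublin-Core-like elements, not the MQACP schema.

REQUIRED = ["title", "description", "creator", "language", "rights"]
OPTIONAL = ["subject", "format", "date"]

def completeness(record, required=REQUIRED, optional=OPTIONAL,
                 w_req=0.8, w_opt=0.2):
    """Weighted share of filled required and optional fields, in [0, 1]."""
    filled = lambda fields: sum(1 for f in fields if str(record.get(f, "")).strip())
    req_score = filled(required) / len(required)
    opt_score = filled(optional) / len(optional)
    return w_req * req_score + w_opt * opt_score

record = {"title": "Intro to Metadata", "creator": "A. Author",
          "language": "en", "subject": "metadata"}
print(round(completeness(record), 2))  # 3/5 required, 1/3 optional -> 0.55
```

Weighting required elements more heavily reflects the idea that their absence hurts retrieval most; the actual weights and element sets would come from the application profile of each federation.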
Smart library model based on big data technologies
The subject of this doctoral dissertation is the development of a smart library model based on big data technologies and services. The central research problem addressed in the thesis is the development of a big data infrastructure and smart library services that enable intelligent search of, and recommendations from, the library content. A particular aim of the thesis is to examine the possibility of integrating the developed model into smart educational environments in order to improve the quality of the educational process.
The thesis presents a model of the smart library as an integral part of the educational system, one that can improve the quality and comprehensiveness of learning resources and increase user motivation in the learning process by recommending content of interest. The model described in the thesis applies big data systems to the collection, analysis, processing and visualization of data gathered from multiple sources, and covers the integration of these data into the smart library. The goal of developing smart libraries is to improve library business processes and to offer users innovative services for searching and using content.
The thesis discusses different perspectives on implementing big data solutions for smart libraries as part of a continuous educational process, with particular focus on integrating traditional library systems with big data technologies. In addition to the system components above, the model includes the infrastructure for, and integration of, a collaborative-filtering recommender system that incorporates multiple heterogeneous data sources with big data technologies.
The model was evaluated through testing and measurement of the relevant performance parameters that influence the efficiency of the proposed approach.
Development of a medical digital library managing multiple collections
Purpose - Aims to present the authors' efforts towards the development of a digital library environment supporting research at the Medical School of Athens University, Greece. Design/methodology/approach - The digital library facilitates access to medical material produced by laboratories for both research and educational purposes. As the material produced varies (in type and structure) and the search requirements imposed by potential users differ, each laboratory develops its own collection. All collections must be bilingual, supporting both Greek and English. Extended requirements were imposed on the services offered by the digital library environment, for the following reasons: end-users actively participate in the cataloguing workflow; cataloguers should be able to create and manage multiple collections in a simplified manner; and different search requirements must be supported for different user groups. To formulate and then deal with these requirements, the authors introduced the term "dynamic collection management", denoting automated collection definition and unified collection management within an integrated digital library environment. The digital library components providing the desired functionality, and the interaction between them, are described. System performance, especially during collection search, and bilingual support are also explored. Findings - Finds that the Athens Medical School Digital Library facilitates access to medical material for researchers and students, for both research and educational purposes. Originality/value - The paper provides useful information on a digital library environment which supports research. © Emerald Group Publishing Limited
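The "dynamic collection management" idea (per-laboratory collections defined at runtime, with unified bilingual search on top) can be sketched roughly as follows. All class, field and language-tag names here are illustrative assumptions, not the actual API of the Athens Medical School system.

```python
# Illustrative sketch of "dynamic collection management": collections created
# dynamically (one per laboratory), unified search across them, with parallel
# Greek ("el") and English ("en") fields per record. Names are hypothetical.

class Collection:
    def __init__(self, name):
        self.name = name
        self.records = []

    def add(self, record):
        # Each record carries parallel Greek and English title fields.
        self.records.append(record)

class Registry:
    def __init__(self):
        self.collections = {}

    def create(self, name):
        # Collections are defined at runtime, not fixed in advance.
        col = self.collections[name] = Collection(name)
        return col

    def search(self, term, lang=None):
        """Unified search across all collections; optionally limit to one language."""
        term = term.lower()
        hits = []
        for col in self.collections.values():
            langs = [lang] if lang else ["el", "en"]
            for rec in col.records:
                if any(term in rec.get(f"title_{l}", "").lower() for l in langs):
                    hits.append((col.name, rec))
        return hits

reg = Registry()
lab = reg.create("anatomy_lab")
lab.add({"title_en": "Cardiac imaging atlas",
         "title_el": "Άτλας καρδιακής απεικόνισης"})
print(len(reg.search("cardiac")))  # matches the English title
```

The point of the sketch is the separation of concerns the paper describes: collection definition is automated and per-laboratory, while search is unified and language-aware across all collections.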