
    Online sharing educational content on biodiversity topics: a case study from organic agriculture and agroecology

    E-learning technologies and standards are emerging as the dominant way to make educational content widely available. Approaches to these technologies should be domain-independent and easily adaptable to different contexts. Organic.Edunet aims to make content on Organic Agriculture and Agroecology widely available through a single point of reference. To achieve this, the project has adopted and adapted open-source software solutions and built upon them to offer the Organic.Edunet Web Federation Portal and the Repository Suite of Tools. This paper presents the tools developed in the frame of the Organic.Edunet project, serving as a guide for anyone aiming to establish similar tools in a field such as biodiversity.

    D1.1 Analysis Report on Federated Infrastructure and Application Profile

    Kawese, R., Fisichella, M., Deng, F., Friedrich, M., Niemann, K., Börner, D., Holtkamp, P., Hun-Ha, K., Maxwell, K., Parodi, E., Pawlowski, J., Pirkkalainen, H., Rodrigo, C., & Schwertel, U. (2010). D1.1 Analysis Report on Federated Infrastructure and Application Profile. OpenScout project deliverable.
    The present deliverable reports on the functionalities of the first step of the described process. In other words, it describes how the consortium will gather learning object metadata, centralize access to existing learning resources, and form a suitable application profile that will contribute to proper modeling, retrieval, and presentation of the required information (regarding the learning objects) to interested users. The described approach is the foundation for federated, skill-based search and learning object retrieval. The deliverable focuses on the analysis of the available repositories and the best infrastructure that can support OpenScout's initiative, and explains the motivations behind the chosen infrastructure based on the study of available information and previous research and literature.
    The work on this publication has been sponsored by the OpenScout (Skill based scouting of open user-generated and community-improved content for management education and training) Targeted Project, which is funded by the European Commission's 7th Framework Programme, contract ECP-2008-EDU-42801.
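    The harvest-and-map step the deliverable describes can be illustrated with a minimal sketch: per-repository crosswalks project heterogeneous source metadata onto one common application profile for federated indexing. The profile fields and source field names below are illustrative assumptions, not the actual OpenScout application profile.

    ```python
    # Map heterogeneous source metadata records onto a common application
    # profile so a federated search can index them uniformly.
    # Profile fields and crosswalks are illustrative assumptions.

    PROFILE_FIELDS = ["title", "description", "language", "skill", "licence"]

    # Per-repository crosswalks: source field -> common profile field.
    CROSSWALKS = {
        "repo_a": {"name": "title", "summary": "description",
                   "lang": "language", "competency": "skill", "rights": "licence"},
        "repo_b": {"dc:title": "title", "dc:description": "description",
                   "dc:language": "language", "dc:rights": "licence"},
    }

    def to_profile(record, source):
        """Project one source record onto the common profile; unmapped
        profile fields stay None so gaps are visible downstream."""
        crosswalk = CROSSWALKS[source]
        mapped = {field: None for field in PROFILE_FIELDS}
        for src_field, value in record.items():
            target = crosswalk.get(src_field)
            if target:
                mapped[target] = value
        return mapped

    print(to_profile({"dc:title": "Budgeting basics", "dc:language": "en"}, "repo_b"))
    ```

    Keeping unmapped fields as explicit `None` values, rather than dropping them, makes it easy to measure how well each repository covers the profile.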

    Development and governance of FAIR thresholds for a data federation

    The FAIR (findable, accessible, interoperable, and re-usable) principles and practice recommendations provide high-level guidance that is not research-domain specific in nature. There remains a gap in practice at the data provider and domain scientist level demonstrating how the FAIR principles can be applied beyond a set of generalist guidelines to meet the needs of a specific domain community. We present our insights from developing FAIR thresholds in a domain-specific context for self-governance by a community (agricultural research). 'Minimum thresholds' for FAIR data are required to align expectations for data delivered from providers' distributed data stores through a community-governed federation (the Agricultural Research Federation, AgReFed). Data providers were supported to make data holdings more FAIR. There was a range of different FAIR starting points, organisational goals, end user needs, solutions, and capabilities. This informed the distilling of a set of FAIR criteria ranging from 'minimum thresholds' to 'stretch targets', which were operationalised through consensus into a framework for governance and implementation by the agricultural research domain community. Improving the FAIR maturity of data required resourcing and incentives, highlighting the challenge for data federations to generate value whilst reducing the costs of participation. Our experience showed a role for collective advocacy, relationship brokering, tailored support, and low-bar tooling access, particularly across the areas of data structure, access, and semantics that were challenging for domain researchers. Active democratic participation supported by a governance framework like AgReFed's will ensure participants have a say in how federations can deliver individual and collective benefits for members. © 2022 The Author(s)
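    The 'minimum threshold' versus 'stretch target' idea can be sketched as a tiered rule check over a dataset's metadata record. The individual criteria below are invented for illustration; AgReFed's actual thresholds are set by its community governance process.

    ```python
    # Classify a dataset's metadata record against tiered FAIR criteria.
    # The criteria are illustrative assumptions, not AgReFed's real ones.

    MINIMUM = {
        "findable": lambda r: bool(r.get("identifier")),        # has an identifier
        "accessible": lambda r: bool(r.get("access_url")),      # retrievable somewhere
        "interoperable": lambda r: r.get("format") in {"csv", "json", "netcdf"},
        "reusable": lambda r: bool(r.get("licence")),           # licence stated
    }
    STRETCH = {
        "findable": lambda r: r.get("identifier", "").startswith("doi:"),
        "interoperable": lambda r: bool(r.get("vocabulary")),   # domain semantics
        "reusable": lambda r: bool(r.get("provenance")),        # provenance recorded
    }

    def fair_tier(record):
        """Return which tier the record reaches: below minimum, minimum, or stretch."""
        if not all(check(record) for check in MINIMUM.values()):
            return "below minimum"
        if all(check(record) for check in STRETCH.values()):
            return "stretch"
        return "minimum"

    record = {"identifier": "doi:10.1000/xyz", "access_url": "https://example.org/d",
              "format": "csv", "licence": "CC-BY-4.0"}
    print(fair_tier(record))  # meets the minimum tier; no vocabulary/provenance yet
    ```

    A tiered check like this gives providers a concrete, incremental target: first clear every minimum criterion, then work toward the stretch set.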

    Metadata quality issues in learning repositories

    Metadata lies at the heart of every digital repository project in the sense that it defines and drives the description of the digital content stored in the repositories. Metadata allows content to be successfully stored, managed, and retrieved, but also preserved in the long term. Despite the widely recognized importance of metadata in digital repositories, studies indicate that metadata quality is relatively low in most digital repositories. Metadata quality is loosely defined as "fitness for purpose", meaning that low-quality metadata cannot fulfill its purpose, which is to allow for the successful storage, management, and retrieval of resources. In practice, low metadata quality leads to ineffective searches for content that recall the wrong resources or, even worse, no resources at all, making them invisible to the intended user, that is, the "client" of each digital repository. The present dissertation approaches this problem by proposing a comprehensive metadata quality assurance method, namely the Metadata Quality Assurance Certification Process (MQACP). The basic idea is to propose a set of methods that can be deployed throughout the lifecycle of a repository to ensure that metadata generated by content providers are of high quality. These methods have to be straightforward and simple to apply, with measurable results. They also have to be adaptable with minimum effort so that they can easily be used in different contexts. This set of methods is described analytically, taking into account the actors needed to apply them, describing the tools needed, and defining the anticipated outcomes. In order to test our proposal, we applied it to a Learning Federation of repositories, from day one of its existence until it reached maturity and regular operation.
    We supported the metadata creation process throughout the different phases of the repositories involved by setting up specific experiments using the methods and tools of the MQACP. Throughout each phase, we measured the resulting metadata quality to certify that the anticipated improvement actually took place. Lastly, through these different phases, the cost of applying the MQACP was measured to provide a comparison basis for future applications. Based on the success of this first application, we validated the MQACP approach by applying it to two further cases: a Cultural and a Research Federation of repositories. This allowed us to demonstrate the transferability of the approach to cases that share some similarities with the initial one but also present significant differences. The results showed that the MQACP was successfully adapted to the new contexts with minimal adaptations, producing similar results at comparable costs. In addition, looking closer at the common experiments carried out in each phase of each use case, we were able to identify interesting patterns in the behavior of content providers that can be researched further. The dissertation concludes with a set of future research directions arising from the cases examined, which can be explored to support the next version of the MQACP in terms of the methods deployed, the tools used to assess metadata quality, and the cost analysis of the MQACP methods.
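    One simple quality signal a process like this might track over a repository's phases is weighted field completeness, sketched below. The fields, weights, and threshold are illustrative assumptions; the MQACP itself covers richer quality dimensions than completeness alone.

    ```python
    # Weighted completeness score for a metadata record: the fraction of
    # total field weight that is actually filled in. Fields, weights, and
    # the review threshold are illustrative assumptions, not MQACP's metrics.

    WEIGHTS = {"title": 3, "description": 3, "keywords": 2, "language": 1, "rights": 1}

    def completeness(record):
        """Share of weighted fields that carry a non-empty value (0.0-1.0)."""
        total = sum(WEIGHTS.values())
        filled = sum(w for field, w in WEIGHTS.items() if record.get(field))
        return filled / total

    def needs_review(records, threshold=0.7):
        """Flag record ids whose completeness falls below the threshold."""
        return [r["id"] for r in records if completeness(r) < threshold]

    records = [
        {"id": "lo-1", "title": "Soil biology", "description": "Intro unit",
         "keywords": ["soil", "biology"]},
        {"id": "lo-2", "title": "Crop rotation"},
    ]
    print(needs_review(records))  # → ['lo-2']
    ```

    Measuring the same score at each phase gives the kind of before/after comparison the experiments above rely on, and flagged records give annotators a concrete worklist.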


    How might technology rise to the challenge of data sharing in agri-food?

    Acknowledgement: This work was supported by an award made by the UKRI/EPSRC-funded Internet of Food Things Network+ (grant EP/R045127/1). We would also like to thank Mr Steve Brewer and Professor Simon Pearson for supporting the work presented in this paper. Peer reviewed. Postprint.

    Adopting E-training and Living Labs for Collaborative Learning for Rural Public Agencies


    A Survey on Linked Data and the Social Web as facilitators for TEL recommender systems

    Personalisation, adaptation, and recommendation are central features of TEL (technology-enhanced learning) environments. In this context, information retrieval techniques are applied as part of TEL recommender systems to filter and recommend learning resources or peer learners according to user preferences and requirements. However, the suitability and scope of possible recommendations depend fundamentally on the quality and quantity of available data, for instance, metadata about TEL resources as well as users. On the other hand, over recent years the Linked Data (LD) movement has succeeded in providing a vast body of well-interlinked and publicly accessible Web data, which in particular includes Linked Data of an explicitly or implicitly educational nature. This paper discusses the potential of LD to facilitate TEL recommender systems research and practice. In particular, it provides an overview of the most relevant LD sources and techniques, together with a discussion of their potential for the TEL domain in general and TEL recommender systems in particular. Results from closely related European projects are presented and discussed, together with an analysis of prevailing challenges and preliminary solutions.
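    How LD-derived metadata can feed a recommender is sketched below with a minimal content-based filter: resources annotated with subject tags (of the kind obtainable from Linked Data sources such as DBpedia categories) are ranked by tag overlap. The resources and tags are invented for illustration.

    ```python
    # Content-based recommendation over subject annotations, ranking
    # resources by Jaccard similarity of their tag sets.
    # The resources and their tags are invented for illustration.

    RESOURCES = {
        "intro-stats": {"statistics", "probability", "mathematics"},
        "ml-basics": {"statistics", "machine_learning", "mathematics"},
        "art-history": {"painting", "renaissance"},
    }

    def jaccard(a, b):
        """Similarity of two tag sets: |intersection| / |union|."""
        return len(a & b) / len(a | b) if a | b else 0.0

    def recommend(liked, k=2):
        """Rank other resources by tag overlap with a liked resource,
        dropping anything with no overlap at all."""
        liked_tags = RESOURCES[liked]
        scored = [(jaccard(liked_tags, tags), rid)
                  for rid, tags in RESOURCES.items() if rid != liked]
        return [rid for score, rid in sorted(scored, reverse=True)[:k] if score > 0]

    print(recommend("intro-stats"))  # → ['ml-basics']
    ```

    The point the survey makes applies directly here: the richer and better-interlinked the tag vocabulary (e.g. sourced from LD rather than free-text keywords), the more meaningful the overlap scores become.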