13 research outputs found

    The CERIF metadata standard for research data in an XML implementation

    Organizing, preserving, and networking data from scientific research across all fields of scholarship is an essential part of scientific work. A growing number of universities and research projects worldwide require that, in addition to the results published in papers, other research-related data be preserved and made accessible. There is no global consensus on a standard way to structure all this diverse data and make it available. In recent years, CERIF (Common European Research Information Format) has emerged as the most widely accepted standard, at least in the European context. The conceptual model is implemented in XML. This paper describes the theoretical model itself and its components, and gives examples in XML. Possible implications for the future are briefly discussed
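    As a rough illustration (not taken from the paper itself), a minimal CERIF-style record can be serialized with Python's standard XML tooling. The element names below only loosely follow CERIF's entity conventions (cfPers, cfResPubl, and a person–publication link entity) and are placeholders, not the normative schema:

```python
# Illustrative sketch only: element names approximate CERIF entities
# and are not validated against the real CERIF XML schema.
from xml.etree import ElementTree as ET

def build_cerif_record(person_id: str, publ_id: str, title: str) -> str:
    """Serialize a person, a publication, and the link between them."""
    root = ET.Element("CERIF")
    pers = ET.SubElement(root, "cfPers")
    ET.SubElement(pers, "cfPersId").text = person_id
    publ = ET.SubElement(root, "cfResPubl")
    ET.SubElement(publ, "cfResPublId").text = publ_id
    ET.SubElement(publ, "cfTitle").text = title
    # Link entity connecting the person to the publication.
    link = ET.SubElement(root, "cfPers_ResPubl")
    ET.SubElement(link, "cfPersId").text = person_id
    ET.SubElement(link, "cfResPublId").text = publ_id
    return ET.tostring(root, encoding="unicode")

xml_record = build_cerif_record("pers-001", "publ-001", "Example title")
```

    The separate link entity, rather than nesting the publication inside the person, mirrors CERIF's relational style, where entities and their relationships are recorded independently.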

    The Minimum Mandatory Metadata Sets for the KIM Project and RAIDmap

    A Minimum Mandatory Metadata Set (M3S) was devised for the KIM (Knowledge and Information Management Through Life) Project to address two challenges. The first was to ensure the project’s documents were sufficiently self-documented to allow them to be preserved in the long term. The second was to trial the M3S and supporting templates and tools as a possible approach that might be used by the aerospace, defence and construction industries. A different M3S was devised along similar principles by a later project called REDm-MED (Research Data Management for Mechanical Engineering Departments). The aim this time was to help specify a tool for documenting research data records and the associations between them, in support of both preservation and discovery. In both cases the emphasis was on collecting a minimal set of metadata at the time of object creation, on the understanding that later processes would be able to expand the set into a full metadata record
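    The idea of a minimum mandatory set checked at object creation can be sketched as follows; the field names here are hypothetical placeholders, not the actual KIM or REDm-MED sets:

```python
# Sketch of an M3S-style check at object creation time; the mandatory
# field names are invented for illustration.
M3S_FIELDS = {"identifier", "title", "creator", "date_created", "file_format"}

def missing_m3s_fields(record: dict) -> set:
    """Return the mandatory fields that are absent or empty in a record."""
    return {f for f in M3S_FIELDS if not record.get(f)}

record = {"identifier": "doc-42", "title": "Test rig log", "creator": "J. Smith"}
gaps = missing_m3s_fields(record)  # {"date_created", "file_format"}
```

    A check like this rejects nothing and computes nothing beyond the gap set, reflecting the abstracts' point that only a minimal set is demanded up front, with later processes expanding it into a full record.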

    Information systems challenges for through-life engineering

    Information technologies hold great promise in achieving reductions in through-life support costs for long-lived complex artefacts such as aircraft and ships, and may allow much improved assessment of asset condition, but a number of technical and socio-technical challenges have to be overcome before these benefits can be achieved. Based on a perspective gained in the EPSRC Knowledge and Information Management Through-Life Grand Challenge project, this paper gives an overview of these challenges, of recent research achievements, and of areas where further research is needed. In particular, it notes that it is important to identify what information needs to be captured through the life of the artefact and how the information may be organised and sustained over long timescales. Important standards are reviewed, as are emerging developments such as classification systems and ontologies for organisation and the use of lightweight representations and annotation. Finally, socio-technical challenges are reviewed, including data accuracy and quality issues, security and privacy, and latency in multi-faceted information systems

    Metadata quality issues in learning repositories

    Metadata lies at the heart of every digital repository project in the sense that it defines and drives the description of digital content stored in the repositories. Metadata allows content to be successfully stored, managed and retrieved, but also preserved in the long term. Despite the enormous importance of metadata in digital repositories, one that is widely recognized, studies indicate that metadata quality is relatively low in most digital repositories. Metadata quality is loosely defined as "fitness for purpose", meaning that low-quality metadata cannot fulfill its purpose, which is to allow for the successful storage, management and retrieval of resources. In practice, low metadata quality leads to ineffective searches for content, ones that recall the wrong resources or, even worse, no resources, making them invisible to the intended user, that is, the "client" of each digital repository. The present dissertation approaches this problem by proposing a comprehensive metadata quality assurance method, namely the Metadata Quality Assurance Certification Process (MQACP). The basic idea of this dissertation is to propose a set of methods that can be deployed throughout the lifecycle of a repository to ensure that metadata generated by content providers are of high quality. These methods have to be straightforward and simple to apply, with measurable results. They also have to be adaptable with minimum effort so that they can be used easily in different contexts. This set of methods was described analytically, taking into account the actors needed to apply them, describing the tools needed and defining the anticipated outcomes. In order to test our proposal, we applied it to a Learning Federation of repositories, from day 1 of its existence until it reached maturity and regular operation.
We supported the metadata creation process throughout the different phases of the repositories involved by setting up specific experiments using the methods and tools of the MQACP. Throughout each phase, we measured the resulting metadata quality to certify that the anticipated improvement in metadata quality actually took place. Lastly, through these different phases, the cost of applying the MQACP was measured to provide a comparison basis for future applications. Based on the success of this first application, we decided to validate the MQACP approach by applying it to two further cases, a Cultural and a Research Federation of repositories. This allowed us to prove the transferability of the approach to cases that present some similarities with the initial one but also significant differences. The results showed that the MQACP was successfully adapted to the new contexts, with minimal adaptations needed, similar results produced and comparable costs. In addition, looking closer at the common experiments carried out in each phase of each use case, we were able to identify interesting patterns in the behavior of content providers that can be further researched. The dissertation is completed with a set of future research directions that emerged from the cases examined. These research directions can be explored to support the next version of the MQACP in terms of the methods deployed, the tools used to assess metadata quality, and the cost analysis of the MQACP methods
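    One simple proxy for "fitness for purpose" is field completeness. The sketch below is an illustrative stand-in, not the dissertation's actual metrics, which are richer than a single completeness ratio:

```python
# Completeness as a toy metadata-quality proxy: the fraction of expected
# fields that carry a non-empty value. Field names are illustrative.
def completeness(record: dict, fields: list) -> float:
    """Fraction of expected fields with a non-empty value in the record."""
    filled = sum(1 for f in fields if record.get(f))
    return filled / len(fields)

fields = ["title", "description", "keywords", "rights"]
score = completeness({"title": "Intro to metadata", "keywords": "metadata"}, fields)
# score == 0.5: two of the four expected fields are filled
```

    Measuring such a score before and after an intervention, as the MQACP does with its own metrics across repository phases, is what allows the claimed quality improvement to be certified rather than assumed.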

    Creative connections: the value of digital information and its effective management for sustainable contemporary visual art practice

    This study examines digital information use in contemporary visual art making in the UK using a practitioner-centred approach. This research employs innovative approaches to establish new understanding of visual artists’ tasks and skills in contemporary practice. Based on evidence derived from a substantial series of detailed qualitative case-study interviews, the research particularly clarifies the value of digital information use in contemporary visual art practice in the UK and the implications of the current digital information management skills base in the UK visual art community. This study deploys Bourdieu’s theory of cultural production to test its analytical power in the setting of contemporary visual art practice in the UK, interlinked with Becker’s art world theory which conceptualises art as a group activity. Bourdieu’s field theory was particularly mobilised as a tool to analyse artists’ endeavours, whilst understanding those endeavours as a result of interaction between a network of individuals and organisations. These approaches were coupled with a practitioner-focused, qualitative methodology to produce deep understanding of how artists spend their time, how they value particular resources in making their work, and the relationship between the two. I explore and specify how artists search for, retrieve, manage, use and circulate digital information, described in artists’ own terms, and how they understand and value digital information and digital objects in their practices and careers. 
Particular attention is given to artists’ tasks that require digital technologies, the skills that are needed by the artist to perform these tasks, the extent to which these tasks and skills are considered valuable by the artist, whether the artist feels confident in their ability to perform these tasks competently and effectively, and the extent to which they rely on their social and professional networks, as foregrounded by art world theory, to ameliorate skills gaps. The study identified that artists vary their habitus to contribute labour to different Bourdieusian fields, particularly: a) as a private individual, b) as an artist working outside their practice, and c) as artist-within-practice. Further findings include the critical value of digital technologies and digital objects to the workflows of contemporary artists in a range of ways across these fields. This research also shows that much of the work in contemporary professional art making can be understood as invisible labour, whilst the skills around effective use of critical digital technologies can be understood as similarly invisible to this professional population. Taken together, the study findings provide an evidence base for the use of policy makers when designing funding activities or programmes in the visual arts sector. Findings also support important suggestions for providers of education and training in the visual arts, with profound implications for the fit-to-need of current curricula in tertiary and professional art education. Finally, this study analyses and clarifies the extent to which the information sciences are reaching this profession, and how the professional art community may benefit from engagement with information science concepts and practices as a tool in the struggle to stay in practice

    Research Data Curation and Management Bibliography

    This e-book includes over 800 selected English-language articles and books that are useful in understanding the curation of digital research data in academic and other research institutions. It covers topics such as research data creation, acquisition, metadata, provenance, repositories, management, policies, support services, funding agency requirements, open access, peer review, publication, citation, sharing, reuse, and preservation. It has live links to included works. Abstracts are included in this bibliography if a work is under certain Creative Commons Licenses. This book is licensed under a Creative Commons Attribution 4.0 International License. Cite as: Bailey, Charles W., Jr. Research Data Curation and Management Bibliography. Houston: Digital Scholarship, 2021

    Visualizing Research Data Records for their Better Management

    As academia in general, and research funders in particular, place ever greater importance on data as an output of research, so the value of good research data management practices becomes ever more apparent. In response to this, the Innovative Design and Manufacturing Research Centre (IdMRC) at the University of Bath, UK, with funding from the JISC, ran a project to draw up a data management planning regime. In carrying out this task, the ERIM (Engineering Research Information Management) Project devised a visual method of mapping out the data records produced in the course of research, along with the associations between them. This method, called Research Activity Information Development (RAID) Modelling, is based on the Unified Modelling Language (UML) for portability. It is offered to the wider research community as an intuitive way for researchers both to keep track of their own data and to communicate this understanding to others who may wish to validate the findings or re-use the data
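    A data-record map of this kind is essentially a directed graph of records with typed associations. The sketch below is a minimal stand-in under that assumption; the class, relation label, and record names are invented for illustration and are not the RAID notation itself:

```python
# Minimal directed-graph stand-in for mapping data records and the
# associations between them. Labels and names are hypothetical.
from collections import defaultdict

class RaidSketch:
    def __init__(self):
        # record -> list of (relation, target record)
        self.edges = defaultdict(list)

    def link(self, src: str, relation: str, dst: str) -> None:
        """Record a typed association from one data record to another."""
        self.edges[src].append((relation, dst))

    def targets_of(self, record: str) -> list:
        """Records that `record` points to via any association."""
        return [dst for _, dst in self.edges[record]]

m = RaidSketch()
m.link("analysis-report", "derived_from", "cleaned-data")
m.link("cleaned-data", "derived_from", "raw-sensor-log")
```

    Walking the associations from a result back to its raw inputs is what lets another researcher validate findings or locate data for re-use, which is the purpose the abstract gives for RAID modelling.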