
    Systemwide Clinical Ultrasound Program Development: An Expert Consensus Model.

    Clinical ultrasound (CUS) is integral to the practice of an increasing number of medical specialties. Guidelines are needed to ensure effective CUS utilization across health systems. Such guidelines should address all aspects of CUS within a hospital or health system. These include leadership, training, competency, credentialing, quality assurance and improvement, documentation, archiving, workflow, equipment, and infrastructure issues relating to communication and information technology. To meet this need, a group of CUS subject matter experts, all of whom have been involved in institution- and/or systemwide clinical ultrasound (SWCUS) program development, convened. The purpose of this paper was to create a model for SWCUS development and implementation.

    To share or not to share: Publication and quality assurance of research data outputs. A report commissioned by the Research Information Network

    A study on current practices with respect to data creation, use, sharing and publication in eight research disciplines (systems biology, genomics, astronomy, chemical crystallography, rural economy and land use, classics, climate science, and social and public health science). The study looked at data creation and care, motivations for sharing data, discovery, access and usability of datasets, and quality assurance of data in each discipline.

    Artefacts and Errors: Acknowledging Issues of Representation in the Digital Imaging of Ancient Texts

    It is assumed in palaeography, papyrology and epigraphy that a certain amount of uncertainty is inherent in the reading of damaged and abraded texts. Yet we have not really grappled with the fact that, as many scholars now work with digital images of texts rather than handling the texts themselves, the procedures for creating those digital images can insert further uncertainty into the representation of the text. Technical distortions can lead to the unintentional introduction of ‘artefacts’ into images, which can affect the resulting representation. If we cannot trust our digital surrogates of texts, can we trust the readings taken from them? How do scholars acknowledge the quality of digitised images of texts? Furthermore, this leads us to the type of discussions of representation that have been present in Classical texts since Plato: digitisation can be considered an alternative form of representation, bringing to the modern debate on the use of digital technology in Classics the familiar theories of mimesis (imitation) and ekphrasis (description): the conversion of visual evidence into explicit descriptions of that information, stored in computer files in distinct linguistic terms, with all the difficulties of conversion understood in the ekphrastic process. The community has not yet considered what becoming dependent on digital texts means for the field, in both practical and theoretical terms. Issues of quality, copying, representation, and substance should be part of our dialogue when we consult digital surrogates of documentary material, yet we are only just constructing an understanding of what it means to rely on virtual representations of artefacts. If we are to fully understand the impact of digital imaging techniques, we must relate our understanding of uncertainty in palaeography and epigraphy to the mechanics of visualization these techniques employ.

    The Wiltshire Wills Feasibility Study

    The Wiltshire and Swindon Record Office has nearly ninety thousand wills in its care. These records are neither adequately catalogued nor secured against loss by facsimile microfilm copies. With support from the Heritage Lottery Fund, the Record Office has begun to produce suitable finding aids for the material. Beginning with this feasibility study, the Record Office is developing a strategy to ensure that facsimiles are created to protect the collection against risk of loss or damage and to improve public access. This feasibility study explores the different methodologies that can be used to assist the preservation and conservation of the collection and improve public access to it. The study aims to produce a strategy that will enable the Record Office to create digital facsimiles of the Wills in its care for access purposes and also to create preservation-quality microfilms. The strategy seeks the most cost-effective and time-efficient approach to the problem and identifies ways to optimise the processes by drawing on the experience of other similar projects. This report provides a set of guidelines and recommendations to ensure the best use of the resources available, to provide the most robust preservation strategy, and to ensure that future access to the Wills as an information resource can be flexible, both local and remote, and sustainable.

    Assessing the wider benefits arising from university-based research

    In November 2012, the Government announced its intention to assess the broader economic, social and environmental benefits arising from all elements of government research investment, including those benefits arising from university-based research. In support of this work, the Department of Industry, Innovation, Climate Change, Science, Research and Tertiary Education, in consultation with the Australian Research Council, has prepared a discussion paper on Assessing the wider benefits arising from university-based research. The discussion paper is now available at www.innovation.gov.au/impact. The department invites responses to the issues raised in the paper. A preferred template for responses is also available online. Submissions are due by 16 August 2013. Scope of this paper: Research is undertaken across many sectors including industry, universities, research institutes and publicly funded research agencies. This paper relates specifically to research undertaken in universities because of the volume of data that is already collected from these institutions and the opportunity to build on the assessment of research quality through the Excellence in Research for Australia (ERA) initiative. The research and engagement activities undertaken within publicly funded research agencies, independent research institutes and within business outside of universities are out of scope for this consultation.

    The LIFE2 final project report

    Executive summary: The first phase of LIFE (Lifecycle Information For E-Literature) made a major contribution to understanding the long-term costs of digital preservation; an essential step in helping institutions plan for the future. The LIFE work models the digital lifecycle and calculates the costs of preserving digital information for future years. Organisations can apply this process in order to understand costs and plan effectively for the preservation of their digital collections. The second phase of the LIFE Project, LIFE2, has refined the LIFE Model, adding three new exemplar Case Studies to further build upon LIFE1. LIFE2 is an 18-month JISC-funded project between UCL (University College London) and The British Library (BL), supported by the LIBER Access and Preservation Divisions. LIFE2 began in March 2007 and completed in August 2008. The LIFE approach has been validated by a full independent economic review and has successfully produced an updated lifecycle costing model (LIFE Model v2) and digital preservation costing model (GPM v1.1). The LIFE Model has been tested with three further Case Studies, including institutional repositories (SHERPA-LEAP), digital preservation services (SHERPA DP) and a comparison of analogue and digital collections (British Library Newspapers). These Case Studies were useful for scenario building and have fed back into both the LIFE Model and the LIFE Methodology. The experiences of implementing the Case Studies indicated that enhancements made to the LIFE Methodology, Model and associated tools have simplified the costing process. Mapping a specific lifecycle to the LIFE Model isn't always a straightforward process; the revised and more detailed Model has reduced ambiguity. The costing templates, which were refined throughout the process of developing the Case Studies, ensure clear articulation of both working and cost figures, and facilitate comparative analysis between different lifecycles. The LIFE work has been successfully disseminated throughout the digital preservation and HE communities. Early adopters of the work include the Royal Danish Library, State Archives and the State and University Library, Denmark, as well as the LIFE2 Project partners. Furthermore, interest in the LIFE work has not been limited to these sectors, with interest expressed by local government, records offices, and private industry. LIFE has also provided input into the LC-JISC Blue Ribbon Task Force on the Economic Sustainability of Digital Preservation. Moving forward, our ability to cost the digital preservation lifecycle will require further investment in costing tools and models. Developments in estimative models will be needed to support planning activities, both at a collection management level and at a later preservation planning level once a collection has been acquired. In order to support these developments, a greater volume of raw cost data will be required to inform and test new cost models. This volume of data cannot be supported via the Case Study approach, and the LIFE team would suggest that a software tool would provide the volume of costing data necessary to provide a truly accurate predictive model.
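    To make the idea of lifecycle costing concrete, the sketch below sums per-stage annual costs over a planning horizon, with optional discounting of future years to present value. It is a minimal illustration only: the stage names, cost figures and the `lifecycle_cost` helper are assumptions made for this listing, not the LIFE Model's actual categories, formulas or values.

```python
# Illustrative sketch only: total preservation cost as the sum of per-stage
# annual costs over a planning horizon, optionally discounted to present value.
# Stage names and figures are hypothetical, not taken from the LIFE Model.

def lifecycle_cost(stage_costs_per_year, years, discount_rate=0.0):
    """Total cost of keeping a collection for `years`, discounting future years."""
    total = 0.0
    for year in range(years):
        yearly = sum(stage_costs_per_year.values())
        total += yearly / ((1 + discount_rate) ** year)
    return total

# Hypothetical annual costs (GBP) for each lifecycle stage of one collection.
stages = {
    "acquisition": 1200.0,
    "ingest": 800.0,
    "metadata": 1500.0,
    "bitstream_preservation": 600.0,
    "content_preservation": 900.0,
    "access": 400.0,
}

print(f"10-year cost: GBP {lifecycle_cost(stages, years=10, discount_rate=0.03):,.2f}")
```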

    Courtauld Institute of Art


    Metadata quality issues in learning repositories

    Metadata lies at the heart of every digital repository project in the sense that it defines and drives the description of digital content stored in the repositories. Metadata allows content to be successfully stored, managed and retrieved, but also preserved in the long term. Despite the enormous, widely recognized importance of metadata in digital repositories, studies indicate that metadata quality is relatively low in most digital repositories. Metadata quality is loosely defined as "fitness for purpose", meaning that low-quality metadata cannot fulfill its purpose, which is to allow for the successful storage, management and retrieval of resources. In practice, low metadata quality leads to ineffective searches for content, ones that recall the wrong resources or, even worse, no resources at all, making them invisible to the intended user, that is, the "client" of each digital repository. The present dissertation approaches this problem by proposing a comprehensive metadata quality assurance method, namely the Metadata Quality Assurance Certification Process (MQACP). The basic idea of this dissertation is to propose a set of methods that can be deployed throughout the lifecycle of a repository to ensure that metadata generated by content providers are of high quality. These methods have to be straightforward and simple to apply, with measurable results. They also have to be adaptable with minimum effort so that they can be used easily in different contexts. This set of methods was described analytically, taking into account the actors needed to apply them, describing the tools needed and defining the anticipated outcomes. In order to test our proposal, we applied it to a Learning Federation of repositories, from day 1 of its existence until it reached maturity and regular operation. We supported the metadata creation process throughout the different phases of the repositories involved by setting up specific experiments using the methods and tools of the MQACP. Throughout each phase, we measured the resulting metadata quality to certify that the anticipated improvement in metadata quality actually took place. Lastly, through these different phases, the cost of applying the MQACP was measured to provide a comparison basis for future applications. Based on the success of this first application, we decided to validate the MQACP approach by applying it to another two cases, a Cultural and a Research Federation of repositories. This would allow us to prove the transferability of the approach to other cases that present some similarities with the initial one but mainly significant differences. The results showed that the MQACP was successfully adapted to the new contexts with minimum adaptations needed, with similar results produced and comparable costs. In addition, looking closer at the common experiments carried out in each phase of each use case, we were able to identify interesting patterns in the behavior of content providers that can be further researched. The dissertation is completed with a set of future research directions that came out of the cases examined. These research directions can be explored in order to support the next version of the MQACP in terms of the methods deployed, the tools used to assess metadata quality, as well as the cost analysis of the MQACP methods.
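    As an illustration of the kind of measurement such a quality assurance process relies on, the sketch below computes one simple metric, completeness: the share of required fields that are actually populated in a record. The field list and the `completeness` helper are hypothetical examples for this listing, not the actual metrics or tooling of the MQACP.

```python
# Illustrative sketch only: "completeness" as one simple metadata quality metric,
# i.e. the fraction of required fields that are present and non-empty.
# The field list is a hypothetical example, not the MQACP's actual metric set.

REQUIRED_FIELDS = ["title", "description", "creator", "subject", "language"]

def completeness(record, required=REQUIRED_FIELDS):
    """Return the share of required fields that are populated in `record`."""
    filled = sum(1 for field in required if str(record.get(field, "")).strip())
    return filled / len(required)

record = {
    "title": "Metadata quality issues in learning repositories",
    "description": "",            # empty value counts as missing
    "creator": "Example Author",
    "language": "en",
}

print(f"Completeness: {completeness(record):.0%}")  # 3 of 5 fields -> 60%
```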
