7 research outputs found

    Digital Preservation, Archival Science and Methodological Foundations for Digital Libraries

    Digital libraries, whether commercial, public or personal, lie at the heart of the information society. Yet, research into their long‐term viability and the meaningful accessibility of their contents remains in its infancy. In general, as we have pointed out elsewhere, ‘after more than twenty years of research in digital curation and preservation the actual theories, methods and technologies that can either foster or ensure digital longevity remain startlingly limited.’ Research led by DigitalPreservationEurope (DPE) and the Digital Preservation Cluster of DELOS has allowed us to refine the key research challenges – theoretical, methodological and technological – that need attention by researchers in digital libraries during the coming five to ten years, if we are to ensure that the materials held in our emerging digital libraries are to remain sustainable, authentic, accessible and understandable over time. Building on this work and taking the theoretical framework of archival science as bedrock, this paper investigates digital preservation and its foundational role if digital libraries are to have long‐term viability at the centre of the global information society.

    Handling Failures in Data Quality Measures

    A successful data quality (DQ) measure is important for many data consumers (or data guardians) when deciding on the acceptability of the data concerned. Nevertheless, little is known about how “failures” of DQ measures can be handled by data guardians in the presence of the factor(s) that contribute to the failures. This paper presents a review of failure handling mechanisms for DQ measures. The failure factors faced by existing DQ measures will be presented, together with the research gaps with respect to failure handling mechanisms in DQ frameworks. In particular, by comparing existing DQ frameworks in terms of the inputs used to measure DQ, the way DQ scores are computed and the way DQ scores are stored, we identified failure factors inherent within the frameworks. Understanding how failures can be handled will lead to the design of a systematic failure handling mechanism for robust DQ measures.

    Handling Failures in Data Quality Measures

    A successful data quality (DQ) measure is important for many data consumers (or data guardians) when deciding on the acceptability of the data concerned. Nevertheless, little is known about how “failures” of DQ measures can be handled by data guardians in the presence of the factor(s) that contribute to the failures. This paper presents a review of failure handling mechanisms for DQ measures. The failure factors faced by existing DQ measures will be presented, together with the research gaps with respect to failure handling mechanisms in DQ frameworks. We propose ways to maximise the situations in which data quality scores can be produced when factors that would cause the failure of currently proposed scoring mechanisms are present. By understanding how failures can be handled, a systematic failure handling mechanism for robust DQ measures can be designed.
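    The abstract does not specify a concrete scoring mechanism, so the following Python sketch only illustrates the general idea under stated assumptions: a completeness-style DQ score that fails (returns `None`) when a failure factor is present, here a missing reference population, wrapped by a hypothetical fallback handler. All names are illustrative and not taken from the paper.

```python
from typing import List, Optional, Any

def completeness_score(values: List[Any],
                       required_total: Optional[int] = None) -> Optional[float]:
    """Fraction of non-missing values: a common completeness-style DQ score.

    Returns None (a 'failure') when no reference population is available,
    i.e. the score cannot be computed from the given inputs.
    """
    if required_total is None:
        required_total = len(values)
    if required_total == 0:
        return None  # failure factor: empty reference population
    present = sum(1 for v in values if v is not None)
    return present / required_total

def robust_score(values: List[Any],
                 required_total: Optional[int] = None,
                 fallback: float = 0.0) -> float:
    """Hypothetical failure-handling wrapper: substitute a fallback score
    instead of propagating the failure to the data guardian."""
    score = completeness_score(values, required_total)
    return fallback if score is None else score
```

    A fuller handler might log the failure factor or re-measure with different inputs; the wrapper above shows only the simplest substitution strategy.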

    Model-Driven Component Generation for Families of Completeness Measures

    Completeness is a well-understood dimension of data quality. In particular, measures of coverage can be used to assess the completeness of a data source relative to some universe, for instance a collection of reference databases. We observe that this definition is inherently and implicitly multidimensional: in principle, one can compute measures of coverage over any subset of the attributes in the data source schema. This generalization can be useful in several application domains, notably in the life sciences. This leads to the idea of domain-specific families of completeness measures that users can choose from. Furthermore, individual measures in the family can be specified as OLAP-type queries on a dimensional schema. In this paper we describe an initial data architecture to support and validate the idea, and show how dimensional completeness measures can be supported in practice by extending the Quality View model [11].
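    The paper's Quality View model is not reproduced in the abstract; the Python sketch below only illustrates the multidimensional idea it describes: one coverage measure per attribute subset, computed against a reference universe. The data, attribute names and function names are hypothetical.

```python
from itertools import combinations
from typing import Dict, Iterable, Mapping, Optional, Tuple

Row = Mapping[str, str]

def coverage(source: Iterable[Row], universe: Iterable[Row],
             attrs: Tuple[str, ...]) -> Optional[float]:
    """Coverage of `source` relative to `universe`, projected on `attrs`:
    the fraction of distinct universe tuples that also appear in the source."""
    project = lambda rows: {tuple(r[a] for a in attrs) for r in rows}
    u = project(universe)
    if not u:
        return None  # no reference universe on these attributes
    return len(project(source) & u) / len(u)

def coverage_family(source: Iterable[Row], universe: Iterable[Row],
                    attributes: Tuple[str, ...]) -> Dict[Tuple[str, ...], Optional[float]]:
    """A 'family' of completeness measures: one coverage score per
    non-empty subset of the schema attributes (OLAP-style roll-ups)."""
    source, universe = list(source), list(universe)
    return {subset: coverage(source, universe, subset)
            for n in range(1, len(attributes) + 1)
            for subset in combinations(attributes, n)}
```

    For example, with a reference universe of (gene, tissue) pairs, the family yields separate coverage scores for genes alone, tissues alone, and gene-tissue combinations, which a user could then choose between.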

    Incorporating Domain-Specific Information Quality Constraints into Database Queries

    The range of information now available in queryable repositories opens up a host of possibilities for new and valuable forms of data analysis. Database query languages such as SQL and XQuery offer a concise and high-level means by which such analyses can be implemented, facilitating the extraction of relevant data subsets into either generic or bespoke data analysis environments. Unfortunately, the quality of data in these repositories is often highly variable. The data is still useful, but only if the consumer is aware of the data quality problems and can work around them. Standard query languages offer little support for this aspect of data management. In principle, however, it should be possible to embed constraints describing the consumer’s data quality requirements into the query directly, so that the query evaluator can take over responsibility for enforcing them during query processing. Most previous attempts to incorporate information quality constraints into database queries have been based around a small number of highly generic quality measures, which are defined and computed by the information provider. This is a useful approach in some application areas but, in practice, quality criteria are more commonly determined by the user of the information, not by the provider. In this paper, we explore an approach to incorporating quality constraints into database queries.
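    The paper's own mechanism is not given in the abstract; as a sketch of the general idea, the example below embeds a consumer-defined quality predicate into a SQL query using SQLite's user-defined functions, so the query evaluator enforces the constraint during processing. The table, data and predicate are hypothetical.

```python
import sqlite3

def quality_ok(annotation, min_len):
    """Hypothetical consumer-defined quality predicate: the annotation
    must be present and at least `min_len` characters long."""
    return annotation is not None and len(annotation) >= min_len

conn = sqlite3.connect(":memory:")
# Register the predicate so it can be called from within SQL.
conn.create_function("quality_ok", 2, quality_ok)
conn.execute("CREATE TABLE proteins (name TEXT, annotation TEXT)")
conn.executemany("INSERT INTO proteins VALUES (?, ?)",
                 [("p1", "well-annotated record"),
                  ("p2", "stub"),
                  ("p3", None)])

# The quality constraint is part of the query itself, so only rows
# meeting the consumer's requirement are ever returned.
rows = conn.execute(
    "SELECT name FROM proteins WHERE quality_ok(annotation, 5)").fetchall()
```

    Here the provider stores the raw data unchanged; the consumer's quality criterion travels with the query, which matches the abstract's point that quality criteria are determined by the user rather than the provider.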

    Making quality count in biological data sources

    No full text