    Evaluation Criteria for Frameworks in eHealth Domain

    Framework articles are commonly used to synthesise the research literature on a topic area, providing a thorough description and evaluation of the work done and setting directions for future research. There is a need for criteria that can both guide authors in developing comprehensive frameworks and help reviewers evaluate these articles, especially in complex areas such as E-Health. By assessing a representative sample of the journals and databases most likely to publish E-Health framework articles, we present a set of criteria for the evaluation of framework articles and identify the most salient features of this type of publication. Our findings suggest that a “good” framework article should aid researchers in understanding the research area, have a clearly defined boundary, consist of a parsimonious set of elements, and offer clear guidance on what to expect for a problem within that framework. We also found that framework articles in the E-Health domain can be characterised according to their objective, comprehensiveness, relationship with the boundary of the research stream, temporal nature, elements examined and substantive output. This paper describes how we arrived at the criteria for evaluating E-Health frameworks and illustrates how they can be applied to a specific framework.

    Metric analysis of the information visibility and diffusion about the European Higher Education Area on Spanish university websites.

    The purpose of the study proposed in this paper is to evaluate the Spanish public university websites dedicated to the European Higher Education Area (EHEA). To do so, the quality of these resources has been analysed in the light of data provided by a series of indicators grouped into seven criteria, most of which were used to determine what information is made available and in what way. The criteria used in our analysis are: visibility, authority, updatedness, accessibility, correctness and completeness, quality assessment, and navigability. All in all, the results allow us to carry out an overall diagnosis of the situation and also provide information about the situation at each university, revealing the main strengths, namely authority and navigability, and the chief shortcomings: updatedness, accessibility and quality assessment. In this way it is possible to detect the best practices in each of the aspects evaluated so that they can serve as an example and guide for universities with greater deficiencies and thus help them to improve their EHEA websites.

    Sección Bibliográfica


    Towards a Weighted Average Framework For Evaluating the Quality of Web-Located Health Information

    This paper proposes a framework for evaluating the quality of Web-located health information. A set of affirmative-response evaluation features is identified across four quality categories (currency/authority, accuracy, objectivity and privacy) and used as the basis for determining the fundamental quality of Web-located health information. Furthermore, the researchers add a value dimension to the framework by using a weighted average technique that allows information features to be scored proportionally, a feature that other assessment frameworks tend to overlook. The framework was used to test 56 health information documents published on the Web, concluding that only four pages addressed all the core criteria proposed in the framework. The study also found that a relatively high number of commercial health sites intermixed health information with product promotion and advertising. The study was exploratory and, because sampling was not probabilistic, it is difficult to claim generalisability at this stage. However, some notable results identified in this study may serve as the foundations for future research.
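
    The weighted average technique mentioned above can be illustrated with a short, hedged sketch. The feature names, weights and yes/no answers below are invented for the example and are not the instrument published in the paper; the snippet only shows how proportional weighting turns affirmative-response checks into a single quality score.

        # Illustrative sketch, not the paper's published instrument: weighted-average
        # scoring of affirmative-response (yes/no) quality features. Feature names
        # and weights are hypothetical.

        def weighted_quality_score(responses: dict, weights: dict) -> float:
            """Return a 0-1 score: the weight-proportional share of features answered 'yes'."""
            total_weight = sum(weights.values())
            achieved = sum(weights[f] for f, answered_yes in responses.items() if answered_yes)
            return achieved / total_weight if total_weight else 0.0

        # Example: a page states authorship and a last-updated date, but cites no
        # sources and has no privacy policy.
        weights = {"authorship_stated": 3.0, "last_updated": 2.0,
                   "sources_cited": 3.0, "privacy_policy": 2.0}
        responses = {"authorship_stated": True, "last_updated": True,
                     "sources_cited": False, "privacy_policy": False}

        print(weighted_quality_score(responses, weights))  # 0.5

    A simple arithmetic mean would treat every feature as equally important; weighting lets a heavily weighted feature pull the score down more than a lightly weighted one, which is the value dimension the framework adds.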

    Metadata quality issues in learning repositories

    Metadata lies at the heart of every digital repository project in the sense that it defines and drives the description of the digital content stored in the repositories. Metadata allows content to be successfully stored, managed and retrieved, but also preserved in the long term. Despite the widely recognized importance of metadata in digital repositories, studies indicate that metadata quality is relatively low in most digital repositories. Metadata quality is loosely defined as "fitness for purpose", meaning that low-quality metadata cannot fulfill its purpose, which is to allow for the successful storage, management and retrieval of resources. In practice, low metadata quality leads to ineffective searches that recall the wrong resources or, even worse, no resources at all, making content invisible to the intended user, that is, the "client" of each digital repository. The present dissertation approaches this problem by proposing a comprehensive metadata quality assurance method, namely the Metadata Quality Assurance Certification Process (MQACP). The basic idea of this dissertation is to propose a set of methods that can be deployed throughout the lifecycle of a repository to ensure that metadata generated by content providers are of high quality. These methods have to be straightforward, simple to apply, and produce measurable results. They also have to be adaptable with minimum effort so that they can be used easily in different contexts. This set of methods was described analytically, taking into account the actors needed to apply them, describing the tools needed and defining the anticipated outcomes. In order to test our proposal, we applied it to a Learning Federation of repositories, from day one of its existence until it reached maturity and regular operation. We supported the metadata creation process throughout the different phases of the repositories involved by setting up specific experiments using the methods and tools of the MQACP. Throughout each phase, we measured the resulting metadata quality to certify that the anticipated improvement in metadata quality actually took place. Lastly, throughout these phases, the cost of applying the MQACP was measured to provide a comparison basis for future applications. Based on the success of this first application, we decided to validate the MQACP approach by applying it to two further cases, a Cultural and a Research Federation of repositories. This allowed us to examine the transferability of the approach to cases that present some similarities with the initial one but also significant differences. The results showed that the MQACP was successfully adapted to the new contexts with minimal adaptation, producing similar results at comparable costs. In addition, looking closer at the common experiments carried out in each phase of each use case, we were able to identify interesting patterns in the behavior of content providers that can be researched further. The dissertation concludes with a set of future research directions arising from the cases examined, which can be explored to support the next version of the MQACP in terms of the methods deployed, the tools used to assess metadata quality, and the cost analysis of the MQACP methods.
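
    The abstract does not specify a particular formula for metadata quality; as a hedged illustration only, one proxy commonly used in the metadata-quality literature is field completeness, the fraction of a schema's required fields that a record actually fills. The required fields and the sample record below are hypothetical and are not drawn from the MQACP.

        # Illustrative sketch (not the MQACP): field completeness as a simple
        # metadata-quality proxy. The required fields and the sample record are
        # hypothetical.

        REQUIRED_FIELDS = ["title", "description", "creator", "language", "keywords", "rights"]

        def completeness(record: dict) -> float:
            """Fraction of required fields that are present and non-empty."""
            filled = sum(1 for field in REQUIRED_FIELDS if record.get(field))
            return filled / len(REQUIRED_FIELDS)

        record = {
            "title": "Introduction to Photosynthesis",
            "description": "A short learning object for secondary-school biology.",
            "creator": "",                              # empty author field lowers the score
            "language": "en",
            "keywords": ["biology", "photosynthesis"],
            "rights": None,                             # unspecified licence lowers the score
        }

        print(f"completeness = {completeness(record):.2f}")  # 0.67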

    TV in the Age of the Internet: Information Quality of Science Fiction TV Fansites

    Thesis (Ph.D.), Indiana University, Information Science, 2011. Communally created Web 2.0 content on the Internet has begun to compete with information provided by traditional gatekeeper institutions, such as academic journals, medical professionals, and large corporations. On the one hand, such gatekeepers need to understand the nature of this competition and to try to ensure that the general public is not endangered by poor-quality information. On the other hand, advocates of free and universal access to basic social services have argued that communal efforts can provide equally good or better-quality versions of commonly needed resources. This dissertation arises from these needs to understand the nature and quality of the information being produced on such websites. Website-oriented information quality (IQ) literature spans at least 15 different academic fields; a survey of this literature identified two types of IQ: perceptual and artifactual fitness-related, and representational accuracy- and completeness-related. The current project studied websites in terms of all of these except perceptual fitness. This study may be the only one of its kind to have targeted fansites: websites made by fans of a mass media franchise. Despite the Internet becoming a primary means by which millions of people consume and co-produce their entertainment, little academic attention has been paid to the IQ of sites about the mass media. For this study, the four central non-studio-affiliated sites about a highly popular and fan-engaging science fiction television franchise, Stargate, were chosen, and their IQ was examined across sites of different sizes and with different editorial and business models. Samples as exhaustive as possible were collected from each site. Based on 21 relevant variables from the IQ literature, four qualitative and 17 exploratory statistical analyses were conducted. Key findings include: five possibly new IQ criteria; smaller sites being concerned more with pleasing connoisseur fans than the general public; larger sites being targeted towards older users; professional editors serving their own interests more than users'; wikis' greater user freedom attracting more invested and balanced writers; for-profit sites being more imposing upon, and less protecting of, users than non-profit sites; and the emergence of common writing styles, themes, data fields, advertisement types, linking strategies, and page types.

    Racist disinformation on the Web: the role of anti-racist sites in providing balance

    This thesis examines the problem of racist disinformation on the World Wide Web and the role played by anti-racist sites in providing balance. The disinformation capacity of the Web is an important issue for those who provide access to the Web, for content providers, and for Web users. An understanding of the issues involved, including the characteristics of racist disinformation, is vital if these groups are to make informed decisions about how to deal with such Web content. However, in Australia especially, there has been limited research into racism in general and racism on the Web in particular. To address this deficiency, perspectives from the fields of race relations and information science are integrated using a critical realist methodology to provide new insights. Through an extensive examination of the literature, including Australian media reports, terms are delineated and the problem is situated within its historical, cultural and political environment. Alternatives for tackling racist disinformation are evaluated, and the issues involved in the provision and utilisation of balancing information are discussed. The literature analysis underpins an assessment of anti-racist sites using three data collection methods to gain multiple perspectives on the balancing qualities of these sites: an assessment of anti-racist website longevity, an assessment of website reliability, and a questionnaire administered to the content providers of anti-racist websites. This thesis provides a synthesis of the academic literature and media coverage related to Australian racism and racist disinformation on the Web, leading to new insights about the range and depth of the issues concerned. An analysis of the data collected concludes that, while anti-racist websites take on diverse roles in tackling racism, few provide content directly to balance racist disinformation on the Web. Approaches that seek to control or censor the Web are ineffective and problematic, but balancing disinformation is not in itself an adequate solution.