
    Constructing experimental indicators for Open Access documents

    The ongoing paradigm change in the scholarly publication system ('science is turning to e-science') makes it necessary to construct alternative evaluation criteria and metrics which appropriately take into account the unique characteristics of electronic publications and other research output in digital formats. Today, major parts of scholarly Open Access (OA) publications and the self-archiving area are not well covered in the traditional citation and indexing databases. The growing share and importance of freely accessible research output demand new approaches and metrics for measuring and evaluating these new types of scientific publications. In this paper we propose a simple quantitative method which establishes indicators by measuring the access/download patterns of OA documents and other web entities on a single web server. The experimental indicators (search engine, backlink and direct access indicators) are constructed from standard local web usage data. This new type of web-based indicator is designed to meet the demand for better study and evaluation of the accessibility, visibility and interlinking of openly accessible documents. We conclude that e-science will need new, stable e-indicators. Comment: 9 pages, 3 figures
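    The referrer-based classification described in the abstract can be sketched roughly as follows. This is a minimal illustration of how server log referrers might be mapped to the three indicator classes, not the authors' actual method; the search-engine list and the `OWN_HOST` value are hypothetical placeholders:

    ```python
    from urllib.parse import urlparse

    SEARCH_ENGINES = {"google.com", "bing.com", "duckduckgo.com"}  # illustrative list
    OWN_HOST = "repository.example.org"  # hypothetical repository server

    def classify_access(referrer):
        """Map one request's Referer header to an experimental indicator class."""
        if not referrer:
            return "direct"            # no referrer: typed URL, bookmark, etc.
        host = urlparse(referrer).netloc.lower().removeprefix("www.")
        if host in SEARCH_ENGINES:
            return "search_engine"     # arrived via a search-engine result page
        if host and host != OWN_HOST:
            return "backlink"          # followed a link from another site
        return "internal"              # navigation within the same server

    def indicator_counts(referrers):
        """Aggregate per-class access counts over a sequence of log referrers."""
        counts = {}
        for r in referrers:
            k = classify_access(r)
            counts[k] = counts.get(k, 0) + 1
        return counts
    ```

    Aggregating these counts per document over a time window would yield the kind of per-server access-pattern indicators the paper proposes.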

    PERFORMANCE EVALUATION ON QUALITY OF ASIAN AIRLINES WEBSITES – AN AHP APPROACH

    In recent years, many researchers have devoted their efforts to the issue of website quality. The concept of quality comprises many criteria: a quality-of-service perspective, a user perspective, a content perspective, and a usability perspective. Because of its potential instant worldwide audience, a website's quality and reliability are crucial. The special nature of web applications and websites poses unique software testing challenges. Webmasters, web application developers, and website quality assurance managers need tools and methods that can match up to these new needs. This research conducts tests to measure the website quality of Asian flag-carrier airlines via online web diagnostic tools. We propose a methodology for determining and evaluating the best airline websites based on many criteria of website quality. The approach has been implemented using the Analytical Hierarchy Process (AHP): pairwise comparisons on a measurement scale generate the weights for the criteria, which provides a fairer preference ordering among them. The results of this study confirm that Asian airline websites neglect performance and quality criteria.
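    The AHP weight derivation mentioned above can be illustrated with a small sketch. This uses the geometric-mean method, one common way to derive AHP weights from a pairwise-comparison matrix; the three criteria and the comparison values are hypothetical, not taken from the paper:

    ```python
    import math

    def ahp_weights(matrix):
        """Derive criterion weights from an AHP pairwise-comparison matrix
        using the geometric-mean (row geometric mean, normalized) method."""
        n = len(matrix)
        gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
        total = sum(gmeans)
        return [g / total for g in gmeans]

    # Hypothetical 3-criterion example: performance vs. quality vs. usability.
    # matrix[i][j] = how much more important criterion i is than criterion j
    # on Saaty's 1-9 scale; the matrix is reciprocal by construction.
    pairwise = [
        [1,     3,   5],
        [1 / 3, 1,   2],
        [1 / 5, 1 / 2, 1],
    ]
    weights = ahp_weights(pairwise)  # roughly [0.65, 0.23, 0.12]
    ```

    A full AHP application would also check the consistency ratio of the matrix before accepting the weights; this sketch shows only the weight-derivation step.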

    Measuring and comparing the reliability of the structured walkthrough evaluation method with novices and experts

    Effective evaluation of websites for accessibility remains problematic. Automated evaluation tools still require a significant manual element. There is also a significant expertise and evaluator effect. The Structured Walkthrough method is the translation of a manual, expert accessibility evaluation process adapted for use by novices. The method is embedded in the Accessibility Evaluation Assistant (AEA), a web accessibility knowledge management tool. Previous trials examined the pedagogical potential of the tool when incorporated into an undergraduate computing curriculum. The results of the evaluations carried out by novices yielded promising, consistent levels of validity and reliability. This paper presents the results of an empirical study that compares the reliability of accessibility evaluations produced by two groups (novices and experts). The main results of this study indicate that overall reliability of expert evaluations was 76% compared to 65% for evaluations produced by novices. The potential of the Structured Walkthrough method as a useful and viable tool for expert evaluators is also examined. Copyright 2014 ACM

    Utilising content marketing metrics and social networks for academic visibility

    There are numerous assumptions on research evaluation in terms of the quality and relevance of academic contributions. Researchers are becoming increasingly acquainted with bibliometric indicators, including citation analysis, impact factor, h-index, webometrics and academic social networking sites. In this light, this chapter presents a review of these concepts as it considers relevant theoretical underpinnings related to the content marketing of scholars. This contribution critically evaluates previous papers that revolve around the subject of academic reputation as it deliberates on individual researchers' personal branding. It also explains how metrics are currently being used to rank the academic standing of journals as well as higher educational institutions. In a nutshell, this chapter implies that scholarly impact depends on a number of factors, including accessibility of publications, peer review of academic work, and social networking among scholars. Peer-reviewed.

    Measuring internet activity: a (selective) review of methods and metrics

    Two decades after the birth of the World Wide Web, more than two billion people around the world are Internet users. The digital landscape is littered with hints that the affordances of digital communications are being leveraged to transform life in profound and important ways. The reach and influence of digitally mediated activity grow by the day and touch upon all aspects of life, from health, education, and commerce to religion and governance. This trend demands that we seek answers to the biggest questions about how digitally mediated communication changes society and the role of different policies in helping or hindering the beneficial aspects of these changes. Yet despite the profusion of data the digital age has brought upon us—we now have access to a flood of information about the movements, relationships, purchasing decisions, interests, and intimate thoughts of people around the world—the distance between the great questions of the digital age and our understanding of the impact of digital communications on society remains large. A number of ongoing policy questions have emerged that beg for better empirical data and analyses upon which to base wider and more insightful perspectives on the mechanics of social, economic, and political life online. This paper seeks to describe the conceptual and practical impediments to measuring and understanding digital activity and highlights a sample of the many efforts to fill the gap between our incomplete understanding of digital life and the formidable policy questions related to developing a vibrant and healthy Internet that serves the public interest and contributes to human wellbeing. Our primary focus is on efforts to measure Internet activity, as we believe obtaining robust, accurate data is a necessary and valuable first step that will lead us closer to answering the vitally important questions of the digital realm.
Even this step is challenging: the Internet is difficult to measure and monitor, and there is no simple aggregate measure of Internet activity—no GDP, no HDI. In the following section we present a framework for assessing efforts to document digital activity. The next three sections offer a summary and description of many of the ongoing projects that document digital activity, with two final sections devoted to discussion and conclusions.

    What are the Best Processes for Using Metrics to Ensure Organizational Optimization Needs of our HR Clients are Being Met?

    A major insurance company currently utilizes HR partners to serve needs within the organization. A key challenge is determining how to drive organizational optimization and how to measure the effectiveness of HR initiatives in accomplishing this goal.

    What Should be in Place to Assess the Effectiveness or Return on Investment of a Company's Leadership Development Programs?

    [Excerpt] Leadership is vital to a company's bottom line, yet only 41% of C-suite leaders believe that their organizations' leadership development programs (LDPs) are of high or very high quality. However, only 18% of companies are gathering relevant business impact metrics, key determinants for measuring a program's effectiveness and ROI. Many organizations focus on the Kirkpatrick model (reaction, learning, behavior, and results) to evaluate learning; it is critical to extend this framework to include return on investment. A focus on operational and strategic metrics that drive results for both the business and the individual makes it possible to accurately measure LDPs spanning the entry to executive levels and to concentrate on relevant indicators.
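    The return-on-investment extension of the Kirkpatrick model reduces to a simple formula, ROI = (benefits − costs) / costs. A minimal sketch, with entirely hypothetical figures (the abstract gives no actual program numbers):

    ```python
    def ldp_roi(program_benefits, program_costs):
        """ROI of a leadership development program, as a percentage of cost.
        Benefits here means the monetized business impact attributed to the
        program; isolating that attribution is the hard measurement problem."""
        return (program_benefits - program_costs) / program_costs * 100

    # Hypothetical: a program costing $200,000 credited with $260,000 of impact.
    roi = ldp_roi(program_benefits=260_000, program_costs=200_000)  # 30.0 (%)
    ```

    The formula itself is trivial; as the excerpt notes, the real work lies in gathering the business impact metrics that make the benefits figure credible.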

    The challenge of the e-Agora metrics: the social construction of meaningful measurements

    'How are we progressing towards achieving sustainable development in the EU's desired knowledge society?' Current lists of indicators, indices and assessment tools, which have been developed for measuring and displaying performance at different spatial levels, show that progress has been made. However, a very large number of indicators, perhaps the majority, particularly those relating to social and political issues, remain difficult to capture. Issues such as intergenerational equity, aesthetics and governance fall into this category. 'How is it possible to measure these, capture their full meaning, and represent this back meaningfully to disparate groups of stakeholders in a society?' This paper discusses these issues, highlighting the need for new methods and an alternative view of how to go about the capture and representation of the types of data with which we need to work.

    Can Inclusion be Measured in a Quantitative Way, Just Qualitative, or a Combination?

    [Excerpt] Deloitte's 2017 Global Human Capital Trends study revealed that the number of executives who cited inclusion as a top priority rose 32% since 2014. For this study, inclusion is defined as the degree to which an employee perceives that he or she is a valued member of the work group. It is important to discern that inclusion is not autonomous from belonging; both are key elements in company initiatives. Belonging, from the employee's point of view, is: "I can be authentic, I matter, and I am essential to my team." Workgroup diversity is a well-researched topic, but diversity is taken a step further with inclusion. Companies are starting to reconsider their practices in order to measure it. Forbes named 2016 the year of Diversity and Inclusion. Deloitte Australia research shows that inclusive teams outperform their peers by 80% in team-based assessments. Deloitte also confirms that companies that embrace diversity and inclusion in all aspects of their business outperform their peers. However, companies acknowledge that inclusion and belonging are complex constructs to define and measure. With increasing investments, measuring inclusion is vital to understanding whether employees feel a sense of inclusion and belonging within their company.