
    The politicisation of evaluation: constructing and contesting EU policy performance

    Although systematic policy evaluation has been conducted for decades and has been growing strongly within the European Union (EU) institutions and in the member states, it remains largely underexplored in political science literatures. Extant work in political science and public policy typically focuses on elements such as agenda setting, policy shaping, decision making, or implementation rather than evaluation. Although individual pieces of research on evaluation in the EU have started to emerge, most often regarding policy “effectiveness” (one criterion among many in evaluation), a more structured approach is currently missing. This special issue aims to address this gap in political science by focusing on four key focal points: evaluation institutions (including rules and cultures), evaluation actors and interests (including competencies, power, roles and tasks), evaluation design (including research methods and theories, and their impact on policy design and legislation), and finally, evaluation purpose and use (including the relationships between discourse and scientific evidence, political attitudes and strategic use). The special issue considers how each of these elements contributes to an evolving governance system in the EU, where evaluation is playing an increasingly important role in decision making

    Designing citizen science tools for learning: lessons learnt from the iterative development of nQuire

    This paper reports on a 4-year research and development case study about the design of citizen science tools for inquiry learning. It details the process of iterative pedagogy-led design and evaluation of the nQuire toolkit, a set of web-based and mobile tools scaffolding the creation of online citizen science investigations. The design involved an expert review of inquiry learning and citizen science, combined with user experience studies involving more than 200 users. These have informed a concept that we have termed ‘citizen inquiry’, which engages members of the public alongside scientists in setting up, running, managing or contributing to citizen science projects with a main aim of learning about the scientific method through doing science by interaction with others. A design-based research (DBR) methodology was adopted for the iterative design and evaluation of citizen science tools. DBR was focused on the refinement of a central concept, ‘citizen inquiry’, by exploring how it can be instantiated in educational technologies and interventions. The empirical evaluation and iteration of technologies involved three design experiments with end users, user interviews, and insights from pedagogy and user experience experts. Evidence from the iterative development of nQuire led to the production of a set of interaction design principles that aim to guide the development of online, learning-centred, citizen science projects. Eight design guidelines are proposed: users as producers of knowledge, topics before tools, mobile affordances, scaffolds to the process of scientific inquiry, learning by doing as key message, being part of a community as key message, every visit brings a reward, and value users and their time

    The Use of Focus Groups in Design Science Research

    The majority of research within information systems (IS) may be categorized into two key perspectives, natural science and design science. While natural science seeks to develop and verify theories that explain phenomena, design science attempts to solve human and organizational problems through the creation of innovative artefacts. An important aspect of design science investigations is the evaluation of the design artefact. Focus groups are a well-established research approach in the social sciences. However, focus groups are rarely mentioned in the IS literature addressing design science evaluation methods. Given the increased interest in design science research, this paper reports on the successful application of focus groups in the evaluation of an IS design artefact. The paper discusses the objectives and design of the focus group sessions, participant selection, the role of the facilitator, the facility used for the sessions, and the data analysis procedures. The paper then provides a set of guidelines that should assist other IS design science researchers with focus group-based evaluation

    A Model of an E-Learning Web Site for Teaching and Evaluating Online

    This research sets out to design an e-learning web site for the course "Object Oriented Programming" (OOP), offered to level-four students in the Computer Science Department (CSD). The course is to be taught online (through the web), and a programme is then to be designed to evaluate students' performance electronically, allowing a comparison between online teaching with e-evaluation and traditional methods of evaluation. The research also seeks to lay out a forward-looking view of how future online teaching and electronic evaluation should be conducted, which underlines the importance of this research

    Introduction: Future pathways for science policy and research assessment: metrics vs peer review, quality vs impact

    Copyright © 2007 Beech Tree Publishing. The idea for this special issue arose from observing contrary developments in the design of national research assessment schemes in the UK and Australia during 2006 and 2007. Alternative pathways were being forged, determined, on the one hand, by the perceived relative merits of 'metrics' (quantitative measures of research performance) and peer judgement and, on the other hand, by the value attached to scientific excellence ('quality') versus usefulness ('impact'). This special issue presents a broad range of provocative academic opinion on preferred future pathways for science policy and research assessment. It unpacks the apparent dichotomies of metrics vs peer review and quality vs impact, and considers the hazards of adopting research evaluation policies in isolation from wider developments in scientometrics (the science of research evaluation) and divorced from the practical experience of other nations (policy learning)

    On Legitimacy: Designer as minor scientist

    User experience research has recently been characterized as falling into two camps, model-based and design-based, with contrasting approaches to measurement and evaluation. This paper argues that the two positions can be constructed in terms of Deleuze & Guattari’s “royal science” and “minor science”. It is argued that the “reinvention” of cultural probes is an example of a minor scientific methodology reconceptualised as a royal scientific “technology”. The distinction between royal and minor science provides insights into the nature of legitimacy within

    A Design Science Based Evaluation Framework for Patterns

    Patterns were originally developed in the field of architecture as a mechanism for communicating good solutions to recurring classes of problems. Since then, many researchers and practitioners have created patterns to describe effective solutions to problems associated with disparate areas such as virtual project management, human-computer interaction, software development and engineering, and design science research. We believe that the development of patterns is a design science activity in which an artifact (i.e., a pattern) is created to communicate about and improve upon the current state-of-practice. Design science research has two critical components, creation and evaluation of an artifact. While many patterns have been created, few, if any, have been evaluated in this sense. In this paper, we propose a framework to evaluate patterns in any domain and provide examples of how to use the evaluation framework. This process of evaluation could help researchers refine extant patterns and improve the possibility of creating sustainable best practices for a given domain. We believe this evaluation framework begins an important dialogue related to the evaluation of patterns as artifacts of design science research. We draw upon the literature associated with patterns, evaluation of design science research, and research methods to develop this framework for evaluating patterns in a more consistent and rigorous manner

    Critique [of Ethical Problems in Evaluation Research]

    “Ethical Problems in Evaluation Research” by Elisabeth J. Johnson summarizes some of the salient ethical concerns in social science research such as the relative positions of power between researcher and subject, confidentiality and privacy, and “political interests” or the use of research findings by sponsors. The author concludes with proposals and cautions; of special relevance to readers of this journal is Kelman’s “participatory research” which enables people being studied to participate in the research design and implementation

    Design Science Evaluation – Example of Experimental Design

    Evaluation plays a major part in Design Science Research; however, researchers provide very few examples of how this part could actually be conducted at the operational level. To address this need, we present an example of the utility evaluation of a design science artifact using an experimental design. We investigate whether an artifact, used as a treatment for process development in a design science research methodology, improved the representational information quality of design science artifacts. Each practitioner was presented with two artifacts, one serving as the control condition, in a basic two-condition repeated measures design. The improvement was measured with a paper questionnaire completed after examining each artifact. The paper introduces DS researchers to the numerous benefits that a simple experiment can provide
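
    As an illustration of the kind of analysis such a two-condition repeated measures design can support (the abstract does not specify an analysis method), the Python sketch below compares hypothetical questionnaire ratings of the two artifacts with a paired t-test. The scores, the rating scale, and the choice of scipy's paired test are assumptions made here for illustration only, not details taken from the paper.

    # Hypothetical sketch: analysing a two-condition repeated measures design.
    # The scores below are invented questionnaire ratings (e.g., a 1-7 scale) of
    # representational information quality, one pair per practitioner.
    from scipy import stats

    control_scores = [4, 5, 3, 4, 5, 4, 3, 5]    # ratings of the control artifact
    treatment_scores = [5, 6, 4, 5, 6, 5, 4, 6]  # ratings of the treated artifact

    # Paired (repeated measures) t-test: each practitioner examined both
    # artifacts, so the two sets of ratings are dependent and compared pairwise.
    t_stat, p_value = stats.ttest_rel(treatment_scores, control_scores)

    mean_diff = sum(t - c for t, c in zip(treatment_scores, control_scores)) / len(control_scores)
    print(f"mean improvement = {mean_diff:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")

    A paired test is one natural choice here because the same practitioner rates both artifacts; with a larger sample or non-normal ratings, a Wilcoxon signed-rank test would be an alternative under the same assumptions.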