
    How to ask sensitive questions in conservation: A review of specialized questioning techniques

    Tools for social research are critical for developing an understanding of conservation problems and assessing the feasibility of conservation actions. Social surveys are an essential tool, frequently applied in conservation both to assess people’s behaviour and to understand its drivers. However, little attention has been given to the strengths and weaknesses of different survey tools. When topics of conservation concern are illegal or otherwise sensitive, data collected using direct questions are likely to be affected by non-response and social desirability biases, reducing their validity. These sources of bias associated with asking direct questions on sensitive topics have long been recognised in the social sciences but have been poorly considered in conservation and natural resource management. We reviewed specialized questioning techniques developed in a number of disciplines specifically for investigating sensitive topics. These methods ensure respondent anonymity, increase willingness to answer, and, critically, make it impossible to directly link incriminating data to an individual. We describe each method and report its main characteristics, such as data requirements, possible data outputs, and the availability of evidence that it can be adapted for use in illiterate communities, and we summarize the main advantages and disadvantages of each. Recommendations for their application in conservation are given. We suggest that the conservation toolbox should be expanded by incorporating specialized questioning techniques developed specifically to increase response accuracy. By considering the limitations of each survey technique, we will ultimately contribute to more effective evaluations of conservation interventions and more robust policy decisions.
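    One widely used example of such a technique is the randomized response method in its forced-response design. Below is a minimal sketch of its estimator; the die-roll probabilities and the helper function are illustrative assumptions, not details taken from the review.

```python
# Forced-response randomized response design: each respondent privately rolls
# a die. On a 1 they must answer "yes", on a 6 they must answer "no", and on
# 2-5 they answer the sensitive question truthfully. The surveyor never knows
# which case applied, so no single "yes" incriminates anyone, but the true
# prevalence is still recoverable from the observed proportion of "yes" answers.

def rrt_prevalence(n_yes, n_total, p_truth=4/6, p_forced_yes=1/6):
    """Estimate true prevalence pi of a sensitive behaviour.

    Observed P(yes) = p_truth * pi + p_forced_yes, so invert:
        pi_hat = (lambda_hat - p_forced_yes) / p_truth
    """
    lam = n_yes / n_total              # observed proportion of "yes" answers
    pi_hat = (lam - p_forced_yes) / p_truth
    return min(max(pi_hat, 0.0), 1.0)  # clamp to [0, 1] for small samples

# Example: 120 "yes" answers out of 400 respondents
print(rrt_prevalence(120, 400))  # ~0.20 estimated true prevalence
```

    The key property is that any individual “yes” may have been forced by the die, so incriminating data cannot be linked to a respondent, yet the population-level estimate remains unbiased.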

    REF 2014: assessment framework and guidance on submissions


    RankME: Reliable Human Ratings for Natural Language Generation

    Human evaluation for natural language generation (NLG) often suffers from inconsistent user ratings. While previous research tends to attribute this problem to individual user preferences, we show that the quality of human judgements can also be improved by experimental design. We present a novel rank-based magnitude estimation method (RankME), which combines the use of continuous scales and relative assessments. We show that RankME significantly improves the reliability and consistency of human ratings compared to traditional evaluation methods. In addition, we show that it is possible to evaluate NLG systems according to multiple, distinct criteria, which is important for error analysis. Finally, we demonstrate that RankME, in combination with Bayesian estimation of system quality, is a cost-effective alternative for ranking multiple NLG systems.

    Comment: Accepted to NAACL 2018 (The 2018 Conference of the North American Chapter of the Association for Computational Linguistics).
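    A minimal sketch of the general idea behind magnitude estimation with per-rater normalization follows. The geometric-mean normalization and the toy ratings are assumptions for illustration, not necessarily the paper’s exact procedure.

```python
# Magnitude estimation: raters score outputs on an unbounded continuous scale,
# so each rater uses their own personal scale. Normalizing each rater's scores
# by that rater's geometric mean cancels the scale out, after which systems
# can be compared (and ranked) on the normalized values.
import math
from collections import defaultdict

# ratings[rater] = {system_name: magnitude_estimate}, all estimates > 0
ratings = {
    "r1": {"sysA": 120.0, "sysB": 60.0, "sysC": 90.0},
    "r2": {"sysA": 40.0,  "sysB": 10.0, "sysC": 20.0},
}

normalized = defaultdict(list)
for rater, scores in ratings.items():
    # geometric mean of this rater's scores, so their personal scale cancels
    gmean = math.exp(sum(math.log(s) for s in scores.values()) / len(scores))
    for system, score in scores.items():
        normalized[system].append(score / gmean)

# rank systems by mean normalized score (higher = better)
ranking = sorted(normalized, key=lambda s: -sum(normalized[s]) / len(normalized[s]))
print(ranking)  # ['sysA', 'sysC', 'sysB']
```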

    Monitoring Compliance with Open Access policies

    In the last few years, academic communities have seen an increase in the number of Open Access (OA) policies being adopted at the institutional and funder levels. In parallel to policy implementation, institutions and funders have also been engaged in developing mechanisms to monitor academics’ and researchers’ compliance with the existing OA policies. This study highlights a few of the cases where compliance is being effectively monitored by institutions and funders. In the first section, Open Access is briefly overviewed and the rationale for monitoring OA policy compliance is explained. The second section looks at best practices in monitoring compliance with OA policies by funders and institutions; the case studies reflect on compliance with the UK Funding Councils’ and the US National Institutes of Health’s OA policies. The third section makes recommendations on what processes and procedures universities and funders should adopt to monitor compliance with their OA policies. The final section recapitulates some of the key ideas related to monitoring policy compliance.
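    As a concrete illustration of the kind of automated check such monitoring mechanisms can use, the sketch below queries the Unpaywall API (a real, free service) for the OA status of each DOI in a publication list. The DOIs and email address are placeholders, and error handling is kept minimal.

```python
# Check the Open Access status of a list of DOIs via the Unpaywall API.
# Unpaywall requires a contact email as a query parameter; the response JSON
# includes an "is_oa" boolean for each DOI.
import requests

DOIS = ["10.1000/example1", "10.1000/example2"]  # hypothetical DOIs
EMAIL = "oa-monitor@example.ac.uk"               # placeholder contact address

for doi in DOIS:
    resp = requests.get(f"https://api.unpaywall.org/v2/{doi}", params={"email": EMAIL})
    if resp.ok:
        record = resp.json()
        status = "OA" if record.get("is_oa") else "not OA"
        print(f"{doi}: {status}")
    else:
        print(f"{doi}: lookup failed ({resp.status_code})")
```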

    Amabile’s consensual assessment technique: Why has it not been used more in design creativity research?

    Amabile’s Consensual Assessment Technique (CAT) has been described as the “gold standard” of creativity assessment, has been used extensively within creativity research, and is seen as the most popular method of assessing creative outputs. Its discussion within scholarly research has continued to grow year by year. However, no systematic review of the CAT has been undertaken since 1996, and none appears to have been carried out within design journals, whether in relation to design specifically or to the creative industries more broadly. Yet the consensus of domain judges is a prevalent methodology in design education and professional design awards. This paper presents the findings from a systematic literature review of the CAT covering works from 1982 to 2011. It details the key journals and authors publishing or citing CAT-related studies, highlights the limited number of CAT studies within design journals, and suggests why this may be the case.
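    The CAT’s defining check is inter-judge agreement: several expert judges rate each artefact independently, and reliability is conventionally summarized with Cronbach’s alpha, treating judges as items. A minimal sketch with invented ratings:

```python
# Compute Cronbach's alpha across judges for a CAT-style rating exercise.
# Rows are artefacts, columns are judges; ratings are on a 1-7 creativity
# scale. The data below are invented for illustration.
import statistics

ratings = [
    [5, 6, 5],
    [2, 3, 2],
    [6, 6, 7],
    [3, 4, 3],
    [4, 4, 5],
]

k = len(ratings[0])                                           # number of judges
judge_vars = [statistics.variance(col) for col in zip(*ratings)]
total_var = statistics.variance([sum(row) for row in ratings])
alpha = (k / (k - 1)) * (1 - sum(judge_vars) / total_var)
print(f"Cronbach's alpha across {k} judges: {alpha:.2f}")     # ~0.96 here
```

    High alpha indicates that the judges converge on a shared consensus, which is the empirical justification for treating their pooled ratings as a creativity measure.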

    Assessing the impact of health technology assessment in the Netherlands

    Copyright © Cambridge University Press 2008

    Objectives: Investments in health research should lead to improvements in health and health care. This is also the remit of the main HTA program in the Netherlands. The aims of this study were to assess whether the results of this program have led to such improvements and to analyze how best to assess the impact of health research.

    Methods: We assessed the impact of individual HTA projects by adapting the “payback framework” developed in the United Kingdom. We conducted dossier reviews and sent a survey to the principal investigators of forty-three projects awarded between 2000 and 2003. We then provided an overview of documented output and outcome, which was assessed by ten HTA experts using a scoring method. Finally, we conducted five case studies using information from additional dossier review and semistructured key informant interviews.

    Results: The findings confirm that the payback framework is a useful approach for assessing the impact of HTA projects. We identified over 101 peer-reviewed papers, more than twenty-five PhDs, citations of research in guidelines (six projects), and implementation of new treatment strategies (eleven projects). The case studies provided greater depth and understanding about the levels of impact that arise and why and how they have been achieved.

    Conclusions: It is generally too early to determine whether the HTA program led to actual changes in healthcare policy and practice. However, the results can be used as a baseline measurement for future evaluation and can help funding organizations or HTA agencies consider how to assess impact, possibly routinely. This, in turn, could help inform research strategies and justify expenditure for health research.

    This research was funded by ZonMw, the Netherlands organization for health research and development (project 945-15-001).
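    The expert scoring step described in the methods might look something like the sketch below, which averages expert scores across the payback framework’s impact categories (following Buxton and Hanney). The 0–4 scale, the aggregation rule, and the data are illustrative assumptions, not the study’s exact method.

```python
# Average expert scores per project across payback framework categories.
from statistics import mean

CATEGORIES = [
    "knowledge production",
    "research targeting and capacity building",
    "informing policy and product development",
    "health and health sector benefits",
    "broader economic benefits",
]

# scores[expert][project] = {category: score on a 0-4 scale} (invented data)
scores = {
    "expert1": {"project_A": dict(zip(CATEGORIES, [4, 2, 3, 1, 0]))},
    "expert2": {"project_A": dict(zip(CATEGORIES, [3, 2, 4, 2, 1]))},
}

for project in {p for expert in scores.values() for p in expert}:
    per_category = {
        c: mean(expert[project][c] for expert in scores.values())
        for c in CATEGORIES
    }
    overall = mean(per_category.values())
    print(project, round(overall, 2), per_category)
```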