15 research outputs found

    Negotiating Superfund Settlement Agreements

    The Animal Welfare Act: Still a Cruelty to Animals

    Breaking Barriers and Ending the Gauntlet

    Making Research Data Accessible

    This chapter argues that these benefits will accrue more quickly, and will be more significant and more enduring, if researchers make their data “meaningfully accessible.” Data are meaningfully accessible when they can be interpreted and analyzed by scholars far beyond those who generated them. Making data meaningfully accessible requires that scholars take the appropriate steps to prepare their data for sharing and avail themselves of the increasingly sophisticated infrastructure for publishing and preserving research data. The better other researchers can understand shared data, and the more researchers who can access them, the more those data will be re-used for secondary analysis, producing knowledge. Likewise, the richer an understanding an instructor and her students can gain of the shared data being used to teach and learn a particular research method, the more useful those data are for that pedagogical purpose. And the more a scholar who is evaluating the work of another can learn about the evidence that underpins its claims and conclusions, the better their ability to identify problems and biases in data generation and analysis, and the better informed, and thus the stronger, the endorsement of the work they can offer.

    Impact Metrics

    Virtually every evaluative task in the academy involves some sort of metric (Elkana et al. 1978; Espeland & Sauder 2016; Gingras 2016; Hix 2004; Jensenius et al. 2018; Muller 2018; Osterloh & Frey 2015; Todeschini & Baccini 2016; Van Noorden 2010; Wilsdon et al. 2015). One can decry this development, and inveigh against its abuses and its over-use (as many of the foregoing studies do). Yet, without metrics, we would be at pains to render judgments about scholars, published papers, applications (for grants, fellowships, and conferences), journals, academic presses, departments, universities, or subfields. Of course, we also undertake such judgments ourselves through a deliberative process that involves reading the work under evaluation. This is the traditional approach of peer review, and no one would advocate a system of evaluation that is entirely metric-driven. Even so, reading is time-consuming and inherently subjective; it is, after all, the opinion of one reader (or several readers, if there is a panel of reviewers). Judgments reached this way are also difficult to compare: one might read, and assess, the work of other scholars, but this does not provide a systematic basis for comparison unless a standard metric (or set of metrics) is employed. Finally, judging scholars through peer review becomes logistically intractable when the task shifts from a single scholar to a large group of scholars or a large body of work, e.g., a journal, a department, a university, a subfield, or a discipline. It is impossible to read, and assess, a library of work.