    Finishing the euchromatic sequence of the human genome

    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome, including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.

    Making Research Data Accessible

    This chapter argues that these benefits will accrue more quickly, and will be more significant and more enduring, if researchers make their data “meaningfully accessible.” Data are meaningfully accessible when they can be interpreted and analyzed by scholars far beyond those who generated them. Making data meaningfully accessible requires that scholars take the appropriate steps to prepare their data for sharing, and avail themselves of the increasingly sophisticated infrastructure for publishing and preserving research data. The better other researchers can understand shared data, and the more researchers who can access them, the more those data will be re-used for secondary analysis, producing knowledge. Likewise, the richer the understanding an instructor and her students can gain of the shared data being used to teach and learn a particular research method, the more useful those data are for that pedagogical purpose. And the more a scholar who is evaluating the work of another can learn about the evidence that underpins its claims and conclusions, the better their ability to identify problems and biases in data generation and analysis, and the better informed and thus stronger an endorsement of the work they can offer.

    Impact Metrics

    Virtually every evaluative task in the academy involves some sort of metric (Elkana et al. 1978; Espeland & Sauder 2016; Gingras 2016; Hix 2004; Jensenius et al. 2018; Muller 2018; Osterloh & Frey 2015; Todeschini & Baccini 2016; Van Noorden 2010; Wilsdon et al. 2015). One can decry this development, and inveigh against its abuses and over-use (as many of the foregoing studies do). Yet, without metrics, we would be at pains to render judgments about scholars, published papers, applications (for grants, fellowships, and conferences), journals, academic presses, departments, universities, or subfields. Of course, we also undertake to judge these matters ourselves through a deliberative process that involves reading the work under evaluation. This is the traditional approach of peer review, and no one would advocate a system of evaluation that is entirely metric-driven. Even so, reading is time-consuming and inherently subjective; a judgment reached by reading is, after all, the opinion of one reader (or several readers, if there is a panel of reviewers), and such judgments cannot be systematically compared. To be sure, one might also read, and assess, the work of other scholars, but this does not provide a systematic basis for comparison unless a standard metric of comparison is employed. Finally, judging scholars through peer review becomes logistically intractable when the task shifts from a single scholar to a large group of scholars or a large body of work, e.g., a journal, a department, a university, a subfield, or a discipline. It is impossible to read, and assess, a library of work.