
    Teleconference versus face-to-face scientific peer review of grant application: effects on review outcomes.

    Teleconferencing as a setting for scientific peer review is an attractive option for funding agencies, given the substantial environmental and cost savings. Despite this, there is a paucity of published data validating teleconference-based peer review against the face-to-face process. Our aim was to conduct a retrospective analysis of scientific peer review data to investigate whether review setting has an effect on review process and outcome measures. We analyzed reviewer scoring data from a research program that had recently changed its review setting from face-to-face to a teleconference format, with minimal changes to the overall review procedures. The analysis covered approximately 1,600 applications over a 4-year period: two years of face-to-face panel meetings compared with two years of teleconference meetings. We measured average overall scientific merit scores, score distributions, standard deviations, and reviewer inter-rater reliability statistics, as well as reviewer demographics and the length of time spent discussing applications. The data indicate few differences between the face-to-face and teleconference settings with regard to average overall scientific merit score, score distribution, standard deviation, reviewer demographics, or inter-rater reliability; however, a difference was found in discussion time. These findings suggest that most review outcome measures are unaffected by review setting, which supports the trend toward teleconference reviews in place of face-to-face meetings. However, further studies are needed to assess any correlations among discussion time, application funding, and the productivity of funded research projects.
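
    As an illustration of the comparisons described above, the sketch below contrasts two sets of overall merit scores using summary statistics, Welch's t-test, and a Kolmogorov-Smirnov test. The score scale and distributions are hypothetical; only the group sizes (960 face-to-face and 644 teleconference applications) follow from the application counts reported in the figure legends below.

        # Hypothetical comparison of merit scores across review settings.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Made-up overall scientific merit scores (lower = better on this
        # assumed scale); group sizes match the 2009-2012 application counts.
        face_to_face = rng.normal(2.6, 0.7, size=960)    # 2009 + 2010
        teleconference = rng.normal(2.6, 0.7, size=644)  # 2011 + 2012

        for label, s in [("face-to-face", face_to_face),
                         ("teleconference", teleconference)]:
            print(f"{label}: mean = {s.mean():.2f}, sd = {s.std(ddof=1):.2f}")

        # Does the review setting shift the average score?
        t, p_t = stats.ttest_ind(face_to_face, teleconference, equal_var=False)
        print(f"Welch t = {t:.2f}, p = {p_t:.3f}")

        # Do the score distributions differ in shape?
        ks, p_ks = stats.ks_2samp(face_to_face, teleconference)
        print(f"KS = {ks:.2f}, p = {p_ks:.3f}")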

    Comparison of Reviewer Degrees.

    Relative proportions of reviewer degrees for each year (MD, MD/PhD, and PhD).

    Comparison of Average Overall Score (AOS).

    Average score comparison between 2009, 2010 (face-to-face) and 2011, 2012 (teleconference) reviews. The total numbers of applications reviewed were 669, 291, 347, and 297 for 2009, 2010, 2011, and 2012, respectively.

    Comparison of Intraclass Correlation.

    Intraclass correlation for 2009, 2010 (face-to-face) and 2011, 2012 (teleconference) reviews (p<0.01 for all years).
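
    The listing does not say which intraclass correlation estimator was used; the sketch below computes one common form, the one-way random-effects ICC(1), from a hypothetical applications-by-reviewers score matrix.

        # One-way random-effects ICC(1) on hypothetical review scores.
        import numpy as np

        def icc_oneway(scores):
            """ICC(1) for an (n applications x k reviewers) score matrix."""
            n, k = scores.shape
            grand_mean = scores.mean()
            row_means = scores.mean(axis=1)
            # Between-application and within-application mean squares.
            ms_between = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
            ms_within = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))
            return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

        # 100 hypothetical applications, each scored by 3 reviewers.
        rng = np.random.default_rng(1)
        true_merit = rng.normal(2.5, 0.6, size=(100, 1))
        ratings = true_merit + rng.normal(0.0, 0.5, size=(100, 3))
        print(f"ICC(1) = {icc_oneway(ratings):.2f}")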

    Comparison of Reviewer Seniority.

    Relative proportion of reviewers in terms of seniority for each year. The senior academic level grouping included full professors, chairs, deans, and/or directors; the intermediate level grouping included associate professors; and the junior level grouping included assistant professors or equivalents.

    Comparison of Overall Score (OS) Distribution.

    Overall score (OS) distribution for all applications from the 2009, 2010 (face-to-face) and 2011, 2012 (teleconference) peer reviews.

    Comparison of Average Standard Deviation of Individual Reviewer Merit Scores.

    Average standard deviation of individual reviewer merit scores per application, comparing 2009, 2010 (face-to-face) and 2011, 2012 (teleconference) reviews.

    The Validation of Peer Review through Research Impact Measures and the Implications for Funding Strategies

    There is a paucity of data in the literature concerning the validation of the grant application peer review process, which is used to help direct billions of dollars in research funds. Ultimately, this validation will hinge upon empirical data relating the output of funded projects to the predictions implicit in the overall scientific merit scores from the peer review of submitted applications. In an effort to address this need, the American Institute of Biological Sciences (AIBS) conducted a retrospective analysis of peer review data of 2,063 applications submitted to a particular research program and the bibliometric output of the resultant 227 funded projects over an 8-year period. Peer review scores associated with applications were found to be moderately correlated with the total time-adjusted citation output of funded projects, although a high degree of variability existed in the data. Analysis over time revealed that as average annual scores of all applications (both funded and unfunded) submitted to this program improved with time, the average annual citation output per application increased. Citation impact did not correlate with the amount of funds awarded per application or with the total annual programmatic budget. However, the number of funded applications per year was found to correlate well with total annual citation impact, suggesting that improving funding success rates by reducing the size of awards may be an efficient strategy to optimize the scientific impact of research program portfolios. This strategy must be weighed against the need for a balanced research portfolio and the inherent high costs of some areas of research. The relationship observed between peer review scores and bibliometric output lays the groundwork for establishing a model system for future prospective testing of the validity of peer review formats and procedures.
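
    As a rough illustration of the headline analysis, the sketch below correlates merit scores with a citation measure for funded projects. All numbers are made up; trc here merely stands in for the paper's time-adjusted citation metric, whose exact definition is not reproduced in this listing, and only the count of 227 funded projects comes from the abstract.

        # Hypothetical score-vs-citation correlation (illustrative only).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        n_funded = 227  # number of funded projects, per the abstract

        # Merit scores on a hypothetical scale where lower = better.
        scores = rng.normal(2.0, 0.5, size=n_funded)
        # Simulated time-adjusted citation output with substantial noise,
        # so that better (lower) scores tend to yield more citations.
        trc = np.maximum(0.0, 40.0 - 10.0 * scores
                         + rng.normal(0, 12, size=n_funded))

        r, p = stats.pearsonr(scores, trc)
        print(f"Pearson r = {r:.2f}, p = {p:.1e}")  # expect a moderate negative r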

    Total Annual TRC Level Versus Number of Submitted (N_s) Applications per Year.

    Total annual TRC values were plotted against the corresponding total number of applications submitted for each year and fit to a linear function.

    Total Annual TRC Versus AAS.

    Total annual TRC values were plotted against the average annual score (AAS) of submitted applications and then fit to a linear function.
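
    Both of the plots above are straight-line fits of total annual TRC against a yearly program statistic. A minimal sketch of such a fit, under hypothetical yearly data (the variable names and values are illustrative, not the paper's):

        # Hypothetical linear fit of total annual TRC vs. applications per year.
        import numpy as np
        from scipy import stats

        # Eight made-up program years.
        n_submitted = np.array([310, 280, 265, 250, 240, 235, 245, 238])
        trc_total = np.array([520, 470, 455, 430, 415, 410, 425, 418])

        fit = stats.linregress(n_submitted, trc_total)
        print(f"TRC = {fit.slope:.2f} * N_s + {fit.intercept:.1f}, "
              f"r^2 = {fit.rvalue ** 2:.2f}")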