
    The Incongruity Factor: Random Answers Without Questions

    A transcript of an interview Mr. McCormick had with himself. He reports that the interview, on the whole, went smoothly and conformed, in general, to accepted theories of communications.

    Marginal Costs and Editorial Spinoff: A Case History

    Robert Wacker, the big-time freelance writer who spoke at the Northeast AAACE meeting in New York, made a point that bears repeating.

    Graduate Training and Research Productivity in the 1990s: A Look at Who Publishes

    The relationship between reputational rankings of political science departments and their scholarly productivity remains a source of discussion and controversy. After the National Research Council (1995) published its ranking of 98 political science departments, Katz and Eagles (1996), Jackman and Siverson (1996), and Lowry and Silver (1996) analyzed the factors that seemingly influenced those rankings. Miller, Tien, and Peebler (1996) offered an alternative approach to ranking departments, based both upon the number of faculty (and their graduates) who published in the American Political Science Review and upon the number of citations that faculty members received. More recently, two studies have examined departmental rankings in other ways. Ballard and Mitchell (1998) assessed political science departments by evaluating the level of productivity in nine important disciplinary and subfield journals, and Garand and Graddy (1999) evaluated the impact of journal publications (and other variables) on the rankings of political science departments. In general, Miller, Tien, and Peebler found a high level of correspondence between reputational rankings and productivity, Ballard and Mitchell did not, and Garand and Graddy found that publications in “high impact” journals were important for departmental rankings.
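
    As a rough illustration of how correspondence between a reputational ranking and a productivity-based ranking is usually quantified, the sketch below computes Spearman's rank correlation on invented department scores; the numbers and the choice of spearmanr are illustrative assumptions, not taken from any of the cited studies.

    # Minimal sketch with made-up data: compare a survey-based reputation score
    # with a productivity score for the same six hypothetical departments.
    from scipy.stats import spearmanr

    reputation_score   = [4.8, 4.5, 4.1, 3.9, 3.2, 2.7]   # e.g., NRC-style survey ratings
    productivity_score = [62, 55, 40, 47, 21, 18]          # e.g., counts of journal articles

    rho, p_value = spearmanr(reputation_score, productivity_score)
    print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")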

    Graduate Training, Current Affiliation and Publishing Books in Political Science

    Scores of studies have measured the quality of political science departments. Generally speaking, these studies have taken two forms. Many have relied on scholars' survey responses to construct rankings of the major departments. For example, almost 50 years ago Keniston (1957) interviewed 25 department chairpersons and asked them to assess the quality of various programs, and, much more recently, the National Research Council (NRC 1995) asked 100 political scientists to rate the “scholarly quality of program faculty” in the nation's political science doctoral departments. In response to these opinion-based rankings, a number of researchers have developed what they claim to be more objective measures of department quality based on the research productivity of the faculty (Ballard and Mitchell 1998; Miller, Tien, and Peebler 1996; Robey 1979). While department rankings produced by these two methods are often similar, there are always noteworthy differences, and these have generated an additional literature that explores the relationship between the rating systems (Garand and Graddy 1999; Jackman and Siverson 1996; Katz and Eagles 1996; Miller, Tien, and Peebler 1996).

    Pledge to Progress? Analyzing the Impact of the BLM Movement on Racial Mortgage Approval Rate Gaps

    Following the surge of Black Lives Matter protests in 2020, prominent financial institutions announced commitments to reducing racial disparities in homeownership. Using the HMDA dataset from 2019-2022, this paper investigates the difference in home-loan approval rates between white and Black borrowers in Ohio after the Black Lives Matter movement, using bank fixed effects. We found a statistically significant reduction in the approval-rate gap between Black and white borrowers post-2020.
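
    As a rough sketch of the kind of specification described here (an assumption about the setup, not the paper's actual code), the snippet below fits a linear probability model of loan approval with bank fixed effects, where the Black x post-2020 interaction captures the change in the Black-white approval-rate gap after 2020. The column names and toy data are invented stand-ins for HMDA fields.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy, invented data: approval flag, Black-borrower indicator,
    # post-2020 indicator, and the lending bank's identifier.
    df = pd.DataFrame({
        "approved": [0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1],
        "black":    [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
        "post2020": [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
        "bank_id":  ["A", "A", "B", "B", "C", "C", "A", "A", "B", "B", "C", "C"],
    })

    # C(bank_id) adds bank fixed effects; the black:post2020 coefficient is the
    # estimated change in the approval-rate gap after 2020. In practice one
    # would also cluster standard errors by lender.
    model = smf.ols("approved ~ black * post2020 + C(bank_id)", data=df)
    result = model.fit()
    print(result.params[["black", "black:post2020"]])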