Method or Madness? Inside the USNWR College Rankings
[Excerpt] U.S. News & World Report (USNWR) shook up the college guide industry when it began publishing its annual rankings of colleges in 1983. The fall issue that summarizes its annual rankings of colleges as undergraduate institutions is by far USNWR’s best-selling issue of the year and, together with its more comprehensive annual America’s Best Colleges publication, it has become the “gold standard” of the college ranking business. USNWR’s rapid rise to the top derives from its rankings’ appearance of scientific objectivity (institutions are rated along various dimensions, with explicit weights assigned to each dimension), along with the fact that USNWR then ranks the top 50 institutions in each category (for example, national universities and liberal arts colleges). Each year, immediately before and after the USNWR college rankings issue hits the newsstand, stories about the rankings appear in virtually every major newspaper in the United States. I begin my remarks by discussing why Americans have become so preoccupied with the USNWR rankings and why higher education institutions have become equally obsessed with them. Next, I discuss how the rankings methodology allows colleges and universities to take actions to manipulate their rankings and the effects that such actions have on higher education. I then ask why, if the rankings are flawed, colleges and universities continue to participate in them, and I discuss some of the major problems with the ratings. Finally, I offer some brief concluding thoughts about how USNWR could alter its rating formula in ways that I believe would be socially desirable.
Salience in Quality Disclosure: Evidence from the U.S. News College Rankings
How do rankings affect demand? This paper investigates the impact of college rankings, and the visibility of those rankings, on students' application decisions. Using natural experiments from U.S. News and World Report College Rankings, we present two main findings. First, we identify a causal impact of rankings on application decisions. When explicit rankings of colleges are published in U.S. News, a one-rank improvement leads to a 1-percentage-point increase in the number of applications to that college. Second, we show that the response to the information represented in rankings depends on the way in which that information is presented. Rankings have no effect on application decisions when colleges are listed alphabetically, even when readers are provided data on college quality and the methodology used to calculate rankings. This finding provides evidence that the salience of information is a central determinant of a firm's demand function, even for purchases as large as college attendance.
Ranking economics departments in terms of residual productivity: New Zealand economics departments, 2000‐2006
This paper considers a new approach for ranking the research productivity of academic departments. Our approach provides rankings in terms of residual research output after controlling for the key characteristics of each department’s academic staff. More specifically, we estimate residual research output rankings for all of New Zealand’s economics departments based on their publication performance over the 2000 to 2006 period. We do so after taking into account the following characteristics of each department’s academic staff: gender, experience, seniority, academic credentials, and academic rank. The paper concludes with a comparison of rankings generated by the residual research approach with those generated by traditional approaches to research rankings.
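A minimal sketch of the residual-ranking idea described in the abstract, assuming a simple department-level OLS setup: regress research output on observable staff characteristics, then rank departments by the regression residuals. The data, column names, and regressors here are hypothetical stand-ins, not the authors' actual specification.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical department-level data: publication output plus
    # staff characteristics of the kind the paper controls for.
    df = pd.DataFrame({
        "dept":         ["A", "B", "C", "D", "E"],
        "output":       [42.0, 35.5, 51.0, 28.0, 39.0],  # weighted publications
        "share_female": [0.30, 0.45, 0.25, 0.50, 0.35],
        "mean_exper":   [12.0, 9.5, 15.0, 8.0, 11.0],    # mean years since PhD
        "share_prof":   [0.40, 0.25, 0.55, 0.20, 0.35],  # share full professors
    })

    # Fit an "expected output" model on the staff characteristics.
    model = smf.ols("output ~ share_female + mean_exper + share_prof",
                    data=df).fit()

    # Residual = actual output minus output predicted from staff mix;
    # rank departments from the largest positive residual downward.
    df["residual"] = model.resid
    ranking = df.sort_values("residual", ascending=False)[["dept", "residual"]]
    print(ranking)

On this reading, a department that publishes more than its staff profile predicts receives a positive residual and a high rank, which is what distinguishes the residual approach from a raw-output ranking.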
WHO's Fooling Who? The World Health Organization's Problematic Ranking of Health Care Systems
The World Health Report 2000, prepared by the World Health Organization, presented performance rankings of 191 nations' health care systems. These rankings have been widely cited in public debates about health care, particularly by those interested in reforming the U.S. health care system to resemble more closely those of other countries. Michael Moore, for instance, famously stated in his film SiCKO that the United States placed only 37th in the WHO report. CNN.com, in verifying Moore's claim, noted that France and Canada both placed in the top 10. Those who cite the WHO rankings typically present them as an objective measure of the relative performance of national health care systems. They are not. The WHO rankings depend crucially on a number of underlying assumptions -- some of them logically incoherent, some characterized by substantial uncertainty, and some rooted in ideological beliefs and values that not everyone shares. The analysts behind the WHO rankings express the hope that their framework "will lay the basis for a shift from ideological discourse on health policy to a more empirical one." Yet the WHO rankings themselves have a strong ideological component. They include factors that are arguably unrelated to actual health performance, some of which could even improve in response to worse health performance. Even setting those concerns aside, the rankings are still highly sensitive to both measurement error and assumptions about the relative importance of the components. And finally, the WHO rankings reflect implicit value judgments and lifestyle preferences that differ among individuals and across countries.
Reaching for the Brass Ring: The U.S. News & World Report Rankings and Competition
[Excerpt] The behavior of academic institutions, including the extent to which they collaborate on academic and nonacademic matters, is shaped by many factors. This paper focuses on one of these factors, the U.S. News & World Report (USNWR) annual ranking of the nation’s colleges and universities as undergraduate institutions, exploring how this ranking exacerbates the competitiveness among American higher education institutions. After presenting some evidence on the importance of the USNWR rankings to both public and private institutions at all levels along the selectivity spectrum, I describe how the rankings actually are calculated, then discuss how academic institutions alter their behavior to try to influence the rankings. While some of the actions an institution may take to improve its rankings may also make sense educationally, others may not and, more importantly, may not be in the best interest of the American higher educational system as a whole.
In the final section of the paper, I ask whether the methodology that USNWR uses to calculate its rankings prevents institutions from collaborating in ways that make sense both educationally and financially. My answer is, by and large, no, although I indicate that USNWR could encourage even more such collaborations by fine-tuning its rankings system. In short, although the USNWR rankings cause institutions to worry more about the peers with which they compete than would otherwise be the case, the rankings should not prevent institutions from working productively towards common goals. Put another way, USNWR is not the “evil empire,” and academic institutions should not blame USNWR for their failure to collaborate more.
On Measuring the Complexity of Urban Living
This paper explores the concept of city ranking as a way to measure the dynamics and complexities of urban life. These rankings have various dimensions and uses. Both the context in which these rankings are produced and their nature have changed considerably over time. These rankings are also afflicted with many methodological and measurement problems. A review of major city rankings and related literature is carried out to suggest a framework for measuring Pakistani cities. Keywords: Quality of Life, Cities, Urbanization
2015 State Rankings Data
This companion to the report Poor by Comparison contains state rankings on more than 25 indicators related to poverty.
