    List rankings and on-line list rankings of graphs

    A k-ranking of a graph G is a labeling of its vertices from {1, …, k} such that any nontrivial path whose endpoints have the same label contains a larger label. The least k for which G has a k-ranking is the ranking number of G, also known as tree-depth. The list ranking number of G is the least k such that if each vertex of G is assigned a set of k potential labels, then G can be ranked by labeling each vertex with a label from its assigned list. Rankings model a certain parallel processing problem in manufacturing, while the list ranking version adds scheduling constraints. We compute the list ranking number of paths, cycles, and trees with many more leaves than internal vertices. Some of these results follow from stronger theorems we prove about on-line versions of list ranking, where each vertex starts with an empty list having some fixed capacity, and potential labels are presented one by one, at which time they are added to the lists of certain vertices; the decision of which of these vertices are actually to be ranked with that label must be made immediately.
    Comment: 16 pages, 3 figures
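
    To make the definition concrete, here is a minimal brute-force sketch in Python (the function names and the example graph are mine, not the paper's). It relies on a standard equivalent characterization of rankings: a labeling is a ranking if and only if, for every label value c, each connected component of the subgraph induced by the vertices labeled at most c contains at most one vertex labeled exactly c.

```python
from itertools import product

def is_ranking(adj, labels):
    """Check whether `labels` is a valid ranking of the graph `adj`.

    adj: dict mapping each vertex to the set of its neighbours.
    labels: dict mapping each vertex to a positive integer label.
    """
    for c in set(labels.values()):
        allowed = {v for v in adj if labels[v] <= c}
        seen = set()
        for start in allowed:
            if start in seen:
                continue
            # Traverse the induced subgraph to collect one component.
            component, stack = {start}, [start]
            while stack:
                v = stack.pop()
                for w in adj[v]:
                    if w in allowed and w not in component:
                        component.add(w)
                        stack.append(w)
            seen |= component
            # Two equal top labels in one component would give a path
            # with same-labeled endpoints and no larger label on it.
            if sum(1 for v in component if labels[v] == c) > 1:
                return False
    return True

def ranking_number(adj):
    """Brute-force the ranking number (tree-depth) of a small graph."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for combo in product(range(1, k + 1), repeat=len(vertices)):
            if is_ranking(adj, dict(zip(vertices, combo))):
                return k

# The path on four vertices has ranking number 3, e.g. labels 1,3,1,2.
path4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
assert is_ranking(path4, {0: 1, 1: 3, 2: 1, 3: 2})
assert ranking_number(path4) == 3
```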

    Reaching for the Brass Ring: The U.S. News & World Report Rankings and Competition

    [Excerpt] The behavior of academic institutions, including the extent to which they collaborate on academic and nonacademic matters, is shaped by many factors. This paper focuses on one of these factors, the U.S. News & World Report (USNWR) annual ranking of the nation’s colleges and universities as undergraduate institutions, exploring how this ranking exacerbates the competitiveness among American higher education institutions. After presenting some evidence on the importance of the USNWR rankings to both public and private institutions at all levels along the selectivity spectrum, I describe how the rankings actually are calculated, then discuss how academic institutions alter their behavior to try to influence the rankings. While some of the actions an institution may take to improve its rankings may also make sense educationally, others may not and, more importantly, may not be in the best interest of the American higher educational system as a whole. In the final section of the paper, I ask whether the methodology that USNWR uses to calculate its rankings prevents institutions from collaborating in ways that make sense both educationally and financially. My answer is, by and large, no, although I indicate that USNWR could encourage even more such collaborations by fine-tuning its rankings system. In short, although USNWR rankings cause institutions to worry more about the peers with which they compete than would otherwise be the case, the rankings should not prevent institutions from working productively towards common goals. Put another way, USNWR is not the “evil empire” and academic institutions should not blame USNWR for their failure to collaborate more.

    Method or Madness? Inside the USNWR College Rankings

    [Excerpt] U.S. News & World Report (USNWR) shook up the college guide industry when it began publishing its annual rankings of colleges in 1983. The summary of its annual rankings of colleges as undergraduate institutions that appears in a fall issue each year is by far USNWR’s best-selling issue and, together with its more comprehensive annual America’s Best Colleges publication, it has become the “gold standard” of the college ranking business. USNWR’s rapid rise to the top derives from its rankings’ appearance of scientific objectivity (institutions are rated along various dimensions with explicit weights being assigned to each dimension), along with the fact that USNWR then ranks the top 50 institutions in each category (for example, national universities and liberal arts colleges). Each year, immediately before and after the USNWR college rankings issue hits the newsstand, stories about the USNWR rankings appear in virtually every major newspaper in the United States. I begin my remarks by discussing why Americans have become so preoccupied with the USNWR rankings and why higher education institutions have become equally obsessed with them. Next, I discuss how the rankings methodology allows colleges and universities to take actions to manipulate their rankings and the effects that such actions have on higher education. I then ask why, if the rankings are flawed, colleges and universities continue to participate in them, and I discuss some of the major problems with the ratings. Finally, I offer some brief concluding thoughts about how USNWR could alter its rating formula in ways that I believe would be socially desirable.

    Rankings games

    Research rankings based on publications and citations today dominate the governance of academia. Yet they have unintended side effects on individual scholars and academic institutions and can be counterproductive. They induce a substitution of the “taste for science” by a “taste for publication”. We suggest as alternatives the careful selection and socialization of scholars, supplemented by periodic self-evaluations and awards. Nor should rankings be the basis for the distribution of funds within universities. Rather, qualified individual scholars should be supported by basic funds so that they are able to engage in new and unconventional research topics and methods.
    Keywords: academic governance, rankings, motivation, selection, socialization

    WHO's Fooling Who? The World Health Organization's Problematic Ranking of Health Care Systems

    The World Health Report 2000, prepared by the World Health Organization, presented performance rankings of 191 nations' health care systems. These rankings have been widely cited in public debates about health care, particularly by those interested in reforming the U.S. health care system to resemble more closely those of other countries. Michael Moore, for instance, famously stated in his film SiCKO that the United States placed only 37th in the WHO report. CNN.com, in verifying Moore's claim, noted that France and Canada both placed in the top 10. Those who cite the WHO rankings typically present them as an objective measure of the relative performance of national health care systems. They are not. The WHO rankings depend crucially on a number of underlying assumptions -- some of them logically incoherent, some characterized by substantial uncertainty, and some rooted in ideological beliefs and values that not everyone shares. The analysts behind the WHO rankings express the hope that their framework "will lay the basis for a shift from ideological discourse on health policy to a more empirical one." Yet the WHO rankings themselves have a strong ideological component. They include factors that are arguably unrelated to actual health performance, some of which could even improve in response to worse health performance. Even setting those concerns aside, the rankings are still highly sensitive to both measurement error and assumptions about the relative importance of the components. And finally, the WHO rankings reflect implicit value judgments and lifestyle preferences that differ among individuals and across countries.
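
    The sensitivity to component weights is easy to demonstrate in miniature. In the Python sketch below, the countries, components, and scores are invented for illustration (they are not the WHO's actual data or weights); the same underlying scores produce opposite orderings under two weighting schemes.

```python
def composite_rank(scores, weights):
    """Rank units by the weighted sum of their component scores."""
    index = {unit: sum(w * s for w, s in zip(weights, comps))
             for unit, comps in scores.items()}
    return sorted(index, key=index.get, reverse=True)

# Two hypothetical components per country, e.g. (health attainment,
# fairness of financing); all numbers are made up.
scores = {"Country A": (0.90, 0.40),
          "Country B": (0.60, 0.85)}

print(composite_rank(scores, (0.75, 0.25)))  # ['Country A', 'Country B']
print(composite_rank(scores, (0.25, 0.75)))  # ['Country B', 'Country A']
```

    Nothing about the data changes between the two calls; only the judgment about the components' relative importance does, and the ranking reverses.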

    Salience in Quality Disclosure: Evidence from the U.S. News College Rankings

    How do rankings affect demand? This paper investigates the impact of college rankings, and the visibility of those rankings, on students' application decisions. Using natural experiments from U.S. News and World Report College Rankings, we present two main findings. First, we identify a causal impact of rankings on application decisions. When explicit rankings of colleges are published in U.S. News, a one-rank improvement leads to a 1-percentage-point increase in the number of applications to that college. Second, we show that the response to the information represented in rankings depends on the way in which that information is presented. Rankings have no effect on application decisions when colleges are listed alphabetically, even when readers are provided data on college quality and the methodology used to calculate rankings. This finding provides evidence that the salience of information is a central determinant of a firm's demand function, even for purchases as large as college attendance.

    Ranking economics departments in terms of residual productivity: New Zealand economics departments, 2000‐2006

    This paper considers a new approach for ranking the research productivity of academic departments. Our approach provides rankings in terms of residual research output after controlling for the key characteristics of each department’s academic staff. More specifically, we estimate residual research output rankings for all of New Zealand’s economics departments based on their publication performance over the 2000 to 2006 period. We do so after taking into account the following characteristics of each department’s academic staff: gender, experience, seniority, academic credentials, and academic rank. The paper concludes with a comparison of rankings generated by the residual research approach with those generated by traditional approaches to research rankings.
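
    As a rough sketch of the residual idea (the departments, covariates, and numbers below are invented stand-ins, not the paper's data or estimator, which controls for gender, experience, seniority, credentials, and rank), one can regress observed research output on staff characteristics and rank departments by the OLS residuals:

```python
import numpy as np

departments = ["Dept A", "Dept B", "Dept C", "Dept D"]
# Stand-in staff characteristics: mean years of experience and the
# share of full professors in each department.
X = np.array([[12.0, 0.40],
              [ 8.0, 0.25],
              [15.0, 0.55],
              [10.0, 0.30]])
# Observed research output, e.g. quality-weighted publication counts.
y = np.array([30.0, 22.0, 35.0, 31.0])

# OLS with an intercept: y = a + X @ b + residual.
design = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
residuals = y - design @ coef

# A positive residual means a department produces more than its
# staff profile predicts; rank from largest to smallest residual.
for rank, i in enumerate(np.argsort(-residuals), start=1):
    print(f"{rank}. {departments[i]} (residual {residuals[i]:+.2f})")
```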

    "On the Robustness of Alternative Rankings Methodologies For Australian and New Zealand Economics Departments"

    Just as friendly arguments based on an ignorance of facts eventually led to the creation of the definitive Guinness Book of World Records, arguments about university rankings have seemingly remained a problem without a solution. To state the obvious, alternative rankings methodologies can and do lead to different rankings. This paper evaluates the robustness of rankings of Australian and New Zealand economics teaching departments for 1988-2002 and 1996-2002 using alternative rankings methodologies, and compares the results with the rankings obtained by Macri and Sinha (2006). In the overall mean rankings for both 1988-2002 and 1996-2002, the University of Melbourne is ranked first, followed by UWA and ANU.