
    Statistical inference on the h-index with an application to top-scientist performance

    Despite the huge amount of literature on the h-index, few papers have been devoted to its statistical analysis when a probabilistic distribution is assumed for citation counts. The present contribution surveys the available inferential techniques, providing the details for proper point and set estimation of the theoretical h-index. Moreover, some issues of simultaneous inference, aimed at producing suitable comparisons between scholars, are addressed. Finally, an analysis of the citation data for the Nobel Laureates (of the last five years) and for the Fields medallists (from 2002 onward) is presented. Comment: 14 pages, 3 tables.
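
    To make concrete the statistic whose inference the paper studies, here is a minimal sketch (in Python, on a hypothetical citation record) of the empirical h-index itself; it illustrates only the point statistic, not the paper's estimation or set-estimation procedures.

    def h_index(citations):
        # Empirical h-index: the largest h such that at least h papers
        # have at least h citations each.
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(counts, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical citation counts for a single scholar.
    print(h_index([42, 17, 9, 6, 6, 3, 1, 0]))  # prints 5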

    Statistical analysis of the Hirsch Index

    The Hirsch index (commonly referred to as the h-index) is a bibliometric indicator widely recognized as effective for measuring the scientific production of a scholar, since it summarizes both the size and the impact of the research output. In a formal setting, the h-index is an empirical functional of the distribution of the citation counts received by the scholar. Under this approach, the asymptotic theory for the empirical h-index has recently been exploited when the citation counts follow a continuous distribution; in particular, variance estimation has been considered for the Pareto-type and Weibull-type distribution families. However, in bibliometric applications, citation counts display a distribution supported on the integers. Thus, we provide general properties of the empirical h-index in the small- and large-sample settings. In addition, we introduce consistent nonparametric variance estimation, which allows for the implementation of large-sample set estimation for the theoretical h-index.
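
    As a rough illustration of set estimation for the theoretical h-index, the sketch below (Python) pairs the empirical h-index with a percentile-bootstrap interval. This is an assumption-laden stand-in, not the consistent nonparametric variance estimator introduced in the paper, and the citation counts are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def h_index(citations):
        # Largest rank h such that the h-th most cited paper has >= h citations.
        counts = np.sort(np.asarray(citations))[::-1]
        ranks = np.arange(1, counts.size + 1)
        hits = counts >= ranks
        return int(ranks[hits].max()) if hits.any() else 0

    def bootstrap_interval(citations, level=0.95, reps=2000):
        # Percentile bootstrap interval for the theoretical h-index (illustration only).
        citations = np.asarray(citations)
        stats = [h_index(rng.choice(citations, size=citations.size, replace=True))
                 for _ in range(reps)]
        return tuple(np.quantile(stats, [(1 - level) / 2, (1 + level) / 2]))

    # Hypothetical citation counts for one scholar.
    sample = [55, 40, 33, 21, 18, 12, 9, 7, 5, 4, 3, 2, 1, 1, 0, 0]
    print(h_index(sample), bootstrap_interval(sample))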

    University rankings on scientometric and statistical foundations (Egyetemi rangsorok tudománymetriai és statisztikai megalapozással)

    Scientific performance is one of the defining criteria of university rankings, and in some cases the sole one. Scientometrics emerged to measure and analyse it. The authors first briefly outline the beginnings of this relatively young field, then present several methodological examples from the work of the MTA-PE Budapest Ranking Research Group, primarily related to the scientific performance of universities. The study presents the research group's effort to develop and apply modern statistical methods with which the fundamental shortcomings of university rankings can be eliminated, or at least mitigated. The role of scientometric indicators is examined in four areas: the formation of global university leagues, applications by domestic students, European Erasmus mobility, and placement in the top 100 and top 200 of a global university ranking; in the latter case, the performance of Hungarian universities is compared with that of German and Belgian institutions. The results show that scientometric indicators matter most for the formation of university leagues, affect applications to higher education only indirectly and with diminishing importance over time, and show no relationship with Erasmus student mobility.

    A Review of Theory and Practice in Scientometrics

    Scientometrics is the study of the quantitative aspects of the process of science as a communication system. It is centrally, but not only, concerned with the analysis of citations in the academic literature. In recent years it has come to play a major role in the measurement and evaluation of research performance. In this review we consider: the historical development of scientometrics, sources of citation data, citation metrics and the “laws” of scientometrics, normalisation, journal impact factors and other journal metrics, visualising and mapping science, evaluation and policy, and future developments.
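
    Since the review covers journal impact factors among other citation metrics, a small worked example may help: the sketch below (Python, hypothetical counts) computes the standard two-year journal impact factor, i.e. citations received in year Y to items published in the two preceding years, divided by the number of citable items published in those years.

    def impact_factor(citations_in_year_Y, citable_items):
        # Two-year impact factor: citations in Y to items from Y-1 and Y-2,
        # divided by the citable items published in Y-1 and Y-2.
        cites = citations_in_year_Y["Y-1"] + citations_in_year_Y["Y-2"]
        items = citable_items["Y-1"] + citable_items["Y-2"]
        return cites / items

    # Hypothetical counts for a single journal.
    print(impact_factor({"Y-1": 180, "Y-2": 150}, {"Y-1": 60, "Y-2": 72}))  # 2.5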

    Tag based Bayesian latent class models for movies : economic theory reaches out to big data science

    For the past 50 years, cultural economics has developed as an independent research specialism. At its core are the creative industries and the peculiar economics associated with them, central to which is a tension arising from the notion that creative goods need to be experienced before an assessment can be made about the utility they deliver to the consumer. In this they differ from the standard private good that forms the basis of demand theory in economic textbooks, in which utility is known ex ante. Furthermore, creative goods are typically complex in composition and subject to heterogeneous and shifting consumer preferences. In response to this, models of linear optimization, rational addiction and Bayesian learning have been applied to better understand consumer decision-making, belief formation and revision. While valuable, these approaches do not lend themselves to forming verifiable hypotheses, for the critical reason that they bypass an essential aspect of creative products: namely, novelty. In contrast, computer science, and more specifically recommender theory, embraces creative products as a study object. Being items of online transactions, users of creative products share opinions on a massive scale and in doing so generate a flow of data-driven research. Not limited by the multiple assumptions made in economic theory, data analysts deal with this type of commodity in a less constrained way, incorporating the variety of item characteristics as well as their co-use by agents. They apply statistical techniques suited to big data, such as clustering, latent class analysis or singular value decomposition. This thesis draws from both disciplines, comparing models, methods and data sets. Based upon movie consumption, the work contrasts bottom-up versus top-down approaches, individual versus collective data, and distance measures versus utility-based comparisons. Rooted in Bayesian latent class models, a synthesis is formed, supported by random utility theory and recommender algorithm methods. The Bayesian approach makes explicit the experience-good nature of creative goods by formulating the prior uncertainty of users towards both movie features and preferences. The latent class method thus infers the heterogeneous aspect of preferences, while its dynamic variant, the latent Markov model, gets around one of the main paradoxes in studying creative products: how to analyse taste dynamics when confronted with a good that is novel at each decision point. This study of preference-pattern formation for creative goods is drawn from individual-level data, mainly movie-user-rating and movie-user-tag triplets, collected from the MovieLens recommender system and made available as open data for research by the GroupLens research team.
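
    To give a flavour of the latent class machinery the thesis builds on, here is a minimal sketch (Python/NumPy) of a latent class model for a complete movie-rating matrix, fitted by EM with symmetric Dirichlet pseudo-counts as a crude stand-in for a Bayesian prior. It is not the thesis's model (which is fully Bayesian and includes a latent Markov extension, tags, and MovieLens data); the ratings here are synthetic.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic ratings matrix: rows = users, columns = movies, values 1..5.
    # In practice these would come from MovieLens (user, movie, rating) triplets.
    R = rng.integers(1, 6, size=(200, 8))
    n_users, n_movies = R.shape
    K, n_levels = 3, 5                      # latent classes, rating levels

    # Initialise mixing weights and per-class rating distributions theta[k, j, r].
    pi = np.full(K, 1.0 / K)
    theta = rng.dirichlet(np.ones(n_levels), size=(K, n_movies))

    for _ in range(50):                     # EM iterations
        # E-step: class responsibilities for each user, computed in log space.
        log_lik = np.zeros((n_users, K))
        for k in range(K):
            log_lik[:, k] = np.log(pi[k]) + np.sum(
                np.log(theta[k, np.arange(n_movies), R - 1]), axis=1)
        log_lik -= log_lik.max(axis=1, keepdims=True)
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step with a pseudo-count of 1 per rating level (Dirichlet smoothing).
        pi = resp.mean(axis=0)
        for k in range(K):
            for r in range(n_levels):
                theta[k, :, r] = 1.0 + resp[:, k] @ (R == r + 1)
            theta[k] /= theta[k].sum(axis=1, keepdims=True)

    print("estimated class weights:", np.round(pi, 3))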

    The economics of federalism

    An important development in public finance theory during recent years has been the emergence of the basic elements of a theory of fiscal federalism, based partly on the theory of public goods, partly on the theory of the political process, and partly on various aspects of location theory. The aim of the theory is to supply answers to basic and wide-ranging questions relating to the case for, and the allocation of functions within, a federal system, efficiency aspects of migration between jurisdictions, the case for different kinds of intergovernmental grant arrangements, and the forms of debt and taxation arrangements appropriate to a federal structure. This volume gathers together most of the significant contributions to the theory, many of which are somewhat inaccessible. Although primarily concerned with federal constitutions, the book is relevant to the analysis of public policy under unitary constitutions which devolve decision-making autonomy to local or regional governments. It also reviews the current state of the art and thereby points out certain gaps that remain to be filled in the future.

    Public Facility Location: Issues and Approaches

    The papers collected in this issue were presented at the Task Force Meeting on Public Facility Location, held at IIASA in June 1980. The meeting was an important occasion for scientists with different backgrounds and nationalities to compare and discuss differences and similarities among their approaches to location problems. Unification and reconciliation of existing theories and methods was one of the leading themes of the meeting, and the papers collected here are part of the raw material to be used as a starting point towards this aim. The papers themselves provide a wide spectrum of approaches to both technical and substantive problems: for example, the way space is treated (continuously in Beckmann, in Mayhew, and in Thisse et al., discretely in all the others), the way customers are assigned to facilities (by behavioral models in Ermoliev and Leonardi, in Sheppard, and in Wilson, by normative rules in many others), and the way the objective function is defined (ranging from total cost, total profit, total expected utility for customers, accessibility, minimax distance, and maximum covering, to a multi-objective treatment of all of them, as in Revelle et al.). There is indeed room for discussion, in order to find both similarities and weaknesses in the different approaches. A general weakness of the current state of the art of location modeling may also be recognized: its lack of realism with respect to the political and institutional issues implied by locational decisions. This criticism, developed by Lea, might be used both as a concluding remark and as a proposal for new and challenging research themes for scholars working in the field of location theory.
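
    Two of the objective functions mentioned above, total distance (p-median style) and minimax distance (p-center style), can be contrasted on a toy instance. The sketch below (Python, hypothetical 1-D locations, brute-force enumeration) is purely illustrative and does not reproduce any of the meeting papers' models.

    from itertools import combinations

    # Hypothetical 1-D positions of customers and candidate facility sites.
    customers = [1, 2, 4, 7, 9, 12]
    candidates = [2, 5, 8, 11]
    p = 2                                   # number of facilities to open

    def assignment_costs(open_sites):
        # Each customer is assigned to its nearest open facility.
        return [min(abs(c - s) for s in open_sites) for c in customers]

    best_median = min(combinations(candidates, p),
                      key=lambda sites: sum(assignment_costs(sites)))
    best_center = min(combinations(candidates, p),
                      key=lambda sites: max(assignment_costs(sites)))

    print("p-median choice (min total distance):", best_median)
    print("p-center choice (min worst-case distance):", best_center)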

    The Shareholder Value Revolution

    My dissertation blends intellectual and business history to explain why American businesses chose to endorse the idea that corporations existed primarily to maximize the value of their shareholders’ investments during the final decades of the twentieth century. Shareholder value maximization was not just a financial strategy that corporate leaders adopted; it was an entirely different way of understanding what a corporation was and what its place in society should be. This new way of understanding the corporation was the product of decades of research in financial economics. Financial economists argued that corporations owed nothing to society beyond the maximization of shareholder value and that managers should focus on boosting their firms’ stock price above all else. In the 1980s and 1990s, activist investors, management consultants, legal theorists, and financial economists themselves used these ideas to justify hostile takeovers of businesses with low stock prices and to pressure managers to restructure their businesses around the promotion of shareholder value. Through an examination of these consultants’ methods and the theory behind them, this dissertation demonstrates how ideas in favor of shareholder value maximization redefined fundamental business concepts such as profit and value. This process of redefinition transformed actions once seen as signs of corporate failure, like layoffs and divestitures, into signs that managers were willing to make the difficult choices needed to redirect corporate funds away from workers and “unproductive” investment and toward shareholders. Consultants and academics also encouraged companies’ boards of directors to grant top executives compensation in the form of stock options, to encourage managers to engage in the often-painful sorts of restructurings needed to maximize shareholder value. Thanks to boards’ generosity with these stock options, executives were able to take home unprecedentedly large compensation during the late 1980s and the 1990s, while being incentivized to shed their obligations to workers and to maximize value for shareholders. The roots of contemporary concerns about economic inequality and a slowdown in economic growth trace back to these decisions that executives made to embrace shareholder value maximization.