
    Matrices of forests, analysis of networks, and ranking problems

    The matrices of spanning rooted forests are studied as a tool for analysing the structure of networks and measuring their properties. The problems of revealing the basic bicomponents, measuring vertex proximity, and ranking from preference relations / sports competitions are considered. It is shown that the vertex accessibility measure based on spanning forests has a number of desirable properties. An interpretation for the stochastic matrix of out-forests in terms of information dissemination is given. Comment: 8 pages. This article draws heavily from arXiv:math/0508171. Published in Proceedings of the First International Conference on Information Technology and Quantitative Management (ITQM 2013). This version contains some corrections and additions.
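    For context, forest-based accessibility measures of the kind this paper studies are commonly built from the matrix-forest theorem: for a graph with Laplacian L, the matrix Q = (I + L)^(-1) is row-stochastic, and Q[i, j] measures the proximity of vertex j to vertex i via spanning rooted forests. A minimal numeric sketch (the example graph is an illustrative assumption, not taken from the paper):

```python
import numpy as np

# Adjacency matrix of a small undirected example network (assumed for illustration).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

L = np.diag(A.sum(axis=1)) - A          # graph Laplacian: degree matrix minus adjacency
Q = np.linalg.inv(np.eye(len(A)) + L)   # matrix of relative forest accessibilities

# Since L has zero row sums, (I + L) maps the all-ones vector to itself,
# so Q is row-stochastic: each row of proximities sums to 1.
print(Q.round(3))
```

    For a connected graph every entry of Q is positive, because every pair of vertices appears together in at least one spanning rooted forest.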

    Journal ranking should depend on the level of aggregation

    Journal ranking is becoming more important in assessing the quality of academic research. Several indices have been suggested for this purpose, typically on the basis of a citation graph between the journals. We follow an axiomatic approach and find an impossibility theorem: any self-consistent ranking method, which satisfies a natural monotonicity property, should depend on the level of aggregation. Our result presents a trade-off between two axiomatic properties and reveals a dilemma of aggregation. Comment: 10 pages, 2 figures.

    Essays on Empirical Game Theory

    This thesis is a collection of studies about ratings and rankings and a study about pricing of cryptocurrencies. In the first chapter, jointly with Peter Duersch and Jörg Oechssler we develop a universal measure of skill and chance in games. Online and offline gaming has become a multibillion dollar industry, yet, games of chance (in contrast to games of skill) are prohibited or tightly regulated in many jurisdictions. Thus, the question of whether a game predominantly depends on skill or chance has important legal and regulatory implications. We suggest a new empirical criterion for distinguishing games of skill from games of chance. All players are ranked according to a “best-fit” Elo algorithm. The wider the distribution of player ratings in a game, the more important the role of skill. Most importantly, we provide a new benchmark (“50%-chess”) that allows one to decide whether games predominantly depend on chance, as this criterion is often used by courts. We apply the method to large datasets of various games (e.g. chess, poker, backgammon). Our findings indicate that most popular online games, including poker, are below the threshold of 50% skill and thus depend predominantly on chance. In fact, poker contains about as much skill as chess when 75% of the chess results are replaced by a coin flip. The second chapter aims to measure skill and chance in different versions of online poker, using the best-fit Elo algorithm established in the first chapter. While Texas Hold’em arguably is the most popular version being played, the amount of skill involved might differ from other versions like Omaha Hold’em. Many platforms offer faster procedures to play (e.g. ”hyper turbo”), as well as different levels of stakes. Given the richness of online poker data, it is possible to isolate the impact of these variations individually. The heterogeneity of best-fit Elo ratings decreases in quicker competitions or with higher stakes.
Meanwhile, Omaha seems to contain more elements of skill than Texas Hold’em, as its analysis shows a wider distribution of skill levels of players. The third chapter motivates the introduction of the notion of Independence of Alternatives (IoA) in the context of ranking models. IoA postulates a property of independence which seems intuitively reasonable but does not exclusively hold in models where Luce’s Choice Axiom applies. Assuming IoA, expected ranks in the ranking of multiple alternatives can be determined from pairwise comparisons. The result can significantly simplify the calculation of expected ranks in practice and potentially facilitate analytic methods that build on more general approaches to model the ranking of multiple alternatives. The fourth chapter describes an experimental study on cryptocurrency markets. Jointly with Andis Sofianos and Yilong Xu we focus on potential effects of mining on pricing. Recent years have seen an emergence of decentralized cryptocurrencies that were initially devised as a payment system, but are increasingly being recognized as investment instruments. The price trajectories of cryptocurrencies have raised questions among economists and policy makers, especially since such markets can have spillover effects on the real economy. We focus on two key properties of cryptocurrencies that may contribute to their pricing. In a controlled lab setting, we test whether pricing is influenced by costly mining, as well as entry barriers to mining technology. Our mining design resembles the proof-of-work algorithm employed by the majority of cryptocurrencies, such as bitcoin. In our second treatment, half of the traders have access to the mining technology, while the other half can only participate in the market. This is designed to resemble the high entry cost to initiate cryptocurrency mining. In the absence of mining, no bubbles or crashes occur.
When costly mining is introduced, assets are traded at prices more than 300% higher than fundamental value and the bubble peaks relatively late in the trading periods. When only half of the traders can mine, prices surge much earlier and reach values of more than 400% higher than fundamental value at the peak of the market. Overall, the proof-of-work algorithm seems to fuel overpricing, which is intensified in conjunction with entry barriers to mining.
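    The “best-fit” Elo algorithm used in the first two chapters builds on the standard Elo update rule. A rough illustration of that rule follows; the rating scale, K-factor, and player names are assumptions for the sketch, and the thesis's best-fit variant additionally calibrates such parameters to the data:

```python
from collections import defaultdict

def expected_score(r_a, r_b):
    """Standard Elo expected score of player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def run_elo(games, k=32, base=1500.0):
    """games: list of (player_a, player_b, score_a) with score_a in {0, 0.5, 1}."""
    rs = defaultdict(lambda: base)
    for a, b, s_a in games:
        e_a = expected_score(rs[a], rs[b])
        rs[a] += k * (s_a - e_a)              # winner gains what...
        rs[b] += k * ((1 - s_a) - (1 - e_a))  # ...the loser gives up (zero-sum)
    return dict(rs)

# Hypothetical results: anna beats bob, bob draws carl, anna beats carl.
games = [("anna", "bob", 1), ("bob", "carl", 0.5), ("anna", "carl", 1)]
ratings = run_elo(games)
print(ratings)
```

    The spread of the resulting ratings across players is what the thesis uses as its skill measure: the wider the spread, the more the game rewards skill over chance.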

    Proceedings of MathSport International 2017 Conference

    Proceedings of MathSport International 2017 Conference, held in the Botanical Garden of the University of Padua, June 26-28, 2017. MathSport International organizes biennial conferences dedicated to all topics where mathematics and sport meet. Topics include: performance measures, optimization of sports performance, statistics and probability models, mathematical and physical models in sports, competitive strategies, statistics and probability match outcome models, optimal tournament design and scheduling, decision support systems, analysis of rules and adjudication, econometrics in sport, analysis of sporting technologies, financial valuation in sport, e-sports (gaming), betting and sports

    Allocation of Public Resources: Bringing Order to Chaos

    Science Olympiad (SO) is a team-based academic competition involving multiple subject areas (Events) with arcane rules governing the team composition. Add to the mix parental contention over which student(s) get on the “All-Star” team, and you have a potentially explosive situation. This project brings order and logic to school-based SO programs and defuses tense milestones through the implementation of an institutional structure that: assigns students to Events based on solicited student preferences for the Events, collects objective student performance data, composes competitive teams based on student performance (aka “Moneyball”), and brings transparency to the Team Selection process through crowdsourcing. The Event Assignment mechanism is simple, fast, easy to understand, and yields Pareto-optimal results based on student preferences, without the exchange of money or tokens, and with effectively no incentive to game the system. The Team Selection mechanism optimizes student performance data from teachers (Event Coaches) and competitions to compose a tiered series of teams with the greatest potential performance. And the Crowdsource Tool allows any stakeholder to compose a candidate team for advancing to the State competition, where the team with the highest potential performance score advances to State whether the team was composed with the Crowdsource Tool or by the Team Selection algorithm. The end result is that students get more of the Events that they want; Team Selection is transparent and far less contentious; teams are higher quality; and managing the SO program for a school takes considerably less time and effort
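    The abstract does not spell out the Event Assignment mechanism's internals, but one classic mechanism with the stated properties (Pareto-optimal from solicited preferences, no money or tokens, effectively no incentive to game the system) is serial dictatorship over capacity-limited events. A hypothetical sketch, with invented student and event names:

```python
def serial_dictatorship(preferences, capacities, order=None):
    """Each student in turn gets their most-preferred event with a free slot.

    preferences: {student: [events, most preferred first]}
    capacities:  {event: number of slots}
    order:       priority order of students (defaults to dict order)
    """
    remaining = dict(capacities)
    assignment = {}
    for student in (order or list(preferences)):
        for event in preferences[student]:
            if remaining.get(event, 0) > 0:
                assignment[student] = event
                remaining[event] -= 1
                break  # student is placed; move to the next in line
    return assignment

prefs = {"s1": ["anatomy", "codebusters"],
         "s2": ["anatomy", "forensics"],
         "s3": ["forensics", "anatomy"]}
result = serial_dictatorship(prefs, {"anatomy": 2, "codebusters": 1, "forensics": 1})
print(result)
```

    Serial dictatorship is strategy-proof because misreporting can only move a student to a less-preferred available event; whether the project uses exactly this mechanism is not stated in the abstract.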

    Essays on cost and surplus sharing games and contest theory

    This thesis contains three essays on cost and surplus-sharing games. In the first chapter, Lorenz comparison between increasing serial and Shapley value cost-sharing rule, I consider the cost-sharing problem using a cooperative approach and compare the Shapley value and the Moulin-Shenker’s (Increasing) serial rule in the Lorenz sense. The result allows me to provide the complete ordering in inequality among the four popular sharing rules: The average share, the Shapley value, the Increasing serial, and the Decreasing serial sharing rule. In the latter two chapters, I study the Tullock contest, which is a different interpretation of the non-cooperative surplus sharing game with the average rule. My second chapter, Underperforming and Outperforming contestant: Who to support?, considers a repeated Tullock contest where the designer has the option to favour either the early winner or the loser in the subsequent round, aiming to maximise the total effort. In the later section of the chapter, the contest designer is granted the additional capability to determine the prize distribution between the two stages. My work focuses on the optimal biasing decision and allocation of prizes to achieve the highest level of effort. In the third chapter, Benefits of Intermediate Competition with Non-Monetary Incentive, the underlying assumption is that the designer places importance not only on the total effort invested in the contest but also on the contest’s selection accuracy. I focus on investigating the impact of introducing an intermediate stage with a non-monetary incentive on the contest’s performance in both dimensions. By analysing the contest from this perspective, my work provides insights into the potential advantages and enhancements that can be achieved through the inclusion of intermediate competition and non-monetary incentives.
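    As background for the contest chapters: in the standard two-player Tullock contest with prize V, player i wins with probability x_i / (x_i + x_j), and the symmetric equilibrium effort is x* = V/4. This is a textbook result, not a result of the thesis; a quick numeric best-response check confirms it:

```python
# Symmetric two-player Tullock contest with prize V (illustrative value).
V = 100.0

def payoff(x_i, x_j):
    """Expected payoff: winning probability times prize, minus own effort."""
    return V * x_i / (x_i + x_j) - x_i

# The first-order condition V * x_j / (x_i + x_j)**2 = 1 at x_i = x_j
# gives x* = V/4. Check that x* is a best response to itself:
x_star = V / 4
candidates = [x_star + d for d in (-5.0, -1.0, 0.0, 1.0, 5.0)]
best = max(candidates, key=lambda x: payoff(x, x_star))
print(best)  # V/4 = 25.0
```

    At the equilibrium each player earns V/4, so half the prize is dissipated in effort; the thesis's designs (biasing winners or losers, splitting the prize across stages) modify this baseline to raise total effort.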

    A Statistical Investigation into Factors Affecting Results of One Day International Cricket Matches

    The effect of playing “home” or “away” and many other factors, such as batting first or second, winning or losing the toss, have been hypothesised as influencing the outcome of major cricket matches. Anecdotally, it has often been noted that Subcontinental sides (India, Pakistan, Sri Lanka and Bangladesh) tend to perform much better on the Subcontinent than away from it, whilst England do better in Australia during cooler, damper Australian Summers than during hotter, drier ones. In this paper, focusing on results of men’s One Day International (ODI) matches involving England, we investigate the extent to which a number of factors – including playing home or away (or the continent of the venue), batting or fielding first, winning or losing the toss, the weather conditions during the game, the condition of the pitch, and the strength of each team’s top batting and bowling resources – influence the outcome of matches. By employing a variety of statistical techniques, we find that the continent of the venue does appear to be a major factor affecting the result, but winning the toss does not. We then use the factors identified as significant in an attempt to build a binary logistic regression model that will estimate the probability of England winning at various stages of a game. Finally, we use this model to predict the results of some England ODI games not used in training the model.
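    The win model described here is a binary logistic regression. The toy fit below illustrates the mechanics only; the features, labels, and resulting coefficients are made up for the sketch and are not the paper's fitted values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy design matrix: columns = intercept, venue_in_subcontinent (0/1),
# batting_first (0/1). Labels: 1 = England win (invented data).
X = np.array([[1, 0, 1],
              [1, 1, 0],
              [1, 0, 0],
              [1, 1, 1]], dtype=float)
y = np.array([1, 0, 1, 0], dtype=float)

# Plain gradient ascent on the log-likelihood of the logistic model.
w = np.zeros(3)
for _ in range(5000):
    p = sigmoid(X @ w)                 # current win probabilities
    w += 0.1 * X.T @ (y - p) / len(y)  # gradient step

# Predicted win probability for a match outside the Subcontinent, batting first.
p_home = sigmoid(np.array([1.0, 0.0, 1.0]) @ w)
print(p_home)
```

    In practice one would use a fitted routine (e.g. statsmodels or scikit-learn) rather than hand-rolled gradient ascent, but the estimated probabilities serve the same role as the paper's stage-by-stage win estimates.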