
    Privacy and Truthful Equilibrium Selection for Aggregative Games

    We study a very general class of games, multi-dimensional aggregative games, which in particular generalize both anonymous games and weighted congestion games. For any such game that is also large, we solve the equilibrium selection problem in a strong sense. In particular, we give an efficient weak mediator: a mechanism that has only the power to listen to reported types and provide non-binding suggested actions, such that (a) it is an asymptotic Nash equilibrium for every player to truthfully report their type to the mediator and then follow its suggested action; and (b) when players do so, they end up coordinating on a particular asymptotic pure-strategy Nash equilibrium of the induced complete-information game. In fact, truthful reporting is an ex-post Nash equilibrium of the mediated game, so our solution applies even in settings of incomplete information, and even when player types are arbitrary or worst-case (i.e. not drawn from a common prior). We achieve this by giving an efficient differentially private algorithm for computing a Nash equilibrium in such games. The rates of convergence to equilibrium in all of our results are inverse polynomial in the number of players n. We also apply our main results to a multi-dimensional market game. Our results can be viewed as giving, for a rich class of games, a more robust version of the Revelation Principle, in that we work with weaker informational assumptions (no common prior) yet provide a stronger solution concept (ex-post Nash versus Bayes Nash equilibrium). In comparison to previous work, our main conceptual contribution is showing that weak mediators are a game-theoretic object that exists in a wide variety of games; previously, they were only known to exist in traffic routing games.

    Transparency as Delayed Observability in Multi-Agent Systems

    Is transparency always beneficial in complex systems such as traffic networks and stock markets? How is transparency defined in multi-agent systems, and what is its optimal degree, at which social welfare is highest? We take an agent-based view and define transparency (or its lack) as the delay in agents' observability of environment states, and we use simulations to analyze the impact of this delay on social welfare. To capture how agent strategies adapt under varying delays, we model agents as learners maximizing the same objectives under different delays in a simulated environment. Focusing on two agent types, constrained and unconstrained, we use multi-agent reinforcement learning to evaluate the impact of delay on agent outcomes and social welfare. An empirical demonstration of our framework in simulated financial markets shows opposing trends in the outcomes of the constrained and unconstrained agents as delay grows, with an optimal partial-transparency regime at which social welfare is maximal.
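The delayed-observability idea above can be sketched as an environment wrapper that buffers states and serves each agent the state from `delay` steps ago. The `reset`/`step` interface and the class name are illustrative assumptions, not the paper's simulated market environment:

```python
from collections import deque

class DelayedObservationWrapper:
    """Wrap an environment so the agent observes states `delay` steps late.

    A minimal sketch: the wrapped `env` is assumed to expose `reset()`
    returning a state and `step(action)` returning (state, reward, done).
    """

    def __init__(self, env, delay):
        self.env = env
        self.delay = delay
        self.buffer = deque(maxlen=delay + 1)

    def reset(self):
        state = self.env.reset()
        # Until `delay` real states have accumulated, repeat the initial one.
        self.buffer = deque([state] * (self.delay + 1), maxlen=self.delay + 1)
        return self.buffer[0]

    def step(self, action):
        state, reward, done = self.env.step(action)
        self.buffer.append(state)   # oldest state falls out (maxlen)
        return self.buffer[0], reward, done  # observation lags by `delay`
```

Setting `delay=0` recovers full transparency; increasing `delay` models progressively less transparent markets, which is the knob the paper's welfare analysis varies.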

    Market Making via Reinforcement Learning.


    Algorithmization and Optimization of Stock Trading Processes

    This thesis for the first (bachelor's) level of higher education, «Algorithmization and optimization of stock trading processes», contains 86 pages, 2 tables, 3 figures, and 10 appendices; the list of references contains 52 items. The purpose of the work is to develop a proprietary trading algorithm capable of obtaining the most up-to-date data, conducting buying and selling operations, using various trading strategies, and being further improved. The object of the work is the process of exchange trading, its algorithms, and strategies. The subject of the work is the development and optimization of trading algorithms and the use of different methods and technologies to improve the efficiency and effectiveness of exchange trading. Research methods: a combination of general and specific analysis methods was applied. The study of modern perspectives on the algorithmization and optimization of exchange trading processes used abstraction and a progression from the abstract to the concrete. Analytical information was obtained from statistical reports by leading market-research companies and from historical price data of trading assets provided by the exchange. The construction of the trading algorithm involved formalization, experimentation, measurement, comparison, analysis, and graphical methods. The theoretical basis of the research comprises works by leading scientists on trading algorithms, theoretical approaches to algorithmizing trading strategies, and the use of modern technologies, such as machine learning and genetic algorithms, in trading algorithms. Results: based on the analysis and research conducted, a trading algorithm was developed using the SMA (simple moving average) trading strategy. Through simulation, the strategy was optimized by introducing threshold values for the gap between the moving averages used to make decisions; the economic effect of this optimization is an increase in the algorithm's profitability from 1.45% to 8.8%. The outcome is a trading algorithm that makes decisions based on the improved SMA strategy and can easily be enhanced and adapted to other strategies. Recommendations: the findings can be used by companies planning to develop trading algorithms, as well as by individual investors, as a basis for their own developments, further enhancement of the basic algorithm, and implementation of other trading strategies. Use of the developed algorithm does not constitute investment advice.
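The optimization described above, trading only when the gap between the two moving averages clears a threshold rather than on every crossover, can be sketched as follows. The window sizes and threshold value are illustrative placeholders, not the thesis's calibrated parameters:

```python
def sma(prices, window):
    """Simple moving average over the trailing `window` prices."""
    return sum(prices[-window:]) / window

def signal(prices, fast=10, slow=50, threshold=0.01):
    """Return +1 (buy), -1 (sell), or 0 (hold).

    Instead of acting on every SMA crossover, the strategy trades only
    when the relative gap between the fast and slow SMA exceeds
    `threshold`, filtering out weak crossovers.
    """
    if len(prices) < slow:
        return 0  # not enough history for the slow average
    gap = (sma(prices, fast) - sma(prices, slow)) / sma(prices, slow)
    if gap > threshold:
        return 1
    if gap < -threshold:
        return -1
    return 0
```

The threshold acts as a dead band around the crossover point; widening it reduces whipsaw trades in sideways markets at the cost of entering trends later.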

    Computational Models of Algorithmic Trading in Financial Markets.

    Today's trading landscape is a fragmented and complex system of interconnected electronic markets in which algorithmic traders are responsible for the majority of trading activity. Questions about the effects of algorithmic trading naturally lend themselves to a computational approach, given the nature of the algorithms involved and the electronic systems in place for processing and matching orders. To better understand the economic implications of algorithmic trading, I construct computational agent-based models of scenarios with investors interacting with various algorithmic traders. I employ the simulation-based methodology of empirical game-theoretic analysis to characterize trader behavior in equilibrium under different market conditions. I evaluate the impact of algorithmic trading and market structure within three different scenarios. First, I examine the impact of a market maker on trading gains in a variety of environments. A market maker facilitates trade and supplies liquidity by simultaneously maintaining offers to buy and sell. I find that market making strongly tends to increase total welfare and the market maker is itself profitable. Market making may or may not benefit investors, however, depending on market thickness, investor impatience, and the number of trading opportunities. Second, I investigate the interplay between market fragmentation and latency arbitrage, a type of algorithmic trading strategy in which traders exercise superior speed in order to exploit price disparities between exchanges. I show that the presence of a latency arbitrageur degrades allocative efficiency in continuous markets. Periodic clearing at regular intervals, as in a frequent call market, not only eliminates the opportunity for latency arbitrage but also significantly improves welfare. Lastly, I study whether frequent call markets could potentially coexist alongside the continuous trading mechanisms employed by virtually all modern exchanges. 
I examine the strategic behavior of fast and slow traders who submit orders to either a frequent call market or a continuous double auction. I model this as a game of market choice, and I find strong evidence of a predator-prey relationship between fast and slow traders: the fast traders prefer to be with slower agents regardless of market, and slow traders ultimately seek the protection of the frequent call market.
    PhD dissertation, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120811/1/ewah_1.pd
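The frequent call market studied in the second and third scenarios can be illustrated with a minimal uniform-price batch clearing for unit-size orders: because all crossing orders in a batch execute at one price, speed within the batch interval confers no advantage. This is a toy sketch, not the dissertation's market simulator:

```python
def clear_call_market(bid_prices, ask_prices):
    """One batch clearing of a call market with unit-size orders.

    Returns (quantity_traded, uniform_price). All crossing orders
    execute at a single price, eliminating intra-batch latency races.
    """
    bids = sorted(bid_prices, reverse=True)  # most aggressive buyers first
    asks = sorted(ask_prices)                # most aggressive sellers first
    matched = 0
    for b, a in zip(bids, asks):
        if b < a:                            # orders no longer cross
            break
        matched += 1
    if matched == 0:
        return 0, None
    # Uniform price: midpoint between the marginal matched bid and ask.
    price = (bids[matched - 1] + asks[matched - 1]) / 2
    return matched, price
```

In a continuous double auction, by contrast, each crossing order executes immediately at the standing quote, which is exactly what makes stale quotes exploitable by a latency arbitrageur.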

    On the Aggregation of Subjective Inputs from Multiple Sources

    When we have a population of individuals or artificially intelligent agents possessing diverse subjective inputs (e.g. predictions, opinions, etc.) about a common topic, how should we collect and combine them into a single judgment or estimate? This has long been a fundamental question across disciplines that concern themselves with forecasting and decision-making, and has attracted the attention of computer scientists particularly on account of the proliferation of online platforms for electronic commerce and the harnessing of collective intelligence. In this dissertation, I study this problem through the lens of computational social science in three main parts: (1) Incentives in information aggregation: In this segment, I analyze mechanisms for the elicitation and combination of private information from strategic participants, particularly crowdsourced forecasting tools called prediction markets. I show that (a) when a prediction market implemented with a widely used family of algorithms called market scoring rules (MSRs) interacts with myopic risk-averse traders, the price process behaves like an opinion pool, a classical family of belief combination rules, and (b) in an MSR-based game-theoretic model of prediction markets where participants can influence the predicted outcome but some of them have a non-zero probability of being non-strategic, the equilibrium is one of two types, depending on this probability -- either collusive and uninformative or partially revealing; (2) Aggregation with non-strategic agents: In this part, I am agnostic to incentive issues, and focus on algorithms that uncover the ground truth from a sequence of noisy versions. 
In particular, I present the design and analysis of an approximately Bayesian algorithm for learning a real-valued target given access only to censored Gaussian signals, which performs asymptotically almost as well as if we had uncensored signals; (3) Market making in practice: this component, although tied to the two previous themes, deals more directly with practical aspects of aggregation mechanisms. Here, I develop an adaptation of an MSR to a financial market setting called a continuous double auction, and document its experimental evaluation in a simulated market ecosystem.
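A concrete instance of a market scoring rule is Hanson's logarithmic MSR (LMSR), whose cost function C(q) = b * log(sum_i exp(q_i / b)) yields softmax prices that sum to one and serve as the market's aggregate probability estimate. A minimal sketch; the liquidity parameter b below is an illustrative choice:

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    m = max(qi / b for qi in q)  # subtract the max for numerical stability
    return b * (m + math.log(sum(math.exp(qi / b - m) for qi in q)))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices: softmax of q/b. They sum to 1 and act as
    the market's current probability estimate over the outcomes."""
    m = max(qi / b for qi in q)
    exps = [math.exp(qi / b - m) for qi in q]
    total = sum(exps)
    return [e / total for e in exps]

def trade_cost(q, delta, b=100.0):
    """Amount a trader pays to move outstanding shares from q to q + delta."""
    q_new = [qi + di for qi, di in zip(q, delta)]
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)
```

Buying shares of an outcome raises its price, so each trade moves the quoted probabilities toward the trader's belief; this price-as-belief behavior is what links MSR dynamics to the opinion pools discussed in part (1).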