1,880 research outputs found

    Agent-based Modeling And Market Microstructure

    Get PDF
    In most modern financial markets, traders express their preferences for assets by placing orders. These orders are either executed, if a counterparty is willing to match them, or collected in a priority queue called a limit order book. Such markets are said to adopt an order-driven trading mechanism. A key question in this domain is how to model and analyze the strategic behavior of market participants in response to different definitions of the trading mechanism (e.g., replacing the priority queue of the continuous double auction with that of a frequent call market). The objective is to design financial markets in which pernicious behavior is minimized.

    The complex dynamics of market activities are typically studied via agent-based modeling (ABM), enriched by Empirical Game-Theoretic Analysis (EGTA) to compute equilibria among market players and characterize the market behavior (also known as market microstructure) at equilibrium. This thesis contributes to this research area by evaluating the robustness of this approach, comparing existing trading mechanisms, and proposing more advanced designs.

    In Chapter 4, we investigate the equilibrium strategy profiles, the market performance they induce, and their robustness to different simulation parameters. For two mainstream trading mechanisms, continuous double auctions (CDAs) and frequent call markets (FCMs), we find that EGTA is needed to solve the game, as pure strategies are not a good approximation of the equilibrium. Moreover, EGTA gives generally sound and robust solutions across different market and model setups, with the notable exception of agents' risk attitudes. We also consider heterogeneous EGTA, a more realistic generalization of EGTA in which traders can modify their strategies during the simulation, and show that fixed strategies lead to sufficiently good analyses, especially when the computational cost is taken into account.

    After verifying the reliability of the ABM and EGTA methods, we follow this methodology to study the impact of two widely adopted and potentially malicious trading strategies: spoofing and the submission of iceberg orders. In Chapter 5, we study the effects of spoofing attacks on CDA and FCM markets. We let one spoofer (an agent playing the spoofing strategy) interact with other strategic agents and demonstrate that, while spoofing may be profitable in both market models, it has less impact on FCMs than on CDAs. We also explore several FCM mechanism designs that help curb this type of market manipulation even further. In Chapter 6, we study the impact of iceberg orders on price and order-flow dynamics in financial markets. We find that the volume of submitted orders significantly affects both the strategy choices of the other agents and the market performance: when agents observe a large-volume order, they tend to speculate instead of providing liquidity, and both efficiency and liquidity are harmed. We show that, while playing the iceberg-order strategy can alleviate the problems caused by high-volume orders, submitting a large enough order and attracting speculators is better than risking fewer executed trades with iceberg orders.

    We conclude from Chapters 5 and 6 that FCMs have some appealing features compared with CDAs, and we focus on the design of trading mechanisms in Chapter 7. We verify that CDAs constitute fertile soil for predatory behavior and toxic order flows, and that FCMs address the latency-arbitrage opportunities built into those markets. This chapter studies the extent to which adaptive rules for setting the length of the clearing intervals, which may move in sync with the market fundamentals, affect the performance of frequent call markets. We show that matching orders in accordance with these rules can increase efficiency and selfish traders' surplus in a variety of market conditions. In so doing, our work paves the way for a deeper understanding of the flexibility granted by adaptive call markets.
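
    For readers unfamiliar with order-driven markets, the following is a minimal, illustrative Python sketch of a limit order book with price-time priority, the core of the continuous double auction mechanism discussed above. The class and its simplifications are ours for illustration; it is not the simulator or the EGTA machinery used in the thesis.

    import heapq
    import itertools

    # Minimal sketch of a continuous double auction (CDA) limit order book with
    # price-time priority. Cancellations, tick sizes, and order types are omitted.
    class LimitOrderBook:
        def __init__(self):
            self._time = itertools.count()        # tie-breaker: earlier order wins
            self.bids = []                        # heap of (-price, time, qty)
            self.asks = []                        # heap of (price, time, qty)

        def submit(self, side, price, qty):
            """Match an incoming limit order, resting any unmatched remainder."""
            trades = []
            opposite = self.asks if side == "buy" else self.bids
            while qty > 0 and opposite:
                key, t, rest_qty = opposite[0]
                rest_price = key if side == "buy" else -key
                crosses = price >= rest_price if side == "buy" else price <= rest_price
                if not crosses:
                    break
                heapq.heappop(opposite)
                filled = min(qty, rest_qty)
                trades.append((rest_price, filled))       # trade at the resting price
                qty -= filled
                if rest_qty > filled:                     # partially filled resting order
                    heapq.heappush(opposite, (key, t, rest_qty - filled))
            if qty > 0:                                   # rest the remainder in the book
                key = -price if side == "buy" else price
                heapq.heappush(self.bids if side == "buy" else self.asks,
                               (key, next(self._time), qty))
            return trades

    book = LimitOrderBook()
    book.submit("sell", 101.0, 5)
    book.submit("sell", 100.0, 5)
    print(book.submit("buy", 101.0, 8))   # [(100.0, 5), (101.0, 3)] -- best price first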

    Climate Change and Critical Agrarian Studies

    Full text link
    Climate change is perhaps the greatest threat to humanity today and plays out as a cruel engine of myriad forms of injustice, violence and destruction. The effects of climate change from human-made emissions of greenhouse gases are devastating and accelerating, yet they are uncertain and uneven in both their geography and their socio-economic impacts. Emerging from the dynamics of capitalism since the industrial revolution, as well as from industrialisation under state-led socialism, the consequences of climate change are especially profound for the countryside and its inhabitants. The book interrogates the narratives and strategies that frame climate change and examines the institutionalised responses in agrarian settings, highlighting the exclusions and inclusions that result. It explores how different people, in relation to class and other co-constituted axes of social difference such as gender, race, ethnicity, age and occupation, are affected by climate change, as well as by the climate adaptation and mitigation responses being implemented in rural areas. The book in turn explores how climate change, and the responses to it, affect processes of social differentiation, trajectories of accumulation and, in turn, agrarian politics. Finally, the book examines what strategies are required to confront climate change, and the underlying political-economic dynamics that cause it, reflecting on what this means for agrarian struggles across the world. The 26 chapters in this volume explore how the relationship between capitalism and climate change plays out in the rural world and, in particular, how agrarian struggles connect with the huge challenge of climate change. Through a wide variety of case studies alongside more conceptual chapters, the book makes the often-missing connection between climate change and critical agrarian studies. The book argues that making the connection between climate and agrarian justice is crucial.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume.

    Pacti: Scaling Assume-Guarantee Reasoning for System Analysis and Design

    Full text link
    Contract-based design is a method to facilitate modular system design. While there has been substantial progress on the theory of contracts, there has been less progress on scalable algorithms for the algebraic operations in this theory. In this paper, we present: 1) principles for implementing a contract-based design tool at scale, and 2) Pacti, a tool that can efficiently compute these operations. We then illustrate the use of Pacti in a variety of case studies.
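
    To make the algebraic operations on contracts concrete, here is a minimal, set-based Python sketch of assume-guarantee contract composition. The Contract class, its purely syntactic discharge rule, and the example names are ours for illustration; they do not reflect Pacti's actual API or its symbolic (e.g., polyhedral) constraint representation.

    from dataclasses import dataclass

    # Set-based sketch of assume-guarantee contracts. Predicates are plain strings;
    # real tools such as Pacti work with symbolic constraints instead.
    @dataclass(frozen=True)
    class Contract:
        assumptions: frozenset
        guarantees: frozenset

        def compose(self, other):
            # Combined guarantees: everything both components promise.
            g = self.guarantees | other.guarantees
            # Combined assumptions: what either component needs from the environment,
            # minus obligations already discharged by the other's guarantees
            # (a coarse, purely syntactic discharge rule).
            a = (self.assumptions | other.assumptions) - g
            return Contract(frozenset(a), frozenset(g))

    # A sensor that needs power and guarantees a valid measurement, composed
    # with a supply that guarantees power.
    sensor = Contract(frozenset({"power_ok"}), frozenset({"measurement_valid"}))
    supply = Contract(frozenset(), frozenset({"power_ok"}))
    system = sensor.compose(supply)
    print(system.assumptions)   # frozenset() -- the power assumption is discharged
    print(system.guarantees)    # {'power_ok', 'measurement_valid'}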

    Rethinking Adversarial Policies: A Generalized Attack Formulation and Provable Defense in Multi-Agent RL

    Full text link
    Most existing works consider direct perturbations of the victim's state/action or of the underlying transition dynamics to show the vulnerability of reinforcement learning agents to adversarial attacks. However, such direct manipulation may not always be feasible in practice. In this paper, we consider another common and realistic attack setup: in a multi-agent RL setting with well-trained agents, at deployment time, the victim agent ν is exploited by an attacker who controls another agent α to act adversarially against the victim using an adversarial policy. Prior attack models under this setup do not consider that the attacker may face resistance and thus only gain partial control of agent α; they also introduce perceivable "abnormal" behaviors that are easily detectable. A provable defense against these adversarial policies is also lacking. To resolve these issues, we introduce a more general attack formulation that models the extent to which the adversary is able to control the agent when producing the adversarial policy. Based on this generalized attack framework, the attacker can also regulate the state-distribution shift caused by the attack through an attack budget, and thus produce stealthy adversarial policies that exploit the victim agent. Furthermore, we provide the first provably robust defenses, with a convergence guarantee to the most robust victim policy, via adversarial training with timescale separation, in sharp contrast to adversarial training in supervised learning, which may only provide empirical defenses.
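
    As one illustration of what "partial control" under an attack budget can mean, here is a hedged Python sketch in which the attacker overrides agent α's benign action only with some probability. The function and policy names are ours and the mechanism is deliberately simplistic; this is not the paper's actual attack formulation or algorithm.

    import numpy as np

    # Illustrative sketch of partial control of agent alpha: with probability
    # `budget` the attacker overrides alpha's benign action with an adversarial
    # one; otherwise the benign policy acts. A larger budget means more control,
    # but also a larger (more detectable) shift in the state/action distribution.
    def partially_controlled_action(state, benign_policy, adversarial_policy, budget, rng):
        if rng.random() < budget:
            return adversarial_policy(state)
        return benign_policy(state)

    # Toy example over a discrete action space {0, 1, 2}.
    rng = np.random.default_rng(0)
    benign = lambda s: 0          # the benign policy always picks action 0
    adversarial = lambda s: 2     # the adversarial policy always picks action 2
    actions = [partially_controlled_action(None, benign, adversarial, 0.3, rng)
               for _ in range(10)]
    print(actions)                # mostly 0s, with action 2 appearing roughly 30% of the time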

    Provably Efficient Generalized Lagrangian Policy Optimization for Safe Multi-Agent Reinforcement Learning

    Full text link
    We examine online safe multi-agent reinforcement learning using constrained Markov games, in which agents compete by maximizing their expected total rewards under a constraint on expected total utilities. Our focus is confined to an episodic two-player zero-sum constrained Markov game with independent transition functions that are unknown to the agents, adversarial reward functions, and stochastic utility functions. For such a Markov game, we employ an approach based on the occupancy measure to formulate it as an online constrained saddle-point problem with an explicit constraint. We extend the Lagrange multiplier method in constrained optimization to handle the constraint by creating a generalized Lagrangian with minimax decision primal variables and a dual variable. Next, we develop an upper confidence reinforcement learning algorithm to solve this Lagrangian problem while balancing exploration and exploitation. Our algorithm updates the minimax decision primal variables via online mirror descent and the dual variable via a projected gradient step, and we prove that it enjoys a sublinear rate of O((|X|+|Y|) L \sqrt{T(|A|+|B|)}) for both regret and constraint violation after playing T episodes of the game. Here, L is the horizon of each episode, and (|X|,|A|) and (|Y|,|B|) are the state/action space sizes of the min-player and the max-player, respectively. To the best of our knowledge, we provide the first provably efficient online safe reinforcement learning algorithm in constrained Markov games. Comment: 59 pages, a full version of the main paper in the 5th Annual Conference on Learning for Dynamics and Control.
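
    As a hedged sketch of the generalized Lagrangian described above, in our own notation (V_r for the expected total reward, V_g for the expected total utility, c for the constraint threshold, and λ ≥ 0 for the dual variable), the constrained saddle-point problem takes roughly the form

        \mathcal{L}(x, y, \lambda) = V_r(x, y) + \lambda \, \big( V_g(x, y) - c \big), \qquad \max_{x} \, \min_{y} \, \min_{\lambda \ge 0} \, \mathcal{L}(x, y, \lambda),

    up to the exact ordering of the operators, where x and y stand for the occupancy-measure (primal) variables of the two players; as stated above, the primal variables are updated via online mirror descent and λ via a projected gradient step.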

    Revisiting the capitalization of public transport accessibility into residential land value: an empirical analysis drawing on Open Science

    Get PDF
    Background: The delivery and effective operation of public transport is fundamental for a transition to low-carbon-emission transport systems. However, many cities face budgetary challenges in providing and operating this type of infrastructure. Land value capture (LVC) instruments, aimed at recovering all or part of the land value uplifts triggered by actions other than the landowner's, can alleviate some of this pressure. A key element of LVC lies in the increment in land value associated with a particular public action. Urban economic theory supports this idea and considers accessibility a core determinant of residential land value. Although the empirical literature assessing the relationship between land value increments and public transport infrastructure is vast, it often assumes homogeneous benefits and therefore overlooks relevant elements of accessibility. Advances in the accessibility concept in the context of Open Science can ease the relaxation of such assumptions.

    Methods: This thesis draws on the case of Greater Mexico City between 2009 and 2019. It focuses on the effects of the main public transport network (MPTN), which is organised into seven temporal stages according to its expansion phases. The analysis incorporates location-based accessibility measures to employment opportunities in order to assess the benefits of public transport infrastructure, making extensive use of the open-source software OpenTripPlanner for public transport route modelling (≈ 2.1 billion origin-destination routes). Potential capitalizations are assessed within the hedonic framework. The property value data consist of individual administrative mortgage records collected by the Federal Mortgage Society (≈ 800,000 records). The hedonic function is estimated using a variety of approaches (linear, nonlinear, multilevel, and spatial multilevel models), fitted by maximum likelihood and Bayesian methods. The study also examines possible spatial aggregation bias using alternative spatial aggregation schemes, following the modifiable areal unit problem (MAUP) literature.

    Results: The accessibility models across the various temporal stages evidence the spatial heterogeneity shaped by the MPTN in combination with land use and the individual perception of residents. This highlights the need to move from measures focused on the characteristics of transport infrastructure to comprehensive accessibility measures that reflect such heterogeneity. The estimated hedonic function suggests a robust, positive, and significant relationship between MPTN accessibility and residential land value across all modelling frameworks and in the presence of a variety of controls. Residential land value increases by between 3.6% and 5.7% for one additional standard deviation in MPTN accessibility to employment in the final set of models. The total willingness to pay (TWTP) is considerable, ranging from 0.7 to 1.5 times the capital costs of Line 7 of the Metrobús bus rapid transit system. A sensitivity analysis shows that the hedonic model estimation is sensitive to the MAUP; among the schemes tested, a post-code zoning scheme produces results closest to those of the smallest spatial analytical scheme (a 0.5 km hexagonal grid).

    Conclusion: The present thesis advances the discussion on the capitalization of public transport into residential land value by adopting recent contributions from the Open Science framework. Empirically, it fills a knowledge gap, given the lack of literature on this topic for this study area. In terms of policy, the findings support LVC as a mechanism of considerable potential. Regarding fee-based LVC instruments, there are fairness issues in the distribution of charges or exactions to households that could be addressed using location-based measures. Furthermore, the approach developed for this analysis serves as valuable guidance for identifying sites with large potential for the implementation of development-based instruments, for instance land readjustment or the sale/lease of additional development rights.
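
    To illustrate how a coefficient like the 3.6% to 5.7% effect per standard deviation can be read off a log-linear hedonic model, here is a minimal Python sketch with entirely synthetic data. The column names and values are hypothetical, and the specification is far simpler than the multilevel and spatial models estimated in the thesis.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Entirely synthetic data; column names and values are hypothetical.
    df = pd.DataFrame({
        "price":    [1.2e6, 9.5e5, 1.8e6, 7.0e5, 1.1e6, 2.3e6],   # property values
        "access":   [0.42, 0.31, 0.67, 0.18, 0.39, 0.81],          # accessibility index
        "floor_m2": [70, 55, 95, 48, 66, 120],                     # a simple control
    })

    # Standardize accessibility so its coefficient reads as the effect of one
    # additional standard deviation, as reported in the thesis.
    df["access_z"] = (df["access"] - df["access"].mean()) / df["access"].std()

    # Log-linear hedonic specification: log(price) ~ accessibility + controls.
    model = smf.ols("np.log(price) ~ access_z + floor_m2", data=df).fit()

    # In a log-linear model, a one-SD increase in accessibility changes price by
    # roughly (exp(beta) - 1) * 100 percent.
    beta = model.params["access_z"]
    print(f"Implied price change per SD of accessibility: {(np.exp(beta) - 1) * 100:.1f}%")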

    Opportunities and Challenges from Major Disasters: Lessons Learned of Long-Term Recovery Group Members

    Get PDF
    Natural hazards caused by the alteration of weather patterns put populations at risk, with outcomes of economic loss, property damage, personal injury, and loss of life. The unpredictability of disasters is a topic of concern to most governments. Disaster policies need more attention in aligning mitigation opportunities with disaster housing recovery (DHR). Flooding, which primarily impacts housing in coastal areas, is one of the most serious issues associated with natural hazards; it has a variety of causes and implications, especially for the vulnerable populations exposed to it. DHR is complex, involving the need for effective coordination of resources and labor. Understanding how the build back better philosophy (i.e., rebuilding in a way intended to reduce future risk), the quality of the houses, and the householders' income relate to one another is beneficial for preparing a resilient housing recovery plan. What are the main sources of obstacles experienced in the DHR process? How might outcomes be improved? This study attempts to answer these questions using data collected from Long-Term Recovery Group (LTRG) members in disaster areas. The analysis of LTRG members' experiences provides a valuable perspective with the potential to improve the DHR process and mitigate future impacts. The goal is to understand and create awareness of the factors impeding recovery from previous disasters, analyzing the information obtained from LTRG members with content analysis software to ascertain best practices that can inform disaster policies and improve the recovery process. The content analysis technique provides a big-picture view of the main issues affecting recovery. The key lessons learned from the LTRG members are that three major delay factors (planning, governance, and communication) are impeding the improvement of the DHR process. It is essential to have an LTRG running before a disaster occurs, including a disaster plan focused on funding, labor, and resilient recovery; more transparent governance, with some decentralization of the process and more up-to-date disaster policies; and a direct line of communication to overcome gaps, including lack of communication and of trust in the process.

    A Survey of Imitation Learning: Algorithms, Recent Developments, and Challenges

    Full text link
    In recent years, the development of robotics and artificial intelligence (AI) systems has been nothing short of remarkable. As these systems continue to evolve, they are being deployed in increasingly complex and unstructured environments, such as autonomous driving, aerial robotics, and natural language processing. As a consequence, programming their behaviors manually or defining their behavior through reward functions (as done in reinforcement learning (RL)) has become exceedingly difficult. Such environments require a high degree of flexibility and adaptability, making it challenging to specify an optimal set of rules or reward signals that can account for all possible situations. In such environments, learning from an expert's behavior through imitation is often more appealing. This is where imitation learning (IL) comes into play: a process in which desired behavior is learned by imitating an expert's behavior, provided through demonstrations. This paper provides an introduction to IL and an overview of its underlying assumptions and approaches. It also offers a detailed description of recent advances and emerging areas of research in the field. Additionally, the paper discusses how researchers have addressed common challenges associated with IL and provides potential directions for future research. Overall, the goal of the paper is to provide a comprehensive guide to the growing field of IL in robotics and AI. Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
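
    As a minimal, illustrative example of the simplest form of imitation learning, behavioral cloning (supervised learning on expert state-action pairs), here is a short Python sketch with a synthetic linear expert. All names and data are ours, and it stands in for no specific method surveyed in the paper.

    import numpy as np

    # Behavioral cloning: reduce imitation learning to supervised learning on
    # expert (state, action) pairs. The "expert" here is a synthetic linear
    # controller; real demonstrations would come from humans or a trained policy.
    rng = np.random.default_rng(0)
    true_W = np.array([[0.5, -1.0], [2.0, 0.3]])                     # unknown expert mapping
    states = rng.normal(size=(500, 2))                               # demonstrated states
    actions = states @ true_W.T + 0.01 * rng.normal(size=(500, 2))   # expert actions (noisy)

    # Fit W_fit by least squares so that states @ W_fit approximates the actions.
    W_fit, *_ = np.linalg.lstsq(states, actions, rcond=None)
    policy = lambda s: s @ W_fit                                     # learned imitation policy

    new_state = np.array([0.2, -0.4])
    print("expert action:", new_state @ true_W.T)
    print("cloned action:", policy(new_state))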

    “The United States, it’s supposed to be where dreams come true” : rhizomatic familias, nested policy contexts, and the attendant shaping of undocumented and mixed status students’ lived experiences in North Carolina

    Get PDF
    This qualitative research study explores the nature of the nested contexts (historical, political, socio-cultural) within which migrant youth experience restrictive immigration policies in North Carolina, while also examining how these youth perceive and experience the enactment of these policies through an interpretive policy framework combined with a critical policy analysis lens. Spotlighting these experiences highlights not only the structural obstacles and challenges these youth and their families face, both within educational settings and in their daily lives, but also, importantly, underscores their capacity for agency and their function as policy actors able to recreate policy meaning and effect transformative change. The findings of this study suggest a need for structural policy reform and a series of systemic reforms within K-12 educational settings in North Carolina to provide schools with institutional mechanisms of support to meet migrant students' needs. The study also develops a new conceptual framework, rhizomatic familias, on the basis of Deleuze and Guattari's (1987) concept of the rhizome, to call attention to mixed-status families' uniform experiences of illegality.