18 research outputs found

    How Market Ecology Explains Market Malfunction

    Standard approaches to the theory of financial markets are based on equilibrium and efficiency. Here we develop an alternative based on concepts and methods developed by biologists, in which the wealth invested in a financial strategy is like the abundance of a species. We study a toy model of a market consisting of value investors, trend followers and noise traders. We show that the average returns of strategies are strongly density dependent, i.e. they depend on the wealth invested in each strategy at any given time. In the absence of noise the market would slowly evolve toward an efficient equilibrium, but the statistical uncertainty in profitability (which is adjusted to match real markets) makes this evolution noisy and uncertain. Even in the long term, the market spends extended periods of time away from perfect efficiency. We show how core concepts from ecology, such as the community matrix and food webs, give insight into market behavior. The wealth dynamics of the market ecology explain how market inefficiencies spontaneously occur and give insight into the origins of excess price volatility and deviations of prices from fundamental values. Comment: 9 pages, 5 figures, Conference on Evolutionary Models of Financial Markets, includes responses to reviewer
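
The density-dependent wealth dynamics described above can be illustrated with a minimal sketch. All strategy names, base returns and crowding penalties below are illustrative assumptions for exposition, not the paper's calibrated model:

```python
import random

def step(wealth, base_return, crowding, noise_sd=0.0, rng=random):
    """One wealth-reallocation step: each strategy's realised return
    falls as the share of total wealth invested in it grows
    (density dependence), plus optional statistical noise."""
    total = sum(wealth.values())
    updated = {}
    for strategy, w in wealth.items():
        share = w / total
        r = base_return[strategy] - crowding[strategy] * share
        r += rng.gauss(0.0, noise_sd)
        updated[strategy] = w * (1.0 + r)
    return updated

# Three stylised strategies; the noise trader earns nothing on average.
wealth = {"value": 1.0, "trend": 1.0, "noise": 1.0}
base = {"value": 0.05, "trend": 0.04, "noise": 0.0}
crowd = {"value": 0.10, "trend": 0.12, "noise": 0.0}
for _ in range(200):
    wealth = step(wealth, base, crowd, noise_sd=0.0)
```

With `noise_sd=0.0` the shares drift toward the point where all realised returns equalise (the efficient equilibrium); raising `noise_sd` keeps the system wandering around it, which is the mechanism the abstract describes.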

    Digital Twins: State of the Art Theory and Practice, Challenges, and Open Research Questions

    The digital twin was introduced over a decade ago as an innovative, all-encompassing tool, with perceived benefits including real-time monitoring, simulation and forecasting. However, the theoretical framework and practical implementations of digital twins (DTs) are still far from this vision. Although successful implementations exist, sufficient implementation details are not publicly available, making it difficult to assess their effectiveness, draw comparisons and jointly advance the DT methodology. This work explores the various DT features and current approaches, their shortcomings, and the reasons behind the delay in the implementation and adoption of digital twins. Advancements in machine learning, the internet of things and big data have contributed significantly to improvements in DTs with regard to their real-time monitoring and forecasting properties. Despite this progress and individual company-based efforts, certain research gaps remain in the field, which have delayed the widespread adoption of this concept. We reviewed relevant works and identified that the major reasons for this delay are the lack of a universal reference framework, domain dependence, security concerns around shared data, the reliance of digital twins on other technologies, and a lack of quantitative metrics. We define the necessary components of a digital twin required for a universal reference framework, which also validate its uniqueness as a concept compared to similar concepts such as simulation, autonomous systems, etc. This work further assesses digital twin applications in different domains and the current state of machine learning and big data within them. It thus identifies and answers novel research questions, both of which will help to better understand and advance the theory and practice of digital twins.

    Supply Chain Digital Twin Framework Design: An Approach of Supply Chain Operations Reference Model and System of Systems

    Digital twin technology has been regarded as a beneficial approach to supply chain development. Unlike a traditional digital twin (temporally dynamic), a supply chain digital twin is a spatio-temporally dynamic system. This paper explains what is 'twinned' in a supply chain digital twin and how to 'twin' it to handle the spatio-temporal dynamic issue. A supply chain digital twin framework is developed based on the theories of system of systems and the supply chain operations reference model. This framework is universal and can be applied to various types of supply chain systems. We first decompose the supply chain system into unified standard blocks in preparation for the adoption of the digital twin. Next, the idea of the supply chain operations reference model is adopted to digitise the basic supply chain activities within each block and explain how to use existing information systems. Then, an individual sub-digital twin is established for each member of the supply chain system. After that, we apply the concept of system of systems to integrate and coordinate the sub-digital twins into a supply chain digital twin from the views of supply chain business integration and information system integration. Finally, a simple supply chain system is used to illustrate the application of the proposed model.
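
As a rough illustration of the system-of-systems integration step, one sub-twin per supply chain member can be composed into a chain-level twin. The class names, members and observation format below are hypothetical, not the paper's specification:

```python
class SubTwin:
    """Digital twin of one supply chain member (one standard block)."""
    def __init__(self, name):
        self.name = name
        self.state = {}

    def sync(self, sensor_data):
        # Mirror the physical member's latest observed state.
        self.state.update(sensor_data)

class SupplyChainTwin:
    """System-of-systems integration: coordinates member sub-twins
    into one spatio-temporally consistent chain-level view."""
    def __init__(self, members):
        self.members = {m: SubTwin(m) for m in members}

    def sync_all(self, observations):
        for name, data in observations.items():
            self.members[name].sync(data)

    def snapshot(self):
        # Chain-wide state assembled from the member sub-twins.
        return {m: t.state for m, t in self.members.items()}

twin = SupplyChainTwin(["supplier", "factory", "retailer"])
twin.sync_all({"factory": {"inventory": 120}, "retailer": {"demand": 80}})
```

The design choice mirrors the abstract: members are twinned individually, and integration happens at a coordinating layer rather than in one monolithic model.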

    A multi-objective combinatorial optimisation framework for large scale hierarchical population synthesis

    In agent-based simulations, synthetic populations of agents are commonly used to represent the structure, behaviour, and interactions of individuals. However, generating a synthetic population that accurately reflects real population statistics is a challenging task, particularly when performed at scale. In this paper, we propose a multi-objective combinatorial optimisation technique for large-scale population synthesis. We demonstrate the effectiveness of our approach by generating a synthetic population for selected regions and validating it on contingency tables from real population data. Our approach supports complex hierarchical structures between individuals and households, is scalable to large populations and achieves minimal contingency table reconstruction error. Hence, it provides a useful tool for policymakers and researchers for simulating the dynamics of complex populations.
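
A greatly simplified, single-objective sketch of fitting a synthetic population to contingency tables by combinatorial search follows. The paper's method is multi-objective and hierarchical; the attributes, targets and greedy flip search here are illustrative assumptions only:

```python
import random

def table_error(pop, targets):
    """Total absolute deviation between the population's attribute
    counts and the target contingency (marginal) tables."""
    err = 0
    for attr, target in targets.items():
        counts = {}
        for person in pop:
            counts[person[attr]] = counts.get(person[attr], 0) + 1
        for value, want in target.items():
            err += abs(counts.get(value, 0) - want)
    return err

def synthesise(targets, n, iters=2000, seed=0):
    """Greedy combinatorial search: start from a random population and
    accept random attribute flips that do not increase table error."""
    rng = random.Random(seed)
    attrs = {a: list(t) for a, t in targets.items()}
    pop = [{a: rng.choice(v) for a, v in attrs.items()} for _ in range(n)]
    best = table_error(pop, targets)
    for _ in range(iters):
        person = rng.choice(pop)
        attr = rng.choice(list(attrs))
        old = person[attr]
        person[attr] = rng.choice(attrs[attr])
        err = table_error(pop, targets)
        if err <= best:
            best = err          # keep the (non-worsening) flip
        else:
            person[attr] = old  # revert a worsening flip
    return pop, best

targets = {"age": {"young": 60, "old": 40},
           "employed": {"yes": 70, "no": 30}}
pop, err = synthesise(targets, n=100)
```

A real implementation would trade off several error objectives at once and preserve individual-household links, but the accept/revert loop above conveys the basic combinatorial structure.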

    Exploring the dynamics of gene drive mosquitoes within wild populations using an agent-based simulation

    Gene drive technology is emerging as a potentially powerful tool for combating vector-borne diseases – notably malaria. This study introduces an agent-based model (ABM) focused on the deployment of genetically engineered mosquitoes with gene drive (GEMs) on Príncipe Island, Republic of São Tomé and Príncipe, an island nation in the Gulf of Guinea, West Africa. Grounded in empirical data from laboratory and field studies, our model forecasts the dynamics of the mosquito populations, which is central to devising efficacious GEM release strategies. The core objective is to evaluate the time required for GEMs to constitute 90% of the mosquito population and to elucidate their dispersal throughout the island. This research is instrumental in understanding the potential of GEMs for controlling malaria vectors.
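
The super-Mendelian inheritance that lets a gene drive spread can be sketched in a toy ABM. The inheritance bias, release size, random mating scheme and 90% threshold below are illustrative stand-ins, not the paper's empirically grounded parameters or spatial model:

```python
import random

def generations_to_target(pop_size=1000, releases=50, drive_bias=0.95,
                          target=0.9, seed=1):
    """Toy gene-drive spread: each generation, offspring of a random
    pairing with at least one drive carrier inherit the drive with
    probability `drive_bias` (super-Mendelian inheritance).
    Returns the number of generations until the carrier frequency
    first reaches `target` (capped at 1000 for safety)."""
    rng = random.Random(seed)
    # True = carries the gene drive, False = wild type.
    pop = [True] * releases + [False] * (pop_size - releases)
    gen = 0
    while sum(pop) / len(pop) < target and gen < 1000:
        pairings = [rng.sample(pop, 2) for _ in range(pop_size)]
        pop = [rng.random() < drive_bias if (a or b) else False
               for a, b in pairings]
        gen += 1
    return gen

gens = generations_to_target()
```

Because carriers transmit the drive to far more than half of their offspring, a small release grows roughly geometrically at first, which is why the time to 90% prevalence is the natural quantity to estimate.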

    Bayesian calibration of differentiable agent-based models

    Agent-based modelling (ABMing) is a powerful and intuitive approach to modelling complex systems; however, the intractability of ABMs' likelihood functions and the non-differentiability of the mathematical operations comprising these models present a challenge to their use in the real world. These difficulties have in turn generated research on approximate Bayesian inference methods for ABMs and on constructing differentiable approximations to arbitrary ABMs, but little work has been directed towards designing approximate Bayesian inference techniques for the specific case of differentiable ABMs. In this work, we aim to address this gap and discuss how generalised variational inference procedures may be employed to provide misspecification-robust Bayesian parameter inferences for differentiable ABMs. We demonstrate with experiments on a differentiable ABM of the COVID-19 pandemic that our approach can result in accurate inferences, and discuss avenues for future work. Comment: Accepted for Oral Presentation at the AI4ABM Workshop at ICLR 202
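
To convey the flavour of gradient-based variational inference for a differentiable simulator, here is a toy stand-in: a trivially differentiable "simulator", a hand-derived gradient, and a reparameterised Gaussian variational family. This is not the paper's generalised variational objective; the simulator, squared loss and crude sigma-shrinking term are all illustrative assumptions:

```python
import math
import random

def simulator(theta):
    """Toy differentiable ABM: a deterministic map from parameter to output."""
    return 2.0 * theta

def d_simulator(theta):
    """Its derivative, standing in for automatic differentiation."""
    return 2.0

def fit_variational(y_obs, steps=2000, lr=0.01, seed=0):
    """Fit q(theta) = N(mu, sigma^2) by stochastic gradient descent on
    the expected squared loss, using the reparameterisation trick."""
    rng = random.Random(seed)
    mu, log_sigma = 0.0, 0.0
    for _ in range(steps):
        eps = rng.gauss(0.0, 1.0)
        sigma = math.exp(log_sigma)
        theta = mu + sigma * eps              # reparameterised sample
        # d/dtheta of (simulator(theta) - y_obs)^2, via the chain rule.
        g = 2.0 * (simulator(theta) - y_obs) * d_simulator(theta)
        mu -= lr * g
        log_sigma -= lr * (g * eps * sigma + 1.0)  # +1.0: toy shrinkage term
    return mu, math.exp(log_sigma)

mu, sigma = fit_variational(y_obs=3.0)
```

Since `simulator(1.5) == 3.0`, the variational mean should settle near 1.5. The point of the sketch is only that differentiability lets the parameter posterior be fitted by plain gradient descent, which is the gap the paper addresses with proper generalised variational objectives.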

    BlackBIRDS: Black-Box Inference foR Differentiable Simulators

    BlackBIRDS is a Python package consisting of generically applicable, black-box inference methods for differentiable simulation models. It facilitates both (a) the differentiable implementation of simulation models by providing a common object-oriented framework for their implementation in PyTorch (Paszke et al., 2019), and (b) the use of a variety of gradient-assisted inference procedures for these simulation models, allowing researchers to easily exploit the differentiable nature of their simulator in parameter estimation tasks. The package consists of both Bayesian and non-Bayesian inference methods, and relies on well-supported software libraries (e.g., normflows, Stimper et al., 2023) to provide this broad functionality.

    Generative AI for End-to-End Limit Order Book Modelling: A Token-Level Autoregressive Generative Model of Message Flow Using a Deep State Space Network

    Developing a generative model of realistic order flow in financial markets is a challenging open problem, with numerous applications for market participants. Addressing this, we propose the first end-to-end autoregressive generative model that generates tokenized limit order book (LOB) messages. These messages are interpreted by a Jax-LOB simulator, which updates the LOB state. To handle long sequences efficiently, the model employs simplified structured state-space layers to process sequences of order book states and tokenized messages. Using LOBSTER data of NASDAQ equity LOBs, we develop a custom tokenizer for message data, converting groups of successive digits to tokens, similar to tokenization in large language models. Out-of-sample results show promising performance in approximating the data distribution, as evidenced by low model perplexity. Furthermore, the mid-price returns calculated from the generated order flow exhibit a significant correlation with the data, indicating impressive conditional forecast performance. Due to the granularity of the generated data and the accuracy of the model, it offers new application areas for future work beyond forecasting, e.g. acting as a world model in high-frequency financial reinforcement learning applications. Overall, our results invite the use and extension of the model in the direction of autoregressive large financial models for the generation of high-frequency financial data, and we commit to open-sourcing our code to facilitate future research.
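
The digit-group tokenization step might look roughly as follows. The group size, left-padding alignment and message format are assumptions for illustration; the paper's tokenizer is custom-built for LOBSTER message fields:

```python
def tokenize_digits(message, group=3):
    """Split each numeric field of an order-book message into tokens of
    at most `group` successive digits, similar to number tokenization
    in large language models."""
    tokens = []
    for field in message.split():
        if field.isdigit():
            # Left-pad so groups align on the least-significant digits.
            pad = (-len(field)) % group
            padded = "0" * pad + field
            tokens += [padded[i:i + group]
                       for i in range(0, len(padded), group)]
        else:
            tokens.append(field)
    return tokens

toks = tokenize_digits("BUY 100 3755100")
# e.g. a 7-digit price 3755100 becomes the tokens 003, 755, 100
```

Grouping digits keeps the vocabulary small and fixed while letting the autoregressive model emit arbitrarily large prices and sizes one group at a time.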

    Population synthesis as scenario generation for simulation-based planning under uncertainty

    Agent-based models have the potential to become instrumental tools in real-world decision-making, equipping policy-makers with the ability to experiment with high-fidelity representations of complex systems. Such models often rely crucially on the generation of synthetic populations with which the model is simulated, and their behaviour can depend strongly on the population's composition. Existing approaches to synthesising populations attempt to model distributions over agent-level attributes on the basis of data collected from a real-world population. Unfortunately, these approaches are of limited utility when data is incomplete or altogether absent (such as during novel, unprecedented circumstances), so that considerable uncertainty regarding the characteristics of the population being modelled remains even after accounting for any such data. What is therefore needed in these cases are tools to simulate and plan for the possible future behaviours of the complex system that can be generated by populations consistent with this remaining uncertainty. To this end, we frame the problem of synthesising populations in agent-based models as a problem of scenario generation. The framework that we present is designed to generate synthetic populations that are, on the one hand, consistent with any persisting uncertainty, while on the other hand closely matching a target, user-specified scenario that the decision-maker would like to explore and plan for. We propose and compare two generic approaches to generating synthetic populations that produce target scenarios, and demonstrate through simulation studies that these approaches are able to automatically generate synthetic populations whose behaviours match the target scenario, thereby facilitating simulation-based planning under uncertainty.
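
A minimal rejection-style sketch of scenario generation follows: sample candidate populations from the (uncertain) prior over compositions, simulate each, and keep the one whose outcome best matches the target scenario. The toy simulator, uniform composition prior and scalar target statistic are illustrative assumptions, not either of the paper's two proposed approaches:

```python
import random

def simulate(population):
    """Toy downstream model: the aggregate outcome is just the fraction
    of 'susceptible' agents in the population."""
    return sum(population) / len(population)

def generate_scenario_population(target, n=500, candidates=200, seed=0):
    """Sample candidate populations consistent with the prior over
    compositions and keep the one whose simulated outcome lies
    closest to the user-specified target scenario."""
    rng = random.Random(seed)
    best_pop, best_gap = None, float("inf")
    for _ in range(candidates):
        p = rng.random()                       # uncertain composition parameter
        pop = [rng.random() < p for _ in range(n)]
        gap = abs(simulate(pop) - target)
        if gap < best_gap:
            best_pop, best_gap = pop, gap
    return best_pop, best_gap

pop, gap = generate_scenario_population(target=0.7)
```

The selected population is, by construction, both drawn from the space of populations permitted by the prior uncertainty and close to the scenario the decision-maker wants to plan for.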