
    An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading

    We propose an ensemble method to improve the generalization performance of trading strategies trained by deep reinforcement learning algorithms in the highly stochastic environment of intraday cryptocurrency portfolio trading. We adopt a model selection method that evaluates models on multiple validation periods, and propose a novel mixture distribution policy to effectively ensemble the selected models. We provide a distributional view of the out-of-sample performance on granular test periods to demonstrate the robustness of the strategies in evolving market conditions, and retrain the models periodically to address the non-stationarity of financial data. Our proposed ensemble method improves the out-of-sample performance compared with two benchmarks: a single deep reinforcement learning strategy and a passive investment strategy.
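A mixture distribution policy of the kind the abstract describes can be sketched as a weighted average of the action distributions produced by several selected policies. The sketch below assumes equal mixture weights and discrete actions; the function name and the three-action {buy, hold, sell} space are illustrative, not the paper's actual formulation.

```python
import numpy as np

def mixture_policy(policy_probs: list) -> np.ndarray:
    """Average per-policy action-probability vectors into one mixture distribution."""
    mixed = np.mean(np.stack(policy_probs), axis=0)
    return mixed / mixed.sum()  # renormalize against rounding error

# Three selected policies, each giving probabilities over {buy, hold, sell}.
probs = [np.array([0.7, 0.2, 0.1]),
         np.array([0.3, 0.5, 0.2]),
         np.array([0.2, 0.2, 0.6])]
mixed = mixture_policy(probs)   # [0.4, 0.3, 0.3]
action = int(np.argmax(mixed))  # greedy action from the mixture
```

In practice one would sample from `mixed` rather than take the argmax, so the ensemble retains the exploration behavior of its members.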

    Algorithms in future capital markets: A survey on AI, ML and associated algorithms in capital markets

    This paper reviews Artificial Intelligence (AI), Machine Learning (ML) and associated algorithms in future Capital Markets. New AI algorithms are constantly emerging, with each 'strain' mimicking a new form of human learning, reasoning, knowledge, and decision-making. The main disruptive forms of learning currently include Deep Learning, Adversarial Learning, Transfer Learning and Meta Learning. Although these modes of learning have been in the AI/ML field for more than a decade, they are now more applicable due to the availability of data, computing power and infrastructure. These forms of learning have produced new models (e.g., Long Short-Term Memory, Generative Adversarial Networks) and leverage important applications (e.g., Natural Language Processing, Adversarial Examples, Deep Fakes, etc.). These new models and applications will drive changes in future Capital Markets, so it is important to understand their computational strengths and weaknesses. Since ML algorithms effectively self-program and evolve dynamically, financial institutions and regulators are increasingly concerned with ensuring there remains a modicum of human control, focusing on Algorithmic Interpretability/Explainability, Robustness and Legality. For example, the concern is that, in the future, an ecology of trading algorithms across different institutions may 'conspire' and become unintentionally fraudulent (cf. LIBOR) or subject to subversion through compromised datasets (e.g. Microsoft Tay). New and unique forms of systemic risk can emerge, potentially arising from excessive algorithmic complexity. The contribution of this paper is to review AI, ML and associated algorithms, their computational strengths and weaknesses, and to discuss their future impact on the Capital Markets.

    Analysis of cryptocurrency markets from 2016 to 2019

    This thesis explores machine learning techniques in algorithmic trading. We implement a trading program that balances a portfolio of cryptocurrencies and try to outperform an equally weighted strategy. As our machine learning technique, we use deep reinforcement learning. Cryptocurrencies are digital mediums of exchange that use cryptography to secure transactions; the most well-known example is Bitcoin. They are interesting to analyze due to their high volatility and the lack of previous research, and the availability of data is exceptional. We introduce an algorithmic trading agent: a computer program powered by machine learning that follows pre-determined instructions and executes market orders. Traditionally, a human trader determines these instructions using technical indicators. We instead give the trading agent raw price data as input and let it learn its own trading rules. We evaluate the performance of the agent in seven backtest stories, each reflecting a unique and remarkable period in cryptocurrency history. One backtest period is from December 2017, when Bitcoin reached its all-time high price; another is from April 2017, when Bitcoin almost lost its place as the most valued cryptocurrency. The stories show the market conditions where the agent excels and reveal its risks. The algorithmic trading agent has two tasks: it chooses initial weights, and it rebalances these weights periodically. Choosing proper initial weights is crucial since transaction costs make trading costly. We evaluate the agent's performance in these two tasks using two agents: a static agent that only initializes weights and does not rebalance, and a dynamic agent that also rebalances. We find that the agent does a poor job of choosing initial weights.
    We also want to find the optimal rebalancing period for the dynamic agent, so we compare rebalancing periods from 15 minutes to 1 day. To make our results robust, we ran over a thousand simulations and found that 15–30 minute rebalancing periods tend to work best. We find that the algorithmic trading agent closely follows an equally weighted strategy, which suggests that the agent is unable to decipher meaningful signals from the noisy price data. The machine learning approach does not provide an advantage over an equally weighted strategy. Nevertheless, the trading agent excels in volatile and mean-reverting market conditions: in these periods, the dynamic agent has lower volatility and a higher Sharpe ratio. However, it has a dangerous tendency to follow the loser. Our results contribute to the field of algorithmic finance. We show that frequent rebalancing is a useful tool in the risk management of highly volatile asset classes. Further investigation is required to extend these findings beyond cryptocurrencies.
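The periodic rebalancing the thesis evaluates can be sketched as follows: hold equal weights, and every `rebalance_every` steps trade back to equal weights, paying a proportional transaction cost on the traded volume. The fee level and function names here are illustrative assumptions, not the thesis's exact setup.

```python
import numpy as np

def backtest_equal_weight(prices: np.ndarray, rebalance_every: int,
                          fee: float = 0.0025) -> np.ndarray:
    """Portfolio value path (starting at 1.0) for an equally weighted strategy
    rebalanced every `rebalance_every` steps; `prices` is (n_steps, n_assets)."""
    n_steps, n_assets = prices.shape
    value = 1.0
    holdings = (value / n_assets) / prices[0]        # equal initial weights
    path = [value]
    for t in range(1, n_steps):
        value = float(holdings @ prices[t])          # mark to market
        if t % rebalance_every == 0:
            target = (value / n_assets) / prices[t]  # equal-weight holdings
            turnover = float(np.abs(target - holdings) @ prices[t])
            value -= fee * turnover                  # pay cost on traded volume
            holdings = (value / n_assets) / prices[t]
        path.append(value)
    return np.array(path)

def sharpe(path: np.ndarray) -> float:
    """Per-period Sharpe ratio (zero risk-free rate) of a value path."""
    r = np.diff(path) / path[:-1]
    return float(r.mean() / (r.std() + 1e-12))
```

Running this over a grid of `rebalance_every` values and comparing `sharpe` outputs mirrors the thesis's comparison of 15-minute to 1-day rebalancing periods.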

    FinRL-Meta: Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning

    Finance is a particularly difficult playground for deep reinforcement learning: establishing high-quality market environments and benchmarks for financial reinforcement learning is challenging due to three major factors, namely, the low signal-to-noise ratio of financial data, the survivorship bias of historical data, and model overfitting in the backtesting stage. In this paper, we present the openly accessible FinRL-Meta library, which is actively maintained by the AI4Finance community. First, following a DataOps paradigm, we provide hundreds of market environments through an automatic pipeline that collects dynamic datasets from real-world markets and processes them into gym-style market environments. Second, we reproduce popular papers as stepping stones for users to design new trading strategies. We also deploy the library on cloud platforms so that users can visualize their own results and assess the relative performance via community-wise competitions. Third, FinRL-Meta provides tens of Jupyter/Python demos organized into a curriculum and a documentation website to serve the rapidly growing community. FinRL-Meta is available at: https://github.com/AI4Finance-Foundation/FinRL-Meta (NeurIPS 2022 Datasets and Benchmarks Track, 36th Conference on Neural Information Processing Systems).
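A "gym-style" market environment means one exposing the familiar `reset()`/`step()` interface so that standard RL agents can interact with it. The minimal single-asset sketch below illustrates that interface only; the class, observation layout, and discrete action space are illustrative assumptions, not FinRL-Meta's actual implementation.

```python
import numpy as np

class TradingEnv:
    """Toy gym-style environment: one asset, discrete actions
    (0 = hold, 1 = buy one share, 2 = sell one share)."""

    def __init__(self, prices: np.ndarray, cash: float = 1000.0):
        self.prices = prices
        self.cash0 = cash

    def reset(self):
        self.t, self.cash, self.shares = 0, self.cash0, 0.0
        self.value = self.cash
        return np.array([self.prices[0], self.shares, self.cash])

    def step(self, action: int):
        p = self.prices[self.t]
        if action == 1 and self.cash >= p:      # buy one share if affordable
            self.cash -= p; self.shares += 1
        elif action == 2 and self.shares > 0:   # sell one share if held
            self.cash += p; self.shares -= 1
        self.t += 1
        new_value = self.cash + self.shares * self.prices[self.t]
        reward = new_value - self.value          # reward = change in wealth
        self.value = new_value
        done = self.t == len(self.prices) - 1
        obs = np.array([self.prices[self.t], self.shares, self.cash])
        return obs, reward, done, {}
```

Because the interface matches the classic gym contract, any agent written against `reset()`/`step()` can be dropped in; FinRL-Meta's pipeline produces environments with this shape from real market data.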

    A Worldwide Assessment of Quantitative Finance Research through Bibliometric Analysis

    The field of quantitative finance has been rapidly growing in both academia and practice. This article applies bibliometric analysis to investigate the current state of quantitative finance research. A comprehensive dataset of 2,723 publications from the Web of Science Core Collection database, between 1992 and 2022, is collected and analyzed. CiteSpace and VOSViewer are adopted to visualize the bibliometric analysis. The article identifies the most relevant research in quantitative finance according to journals, articles, research areas, authors, institutions, and countries. The study further identifies emerging research topics in quantitative finance, e.g., deep learning, neural networks, quantitative trading, and reinforcement learning. This article contributes to the literature by providing a systematic overview of the developments, trajectories, objectives, and potential future research topics in the field of quantitative finance.

    Learning to Manipulate a Financial Benchmark

    Financial benchmarks estimate market values or reference rates used in a wide variety of contexts, but are often calculated from data generated by parties who have incentives to manipulate these benchmarks. Since the London Interbank Offered Rate (LIBOR) scandal in 2011, market participants, scholars, and regulators have scrutinized financial benchmarks and the ability of traders to manipulate them. We study the impact on market welfare of manipulating transaction-based benchmarks in a simulated market environment. Our market consists of a single benchmark manipulator with external holdings dependent on the benchmark, and numerous background traders unaffected by the benchmark. We explore two types of manipulative trading strategies: zero-intelligence strategies and strategies generated by deep reinforcement learning. Background traders use zero-intelligence trading strategies. We find that the total surplus of all market participants, including the manipulator's benchmark holdings, increases with manipulation. However, the aggregate surplus from trading decreases for all trading agents, and the manipulator's trading surplus decreases, so its surplus from the benchmark must increase significantly. Under natural assumptions, this entails that the market and any third parties invested in the opposite side of the benchmark from the manipulator are negatively impacted by this manipulation.
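A zero-intelligence trader of the kind used for the background agents can be sketched very compactly: it submits a limit order at a random price within a band around its private valuation, ignoring all market state (after Gode and Sunder's classic construction). The parameter names and band width below are illustrative, not the paper's calibration.

```python
import random

def zi_order(valuation: float, spread: float, rng: random.Random):
    """Return (side, limit_price) for a zero-intelligence trader:
    buyers shade their bid below valuation, sellers shade their ask above,
    so no order can trade at a loss relative to the private valuation."""
    side = rng.choice(["buy", "sell"])
    if side == "buy":
        price = valuation - rng.uniform(0.0, spread)  # bid <= valuation
    else:
        price = valuation + rng.uniform(0.0, spread)  # ask >= valuation
    return side, price
```

Populating a simulated order book with many such traders gives the benchmark manipulator a realistic but strategy-free background against which its zero-intelligence or deep-RL manipulation policies can be evaluated.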