An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading
We propose an ensemble method to improve the generalization performance of
trading strategies trained by deep reinforcement learning algorithms in a
highly stochastic environment of intraday cryptocurrency portfolio trading. We
adopt a model selection method that evaluates on multiple validation periods,
and propose a novel mixture distribution policy to effectively ensemble the
selected models. We provide a distributional view of the out-of-sample
performance on granular test periods to demonstrate the robustness of the
strategies in evolving market conditions, and retrain the models periodically
to address the non-stationarity of financial data. Our proposed ensemble method
improves out-of-sample performance compared with two benchmarks: a single deep
reinforcement learning strategy and a passive investment strategy.
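As a rough illustration of how a mixture distribution policy can ensemble the selected models, the sketch below samples each action by first drawing one of the validated component policies and then sampling an action from it. All names and the toy policies here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixture_policy_action(policies, state, weights=None):
    """Sample an action from a (by default uniform) mixture of the
    selected policies: first draw a component, then act with it."""
    k = len(policies)
    weights = np.full(k, 1.0 / k) if weights is None else np.asarray(weights)
    idx = rng.choice(k, p=weights)   # draw a mixture component
    return policies[idx](state)      # sample an action from that policy

# Toy "policies": each maps a state to a portfolio weight vector.
policies = [
    lambda s: np.array([0.6, 0.4]),
    lambda s: np.array([0.3, 0.7]),
]
action = mixture_policy_action(policies, state=None)
```

Because the mixture averages over models selected on different validation periods, no single overfit policy dominates the out-of-sample behaviour.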
Predictive Crypto-Asset Automated Market Making Architecture for Decentralized Finance using Deep Reinforcement Learning
The study proposes a quote-driven predictive automated market maker (AMM)
platform with on-chain custody and settlement functions, alongside off-chain
predictive reinforcement learning capabilities to improve liquidity provision
of real-world AMMs. The proposed AMM architecture is an augmentation to the
Uniswap V3, a cryptocurrency AMM protocol, by utilizing a novel market
equilibrium pricing for reduced divergence and slippage loss. Further, the
proposed architecture involves a predictive AMM capability, utilizing a deep
hybrid Long Short-Term Memory (LSTM) and Q-learning reinforcement learning
framework that aims to improve market efficiency through better forecasts of
liquidity concentration ranges, so that liquidity can be shifted toward the
expected concentration ranges before the asset price moves, improving liquidity
utilization. The augmented protocol framework is expected to have
practical real-world implications, by (i) reducing divergence loss for
liquidity providers, (ii) reducing slippage for crypto-asset traders, while
(iii) improving capital efficiency for liquidity provision for the AMM
protocol. To the best of our knowledge, no known protocol or prior work
proposes a similar deep learning-augmented AMM that achieves comparable capital
efficiency and loss-minimization objectives in practical real-world
applications.
Comment: 20 pages, 6 figures, 1 algorithm
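The divergence loss the architecture aims to reduce can be made concrete with the standard constant-product baseline: for a price ratio r between exit and entry, a passive liquidity provider loses 2·sqrt(r)/(1+r) − 1 relative to simply holding. A small sketch of that baseline (the paper's market-equilibrium pricing itself is not reproduced here):

```python
import math

def divergence_loss(price_ratio: float) -> float:
    """Divergence ("impermanent") loss of a constant-product (x*y = k)
    liquidity position versus holding, as a fraction (always <= 0)."""
    r = price_ratio
    return 2.0 * math.sqrt(r) / (1.0 + r) - 1.0

# A 2x price move costs a passive LP roughly 5.7% versus holding.
loss = divergence_loss(2.0)
```

The loss is symmetric in the direction of the move (r and 1/r give the same value), which is why anticipating price movement, as the predictive AMM attempts, matters in both directions.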
A deep Q-learning portfolio management framework for the cryptocurrency market
Deep reinforcement learning is gaining popularity in many different fields. An interesting sector concerns the design of dynamic decision-making systems. One example is dynamic portfolio optimization, where an agent must continuously reallocate funds across a number of financial assets with the goal of maximizing return and minimizing risk. In this work, a novel deep Q-learning portfolio management framework is proposed. The framework is composed of two elements: a set of local agents that learn asset behaviours and a global agent that defines the global reward function. The framework is tested on a crypto portfolio composed of four cryptocurrencies. Based on our results, the deep reinforcement learning portfolio management framework has proven to be a promising approach for dynamic portfolio optimization.
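A minimal sketch of the local/global split described above, assuming tabular Q-learning for the local agents and a portfolio-return global reward. The state discretization, action set, and all names are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

class LocalAgent:
    """Per-asset Q-learner over a small discretized state space."""
    def __init__(self, n_states=10, n_actions=3, lr=0.1, gamma=0.95):
        self.q = np.zeros((n_states, n_actions))
        self.lr, self.gamma = lr, gamma

    def act(self, state):
        return int(np.argmax(self.q[state]))  # 0=sell, 1=hold, 2=buy

    def update(self, s, a, reward, s_next):
        # Standard one-step Q-learning update.
        target = reward + self.gamma * self.q[s_next].max()
        self.q[s, a] += self.lr * (target - self.q[s, a])

def global_reward(asset_returns, actions):
    """Global agent: score the joint action by the resulting portfolio
    return, mapping each local action to a position in {-1, 0, +1}."""
    positions = np.array(actions) - 1
    return float(positions @ np.asarray(asset_returns))

agents = [LocalAgent() for _ in range(4)]   # one local agent per coin
actions = [ag.act(0) for ag in agents]
r = global_reward([0.01, -0.02, 0.005, 0.0], actions)
```

The split keeps per-asset learning independent while the global reward couples the agents through the portfolio outcome.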
Dynamic Datasets and Market Environments for Financial Reinforcement Learning
The financial market is a particularly challenging playground for deep
reinforcement learning due to its unique feature of dynamic datasets. Building
high-quality market environments for training financial reinforcement learning
(FinRL) agents is difficult due to major factors such as the low
signal-to-noise ratio of financial data, survivorship bias of historical data,
and model overfitting. In this paper, we present FinRL-Meta, a data-centric and
openly accessible library that processes dynamic datasets from real-world
markets into gym-style market environments and has been actively maintained by
the AI4Finance community. First, following a DataOps paradigm, we provide
hundreds of market environments through an automatic data curation pipeline.
Second, we provide homegrown examples and reproduce popular research papers as
stepping stones for users to design new trading strategies. We also deploy the
library on cloud platforms so that users can visualize their own results and
assess the relative performance via community-wise competitions. Third, we
provide dozens of Jupyter/Python demos organized into a curriculum and a
documentation website to serve the rapidly growing community. The open-source
codes for the data curation pipeline are available at
https://github.com/AI4Finance-Foundation/FinRL-Meta
Comment: 49 pages, 15 figures. arXiv admin note: substantial text overlap with arXiv:2211.0310
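The gym-style interface such market environments expose can be sketched as follows. This is a toy stand-in with hypothetical names; the library's actual environment classes live in the linked repository:

```python
import numpy as np

class MarketEnv:
    """Minimal gym-style market environment (illustrative only)."""
    def __init__(self, prices):
        self.prices = np.asarray(prices, dtype=float)
        self.t = 0

    def reset(self):
        self.t = 0
        return self.prices[self.t]                 # initial observation

    def step(self, action):
        # action: target position in {-1, 0, +1}
        ret = self.prices[self.t + 1] / self.prices[self.t] - 1.0
        reward = action * ret                      # position times return
        self.t += 1
        done = self.t >= len(self.prices) - 1
        return self.prices[self.t], reward, done, {}

env = MarketEnv([100.0, 101.0, 99.0])
obs = env.reset()
obs, reward, done, info = env.step(+1)             # go long for one step
```

Keeping to the reset/step contract is what lets any off-the-shelf RL agent train against these environments unchanged.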
FinRL-Meta: Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning
Finance is a particularly difficult playground for deep reinforcement
learning. However, establishing high-quality market environments and benchmarks
for financial reinforcement learning is challenging due to three major factors,
namely, low signal-to-noise ratio of financial data, survivorship bias of
historical data, and model overfitting in the backtesting stage. In this paper,
we present an openly accessible FinRL-Meta library that has been actively
maintained by the AI4Finance community. First, following a DataOps paradigm, we
provide hundreds of market environments through an automatic pipeline that
collects dynamic datasets from real-world markets and processes them into
gym-style market environments. Second, we reproduce popular papers as stepping
stones for users to design new trading strategies. We also deploy the library
on cloud platforms so that users can visualize their own results and assess the
relative performance via community-wise competitions. Third, FinRL-Meta
provides tens of Jupyter/Python demos organized into a curriculum and a
documentation website to serve the rapidly growing community. FinRL-Meta is
available at: https://github.com/AI4Finance-Foundation/FinRL-Meta
Comment: NeurIPS 2022 Datasets and Benchmarks. 36th Conference on Neural Information Processing Systems Datasets and Benchmarks Track
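A toy illustration of the kind of data-curation step a DataOps-style pipeline performs before building environments: forward-filling gaps in raw prices and deriving the return series an environment would consume. This is illustrative only, not the library's actual pipeline:

```python
import numpy as np

def curate(raw_prices):
    """Forward-fill missing observations, drop any leading gaps, and
    compute simple returns (a toy data-curation stage)."""
    p = np.asarray(raw_prices, dtype=float)
    for i in range(1, len(p)):
        if np.isnan(p[i]):
            p[i] = p[i - 1]          # carry the last valid price forward
    p = p[~np.isnan(p)]              # leading NaNs have no predecessor
    returns = p[1:] / p[:-1] - 1.0
    return p, returns

prices, rets = curate([np.nan, 100.0, np.nan, 102.0])
```

Automating such cleaning per market is what makes it feasible to maintain hundreds of environments from live data sources.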