6 research outputs found

    Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States

    Portfolio management (PM) is a fundamental financial planning task that aims to achieve investment goals such as maximal profit or minimal risk. Its decision process involves continuously deriving valuable information from diverse data sources and sequential decision optimization, which makes it a promising research direction for reinforcement learning (RL). In this paper, we propose SARL, a novel State-Augmented RL framework for PM. Our framework addresses two challenges unique to financial PM: (1) data heterogeneity -- the information collected for each asset is usually diverse, noisy, and imbalanced (e.g., news articles); and (2) environment uncertainty -- the financial market is versatile and non-stationary. To incorporate heterogeneous data and improve robustness against environment uncertainty, SARL augments the asset information with price movement predictions as additional states, where the predictions can be based solely on financial data (e.g., asset prices) or derived from alternative sources such as news. Experiments on two real-world datasets, (i) the Bitcoin market and (ii) the HighTech stock market with seven years of Reuters news articles, validate the effectiveness of SARL over existing PM approaches in terms of both accumulated profits and risk-adjusted profits. Moreover, extensive simulations demonstrate the importance of the proposed state augmentation, providing new insights and significantly boosting performance over the standard RL-based PM method and other baselines. (Comment: AAAI 2020)
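    The core mechanism is simple to state concretely: the observation handed to the RL policy is the raw market state concatenated with a per-asset movement prediction. The sketch below illustrates only that idea; the class names, the price normalization, and the sign-of-last-return placeholder predictor are hypothetical stand-ins, not the paper's actual architecture.

```python
# Minimal sketch of the state-augmentation idea, assuming a simple
# price-window observation. All names here are illustrative.
import numpy as np

class MovementPredictor:
    """Stand-in for any classifier that maps recent prices (or news
    embeddings) to an up/down score per asset."""
    def predict(self, window: np.ndarray) -> np.ndarray:
        # Placeholder heuristic: sign of the most recent return.
        # A real predictor would be a trained model over prices or news.
        last_return = window[:, -1] / window[:, -2] - 1.0
        return (last_return > 0).astype(float)

def augmented_state(price_window: np.ndarray,
                    predictor: MovementPredictor) -> np.ndarray:
    """Concatenate normalized prices with movement predictions, so the
    policy conditions on both raw market data and the prediction."""
    normalized = price_window / price_window[:, -1:]  # scale-free prices
    prediction = predictor.predict(price_window)      # one score per asset
    return np.concatenate([normalized.ravel(), prediction])

# Example: 3 assets, 50-step lookback window of prices.
prices = np.abs(np.random.randn(3, 50)).cumsum(axis=1) + 100.0
state = augmented_state(prices, MovementPredictor())
print(state.shape)  # (3*50 + 3,) -> fed to the policy network
```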

    Fed+: A Unified Approach to Robust Personalized Federated Learning

    We present a class of methods for robust, personalized federated learning, called Fed+, that unifies many federated learning algorithms. The principal advantage of this class of methods is that it better accommodates the real-world characteristics of federated training, such as the lack of IID data across parties, the need for robustness to outliers or stragglers, and the requirement to perform well on party-specific datasets. We achieve this through a problem formulation that allows the central server to employ robust ways of aggregating the local models while keeping the structure of local computation intact. Without making any statistical assumptions about the degree of heterogeneity of local data across parties, we provide convergence guarantees for Fed+ for convex and non-convex loss functions under robust aggregation. The Fed+ theory also handles heterogeneous computing environments, including stragglers, without additional assumptions; specifically, the convergence results cover the general setting where the number of local update steps can vary across parties. We demonstrate the benefits of Fed+ through extensive experiments on standard benchmark datasets as well as on a challenging real-world problem in financial portfolio management, where the heterogeneity of party-level data can lead to training failure in standard federated learning approaches.
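    A minimal sketch of that formulation, under illustrative assumptions: each party keeps a personalized model that takes ordinary gradient steps plus a proximal pull toward a robustly aggregated center, rather than being overwritten by the server's average. The coordinate-wise median aggregator, the quadratic penalty, and the function names below are expository assumptions, not the paper's exact algorithm.

```python
# Hedged sketch of a Fed+-style loop: personalized local models softly
# pulled toward a robust central point. Toy quadratic objectives stand in
# for party losses; one party is an extreme outlier.
import numpy as np

def robust_aggregate(models: list[np.ndarray]) -> np.ndarray:
    """Coordinate-wise median: one robust choice of server aggregation
    that limits the influence of outlier parties."""
    return np.median(np.stack(models), axis=0)

def local_step(w, grad_fn, center, lr=0.1, lam=1.0):
    """One personalized update: a gradient step on the local loss plus a
    proximal pull toward the current robust center."""
    return w - lr * (grad_fn(w) + lam * (w - center))

# Each party fits its own quadratic with a different optimum (non-IID data).
targets = [np.array([1.0]), np.array([1.2]), np.array([50.0])]
models = [np.zeros(1) for _ in targets]
center = robust_aggregate(models)
for _ in range(100):                 # communication rounds
    for i, t in enumerate(targets):  # local step counts could also vary
        models[i] = local_step(models[i], lambda w, t=t: w - t, center)
    center = robust_aggregate(models)
print(center)  # stays near the inlier parties, not dragged toward 50
```

    With a plain mean instead of the median, the same toy run is dragged far toward the outlier party, which is the failure mode robust aggregation is meant to prevent.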

    Cryptocurrency trading as a Markov Decision Process

    Portfolio management is a problem in which, instead of looking at single assets, the goal is to consider a portfolio, a set of assets, as a whole. The objective is to hold the best portfolio at each point in time while maximizing profits at the end of a trading session. This thesis addresses the problem by employing Deep Reinforcement Learning algorithms in an environment that simulates a cryptocurrency trading session. It also presents an implementation of the proposed methodology applied to 11 cryptocurrencies and five Deep Reinforcement Learning algorithms. Three types of market conditions were evaluated: up-trending (bullish), down-trending (bearish), and lateralization (sideways). Each market condition was evaluated for each algorithm using three different reward functions in the trading environment, and all scenarios were backtested against classical portfolio management strategies such as follow-the-winner, follow-the-loser, and equally weighted portfolios. The results seem to indicate that an equally weighted portfolio is a hard-to-beat strategy in all market conditions: it was the best-performing benchmark, and the models that produced the best results took a similar approach, diversify and hold. Deep Deterministic Policy Gradient proved to be the most stable algorithm, along with its extension, Twin Delayed Deep Deterministic Policy Gradient. Proximal Policy Optimization was the only algorithm that could not produce decent results when compared with the benchmark strategies and the other Deep Reinforcement Learning algorithms.
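    Since the equally weighted portfolio is the benchmark the learned policies are measured against, a sketch of it makes the comparison concrete: rebalance to uniform weights at every step and compound the mean per-asset return. The random-walk prices, the flat per-step fee, and the function name are illustrative assumptions, not the thesis's backtest code.

```python
# Hedged sketch of an equally weighted, per-step-rebalanced benchmark.
import numpy as np

def equal_weight_backtest(prices: np.ndarray, fee: float = 0.0) -> float:
    """prices: (T, n_assets) price matrix. Returns the final value of a
    portfolio starting at 1.0 that rebalances to weights 1/n every step.
    `fee` is a crude flat per-step cost standing in for trading frictions."""
    T, _ = prices.shape
    value = 1.0
    for t in range(1, T):
        step_return = prices[t] / prices[t - 1]      # per-asset gross return
        value *= step_return.mean() * (1.0 - fee)    # uniform weights = mean
    return value

# Toy usage with a random-walk market of 11 assets, matching the thesis's
# asset count but not its data.
rng = np.random.default_rng(0)
prices = np.exp(rng.normal(0, 0.01, size=(500, 11)).cumsum(axis=0))
print(equal_weight_backtest(prices))
```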
