
    Deep reinforcement learning for investing: A quantamental approach for portfolio management

    The world of investments affects us all. The way surplus capital is allocated, whether by individuals or by investment funds, can determine how we eat, innovate, and even educate children. Portfolio management is an integral albeit challenging part of this task (Leković, 2021). It entails managing a basket of financial assets to maximize returns per unit of risk, while accounting for the complex causal relations among micro- and macroeconomic, societal, political, and environmental factors. This study evaluates how a machine learning technique called deep reinforcement learning (DRL) can improve portfolio management. A second goal is to understand whether financial fundamental features (e.g., revenue, debt, assets, cash flow) improve model performance. After a literature review establishing the current state of the art, the CRISP-DM method was followed: 1) business understanding; 2) data understanding; 3) data preparation, in which two datasets were prepared, one with market-only features (e.g., close price, daily traded volume) and another with market plus fundamental features; 4) modeling, in which Advantage Actor-Critic (A2C), Deep Deterministic Policy Gradient (DDPG), and Twin-Delayed DDPG (TD3) DRL models were optimized on both datasets; 5) evaluation. On average, models achieved the same Sharpe ratio on both datasets: an average Sharpe ratio of 0.35 versus 0.30 for the baseline on the test set. DRL models outperformed traditional portfolio optimization techniques, and financial fundamental features improved model robustness and consistency. These results support the use of both DRL models and quantamental investment strategies in portfolio management.
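    As a rough illustration of the evaluation metric used above, the sketch below computes an annualised Sharpe ratio from a series of daily portfolio returns; the function name and the placeholder return series are assumptions for illustration, not taken from the thesis.

```python
# Hypothetical sketch: annualised Sharpe ratio from daily portfolio returns,
# the metric used to compare the DRL agents against the baseline above.
import numpy as np

def sharpe_ratio(daily_returns, risk_free_rate=0.0, periods_per_year=252):
    """Mean excess return per unit of volatility, annualised."""
    excess = np.asarray(daily_returns) - risk_free_rate / periods_per_year
    std = excess.std(ddof=1)
    if std == 0:
        return 0.0
    return np.sqrt(periods_per_year) * excess.mean() / std

# Example: compare a candidate strategy against a baseline on a test period
# (synthetic placeholder return series, not real results).
rng = np.random.default_rng(0)
strategy_returns = rng.normal(0.0005, 0.01, 252)
baseline_returns = rng.normal(0.0004, 0.01, 252)
print(sharpe_ratio(strategy_returns), sharpe_ratio(baseline_returns))
```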

    Predicting the Price of Cryptocurrency Using Machine Learning Algorithm

    This project aims to forecast cryptocurrency prices. Cryptocurrencies are digital currencies used for a variety of transactions as well as for long-term investments. Ethereum is the cryptocurrency most commonly used by these systems to conduct transactions, although many other well-known cryptocurrencies exist besides Ethereum. We propose to use machine learning for this task: a model is trained on the available cryptocurrency price data to gain intelligence and then uses that knowledge to make accurate predictions. Cryptocurrency trading is currently one of the most popular forms of exchange, and it is suggested that both day traders and investors can benefit greatly from the proposed approach.
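    To make the workflow concrete, the sketch below trains a simple supervised model on lagged closing prices to predict the next close; the lag features, the random-forest choice, and the synthetic price series are assumptions for illustration, not the specific method of this project.

```python
# Hypothetical sketch: predict the next closing price from the previous
# `lags` closes using a generic regressor (not the project's actual model).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

closes = np.cumsum(np.random.default_rng(1).normal(0, 1, 500)) + 100  # placeholder prices

lags = 5
X = np.column_stack([closes[i:len(closes) - lags + i] for i in range(lags)])
y = closes[lags:]  # target: the close that follows each window of `lags` closes

split = int(0.8 * len(X))  # chronological train/test split
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("MAE:", np.mean(np.abs(pred - y[split:])))
```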

    Applications of machine learning in finance: analysis of international portfolio flows using regime-switching models

    Recent advances in machine learning are finding commercial applications across many sectors, not least the financial industry. This thesis explores applications of machine learning in quantitative finance through two approaches. First, the current state of the art is evaluated through an extensive review of recent quantitative finance literature. Themes and technologies are identified and classified, and the key use cases emerging from the literature are highlighted. Machine learning is found to enable deeper analysis of financial data and the modelling of complex nonlinear relationships within data, as well as the incorporation of alternative data into the investment process. It also makes possible innovations in backtesting and performance metrics. Second, demonstrating a practical application of machine learning in quantitative finance, regime-switching models are applied to analyse and extract information from international portfolio flows. These models capture properties of international portfolio flows previously found in the literature, such as persistence in flows compared to returns and a relationship between flows and returns. They identify structural breaks and persistent regime shifts in investor behaviour, and infer regimes in the data that exhibit distinctive flows and returns. To determine whether the extracted information could aid the investment process, a portfolio of global assets was constructed, with positions determined by a flow-based regime-switching model. The portfolio outperforms two benchmarks, a buy-and-hold strategy and the MSCI World Index, in walk-forward out-of-sample tests using daily and weekly data.
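    As a small illustration of the modelling approach, the sketch below fits a two-regime Markov-switching model to a synthetic flow series using statsmodels; the simulated flows, the number of regimes, and the switching-variance choice are assumptions for illustration, not the thesis's exact specification.

```python
# Hypothetical sketch: two-regime Markov-switching model on a flow-like series.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
# Placeholder "net portfolio flows" with two volatility regimes.
flows = pd.Series(np.concatenate([rng.normal(0.2, 0.5, 300),
                                  rng.normal(-0.1, 1.5, 200)]))

model = sm.tsa.MarkovRegression(flows, k_regimes=2, trend='c',
                                switching_variance=True)
result = model.fit()

# Smoothed probability of being in regime 0 at each point in time;
# persistent shifts in these probabilities indicate regime changes.
regime_probs = result.smoothed_marginal_probabilities[0]
print(result.summary())
```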

    Active Learning for Reducing Labeling Effort in Text Classification Tasks

    Labeling data can be an expensive task, as it is usually performed manually by domain experts. This is cumbersome for deep learning, which depends on large labeled datasets. Active learning (AL) is a paradigm that aims to reduce labeling effort by using only the data the model deems most informative. Little research has been done on AL in a text classification setting, and next to none has involved the more recent, state-of-the-art Natural Language Processing (NLP) models. Here, we present an empirical study that compares different uncertainty-based algorithms with BERT-base as the classifier. We evaluate the algorithms on two NLP classification datasets: Stanford Sentiment Treebank and KvK-Frontpages. Additionally, we explore heuristics that aim to solve presupposed problems of uncertainty-based AL, namely that it is unscalable and prone to selecting outliers. Furthermore, we explore the influence of the query-pool size on the performance of AL. While the proposed heuristics did not improve the performance of AL, our results show that uncertainty-based AL with BERT-base outperforms random sampling of data. This difference in performance can decrease as the query-pool size grows.
    Comment: Accepted as a conference paper at the joint 33rd Benelux Conference on Artificial Intelligence and the 30th Belgian Dutch Conference on Machine Learning (BNAIC/BENELEARN 2021). This camera-ready version submitted to BNAIC/BENELEARN adds several improvements, including a more thorough discussion of related work and an extended discussion section. 28 pages including references and appendices.
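    A minimal version of the uncertainty-based selection loop described above is sketched below; a logistic-regression classifier stands in for BERT-base, and the dataset, seed-set size, and query-pool size are assumptions for illustration.

```python
# Hypothetical sketch: least-confidence active learning with a stand-in classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = list(range(20))                      # small seed set of labeled examples
pool = [i for i in range(len(X)) if i not in set(labeled)]
query_size = 50                                # examples queried per AL round

for _ in range(5):                             # a few active-learning rounds
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[pool])
    uncertainty = 1.0 - probs.max(axis=1)      # least confidence: 1 - max class prob
    queried = np.argsort(uncertainty)[-query_size:]   # most uncertain pool positions
    labeled += [pool[i] for i in queried]             # the "oracle" supplies their labels
    pool = [p for i, p in enumerate(pool) if i not in set(queried.tolist())]
    print(f"labeled={len(labeled)}, accuracy on full set={clf.score(X, y):.3f}")
```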

    Affinity-Based Reinforcement Learning : A New Paradigm for Agent Interpretability

    The steady increase in complexity of reinforcement learning (RL) algorithms is accompanied by a corresponding increase in opacity that obfuscates insights into their devised strategies. Methods in explainable artificial intelligence seek to mitigate this opacity by either creating transparent algorithms or extracting explanations post hoc. A third category exists that allows the developer to affect what agents learn: constrained RL has been used in safety-critical applications and prohibits agents from visiting certain states; preference-based RL agents have been used in robotics applications and learn state-action preferences instead of traditional reward functions. We propose a new affinity-based RL paradigm in which agents learn strategies that are partially decoupled from reward functions. Unlike entropy regularisation, we regularise the objective function with a distinct action distribution that represents a desired behaviour; we encourage the agent to act according to a prior while learning to maximise rewards. The result is an inherently interpretable agent that solves problems with an intrinsic affinity for certain actions. We demonstrate the utility of our method in a financial application: we learn continuous time-variant compositions of prototypical policies, each interpretable by its action affinities, that are globally interpretable according to customers’ financial personalities. Our method combines advantages from both constrained RL and preference-based RL: it retains the reward function but generalises the policy to match a defined behaviour, thus avoiding problems such as reward shaping and hacking. Unlike Boolean task composition, our method is a fuzzy superposition of different prototypical strategies to arrive at a more complex, yet interpretable, strategy.
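    The sketch below illustrates the general regularisation idea described above: a policy-gradient loss augmented with a KL term that pulls the policy toward a prior action distribution representing the desired behaviour. The network, the prior, and the coefficient beta are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: policy-gradient loss with an "affinity" (KL-to-prior) penalty.
import torch
import torch.nn.functional as F

n_actions = 4
policy = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.Tanh(),
                             torch.nn.Linear(64, n_actions))
prior = torch.tensor([0.7, 0.1, 0.1, 0.1])   # desired action distribution (affinity)
beta = 0.1                                   # strength of the affinity regulariser

def loss_fn(states, actions, advantages):
    log_probs = F.log_softmax(policy(states), dim=-1)
    # Standard policy-gradient term: maximise advantage-weighted log-likelihood.
    idx = torch.arange(len(actions))
    pg_loss = -(log_probs[idx, actions] * advantages).mean()
    # KL(policy || prior): encourages actions consistent with the prior behaviour.
    kl = (log_probs.exp() * (log_probs - prior.log())).sum(dim=-1).mean()
    return pg_loss + beta * kl

# Example call with placeholder batch data.
states = torch.randn(32, 8)
actions = torch.randint(0, n_actions, (32,))
advantages = torch.randn(32)
print(loss_fn(states, actions, advantages))
```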