1,234 research outputs found

    Popular Ensemble Methods: An Empirical Study

    An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund and Schapire, 1996; Schapire, 1990) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets, using both neural networks and decision trees as our classification algorithms. Our results clearly indicate a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier, especially when using neural networks. Analysis indicates that the performance of the Boosting methods depends on the characteristics of the data set being examined; in fact, further results show that Boosting ensembles may overfit noisy data sets, decreasing their performance. Finally, consistent with previous studies, our work suggests that most of the gain in an ensemble's performance comes from the first few classifiers combined; however, relatively large gains can be seen up to 25 classifiers when Boosting decision trees.
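    The bagging procedure described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's code: the base learner is abstracted behind `learn`/`predict` callables (hypothetical names), with a toy 1-nearest-neighbour learner standing in for the neural networks and decision trees the paper actually uses.

```python
import random
from collections import Counter

def bagging_predict(train, learn, predict, x, n_estimators=25, seed=0):
    """Bagging (Breiman, 1996): train each base classifier on a bootstrap
    resample of the training set (drawn with replacement), then classify
    x by majority vote over the ensemble."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_estimators):
        boot = [rng.choice(train) for _ in train]  # bootstrap resample
        votes.append(predict(learn(boot), x))
    return Counter(votes).most_common(1)[0][0]

# Toy base learner: 1-nearest-neighbour on 1-D inputs.
def learn_1nn(sample):
    return sample  # "training" is just memorising the bootstrap sample

def predict_1nn(model, x):
    return min(model, key=lambda xy: abs(xy[0] - x))[1]
```

    The default of 25 estimators mirrors the paper's observation that gains can continue up to roughly 25 combined classifiers.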

    Volatility forecasting with GARCH models and recurrent neural networks

    The three main ways to estimate future volatility are the implied volatility of option prices, time-series volatility models, and neural network models. This project investigates whether there are economically meaningful differences between these approaches. Seminal time-series models such as GARCH, as well as recurrent neural network models such as the LSTM, are used to forecast volatility. The project then looks for an informational advantage over the market's expectation of future volatility, as embodied in implied volatility, and attempts to trade volatility profitably through option strategies and investment vehicles that emulate the VIX.
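    As a minimal sketch of the time-series approach, a GARCH(1,1) one-step-ahead variance forecast can be computed recursively. The parameters here are illustrative; in practice `omega`, `alpha`, and `beta` are estimated by maximum likelihood.

```python
def garch11_forecast(returns, omega, alpha, beta):
    """One-step-ahead GARCH(1,1) variance forecast:
        sigma2[t+1] = omega + alpha * r[t]**2 + beta * sigma2[t]
    The recursion is initialised with the sample variance of `returns`."""
    mean = sum(returns) / len(returns)
    sigma2 = sum((r - mean) ** 2 for r in returns) / len(returns)
    for r in returns:
        sigma2 = omega + alpha * r ** 2 + beta * sigma2
    return sigma2
```

    An LSTM-based forecaster would instead learn the mapping from past returns to future volatility directly from data rather than assuming this recursion.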

    Value and momentum recently: analysis of quantitative investment strategy

    Both momentum and value strategies earn consistent and significant premia and are negatively correlated, so their equal-weight combination improves the risk-return trade-off. This paper shows that allocation based on market volatility improves the trade-off further, particularly by limiting the large drawdowns momentum experiences in market crashes, where value tends to perform better. Both long-short strategy legs achieve comparably low Sharpe ratios over the past 20 years. There is no clear evidence of high-momentum stocks outperforming their low-momentum counterparts, and similarly for value; this offsets the long-short returns, while the long legs perform comparably well. The group report tests the combination of five sub-strategies, resembling the performance of a multi-strategy hedge fund benchmarked against the popular buy-and-hold S&P 500 investing approach. The sub-strategies are: residual momentum, value including intangibles, value and momentum, volatility forecasting, and a long short-term memory strategy, the latter two being machine-learning-based, and all investing in the U.S. universe. The combined strategy's performance is analyzed under three weighting schemes (equal weight, momentum, and mean-variance), resulting in a range of robustness and performance. The combined strategies reap diversification benefits, giving investors a superior risk-reward trade-off compared to the buy-and-hold S&P 500 approach.
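    The volatility-based allocation idea can be sketched as follows. The 50/50 split, the `target_vol` parameter, and the trailing-window estimator are illustrative assumptions, not the report's exact scheme.

```python
import statistics

def vol_scaled_combo(mom_ret, val_ret, market_ret, target_vol, window=20):
    """Equal-weight the momentum and value return streams, then scale
    exposure by target_vol / trailing realized market volatility
    (capped at full exposure) to limit drawdowns in turbulent markets."""
    out = []
    for t in range(len(mom_ret)):
        combo = 0.5 * mom_ret[t] + 0.5 * val_ret[t]
        hist = market_ret[max(0, t - window):t]  # trailing vol window
        if len(hist) >= 2:
            rv = statistics.pstdev(hist)
            lev = min(1.0, target_vol / rv) if rv > 0 else 1.0
        else:
            lev = 1.0  # no history yet: full exposure
        out.append(lev * combo)
    return out
```

    Cutting exposure when trailing volatility spikes is what limits the momentum-crash drawdowns described above, at the cost of some upside in calm markets.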

    Residual momentum

    Previous research on residual momentum indicated that it performs well compared to total-return momentum strategies, including during market turmoil. This paper analyzes the performance of a residual momentum strategy based on two different factor models and multiple weighting schemes. Evidence is found of a volatility-weighted residual momentum strategy outperforming the S&P 500 Index over a span of 20 years and generating a statistically significant alpha. It failed, however, to outperform the S&P 500 Index in the out-of-sample period from 2011 to 2021, which opens the door to further enhancing the signal.
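    A minimal single-factor sketch of the residual momentum signal follows. The paper uses two multi-factor models; the one-factor OLS regression here is a simplification for illustration only.

```python
import statistics

def residual_momentum(stock_ret, factor_ret):
    """Regress the stock's returns on one factor (OLS with intercept),
    then score the stock by mean(residual) / std(residual) over the
    formation window; stocks are ranked on this signal."""
    n = len(stock_ret)
    mx = sum(factor_ret) / n
    my = sum(stock_ret) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(factor_ret, stock_ret))
    var = sum((x - mx) ** 2 for x in factor_ret)
    beta = cov / var
    alpha = my - beta * mx
    resid = [y - (alpha + beta * x) for x, y in zip(factor_ret, stock_ret)]
    sd = statistics.pstdev(resid)
    return (sum(resid) / n) / sd if sd > 0 else 0.0
```

    Because the factor exposure is stripped out before ranking, the signal is less exposed to market-wide swings than total-return momentum, which is what drives its robustness in turmoil.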

    Cryptocurrency trading as a Markov Decision Process

    Portfolio management is a problem in which, instead of looking at single assets, the goal is to consider a portfolio, a set of assets, as a whole. The objective is to hold the best portfolio at each given time while maximizing profits at the end of a trading session. This thesis addresses this problem by employing Deep Reinforcement Learning algorithms in a cryptocurrency trading environment that simulates a trading session.
    The implementation of the proposed methodology is also presented, applied to 11 cryptocurrencies and five Deep Reinforcement Learning algorithms. Three types of market conditions were evaluated: up-trending (bullish), down-trending (bearish), and lateralization (sideways). Each market condition was evaluated for each algorithm using three different reward functions in the trading environment, and all scenarios were back-tested against classical portfolio management strategies such as follow-the-winner, follow-the-loser, and equally weighted portfolios. The results indicate that an equally weighted portfolio is a hard-to-beat strategy in all market conditions: it was the best-performing benchmark, and the models that produced the best results took a similar approach, diversify and hold. Deep Deterministic Policy Gradient proved to be the most stable algorithm, along with its extension, Twin Delayed Deep Deterministic Policy Gradient. Proximal Policy Optimization was the only algorithm that could not produce decent results when compared with the benchmark strategies and the other Deep Reinforcement Learning algorithms.
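    The equally weighted benchmark described in this abstract can be sketched as a portfolio rebalanced back to equal weights every step; the price paths below are illustrative inputs, not the thesis's data.

```python
def equal_weight_growth(price_paths):
    """Growth of one unit of capital for an equally weighted portfolio
    rebalanced every step: each period's portfolio return is the mean
    of the individual asset returns."""
    n_periods = len(price_paths[0]) - 1
    wealth = 1.0
    for t in range(n_periods):
        rets = [p[t + 1] / p[t] - 1.0 for p in price_paths]
        wealth *= 1.0 + sum(rets) / len(rets)
    return wealth
```

    The constant rebalancing is the "diversify and hold" behaviour the best DRL models ended up imitating: no single asset's drawdown can dominate the portfolio.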

    Issues in the Credit Risk Modeling of Retail Markets

    Retail loan markets create special challenges for credit risk assessment. Borrowers tend to be informationally opaque and borrow relatively infrequently. Retail loans are illiquid and do not trade in secondary markets. For these reasons, historical credit databases are usually not available for retail loans. Moreover, even when data are available, retail loan values are small in absolute terms, so applying sophisticated modeling is usually not cost-effective on an individual loan-by-loan basis. These features of retail lending have led to the development of techniques that rely on portfolio aggregation in order to measure retail credit risk exposure. BIS proposals for the Basel New Capital Accord differentiate among portfolios of mortgage loans, revolving credit loans, and other retail loans in assessing the bank's minimum capital requirement. We survey the most recent BIS proposals for the credit risk measurement of retail credits in capital regulations. We also describe the recent trend away from relationship lending toward transactional lending, even in the small business loan arena traditionally characterized by small banks extending relationship loans to small businesses. These trends create the opportunity to adopt more analytical, data-based approaches to credit risk measurement. We survey proprietary credit scoring models (such as Fair, Isaac and SMEloan), as well as options-theoretic structural models (such as KMV and Moody's RiskCalc) and reduced-form models (such as Credit Risk Plus).
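    As an illustration of the options-theoretic structural family the survey mentions (KMV-style models), distance to default under lognormal firm-value dynamics can be sketched as follows; the inputs (firm value, debt face value, drift, volatility) are illustrative, and this is the textbook Merton setup rather than any vendor's proprietary calibration.

```python
import math

def merton_default_prob(V, D, mu, sigma, T=1.0):
    """Merton-style structural model: default occurs if firm value V_T
    falls below the face value of debt D at horizon T. Returns the
    distance to default and the implied default probability."""
    dd = (math.log(V / D) + (mu - 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    pd = 0.5 * (1.0 + math.erf(-dd / math.sqrt(2.0)))  # standard normal CDF
    return dd, pd
```

    For retail portfolios, where firm value is unobservable, scoring models and portfolio-aggregation techniques take the place of this firm-level machinery.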

    Deep reinforcement learning and signal processing applications for investment strategies

    In this project we look at the fundamentals of finance, Deep Reinforcement Learning, and signal processing in order to develop an investment strategy for the stock market. The study builds on work from another university in which an ensemble technique was designed using three DRL algorithms (A2C, DDPG, and PPO). Our objective is to improve on the very promising results obtained by that author. Three improvements are proposed: using the Differential Sharpe Ratio as the reward function of the DRL agents, expanding the database to one containing a broader universe of financial assets, and carrying out a combination strategy across the three algorithms using signal-processing techniques.
    After exploring the technical difficulties and proposing formal solutions, it is demonstrated that all three changes improve the performance of the original strategy, and the results are compared with the previous ones throughout.
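    The Differential Sharpe Ratio reward mentioned in this abstract can be computed online from exponentially weighted moments of the return stream (Moody and Saffell's formulation); the adaptation rate `eta` below is an assumed value.

```python
class DifferentialSharpe:
    """Per-step reward measuring how much the latest return r improves
    the running Sharpe ratio, using EWMA estimates A (mean) and
    B (second moment) of the returns."""
    def __init__(self, eta=0.01):
        self.eta = eta
        self.A = 0.0  # EWMA of returns
        self.B = 0.0  # EWMA of squared returns

    def step(self, r):
        dA = r - self.A
        dB = r * r - self.B
        denom = (self.B - self.A ** 2) ** 1.5
        d = (self.B * dA - 0.5 * self.A * dB) / denom if denom > 0 else 0.0
        self.A += self.eta * dA
        self.B += self.eta * dB
        return d
```

    Unlike a raw-profit reward, this gives the agent a per-step signal that is already risk-adjusted, which is the motivation for swapping it in.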