Ensemble Committees for Stock Return Classification and Prediction
This paper considers a portfolio trading strategy formulated by algorithms in
the field of machine learning. The profitability of the strategy is measured by
the algorithm's capability to consistently and accurately identify stock
indices with positive or negative returns, and to generate a preferred
portfolio allocation on the basis of a learned model. Stocks are characterized
by time series data sets consisting of technical variables that reflect market
conditions in a previous time interval, which are utilized to produce binary
classification decisions in subsequent intervals. The learned model is
constructed as a committee of random forest classifiers, a non-linear support
vector machine classifier, a relevance vector machine classifier, and a
constituent ensemble of k-nearest neighbors classifiers. The Global Industry
Classification Standard (GICS) is used to explore the ensemble model's efficacy
within the context of various fields of investment including Energy, Materials,
Financials, and Information Technology. Data from 2006 to 2012, inclusive, are
considered, chosen because they provide a range of market circumstances for
evaluating the model. The model is observed to achieve an accuracy of
approximately 70% when predicting stock price returns three months in advance.
Comment: 15 pages, 4 figures, Neukom Institute Computational Undergraduate Research prize - second place
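The committee described above can be sketched with scikit-learn's soft-voting ensemble. This is a minimal illustration on synthetic data, not the paper's pipeline: the technical variables are stand-in random features, and the relevance vector machine (which has no scikit-learn implementation) is omitted, leaving the random forest, RBF-kernel SVM, and bagged k-nearest-neighbors members.

```python
# Hedged sketch of a classifier committee for binary return classification.
# Synthetic features stand in for the paper's technical variables; the RVM
# member is omitted for lack of a scikit-learn implementation.
import numpy as np
from sklearn.ensemble import (BaggingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))           # stand-in for technical indicators
y = (X[:, 0] + 0.5 * X[:, 1]            # stand-in for up/down return labels
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

committee = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
        # a small constituent ensemble of k-NN classifiers via bagging
        ("knn", BaggingClassifier(KNeighborsClassifier(n_neighbors=5),
                                  n_estimators=10, random_state=0)),
    ],
    voting="soft",  # average predicted class probabilities across members
)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
committee.fit(X_tr, y_tr)
acc = committee.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

Soft voting averages the members' class probabilities, so a confident member can outweigh two hesitant ones; hard voting (`voting="hard"`) would take a simple majority of the binary decisions instead.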
Estimation of Default Probabilities with Support Vector Machines
Predicting default probabilities is important for firms and banks to operate successfully and to estimate their specific risks. There are many reasons to use nonlinear techniques for predicting bankruptcy from financial ratios. Here we propose the so-called Support Vector Machine (SVM) to estimate default probabilities of German firms. Our analysis is based on the Creditreform database. The results reveal that the eight most important predictors related to bankruptcy for these German firms belong to the ratios of activity, profitability, liquidity, leverage, and the percentage of incremental inventories. Based on the performance measures, the SVM tool can predict a firm's default risk and identify insolvent firms more accurately than the benchmark logit model. The sensitivity investigation and a corresponding visualization tool reveal that the classifying ability of the SVM appears to be superior over a wide range of the SVM parameters. Based on the nonparametric Nadaraya-Watson estimator, the expected returns predicted by the SVM for regression have a significant positive linear relationship with the risk scores obtained for classification. This evidence is stronger than empirical results for the CAPM based on a linear regression and confirms that higher risks need to be compensated by higher potential returns.
Keywords: Support Vector Machine, Bankruptcy, Default Probabilities Prediction, Expected Profitability, CAPM.
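The core comparison in this abstract, an SVM against a logit benchmark for default prediction, can be sketched in a few lines. The Creditreform data are proprietary, so synthetic financial ratios and an assumed linear default rule stand in here; `probability=True` makes scikit-learn calibrate the SVM decision values into probabilities via Platt scaling.

```python
# Hedged sketch: SVM default probabilities vs. a logit benchmark on
# synthetic financial ratios (the real Creditreform data are proprietary).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 600
ratios = rng.normal(size=(n, 5))        # stand-ins for activity, profitability,
                                        # liquidity, leverage, inventory ratios
default = (ratios @ np.array([0.8, -0.6, -0.7, 0.5, 0.3])
           + rng.normal(scale=0.8, size=n) > 0.7).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(ratios, default, random_state=1)

svm = SVC(kernel="rbf", probability=True, random_state=1).fit(X_tr, y_tr)
logit = LogisticRegression().fit(X_tr, y_tr)   # the benchmark logit model

pd_svm = svm.predict_proba(X_te)[:, 1]  # estimated probabilities of default
print("SVM accuracy:  ", svm.score(X_te, y_te))
print("Logit accuracy:", logit.score(X_te, y_te))
```

On real ratios the nonlinear kernel is what lets the SVM separate firms the linear logit model cannot; on this toy linear data the two perform similarly.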
A holistic auto-configurable ensemble machine learning strategy for financial trading
Financial markets forecasting represents a challenging task for a series of reasons, such as the irregularity, high fluctuation, and noise of the involved data, and the peculiarly high unpredictability of the financial domain. Moreover, the literature does not offer a proper methodology to systematically identify the intrinsic and hyper-parameters, input features, and base algorithms of a forecasting strategy so that it can automatically adapt itself to the chosen market. To tackle these issues, this paper introduces a fully automated optimized ensemble approach, in which an optimized feature selection process is combined with an automatic ensemble machine learning strategy, created by a set of classifiers whose intrinsic and hyper-parameters are learned in each market under consideration. A series of experiments performed on different real-world futures markets demonstrates the effectiveness of such an approach with regard both to the Buy and Hold baseline strategy and to several canonical state-of-the-art solutions.
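The central idea, jointly tuning feature selection and classifier hyper-parameters per market, can be sketched with a cross-validated pipeline search. Everything here is illustrative rather than the authors' configuration: the features and labels are synthetic stand-ins for one market, and the base learner and parameter grid are arbitrary choices.

```python
# Hedged sketch of per-market auto-configuration: a grid search that picks
# the number of selected features and a classifier hyper-parameter together.
# The data, base learner, and grid are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 12))           # stand-in for one market's features
y = (X[:, 2] - X[:, 5] > 0).astype(int)  # stand-in for up/down labels

pipe = Pipeline([
    ("select", SelectKBest(f_classif)),          # optimized feature selection
    ("clf", GradientBoostingClassifier(random_state=2)),
])
search = GridSearchCV(
    pipe,
    param_grid={"select__k": [4, 8, 12],         # how many features to keep
                "clf__n_estimators": [50, 100]}, # classifier hyper-parameter
    cv=3,
)
search.fit(X, y)  # re-running this per market yields a market-specific config
print("best configuration:", search.best_params_)
```

Repeating the same `fit` on each market's data is what makes the strategy self-configuring: the selected features and hyper-parameters are allowed to differ market by market.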
A Comprehensive Survey on Enterprise Financial Risk Analysis: Problems, Methods, Spotlights and Applications
Enterprise financial risk analysis aims at predicting an enterprise's future financial risk. Due to its wide application, enterprise financial risk analysis has always been a core research issue in finance. Although there are already some valuable and impressive surveys on risk management, these surveys introduce approaches in a relatively isolated way and lack the recent advances in enterprise financial risk analysis. Due to the rapid expansion of enterprise financial risk analysis, especially from the computer science and big data perspective, it is both necessary and challenging to comprehensively review the relevant studies. This survey attempts to connect and systematize the existing enterprise financial risk research, as well as to summarize and interpret the mechanisms and strategies of enterprise financial risk analysis in a comprehensive way, which may help readers gain a better understanding of the current research status and ideas. This paper provides a systematic literature review of over 300 articles published on enterprise risk analysis modelling over a 50-year period, 1968 to 2022. We first introduce the formal definition of enterprise risk as well as the related concepts. Then, we categorize the representative works in terms of risk type and summarize three aspects of risk analysis. Finally, we compare the analysis methods used to model enterprise financial risk. Our goal is to clarify current cutting-edge research and its possible future directions for modelling enterprise risk, aiming to fully understand the mechanisms of enterprise risk communication and influence and their application to corporate governance, financial institutions, and government regulation.
Robust asset allocation under model ambiguity
A decision maker, when facing a decision problem, often considers several models to represent the outcomes of the decision variable considered. More often than not, the decision maker does not fully trust any of those models and hence displays ambiguity, or model uncertainty, aversion.
In this PhD thesis, focus is given to the specific case of the asset allocation problem under ambiguity faced by financial investors. The aim is not to find an optimal solution for the investor, but rather to come up with a general methodology that can be applied in particular to the asset allocation problem and allows the investor to find a tractable, easy-to-compute solution for this problem, taking ambiguity into account.
This PhD thesis is structured as follows. First, some classical and widely used models to represent asset returns are presented. It is shown that the performance of asset portfolios built using those single models is very volatile. No model performs better than the others consistently over the period considered, which gives empirical evidence that no model can be fully trusted over the long run and that several models are needed to achieve the best possible asset allocation. Therefore, classical portfolio theory must be adapted to take into account ambiguity, or model uncertainty. Many authors attempted at an early stage to include ambiguity aversion in the asset allocation problem, and the literature is reviewed to outline the main models proposed. However, those models often lack flexibility and tractability: the search for an optimal solution to the asset allocation problem under ambiguity aversion is often difficult to apply in practice to large-dimensional problems such as those faced by modern financial investors. This motivates putting forward a novel methodology that is easily applicable, robust, flexible, and tractable. The Ambiguity Robust Adjustment (ARA) methodology is presented theoretically and then tested on a large empirical data set. Several forms of the ARA are considered and tested, and empirical evidence demonstrates that the ARA methodology greatly improves portfolio performance.
Through the specific illustration of the asset allocation problem in finance, this PhD thesis proposes a new general methodology that will hopefully help decision makers solve numerous different problems under ambiguity.
Technology strategies for low-carbon economic growth: a general equilibrium assessment
This paper investigates the potential for developing countries to mitigate greenhouse gas emissions without slowing their expected economic growth. A theoretical framework is developed that unifies bottom-up marginal abatement cost (MAC) curves and partial equilibrium techno-economic simulation modeling with computational general equilibrium (CGE) modeling. The framework is then applied to engineering assessments of energy efficiency technology deployments in Armenia and Georgia. The results facilitate incorporation of bottom-up technology detail on energy-efficiency improvements into a CGE simulation of the economy-wide economic costs and mitigation benefits of technology deployment policies. Low-carbon growth trajectories are feasible in both countries, enabling reductions of up to 4 percent of baseline emissions while generating slight increases in GDP (1 percent in Armenia and 0.2 percent in Georgia). The results demonstrate how MAC curves can paint a misleading picture of the true potential for both abatement and economic growth when technological improvements operate within a system of general equilibrium interactions, but also highlight how using their underlying data to identify technology options with high opportunity cost elasticities of productivity improvement can lead to more accurate assessments of the macroeconomic consequences of technology strategies for low-carbon growth.
http://documents.worldbank.org/curated/en/279241468256026769/Technology-strategies-for-low-carbon-economic-growth-a-general-equilibrium-assessment
Published version
Machine learning methods in finance: Recent applications and prospects
We study how researchers can apply machine learning (ML) methods in finance. We first establish that the two major categories of ML (supervised and unsupervised learning) address fundamentally different problems than traditional econometric approaches. Then, we review the current state of research on ML in finance and identify three archetypes of applications: (i) the construction of superior and novel measures, (ii) the reduction of prediction error, and (iii) the extension of the standard econometric toolset. With this taxonomy, we give an outlook on potential future directions for both researchers and practitioners. Our results suggest many benefits of ML methods compared to traditional approaches and indicate that ML holds great potential for future research in finance.