
    Using Generative Adversarial Network as a Value-at-Risk Estimator

    Value-at-risk (VaR) estimation is a critical task for modern financial institutions. Most methods for estimating VaR rely on classical statistical techniques. They produce reliable estimates, but there is demand for ever more accurate ones. Recently there have been major breakthroughs for machine learning models in other fields, which has led to increasing interest in applying machine learning to financial applications. This thesis applies a new data-driven machine learning method, the generative adversarial network (GAN), to VaR estimation. The GAN was originally proposed for synthetic image generation; since then it has found applications in multiple domains, including finance. Estimating the true underlying distribution of a financial time series is a notoriously difficult task. A GAN does not estimate the underlying distribution explicitly but instead tries to generate new samples from it. This thesis applies a basic GAN model to simulate stock market returns and then estimates the VaR from these simulations. The experiments are conducted on the S&P 500 index, and the GAN model is compared to a simple historical simulation baseline. In the experiments it becomes evident that the GAN model lacks robustness and responds poorly to changes in the market. The GAN is unable to fully capture the statistical properties of stock market returns: it can replicate a little of the excess kurtosis present in stock market returns and some of the volatility clustering. The results show that the GAN model tends to estimate the VaR within a fairly narrow range, in contrast to historical simulation, which can respond to changes in the stock market. Machine learning models, especially neural networks such as GANs, present challenges to financial practitioners. Although they sometimes provide more accurate estimates than traditional methods, they lack transparency. GANs have shown promise in the literature but are notoriously unstable to train, and it is difficult to predict whether a trained GAN will work as intended. Regardless of these shortcomings, it is worthwhile to study GANs and other neural networks in finance, since they have performed exceptionally in other fields. Researchers must try to open up the black-box nature of these models; interpretability is what will allow their use in the financial industry. This thesis shows that more research is needed before the approach can provide robust estimates that can be relied on.
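    To make the estimation step concrete: once returns can be sampled, whether from history or from a trained generator, VaR reduces to an empirical quantile. The sketch below is a minimal illustration under that reading, not the thesis's code; `generate_returns` is a hypothetical wrapper around a trained GAN generator, and the Student-t noise merely stands in for real GAN samples.

```python
import numpy as np

def var_historical(returns: np.ndarray, alpha: float = 0.01) -> float:
    """Historical-simulation VaR: the alpha-quantile of observed returns,
    reported as a positive loss figure."""
    return -np.quantile(returns, alpha)

def var_from_generator(generate_returns, n_samples: int = 100_000,
                       alpha: float = 0.01) -> float:
    """GAN-based VaR: draw synthetic returns from a trained generator and
    take the empirical alpha-quantile, exactly as in historical simulation.
    `generate_returns(n)` is a hypothetical wrapper around a trained GAN."""
    simulated = generate_returns(n_samples)
    return -np.quantile(simulated, alpha)

# Example with a stand-in "generator" (Student-t noise as a placeholder
# for a trained GAN; real GAN samples would replace this):
rng = np.random.default_rng(0)
fake_generator = lambda n: 0.01 * rng.standard_t(df=4, size=n)
historical = rng.normal(0.0005, 0.012, size=750)  # ~3 years of daily returns

print(f"99% VaR (historical): {var_historical(historical):.4f}")
print(f"99% VaR (generator):  {var_from_generator(fake_generator):.4f}")
```

    The key point is that once the generator can be sampled, the quality of the VaR estimate rests entirely on how faithfully the generator reproduces the return distribution, which is exactly where the thesis finds the GAN lacking.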

    Essays in Robust and Data-Driven Risk Management

    Risk is defined as the chance that the outcome of an uncertain event differs from what is expected. In practice, risk reveals itself in different ways in various applications, such as unexpected stock movements in portfolio management and unforeseen demand in new product development. In this dissertation, we present four essays on data-driven risk management that address the uncertainty in portfolio management and capacity expansion problems via stochastic and robust optimization techniques.

    The third chapter of the dissertation (Portfolio Management with Quantile Constraints) introduces an iterative, data-driven approximation to a problem where the investor seeks to maximize the expected return of his/her portfolio subject to a quantile constraint, given historical realizations of the stock returns. Our approach involves solving a series of linear programming problems and thus quickly solves large-scale problems. We compare its performance to that of methods commonly used in the finance literature, such as fitting a Gaussian distribution to the returns. We also analyze the resulting efficient frontier and extend our approach to the case where portfolio risk is measured by the inter-quartile range of its return. Furthermore, we extend our modeling framework so that the solution calculates the corresponding conditional value-at-risk (CVaR) value for the given quantile level.

    The fourth chapter (Portfolio Management with Moment Matching Approach) focuses on the problem where a manager, given a set of stocks to invest in, aims to minimize the probability of his/her portfolio return falling below a threshold while keeping the expected portfolio return no worse than a target, when the stock returns are assumed to be Log-Normally distributed. This assumption, common in the finance literature, creates computational difficulties because the distribution of the portfolio return itself is difficult to characterize precisely. We thus approximate the portfolio return distribution with a single Log-Normal random variable via the Fenton-Wilkinson method and investigate an iterative, data-driven approximation to the problem. We propose a two-stage solution approach, where the first stage requires solving a classic mean-variance optimization model, and the second stage involves solving an unconstrained nonlinear problem with a smooth objective function. We test the performance of this approximation method and suggest an iterative calibration method to improve its accuracy. In addition, we compare the performance of the proposed method to that obtained by fitting a Generalized Pareto Distribution to the empirical tail distribution, and extend our results to the design of basket options.

    The fifth chapter (New Product Launching Decisions with Robust Optimization) addresses the uncertainty that an innovative firm faces when a set of innovative products is to be launched in a national market with the help of a partner company for each product. The innovative company investigates the optimal period in which to launch each product in the presence of uncertainty about demand and the partners' offer response functions. The demand for each new product is modeled with the Bass Diffusion Model, and the partner companies' offer response functions are modeled with the logit choice model. The uncertainty in the parameters of the Bass Diffusion Model and the logit choice model is handled by robust optimization. We provide a tractable robust optimization framework for the problem, which includes integer variables. In addition, we provide an extension of the proposed approach in which the innovative company has the option to reduce the size of the contract signed by the innovative firm and the partner firm for each product.

    In the sixth chapter (Log-Robust Portfolio Management with Factor Model), we investigate robust optimization models that address uncertainty in asset pricing and portfolio management. We use a factor model to predict asset returns and treat randomness via a budget of uncertainty. We obtain a tractable robust model that maximizes wealth and gain theoretical insights into the optimal investment strategies.
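    The Fenton-Wilkinson step in the fourth chapter admits a compact numerical illustration. The sketch below is a minimal reading of the method, not the dissertation's code: it matches the first two moments of a weighted sum of correlated Log-Normals to a single Log-Normal and then evaluates the shortfall probability in closed form. The weights, means, and covariance in the usage example are invented toy inputs.

```python
import numpy as np
from scipy.stats import norm

def fenton_wilkinson(w, mu, Sigma):
    """Approximate S = sum_i w_i * exp(Y_i), with Y ~ N(mu, Sigma), by a
    single Log-Normal exp(Z), Z ~ N(mu_Z, sigma_Z^2), by matching the
    first two moments of S."""
    w, mu, Sigma = map(np.asarray, (w, mu, Sigma))
    s2 = np.diag(Sigma)
    m1 = np.sum(w * np.exp(mu + s2 / 2))                     # E[S]
    # E[S^2] = sum_{i,j} w_i w_j E[exp(Y_i + Y_j)], where
    # E[exp(Y_i + Y_j)] = exp(mu_i + mu_j + (s2_i + s2_j)/2 + Sigma_ij)
    pair = mu[:, None] + mu[None, :] + (s2[:, None] + s2[None, :]) / 2 + Sigma
    m2 = w @ np.exp(pair) @ w
    sigma_Z2 = np.log(m2 / m1**2)
    mu_Z = np.log(m1) - sigma_Z2 / 2
    return mu_Z, np.sqrt(sigma_Z2)

def shortfall_probability(threshold, w, mu, Sigma):
    """P(portfolio gross return S < threshold) under the matched Log-Normal."""
    mu_Z, sigma_Z = fenton_wilkinson(w, mu, Sigma)
    return norm.cdf((np.log(threshold) - mu_Z) / sigma_Z)

# Toy inputs: three assets with Log-Normal gross returns exp(Y_i).
w = np.array([0.5, 0.3, 0.2])            # portfolio weights
mu = np.array([0.05, 0.03, 0.04])        # means of the log returns Y_i
Sigma = 0.02 * np.eye(3) + 0.005         # covariance of the log returns
print(shortfall_probability(0.95, w, mu, Sigma))
```

    Once the sum is collapsed to a single Log-Normal, the shortfall probability is a one-line normal CDF evaluation, which is what makes the two-stage approach described above computationally cheap.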

    Journal of Telecommunications and Information Technology, 2002, no. 3

    Quarterly

    Quantitative Optimisation of Drilling for Brownfields Mineral Exploration

    This research presents a novel optimisation framework for brownfields exploration drilling. The proposed optimisation methodology was developed by applying geostatistical methods and modern portfolio theory. The use of conditional simulations ensures that geological uncertainty is taken into account, and the application of Markowitz portfolio theory makes the allocation of drilling funds optimal. The proposed method closes a gap in current research by incorporating the inherent geological uncertainty of an exploration target and mineral economics.
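    As a rough illustration of the allocation step this framework describes (hypothetical, not the thesis's implementation): conditional simulations supply a distribution of economic outcomes per drill target, and Markowitz mean-variance optimisation turns those into budget shares. All inputs below are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs: each row of `sims` is one conditional simulation of
# the economic value per dollar drilled at each of 4 exploration targets.
rng = np.random.default_rng(1)
sims = rng.lognormal(mean=[0.1, 0.05, 0.2, 0.0], sigma=0.3, size=(500, 4))

mu = sims.mean(axis=0)        # expected value per target
Sigma = np.cov(sims.T)        # geological uncertainty across targets

risk_aversion = 2.0
def objective(w):             # negative mean-variance utility
    return -(w @ mu - risk_aversion * w @ Sigma @ w)

n = len(mu)
res = minimize(objective, x0=np.full(n, 1 / n), method="SLSQP",
               bounds=[(0, 1)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
print("Optimal share of drilling budget per target:", res.x.round(3))
```

    The design choice mirrors standard portfolio theory: the conditional simulations play the role of historical return scenarios, so targets with correlated geological risk are penalised just as correlated stocks would be.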

    High-Dimensional Machine Learning Models in FinTech

    This thesis develops several forecasting models for simultaneously predicting the prices of d assets traded in financial markets, a most fundamental problem in the emerging area of "FinTech". The models are optimized to address three critical challenges.

    C1. High-dimensional interactions between assets. Assets could interact (e.g., Amazon's disclosure of a revenue change in its cloud services could indicate that revenues could also change for other cloud providers). The number of possible interactions is quadratic in d and is often much larger than the number of observations.

    C2. Non-linearity of the hypothesis class. Linear models are usually insufficient to characterize the relationship between the labels (responses) and the available information (features).

    C3. Data scarcity for each asset. The size of the data associated with an individual asset could be small. For example, a typical daily forecasting model based on technical factors uses three years (approx. 750 trading days) of data. We collect one data point for each day, so only 750 observations are available for each asset.

    We develop the following works to address these challenges.

    Adaptive reduced rank regression (addressing C1). We examine a linear regression model y = Mx + ε that aims to directly capture the interactions between all features from all assets and all the responses, by estimating d × ω(d) entries in M using O(d) observations. In this setting, existing low-rank regularization techniques such as reduced rank regression or nuclear-norm-based regularizations fail to work. Adaptive Reduced Rank Regression (Adaptive-RRR) is a new provable algorithm for estimating M under a mild assumption on the spectrum of the covariance matrix of x.

    On embedding stocks (addressing C1 & C2). We next propose a semi-parametric model, called the additive influence model, that decomposes the inference problem into two orthogonal subroutines. One subroutine is used to learn the high-dimensional interactions between entities, and we solve that problem with the techniques developed for Adaptive-RRR. The other subroutine is used to learn the non-linear signals, and we solve that problem with practical algorithms such as deep learning and ensemble learning.

    Equity2Vec: interaction beyond return correlations (addressing C2 & C3). We develop a specialized neural net model for each asset (e.g., we train g_i(·) for asset i), but there is insufficient data to properly train g_i with data only from asset i (because of C3). Our idea is to shrink the g_i(·)'s toward one or more centroids to reduce model (sample) complexity; a sketch of this construction appears below. Specifically, we train a neural net model g(x, W, W_i), where W is shared across all entities and W_i is entity-specific and learned through embedding, and set g_i(x) = g(x, W, W_i). When entities i and j are close, W_i and W_j are close; consequently, g_i and g_j will be similar when entity i and entity j are similar.

    The proposed algorithms/models are verified via extensive experiments on real-world equity datasets. Our forecasting models can also be applied to a wide range of applications, such as identifying biomarkers, understanding risks associated with various diseases, image recognition, and link prediction.
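    The Equity2Vec construction can be made concrete with a short model sketch. The following PyTorch code is an illustrative reading of the described architecture, not the thesis's released implementation; the class name, layer sizes, and parameter names (`EntityEmbeddingNet`, `embed_dim`, `hidden`) are assumptions.

```python
import torch
import torch.nn as nn

class EntityEmbeddingNet(nn.Module):
    """Sketch of the Equity2Vec idea: one network shared across all assets
    (the weights W), plus a learned per-asset embedding W_i concatenated to
    the features, so g_i(x) = g(x, W, W_i). Sizes are illustrative."""

    def __init__(self, n_assets: int, n_features: int,
                 embed_dim: int = 16, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_assets, embed_dim)   # entity-specific W_i
        self.shared = nn.Sequential(                     # shared weights W
            nn.Linear(n_features + embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, asset_ids: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Concatenate each observation's features with its asset embedding.
        z = torch.cat([x, self.embed(asset_ids)], dim=-1)
        return self.shared(z).squeeze(-1)                # predicted return per row

# Usage: one batch mixes observations from many assets.
model = EntityEmbeddingNet(n_assets=500, n_features=32)
x = torch.randn(8, 32)                     # 8 observations, 32 features each
ids = torch.randint(0, 500, (8,))          # which asset each row belongs to
print(model(ids, x).shape)                 # torch.Size([8])
```

    Because every batch can mix observations from many assets, the shared layers are trained on the pooled data while each W_i only has to absorb what is specific to its asset, which is the shrinkage effect the abstract describes as the answer to C3.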