    Subspace portfolios: design and performance comparison

    Data processing and engineering techniques enable people to observe and better understand the natural and human-made systems and processes that generate huge amounts of various data types. Data engineers collect data created in almost all fields and formats, such as images, audio, text streams, biological and financial signals, sensing data, and many others. They develop and implement state-of-the-art machine learning (ML) and artificial intelligence (AI) algorithms that use big data to infer valuable information with social and economic value. Furthermore, ML/AI methodologies automate many decision-making processes in real-time applications serving people and businesses. As an example, mathematical tools are engineered to analyze financial data such as prices, trade volumes, and other economic indicators of instruments including stocks, options, and futures in order to automate the generation, implementation, and maintenance of investment portfolios. Among these techniques, the subspace framework and its methods are fundamental, and they have been successfully employed in widely used technologies and real-time applications spanning from Internet multimedia to electronic trading of financial products. In this dissertation, the eigendecomposition of the empirical correlation matrix created from market data (normalized returns) for a basket of US equities plays a central role. The merit of approximating such an empirical matrix by a Toeplitz matrix, for which closed-form solutions for its eigenvalues and eigenvectors exist, is then investigated. More specifically, the exponential correlation model that populates such a Toeplitz matrix is used to approximate the pairwise empirical correlations of asset returns in a portfolio. Hence, the analytically derived eigenvectors of such a random vector process are utilized to design its eigenportfolios. The performances of the model-based and the traditional eigenportfolios are studied and compared to validate the proposed portfolio design method. It is shown that the model-based designs yield eigenportfolios that track the variations of the market statistics closely and deliver comparable or better performance. The theoretical foundations of information theory and rate-distortion theory that provide the basis for source coding methods, including transform coding, are revisited in the dissertation. This theoretical inquiry frames the basic question of the trade-off between the dimension of the eigensubspace and the correlation structure of the random vector process it represents. The signal processing literature facilitates the development of an efficient subspace partitioning algorithm to design novel portfolios that combine eigenportfolios of partitions and outperform the existing eigenportfolios (EP), market portfolios (MP), minimum variance portfolios (MVP), and hierarchical risk parity (HRP) portfolios for US equities. Additionally, the pdf-optimized quantizer framework is employed to sparsify eigenportfolios in order to reduce the (trading) cost of their maintenance. The concluding remarks are presented in the last section of the dissertation.
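
    As a hedged illustration of the Toeplitz approximation described above (not the dissertation's code), the sketch below populates R[i, j] = rho**|i - j| from an estimated rho and reads a model-based eigenportfolio off its dominant eigenvector. The synthetic return data and the first-off-diagonal estimator for rho are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    rng = np.random.default_rng(0)
    N, T = 8, 2000                                  # assets, observations (synthetic)
    true_R = toeplitz(0.6 ** np.arange(N))          # ground-truth exponential correlation
    returns = rng.standard_normal((T, N)) @ np.linalg.cholesky(true_R).T

    # Empirical correlation of (normalized) returns
    R_emp = np.corrcoef(returns, rowvar=False)

    # Toeplitz approximation: estimate rho from the first off-diagonal (an
    # illustrative choice), then populate R_model[i, j] = rho ** |i - j|
    rho = np.mean(np.diag(R_emp, k=1))
    R_model = toeplitz(rho ** np.arange(N))

    # Dominant eigenvector -> "market-like" eigenportfolio, scaled to sum to one
    evals, evecs = np.linalg.eigh(R_model)
    w = evecs[:, -1] / evecs[:, -1].sum()
    print("estimated rho:", round(rho, 3))
    print("eigenportfolio weights:", np.round(w, 3))
    ```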

    High performance digital signal processing: Theory, design, and applications in finance

    The way scientific research and business are conducted has drastically changed over the last decade. Big data and data-intensive scientific discovery are two terms that have been coined recently. They describe the tremendous amounts of noisy data, created extremely rapidly by various sensing devices and methods, that need to be explored for information inference. Researchers and practitioners who can obtain meaningful information out of big data in the shortest time gain a competitive advantage. Hence, there is more need than ever for a variety of high performance computational tools for scientific and business analytics. Interest in developing efficient data processing methods, like compression and noise filtering tools enabling real-time analytics of big data, is increasing. A common concern in digital signal processing applications has been the lack of fast handling of observed data. This problem has been an active research topic, addressed by progress in analytical tools that allow fast processing of big data. One particular tool is the Karhunen-Loève transform (KLT), also known as principal component analysis, in which the covariance matrix of a stochastic process is decomposed into its eigenvectors and eigenvalues as the optimal orthonormal transform. Specifically, eigenanalysis is utilized to determine the KLT basis functions. The KLT is a widely employed signal analysis method used in applications including noise filtering of measured data and compression. However, defining the KLT basis for a given signal covariance matrix demands prohibitive computational resources in many real-world scenarios. In this dissertation, the engineering implementation of the KLT, as well as the theory of eigenanalysis for auto-regressive order one, AR(1), discrete stochastic processes, is investigated, and novel improvements are proposed. The new findings are applied to well-known problems in quantitative finance (QF). First, an efficient method to derive the explicit KLT kernel for AR(1) processes is introduced; it utilizes a simple root finding method for the transcendental equations. A performance improvement over a popular numerical eigenanalysis algorithm, called divide and conquer, is shown. Second, the implementation of the parallel Jacobi algorithm for eigenanalysis on graphics processing units is improved such that access to the dynamic random access memory is entirely coalesced. The proposed method improves speed by a factor of 68.5 over a CPU implementation for a square matrix of size 1,024. Third, several tools developed and implemented in the dissertation are applied to QF problems such as risk analysis and portfolio risk management. In addition, several topics in QF, such as price models, the Epps effect, and jump processes, are investigated, and new insights are suggested from a multi-resolution (multi-rate) signal processing perspective. This dissertation is expected to contribute to a better understanding and bridging of the analytical methods in digital signal processing and applied mathematics, and to their wider utilization in the finance sector. The emerging joint research and technology development efforts in QF and financial engineering will help investors, bankers, and regulators build and maintain more robust and fair financial markets in the future.
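
    The closed-form AR(1) eigenanalysis mentioned above can be sketched as follows. For the Kac-Murdock-Szegő matrix R[i, j] = rho**|i - j|, the literature gives eigenvalues lam(theta) = (1 - rho^2) / (1 - 2*rho*cos(theta) + rho^2) at the N roots in (0, pi) of sin((N+1)theta) - 2*rho*sin(N*theta) + rho^2*sin((N-1)theta) = 0. The bracketed root finder below is only a minimal stand-in for the dissertation's method, checked against a numerical eigensolver.

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.linalg import toeplitz

    N, rho = 16, 0.9

    def f(theta):
        # Transcendental equation whose N roots in (0, pi) index the eigenvalues
        return (np.sin((N + 1) * theta) - 2 * rho * np.sin(N * theta)
                + rho**2 * np.sin((N - 1) * theta))

    # Locate sign changes on a fine grid over (0, pi), then refine with brentq
    grid = np.linspace(1e-9, np.pi - 1e-9, 200 * N)
    vals = f(grid)
    roots = [brentq(f, a, b) for a, b, va, vb
             in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]) if va * vb < 0]

    lam_closed = np.sort((1 - rho**2) / (1 - 2 * rho * np.cos(roots) + rho**2))
    lam_numeric = np.sort(np.linalg.eigvalsh(toeplitz(rho ** np.arange(N))))
    print("max |closed-form - numeric|:", np.abs(lam_closed - lam_numeric).max())
    ```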

    Testing for monotonicity in expected asset returns

    Many postulated relations in finance imply that expected asset returns strictly increase in an underlying characteristic. To examine the validity of such a claim, one needs to take the entire range of the characteristic into account, as is done in the recent proposal of Patton and Timmermann (2010). But their test is only a test for the direction of monotonicity, since it requires the relation to be monotonic from the outset: either weakly decreasing under the null or strictly increasing under the alternative. When the relation is non-monotonic or weakly increasing, the test can break down and falsely ‘establish’ a strictly increasing relation with high probability. We offer some alternative tests that do not share this problem. The behavior of the various tests is illustrated via Monte Carlo studies. We also present empirical applications to real data. Keywords: Bootstrap, CAPM, monotonicity tests, non-monotonic relations.
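
    A minimal, purely illustrative version of such a monotonicity test might look as follows; it is not the authors' exact procedure, and it uses an i.i.d. bootstrap rather than a time-series bootstrap. Here `returns` holds synthetic portfolio return series already sorted by the characteristic, and the statistic is the minimum adjacent difference in mean returns.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T, N = 500, 5
    returns = rng.standard_normal((T, N)) + np.linspace(0.0, 0.2, N)  # synthetic

    diffs = np.diff(returns, axis=1)         # adjacent-portfolio return differences
    stat = diffs.mean(axis=0).min()          # min of adjacent mean differences

    # Bootstrap under the recentered null (all adjacent mean differences zero)
    centered = diffs - diffs.mean(axis=0)
    boot = np.empty(2000)
    for b in range(boot.size):
        idx = rng.integers(0, T, T)          # i.i.d. resample of time indices
        boot[b] = centered[idx].mean(axis=0).min()
    p_value = (boot >= stat).mean()
    print(f"MR-type statistic: {stat:.4f}, bootstrap p-value: {p_value:.3f}")
    ```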

    Modelling sovereign debt with Lévy Processes

    Master's dissertation in Actuarial Science. We propose to model the sovereign credit risk of five Euro area countries (Portugal, Ireland, Italy, Greece and Spain) under a first-passage structural approach, replacing the classical geometric Brownian motion dynamics with a pure-jump Lévy process. This framework caters for skewness, fat tails, and instantaneous defaults, thus addressing some of the main drawbacks of the Black-Scholes model. We compute the survival probability as the price of a discrete barrier option, using an option pricing method based on the approximation of the transition density as a Fourier-cosine series expansion. Assuming a deterministic recovery rate, we calibrate the Carr-Geman-Madan-Yor (CGMY) Lévy model to weekly Credit Default Swap data and obtain the default probability term structure. By drawing on the representation of the Variance Gamma process (a particular instance of the CGMY model) as a time-changed Brownian motion, we accommodate dependency between sovereigns via a common time change. We then illustrate a possible multivariate calibration procedure and simulate the joint default distribution.
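
    The common-time-change dependence device can be sketched as follows: each Variance Gamma path is a drifted Brownian motion evaluated at one shared gamma subordinator, so large moves tend to arrive together across names. All parameter values below are illustrative, not calibrated to CDS spreads.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_steps, dt, nu = 252, 1.0 / 252, 0.2      # steps, step size, time-change variance
    theta = np.array([-0.10, -0.15, -0.20])    # drift of the time-changed BM per name
    sigma = np.array([0.20, 0.25, 0.30])       # volatility per name

    # Common gamma subordinator: increments ~ Gamma(shape=dt/nu, scale=nu)
    dG = rng.gamma(shape=dt / nu, scale=nu, size=n_steps)

    # X_i(t) = theta_i * G(t) + sigma_i * W_i(G(t)), independent Brownian noise per name
    dX = (theta * dG[:, None]
          + sigma * np.sqrt(dG)[:, None] * rng.standard_normal((n_steps, 3)))
    paths = dX.cumsum(axis=0)
    print("terminal log-returns:", paths[-1])
    print("sample correlation of increments:\n", np.corrcoef(dX, rowvar=False))
    ```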

    Subspace methods for portfolio design

    Financial signal processing (FSP) is one of the emerging areas in the field of signal processing. It combines mathematical finance and signal processing. Signal processing engineers consider speech, image, video, and the price of a stock as signals of interest for the given application. The information they infer from raw data is different for each application. Financial engineers develop new solutions for financial problems using their knowledge base in signal processing. The goal of financial engineers is to process the harvested financial signal to extract meaningful information for the application at hand. Designing investment portfolios has always been at the center of finance. An investment portfolio is comprised of financial instruments such as stocks, bonds, futures, options, and others. It is designed based on the risk limits and return expectations of investors and managed by portfolio managers. Modern Portfolio Theory (MPT) offers a mathematical method for portfolio optimization. It defines risk as the standard deviation of the portfolio return and provides a closed-form solution to the risk optimization problem from which asset allocations are derived. The risk and the return of an investment are two inseparable performance metrics. Therefore, the risk-normalized return, called the Sharpe ratio, is the most widely used performance metric for financial investments. Subspace methods have been one of the pillars of functional analysis and signal processing. They are used for portfolio design, regression analysis, and noise filtering in finance applications. Each subspace has its unique characteristics that may serve the requirements of a specific application. For still image and video compression applications, the Discrete Cosine Transform (DCT) has been successfully employed in transform coding, where the Karhunen-Loève Transform (KLT) is the optimum block transform. In this dissertation, a signal processing framework to design investment portfolios is proposed. Portfolio theory and subspace methods are investigated and jointly treated. First, the KLT, also known as eigenanalysis or principal component analysis (PCA), of the empirical correlation matrix for a random vector process that statistically represents asset returns in a basket of instruments is investigated. An auto-regressive, order one, AR(1) discrete process is employed to approximate such an empirical correlation matrix. The eigenvector and eigenvalue kernels of the AR(1) process are utilized for closed-form expressions of the Sharpe ratios and market exposures of the resulting eigenportfolios. Their performances are evaluated and compared for various statistical scenarios. Then, a novel methodology to design subband/filterbank portfolios for a given empirical correlation matrix by using the theory of optimal filter banks is proposed. It is a natural extension of the celebrated eigenportfolios. Closed-form expressions for the Sharpe ratios and market exposures of subband/filterbank portfolios are derived and compared with eigenportfolios. A simple and powerful new method using rate-distortion theory to sparsify eigen-subspaces, called Sparse KLT (SKLT), is developed. The method utilizes varying-size mid-tread (zero-zone) pdf-optimized (Lloyd-Max) quantizers created for each eigenvector (or for the entire eigenmatrix) of a given eigen-subspace to achieve the desired cardinality reduction. The sparsity performance comparisons demonstrate the superiority of the proposed SKLT method over the popular sparse representation algorithms reported in the literature.
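
    A simplified sketch of the cardinality-reduction step: a mid-tread zero zone (standing in here for the pdf-optimized Lloyd-Max quantizer of the dissertation) nulls small eigenvector entries and renormalizes. The one-factor data and zone sizes are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    T, N = 500, 20
    factor = rng.standard_normal(T)
    data = 0.7 * factor[:, None] + rng.standard_normal((T, N))  # one-factor returns
    R = np.corrcoef(data, rowvar=False)                         # stand-in correlation
    v = np.linalg.eigh(R)[1][:, -1]                             # dominant eigenvector

    def sparsify(v, zero_zone):
        """Mid-tread zero zone: null entries inside the dead zone, renormalize."""
        w = np.where(np.abs(v) < zero_zone, 0.0, v)
        return w / np.linalg.norm(w)

    # Wider zero zones trade subspace fidelity for fewer names in the portfolio
    for q in (0.25, 0.50, 0.75):
        w = sparsify(v, np.quantile(np.abs(v), q))
        print(f"zone at {int(q * 100)}th pct: cardinality {np.count_nonzero(w)}, "
              f"|<v, w>| = {abs(v @ w):.3f}")
    ```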

    Option pricing under jump-diffusion models

    Master's thesis in Mathematical Finance, Universidade de Lisboa, Faculdade de Ciências, 2016. In this dissertation, methods to solve partial differential equations numerically in order to obtain prices for contingent claims are presented. In particular, we highlight European and American style vanilla options whose underlying asset follows a jump-diffusion model. For the distribution of the jumps, the Merton and Kou models are studied. The former considers that jumps have a Normal distribution, whereas the latter assumes a double-exponential distribution. These types of models extend the classic diffusion models, such as the famous Black-Scholes-Merton model, with the goal of overcoming some of its flaws, such as the thin tails and low peaks in the distribution of the logarithm of the asset returns, which do not, in general, reflect investors' sentiment in the financial markets, while maintaining the simplicity and tractability inherent to diffusion models. To accomplish our goal, an equation describing the dependence of the value of the referred options on several parameters, such as the time-to-maturity and the spot value of the underlying asset, is established. We then build partitions in order to solve our problem numerically using finite differences, discretizing the function which provides the price of our contingent claim. This approach is useful since it allows one to obtain prices of contracts whose payoff is not as simple as the vanilla options' and for which no closed or semi-closed formulae exist for their value at each point in time until maturity. Finally, we present the results found for each of the partitions considered, comparing them with values from the literature, and some conclusions are presented.
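
    Finite-difference prices for such models are commonly checked against semi-closed benchmarks. Under the Merton model, for instance, a European call is a Poisson-weighted mixture of Black-Scholes prices (Merton, 1976); the sketch below implements that reference formula with illustrative parameters and is not the thesis's finite-difference scheme.

    ```python
    import numpy as np
    from math import factorial
    from scipy.stats import norm

    def bs_call(S, K, T, r, sigma):
        """Standard Black-Scholes European call price."""
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    def merton_call(S, K, T, r, sigma, lam, mu_j, delta, n_terms=60):
        """Merton (1976) call: Poisson mixture over the number of jumps,
        with log jump sizes ~ Normal(mu_j, delta**2)."""
        k = np.exp(mu_j + 0.5 * delta**2) - 1.0    # mean relative jump size
        lam_p = lam * (1.0 + k)
        price = 0.0
        for n in range(n_terms):
            sigma_n = np.sqrt(sigma**2 + n * delta**2 / T)
            r_n = r - lam * k + n * (mu_j + 0.5 * delta**2) / T
            weight = np.exp(-lam_p * T) * (lam_p * T) ** n / factorial(n)
            price += weight * bs_call(S, K, T, r_n, sigma_n)
        return price

    print(merton_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2,
                      lam=0.5, mu_j=-0.1, delta=0.15))
    ```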

    Stochastic Volatility and Jumps Driven by Continuous Time Markov Chains

    This paper considers a model where there is a single state variable that drives the state of the world and therefore the asset price behavior. This variable evolves according to a multi-state continuous time Markov chain, as the continuous time counterpart of the Hamilton (1989) model. The paper derives the moment generating function of the asset log-price difference under very general assumptions about its stochastic process, incorporating volatility and jumps that can follow virtually any distribution, both of them being driven by the same state variable. For an illustration, the extreme value distribution is used as the jump distribution. The paper shows how GMM and conditional ML estimators can be constructed, generalizing Hamilton's filter to the continuous time case. The risk neutral process is constructed, and contingent claim prices under this specification are derived, along the lines of Bakshi and Madan (2000). Finally, an empirical example is set up to illustrate the potential benefits of the model. Keywords: Option pricing, Markov chain, Moment generating function.
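
    The driving mechanism can be illustrated with a minimal simulation (not the paper's estimator): a two-state continuous-time Markov chain with an assumed generator switches the volatility of the log-price between regimes, with exponential holding times in each state.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    Q = np.array([[-0.5, 0.5],       # assumed generator; off-diagonals are jump rates
                  [ 1.0, -1.0]])
    sigmas = np.array([0.15, 0.40])  # assumed per-state volatility
    T_end, dt = 5.0, 1.0 / 252

    state, t, log_price = 0, 0.0, 0.0
    states, prices = [], []
    while t < T_end:
        # Holding time in the current state is exponential with rate -Q[state, state]
        hold = rng.exponential(1.0 / -Q[state, state])
        n = max(1, int(min(hold, T_end - t) / dt))
        dW = rng.standard_normal(n) * np.sqrt(dt)
        log_price += -0.5 * sigmas[state]**2 * dt * n + sigmas[state] * dW.sum()
        states.append(state); prices.append(np.exp(log_price))
        state = 1 - state            # with two states, every jump goes to the other
        t += hold
    print("visited states:", states[:10], "final price:", round(prices[-1], 3))
    ```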

    CorrGAN: Sampling Realistic Financial Correlation Matrices Using Generative Adversarial Networks

    We propose a novel approach for sampling realistic financial correlation matrices. This approach is based on generative adversarial networks. Experiments demonstrate that generative adversarial networks are able to recover most of the known stylized facts about empirical correlation matrices estimated on asset returns. This is the first time such results are documented in the literature. Practical financial applications range from trading strategy enhancement to risk and portfolio stress testing. Such generative models can also help ground empirical finance deeper into science by allowing for falsifiability of statements and more objective comparison of empirical methods.
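
    One practical post-processing step such a sampling pipeline needs, sketched here under illustrative assumptions rather than the paper's architecture, is projecting a raw generator output onto the set of valid correlation matrices (symmetric, unit diagonal, positive semidefinite) and checking a stylized fact such as the dominant "market" eigenvalue.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    raw = rng.standard_normal((20, 20)) * 0.2 + 0.3   # stand-in generator output

    def to_correlation(M):
        """Symmetrize, clip negative eigenvalues, rescale to unit diagonal."""
        S = 0.5 * (M + M.T)
        evals, evecs = np.linalg.eigh(S)
        S = evecs @ np.diag(np.clip(evals, 1e-8, None)) @ evecs.T
        d = np.sqrt(np.diag(S))
        C = S / np.outer(d, d)
        np.fill_diagonal(C, 1.0)
        return C

    C = to_correlation(raw)
    evals = np.linalg.eigvalsh(C)
    print("valid correlation?", np.allclose(C, C.T), bool((evals >= -1e-10).all()))
    print("top eigenvalue share:", evals[-1] / evals.sum())
    ```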