
    Feasible Stein-Type and Preliminary Test Estimations in the System Regression Model

    Publisher Copyright: © 2023 International Academic Press. In a system of regression models, finding a feasible shrinkage estimator is demanding because the covariance structure is unknown and cannot be ignored. At the same time, specifying sub-space restrictions for adequate shrinkage is vital. This study proposes feasible shrinkage estimation strategies in which the sub-space restriction is obtained from LASSO. Several feasible LASSO-based Stein-type estimators are therefore introduced, and their asymptotic performance is studied.
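A minimal sketch of the idea, assuming a single-equation linear model rather than the paper's full system setting; the ISTA-based LASSO solver, the positive-part Stein weight, and all parameter values below are illustrative choices, not the authors' exact estimator:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """LASSO via proximal gradient (ISTA); lam is the l1 penalty weight."""
    beta = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = beta - step * (X.T @ (X @ beta - y))
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return beta

def stein_shrinkage(X, y, lam=5.0):
    """Shrink the unrestricted OLS estimate toward a LASSO-selected sub-space."""
    beta_u = np.linalg.lstsq(X, y, rcond=None)[0]      # unrestricted OLS
    support = lasso_ista(X, y, lam) != 0               # sub-space restriction from LASSO
    beta_r = np.zeros_like(beta_u)
    if support.any():                                   # restricted OLS on the support
        beta_r[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    diff = beta_u - beta_r
    # Positive-part Stein-type weight: pull beta_u toward beta_r by a data-driven amount
    t_stat = diff @ diff / max(np.mean((y - X @ beta_u) ** 2), 1e-12)
    d = max(X.shape[1] - 2, 1)
    w = max(0.0, 1.0 - d / max(t_stat, 1e-12))
    return beta_r + w * diff
```

When the restriction is nearly correct the weight collapses toward the restricted estimate; when the data contradict it, the estimator reverts to OLS.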

    COVARIANCE MATRIX CONSTRUCTION AND ESTIMATION: CRITICAL ANALYSES AND EMPIRICAL CASES FOR PORTFOLIO APPLICATIONS

    The thesis contributes to the financial econometrics literature by improving the estimation of the covariance matrix among financial time series. To this aim, existing econometric tools have been investigated and improved, and new ones have been introduced to the field. The main goal is to improve portfolio construction for financial hedging, asset allocation, and interest-rate risk management. The empirical applicability of the proposed innovations has been tested through several case studies involving real and simulated datasets. The thesis is organised in three main chapters, each dealing with a specific financial challenge in which the covariance matrix plays a central role. Chapter 2 tackles the problem of hedging portfolios composed of energy commodities. Here, the underlying multivariate volatility among spot and futures securities is modelled with multivariate GARCH models. Within this framework, we propose two novel approaches to construct the covariance matrix among commodities, and hence the resulting long-short hedging portfolios. On the one hand, we propose calculating the hedge ratio of each portfolio constituent and later combining them into a single hedged position. On the other hand, we propose directly hedging the spot portfolio, thereby incorporating the investor’s risk and return preferences. Through a comprehensive numerical case study, we assess the sensitivity of both approaches to volatility and correlation misspecification. Moreover, we show empirically how the two approaches should be implemented to hedge a crude oil portfolio. Chapter 3 focuses on covariance matrix estimation when the underlying data exhibit non-normality and high dimensionality. To this end, we introduce a novel estimator for the covariance matrix and its inverse – the Minimum Regularised Covariance Determinant (MRCD) estimator – from chemistry and criminology into our field.
The aim is twofold: first, we improve the estimation of the Global Minimum Variance Portfolio by exploiting the MRCD closed-form solution for the inverse covariance matrix. Through an extensive Monte Carlo simulation study, we check the effectiveness of the proposed approach against the sample estimator. Furthermore, we carry out an empirical case study featuring five real investment universes characterised by different stylised facts and dimensions. Both the simulation and the empirical analysis clearly demonstrate the out-of-sample performance improvement obtained by using the MRCD. Second, we turn our attention to modelling the relationships among interest rates, comparing five covariance matrix estimators. Here, we extract the principal components driving the yield-curve volatility to give important insights for fixed-income portfolio construction and risk management. An empirical application involving the US term structure illustrates the inadequacy of the sample covariance matrix for dealing with interest rates. In Chapter 4, we improve the shrinkage estimator for four risk-based portfolios. In particular, we focus on the target matrix, investigating six different estimators. By means of an extensive numerical example, we check the sensitivity of each risk-based portfolio to volatility and correlation misspecification in the target matrix. Furthermore, through a comprehensive Monte Carlo experiment, we offer a comparative study of the target estimators, testing their ability to reproduce the true portfolio weights. Controlling for the dataset dimensionality and the shrinkage intensity, we find that the Identity and Variance-Identity targets are the best matrices towards which to shrink, as they always retain good statistical properties.
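The two recurring building blocks here, shrinkage toward an identity-type target and Global Minimum Variance Portfolio weights from the inverse covariance, can be sketched as follows (a numpy illustration with an assumed fixed shrinkage intensity `delta`, not the thesis's MRCD estimator or data-driven intensity):

```python
import numpy as np

def shrunk_covariance(returns, delta=0.3):
    """Linear shrinkage of the sample covariance toward a scaled identity target
    (a Variance-Identity-style target: average variance on the diagonal)."""
    S = np.cov(returns, rowvar=False)
    target = (np.trace(S) / S.shape[0]) * np.eye(S.shape[0])
    return (1 - delta) * S + delta * target

def gmv_weights(cov):
    """Global Minimum Variance Portfolio: w = S^{-1} 1 / (1' S^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # solve instead of explicit inverse
    return w / w.sum()
```

The shrinkage step regularises the eigenvalues before inversion, which is exactly where the sample covariance breaks down in high dimensions.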

    Distributed Supervised Statistical Learning

    We live in the era of big data: many companies now face datasets of massive size that, in most cases, cannot be stored and processed on a single computer. Such data often have to be distributed over multiple computers, which makes storage, pre-processing, and analysis possible in practice, and distributed learning has accordingly gained popularity as a method for managing enormous datasets. In this thesis, we focus on distributed supervised statistical learning, where sparse linear regression analysis is performed in a distributed framework. These methods are frequently applied in a variety of disciplines tackling large-scale data analysis, including engineering, economics, and finance. In distributed learning, one key question is, for example, how to efficiently aggregate multiple estimators obtained from data subsets stored on different computers. We review recent studies on distributed statistical inference. There have been many efforts to propose efficient ways of aggregating local estimates, and the most popular methods are discussed in this thesis. Recently, the important question of how many machines to deploy has been addressed for several estimation methods, and notable answers are reviewed here. We consider a specific class of Liu-type shrinkage estimation methods for distributed statistical inference and conduct a Monte Carlo simulation study to assess their performance in a distributed framework.
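The divide-and-conquer aggregation described above can be sketched as follows; the local estimator uses the textbook Liu form beta_d = (X'X + I)^{-1}(X'y + d·beta_OLS), and the simple-averaging aggregator plus the value of d are illustrative choices, not the thesis's exact method:

```python
import numpy as np

def liu_estimator(X, y, d=0.5):
    """Liu-type shrinkage estimator: beta_d = (X'X + I)^{-1} (X'y + d * beta_OLS)."""
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.linalg.solve(X.T @ X + np.eye(X.shape[1]), X.T @ y + d * beta_ols)

def distributed_estimate(X, y, n_machines=4, d=0.5):
    """Split rows across machines, fit locally, aggregate by simple averaging."""
    parts = zip(np.array_split(X, n_machines), np.array_split(y, n_machines))
    return np.mean([liu_estimator(Xi, yi, d) for Xi, yi in parts], axis=0)
```

Averaging local estimates costs one communication round; more refined aggregators reweight or debias the local fits before combining them.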

    Regularized System Identification

    This open access book provides a comprehensive treatment of recent developments in kernel-based identification that are of interest to anyone engaged in learning dynamic systems from data. The reader is led step by step into an understanding of a novel paradigm that leverages the power of machine learning without losing sight of the system-theoretic principles of black-box identification. The authors’ reformulation of the identification problem in the light of regularization theory not only offers new insight into classical questions but also paves the way to new and powerful algorithms for a variety of linear and nonlinear problems. Regression methods such as regularization networks and support vector machines are the basis of techniques that extend the function-estimation problem to the estimation of dynamic models. Many examples, including real-world applications, illustrate the comparative advantages of the new nonparametric approach with respect to classic parametric prediction error methods. The challenges it addresses lie at the intersection of several disciplines, so Regularized System Identification will be of interest to a variety of researchers and practitioners in the areas of control systems, machine learning, statistics, and data science.
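One core construction from this literature, kernel-regularized FIR estimation, can be sketched as follows; the TC (tuned/correlated) kernel and all parameter values are illustrative defaults, not the book's recommended settings:

```python
import numpy as np

def tc_kernel(n, alpha=0.8):
    """TC kernel K[i,j] = alpha^max(i,j): encodes smooth, exponentially decaying
    impulse responses as a Bayesian prior on the FIR coefficients."""
    idx = np.arange(1, n + 1)
    return alpha ** np.maximum.outer(idx, idx)

def regularized_fir(u, y, n=30, gamma=0.1, alpha=0.8):
    """Kernel-regularized FIR estimate: g = K Phi' (Phi K Phi' + gamma I)^{-1} y."""
    N = len(y)
    # Regressor matrix of lagged inputs: Phi[t, k] = u[t - k] (zero before time 0)
    Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(n)]
                    for t in range(N)])
    K = tc_kernel(n, alpha)
    return K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + gamma * np.eye(N), y)
```

Compared with unregularized least squares, the kernel pulls the estimated impulse response toward smooth decaying shapes, trading a little bias for much lower variance.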

    Survey of quantitative investment strategies in the Russian stock market: Special interest in tactical asset allocation

    Russia’s financial markets have been an uncharted area when it comes to exploring the performance of investment strategies based on modern portfolio theory. In this thesis, we focus on the country’s stock market and study whether profitable investments can be made while simultaneously taking uncertainties, risks, and dependencies into account. We also pay particular attention to tactical asset allocation. The benefit of this approach is that, in addition to optimization methods, we can utilize time-series forecasting methods to produce trading signals. We use two datasets in our empirical applications. The first consists of nine sectoral indices covering the period from 2008 to 2017, and the other includes 42 stocks listed on the Moscow Exchange covering the years 2011–2017. The strategies considered are divided into five parts. In the first part, we study classical and robust mean-risk portfolios and the modeling of transaction costs. We find that the expected return should be maximized per unit of expected shortfall while simultaneously requiring that each asset contribute equally to the portfolio’s tail risk. Secondly, we show that using robust covariance estimators can improve the risk-adjusted returns of minimum variance portfolios. Thirdly, we note that robust optimization techniques are best suited for conservative investors due to the low-volatility allocations they produce. In the second part, we employ statistical factor models to estimate higher-order comoments and demonstrate the benefit of the proposed method in constructing risk-optimal and expected-utility-maximizing portfolios. In the third part, we utilize the Almgren–Chriss framework and sort the expected returns according to the assumed momentum anomaly. We discover that this method produces stable allocations that perform exceptionally well in market upturns.
In the fourth part, we show that forecasts produced by VECM and GARCH models can be used profitably in optimizations based on the Black–Litterman, copula opinion pooling, and entropy pooling models. In the final part, we develop a wealth-protection strategy capable of timing market changes thanks to return predictions based on an ARIMA model. It can therefore be stated that it has been possible to make safe and profitable investments in the Russian stock market, even when reasonable transaction costs are taken into account. We also argue that market inefficiencies could have been exploited by structuring statistical arbitrage and other strategies related to tactical asset allocation.
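As one concrete example of a tactical signal of the kind surveyed above, a momentum-based ranking rule can be sketched as follows; the lookback window and the equal-weight top-k allocation are illustrative assumptions, not the thesis's exact strategy:

```python
import numpy as np

def momentum_weights(prices, lookback=12, top_k=3):
    """Rank assets by trailing return over `lookback` periods and allocate
    equally to the `top_k` best performers; `prices` has shape (T, n_assets)."""
    trailing = prices[-1] / prices[-1 - lookback] - 1.0   # simple trailing return
    top = np.argsort(trailing)[-top_k:]                   # indices of the winners
    w = np.zeros(prices.shape[1])
    w[top] = 1.0 / top_k
    return w
```

Such signals are typically fed into an optimizer (e.g. as Black–Litterman views) rather than traded directly, but the ranking step looks like this.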

    Scalable Bayesian inversion with Poisson data

    Poisson data arise in many important inverse problems, e.g., medical imaging. The stochastic nature of noisy observation processes and imprecise prior information implies that there exists an ensemble of solutions consistent with the given Poisson data to varying extents. Existing approaches, e.g., maximum likelihood and penalised maximum likelihood, incorporate the statistical information for point estimates but fail to provide the important uncertainty information about the various possible solutions. While full Bayesian approaches can solve this problem, the posterior distributions are often intractable due to their complicated form and the curse of dimensionality. In this thesis, we investigate approximate Bayesian inference techniques, i.e., variational inference (VI), expectation propagation (EP) and Bayesian deep learning (BDL), for scalable posterior exploration. The scalability relies on leveraging 1) mathematical structures emerging in the problems, i.e., the low-rank structure of forward operators and the rank-one projection form of factors in the posterior distribution, and 2) the efficient feed-forward passes of neural networks, with training time further reduced by the dimensional flexibility gained from incorporating the forward and adjoint operators. Apart from scalability, we also address theoretical analysis, algorithmic design and practical implementation. For VI, we derive the explicit functional form and analyse the convergence of the algorithms, which are long-standing problems in the literature. For EP, we discuss how to incorporate nonnegativity constraints and how to design stable moment-evaluation schemes, which are vital and nontrivial practical concerns. For BDL, specifically conditional variational auto-encoders (CVAEs), we investigate how to apply them to uncertainty quantification in inverse problems and develop flexible and novel frameworks for general Bayesian inversion.
Finally, we justify these contributions with numerical experiments and show the competitiveness of the proposed methods by comparison with state-of-the-art benchmarks.
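For the maximum-likelihood point estimate mentioned above, the classical EM/multiplicative (Richardson–Lucy) update for Poisson data can be sketched as follows; this is an illustration of the baseline such theses build on, not the VI/EP/BDL machinery itself:

```python
import numpy as np

def richardson_lucy(A, y, n_iter=200):
    """EM / multiplicative update for the Poisson ML problem y ~ Poisson(A x),
    A nonnegative: x <- x * (A' (y / (A x))) / (A' 1). Keeps x nonnegative."""
    x = np.ones(A.shape[1])
    col_sums = A.T @ np.ones(A.shape[0])          # A' 1, the normalisation term
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)      # guard against division by zero
        x = x * (A.T @ ratio) / np.maximum(col_sums, 1e-12)
    return x
```

A useful property of the fixed point is flux preservation: the total predicted counts sum(A x) match the total observed counts sum(y), which the test below checks.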

    Partially Exchangeable Correlation Structures: Inference and Machine Learning


    Untangling hotel industry’s inefficiency: An SFA approach applied to a renowned Portuguese hotel chain

    The present paper explores the technical efficiency of four hotels of the Teixeira Duarte Group, a renowned Portuguese hotel chain. An efficiency ranking of these four hotel units located in Portugal is established using Stochastic Frontier Analysis. This methodology makes it possible to discriminate between measurement error and systematic inefficiencies in the estimation process, enabling investigation of the main causes of inefficiency. Several suggestions for efficiency improvement are offered for each hotel studied.
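The normal/half-normal stochastic frontier model underlying SFA can be sketched as follows; this is the generic textbook likelihood fitted with scipy, and the parameterisation, starting values, and optimiser are illustrative choices, not the paper's estimation setup:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def sfa_negloglik(params, X, y):
    """Negative log-likelihood of y = X b + v - u with v ~ N(0, sv^2) noise and
    u ~ |N(0, su^2)| inefficiency: f(e) = (2/s) phi(e/s) Phi(-e*lam/s)."""
    k = X.shape[1]
    beta, sv, su = params[:k], np.exp(params[k]), np.exp(params[k + 1])
    sigma = np.sqrt(sv**2 + su**2)
    lam = su / sv
    eps = y - X @ beta                             # composed error v - u
    ll = (np.log(2.0 / sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

def fit_sfa(X, y):
    """Maximum-likelihood fit; starts from OLS and log residual scale."""
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
    s0 = np.log(max((y - X @ beta0).std(), 1e-3))
    x0 = np.concatenate([beta0, [s0, s0]])
    res = minimize(sfa_negloglik, x0, args=(X, y), method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
    return res.x
```

Decomposing the composed error into noise (v) and one-sided inefficiency (u) is what lets SFA attribute shortfalls to inefficiency rather than measurement error.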

    Multiresolution models in image restoration and reconstruction with medical and other applications
