98 research outputs found

    Regression Models for Asset Returns with Volatility Following a GARCH(1,1) Model under Epsilon-Skew-Normal and Student-t Distributions

    This study discusses two extensions of the GARCH(1,1) model, AR(1)-GARCH(1,1) and MA(1)-GARCH(1,1), obtained by adding a first-order autoregressive or first-order moving-average term to the return equation. The return errors are assumed to follow Normal, Skew Normal (SN), Epsilon Skew Normal (ESN), and Student-t distributions. The models are assessed by fitting them to daily returns of the FTSE100 stock index from January 2000 to December 2017 and of the TOPIX stock index from January 2000 to December 2014. The models are estimated using the Generalized Reduced Gradient (GRG) Nonlinear method available in Excel Solver and the Adaptive Random Walk Metropolis (ARWM) method implemented in Scilab. The estimates from the two tools are nearly identical, indicating that Excel Solver is a reliable option for estimating the model parameters. Log-likelihood ratio tests and the Akaike Information Criterion (AIC) show that models with the ESN distribution outperform the other normal-type distributions for every model and data set, and can even outperform the Student-t distribution for some model and data combinations. Furthermore, models with a regression process added to the return equation provide a better fit than the base model, with the best fit for both data sets given by the AR(1)-GARCH(1,1) model with Student-t distribution
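    As a concrete illustration of the model structure above, here is a minimal sketch of the AR(1)-GARCH(1,1) data-generating process with normal innovations; the parameter values are illustrative, not the estimates reported in the study.

```python
import numpy as np

def simulate_ar1_garch11(n, phi, omega, alpha, beta, seed=0):
    """Simulate r_t = phi * r_{t-1} + e_t with e_t = sigma_t * z_t and
    GARCH(1,1) variance sigma_t^2 = omega + alpha * e_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    e = np.zeros(n)
    sig2 = np.empty(n)
    sig2[0] = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    for t in range(1, n):
        sig2[t] = omega + alpha * e[t - 1] ** 2 + beta * sig2[t - 1]
        e[t] = np.sqrt(sig2[t]) * rng.standard_normal()
        r[t] = phi * r[t - 1] + e[t]
    return r, sig2

r, sig2 = simulate_ar1_garch11(5000, phi=0.05, omega=0.05, alpha=0.08, beta=0.90)
```

    Swapping the standard normal draw for a skewed or Student-t draw gives the other error distributions considered; the MA(1) variant replaces the phi * r_{t-1} term in the return equation with a theta * e_{t-1} term.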

    Application of the polynomial maximization method for estimating the parameters of autoregressive models with non-Gaussian innovations

    Zabolotniy S.V. The application of the polynomial maximization method for estimating the parameters of autoregressive models with non-Gaussian innovations. Section 5: Signal, Image and Video Processing

    Skew generalized normal innovations for the AR(p) process endorsing asymmetry

    The assumption of symmetry is often incorrect in real-life statistical modeling due to asymmetric behavior in the data. This implies a departure from the well-known assumption of normality for innovations in time series processes. In this paper, the autoregressive (AR) process of order p (i.e., the AR(p) process) is of particular interest, using the skew generalized normal (SGN) distribution for the innovations, referred to hereafter as the ARSGN(p) process, to accommodate asymmetric behavior. This behavior is characterized by investigating some properties of the SGN distribution, a fundamental element for AR modeling of real data that exhibits non-normal behavior. Simulation studies illustrate the asymmetry and statistical properties of the conditional maximum likelihood (ML) parameter estimates for the ARSGN(p) model. It is concluded that the ARSGN(p) model accounts well for time series processes exhibiting asymmetry, kurtosis, and heavy tails. Real time series datasets are analyzed, and the results of the ARSGN(p) model are compared to previously proposed models. The findings demonstrate the effectiveness and viability of relaxing the normality assumption and the value added by considering the SGN as a candidate for AR time series processes. Funding: the National Research Foundation, South Africa; the South African NRF SARChI Research Chair in Computational and Methodological Statistics; the South African DST-NRF-MRC SARChI Research Chair in Biostatistics; and the Research Development Programme at UP. http://www.mdpi.com/journal/symmetry
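    To make the innovation structure concrete, the sketch below simulates an AR(1) process with skewed innovations. It uses SciPy's plain skew-normal as a stand-in for the SGN distribution of the paper (the SGN adds a further shape parameter), so the parameterization here is an assumption for illustration only.

```python
import numpy as np
from scipy.stats import skewnorm

def simulate_ar_skew(n, phi, a, seed=0):
    """AR(1) process x_t = phi * x_{t-1} + eps_t with skew-normal innovations
    (shape a > 0 gives right skew); a stand-in for the paper's SGN innovations."""
    rng = np.random.default_rng(seed)
    # centre the innovations so they have mean zero
    eps = skewnorm.rvs(a, size=n, random_state=rng) - skewnorm.mean(a)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

x = simulate_ar_skew(20000, phi=0.5, a=5.0)
d = x - x.mean()
sample_skew = (d ** 3).mean() / (d ** 2).mean() ** 1.5  # marginal skewness of x_t
```

    The AR filter dampens but does not remove the innovation skewness, which is the asymmetry the conditional ML estimation targets.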

    Modeling and forecasting macroeconomic downside risk

    We model permanent and transitory changes of the predictive density of U.S. GDP growth. A substantial increase in downside risk to U.S. economic growth emerges over the last 30 years, associated with the long-run growth slowdown that started in the early 2000s. Conditional skewness moves procyclically, implying negatively skewed predictive densities ahead of and during recessions, often anticipated by deteriorating financial conditions. Conversely, positively skewed distributions characterize expansions. The modeling framework ensures robustness to tail events, allows for both dense and sparse predictor designs, and delivers competitive out-of-sample (point, density and tail) forecasts, improving upon standard benchmarks.

    Conditional asymmetries and downside risks in macroeconomic and financial time series

    Macroeconomic and financial time series often display non-Gaussian features. In this thesis, I study the importance of conditional asymmetry in economic decisions related to policy making or portfolio management. A novel toolbox to help decision makers evaluate the balance of risks in a coherent way is proposed and employed to investigate the relevance of modeling time-varying skewness for improving the prediction accuracy of policy variables. The thesis consists of four papers. The first paper introduces the modeling framework, which features permanent and transitory dynamics, is robust to tail events, allows for both dense and sparse predictor designs, and delivers competitive out-of-sample (point, density and tail) forecasts. We document procyclical movements in the conditional skewness of the US business cycle, and a substantial increase in downside risk to US economic growth over the last 30 years. In the second paper we investigate the historical determinants of US core inflation. We find substantial non-linearities in the relation between price growth and fiscal and monetary developments in the post-war era. These generate asymmetric inflation risks over the long run, which shape the balance of risks to the inflation outlook. We show that, when inflation risks are skewed, policy makers need to adjust their actions to offset the perceived level of skewness. The third paper studies the impact of conditional asymmetry in a portfolio allocation context. Focusing on momentum returns, we show that the risk-return trade-off of the strategy reflects a non-linear interaction between conditional volatility and skewness. We derive a dynamic skewness adjustment within a maximum Sharpe ratio strategy and find improvements upon existing volatility-managed momentum portfolios.
In the last paper I review the properties of the Epsilon-Skew-t distribution, a building block of this thesis, and develop a parametric procedure to test for the presence of conditional asymmetry in time series data.
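    The epsilon-skew-normal, the limiting case of the Epsilon-Skew-t as the degrees of freedom grow, has a density that is easy to write down: a standard normal pdf stretched by 1+eps to the left of the mode and compressed by 1-eps to the right (the Mudholkar-Hutson form). A quick numerical check that it integrates to one:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def esn_pdf(x, eps):
    """Epsilon-skew-normal density for eps in (-1, 1): the N(0,1) pdf evaluated
    at x/(1+eps) left of zero and at x/(1-eps) right of zero."""
    return np.where(x < 0, norm.pdf(x / (1 + eps)), norm.pdf(x / (1 - eps)))

# the two half-masses are (1+eps)/2 and (1-eps)/2, so the total mass is 1
left, _ = quad(lambda x: float(esn_pdf(x, 0.3)), -np.inf, 0)
right, _ = quad(lambda x: float(esn_pdf(x, 0.3)), 0, np.inf)
total = left + right
```

    Setting eps = 0 recovers the symmetric normal, which is the null hypothesis a test for conditional asymmetry checks against.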

    The GARCH-EVT-Copula model and simulation in scenario-based asset allocation

    Financial market integration, in particular, portfolio allocations from advanced economies to South African markets, continues to strengthen volatility linkages and quicken volatility transmissions between participating markets. Largely as a result, South African portfolios are net recipients of returns and volatility shocks emanating from major world markets. In light of these, and other, sources of risk, this dissertation proposes a methodology to improve risk management systems in funds by building a contemporary asset allocation framework that offers practitioners an opportunity to explicitly model combinations of hypothesised global risks and the effects on their investments. The framework models portfolio return variables and their key risk driver variables separately and then joins them to model their combined dependence structure. The separate modelling of univariate and multivariate (MV) components offers the benefit of capturing the data-generating processes with improved accuracy. Univariate variables were modelled using ARMA-GARCH-family structures paired with a variety of skewed and leptokurtic conditional distributions. Model residuals were fit using the Peaks-over-Threshold method from Extreme Value Theory for the tails and a non-parametric kernel density for the interior, forming a completed semi-parametric distribution (SPD) for each variable. Asset and risk factor returns were then combined and their dependence structure jointly modelled with a MV Student t copula. Finally, the SPD margins and Student t copula were used to construct a MV meta t distribution. Monte Carlo simulations were generated from the fitted MV meta t distribution, on which an out-of-sample test was conducted. The 2014-to-2015 horizon served as an out-of-sample, forward-looking scenario for a set of key risk factors against which a hypothetical, diversified portfolio was optimised. 
Traditional mean-variance and contemporary mean-CVaR optimisation techniques were used and their results compared. As an addendum, performance over the in-sample 2008 financial crisis was reported.
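    A minimal sketch of the Peaks-over-Threshold step described above, fitting a Generalized Pareto tail to losses beyond a 95% threshold; the Student-t "losses" below stand in for the model residuals the dissertation actually uses.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=5000)  # heavy-tailed stand-in for residuals

u = np.quantile(losses, 0.95)                  # POT threshold
exceed = losses[losses > u] - u
xi, _, sigma = genpareto.fit(exceed, floc=0)   # GPD shape and scale for the tail

def tail_prob(x):
    """Semi-parametric estimate of P(loss > x) for x above the threshold u."""
    return 0.05 * genpareto.sf(x - u, xi, loc=0, scale=sigma)

p = tail_prob(u + 1.0)
```

    In the full framework the interior of each distribution (below the threshold) would come from a kernel density, and the resulting semi-parametric margins would then be tied together with a Student t copula.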

    Improvement of Vector Autoregression (VAR) estimation using Combine White Noise (CWN) technique

    Previous studies revealed that the Exponential Generalized Autoregressive Conditional Heteroscedastic (EGARCH) model outperformed Vector Autoregression (VAR) when data exhibit heteroscedasticity. However, EGARCH estimation is not efficient when the data have a leverage effect. Therefore, in this study the weaknesses of VAR and EGARCH were modelled using Combine White Noise (CWN). The CWN model was developed by integrating the white noise of VAR with EGARCH using Bayesian Model Averaging (BMA) to improve VAR estimation. First, the standardized residuals of the EGARCH errors (heteroscedastic variance) were decomposed into equal variances and defined as a white noise series. Next, this series was transformed into the CWN model through BMA. The CWN was validated in a comparison study based on simulation and on real Gross Domestic Product (GDP) data sets from four countries. The data were simulated using three sample sizes with low, moderate and high values of leverage and skewness. The CWN model was compared with three existing models: VAR, EGARCH and Moving Average (MA). Standard error, log-likelihood, information criteria and forecast error measures were used to evaluate the performance of the models. The simulation findings showed that CWN outperformed the three models when using a sample size of 200 with high leverage and moderate skewness. Similar results were obtained for the real data sets, where CWN outperformed the three models with high leverage and moderate skewness using France's GDP. The CWN also outperformed the three models on the other three countries' GDP data sets. The CWN was the most accurate model in about 70 percent of the comparisons with the VAR, EGARCH and MA models. These simulated and real data findings indicate that CWN is more accurate and provides a better alternative for modelling heteroscedastic data with a leverage effect

    Modeling and simulation of value -at -risk in the financial market area

    Value-at-Risk (VaR) is a statistical approach to measuring market risk. It is widely used by banks, securities firms, commodity and energy merchants, and other trading organizations. The main focus of this research is measuring and analyzing market risk by modeling and simulating Value-at-Risk for portfolios in the financial market area. The objectives are (1) predicting possible future loss for a financial portfolio from the VaR measurement, and (2) identifying how the distributions of the risk factors affect the distribution of the portfolio. Results from (1) and (2) provide valuable information for portfolio optimization and risk management. The model systems chosen for this study are multi-factor models that relate risk factors to the portfolio's value. Regression analysis techniques are applied to derive linear and quadratic multifactor models for the assets in the portfolio. Time series models, such as ARIMA and state-space, are used to forecast the risk factors of the portfolio. The Monte Carlo simulation process is developed to comprehensively simulate the risk factors according to the four major distributions used to describe data in the financial market: multivariate normal, multivariate t, multivariate skew-normal, and multivariate skew t. The distribution of the portfolio is characterized by combining the multifactor models with the Monte Carlo simulation process. Based on this characterization, any VaR measure of the portfolio can be calculated. The results of the modeling and simulation show that (1) a portfolio may not have the same kind of distribution as the risk factors if the relationship between the portfolio and the risk factors is expressed as a quadratic function; (2) the normal distribution underestimates risk if the real data have a heavy tail and a high peak; and (3) diversification is the best investment strategy since it reduces the VaR by combining assets. 
The computational approach developed in this dissertation can be used for any VaR measurement in any area, as long as the relationship between an asset and its risk factors can be modeled and the joint distribution of the risk factors can be characterized
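    The Monte Carlo VaR pipeline described above can be sketched in a few lines: draw risk-factor scenarios from one of the four distributions (the multivariate normal case here), push them through a factor model for the portfolio, and read VaR off the simulated P&L quantile. The quadratic mapping and its coefficients are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

# correlated risk-factor scenarios (multivariate normal case)
mean = np.zeros(2)
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
factors = rng.multivariate_normal(mean, cov, size=n_sims)

def pnl(f):
    """Illustrative quadratic factor model mapping factor moves to portfolio P&L."""
    return 1.0 * f[:, 0] - 0.5 * f[:, 1] - 2.0 * f[:, 1] ** 2

var_99 = -np.quantile(pnl(factors), 0.01)  # 99% VaR reported as a positive loss
```

    Because of the quadratic term, the simulated P&L is left-skewed even though the factors are normal, which illustrates result (1) above: the portfolio need not share the distribution of its risk factors.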

    APARCH Models Estimated by Support Vector Regression

    This thesis presents a comprehensive study of asymmetric power autoregressive conditional heteroscedasticity (APARCH) models for modelling volatility in financial return data. The goal is to estimate and forecast volatility in financial data with excess kurtosis, volatility clustering and asymmetric distribution. Models based on maximum likelihood estimation (MLE) are compared to kernel-based support vector regression (SVR). The popular Gaussian kernel and a wavelet-based kernel are used for the SVR. The methods are tested on empirical data, including stock index prices, credit spreads and electric power prices. The results indicate that asymmetric power models are needed to capture the asymmetry in the data. Furthermore, SVR models are able to improve estimation and forecasting accuracy compared with the APARCH models based on MLE.
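    For reference, the APARCH(1,1) recursion that both the MLE and SVR approaches fit can be simulated directly; this is a sketch of the process being modelled, not of the SVR estimator itself, and the parameter values are illustrative.

```python
import numpy as np

def simulate_aparch11(n, omega, alpha, gamma, beta, delta, seed=0):
    """Simulate APARCH(1,1) (Ding, Granger & Engle):
    sigma_t^delta = omega + alpha*(|e_{t-1}| - gamma*e_{t-1})**delta + beta*sigma_{t-1}^delta.
    gamma in (-1, 1) lets negative shocks move volatility more than positive ones;
    delta is the power term (delta=2, gamma=0 recovers plain GARCH(1,1))."""
    rng = np.random.default_rng(seed)
    e = np.zeros(n)
    s_d = np.empty(n)
    s_d[0] = omega / (1.0 - beta)  # crude initialisation of sigma^delta
    for t in range(1, n):
        s_d[t] = omega + alpha * (abs(e[t - 1]) - gamma * e[t - 1]) ** delta \
                 + beta * s_d[t - 1]
        e[t] = s_d[t] ** (1.0 / delta) * rng.standard_normal()
    return e, s_d

e, s_d = simulate_aparch11(5000, omega=0.05, alpha=0.07, gamma=0.3, beta=0.90, delta=1.5)
```

    The asymmetry term (|e| - gamma*e) is what lets the model reproduce the leverage effect in return data; the SVR approaches in the thesis replace the likelihood criterion used to fit this recursion with kernel regression.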