
    Understanding Financial Market Volatility

    Volatility has been one of the most active and successful areas of research in time series econometrics and economic forecasting in recent decades. Loosely speaking, volatility is defined as the average magnitude of the fluctuations observed in some phenomenon over time. Within economics, this definition narrows to the variability of the unpredictable random component of a time series variable. Typical examples in finance are returns on assets, such as individual stocks or a stock index like the S&P 500. As Campbell et al. (1997) point out, (financial market) volatility is central to financial economics. Since it is the most common measure of the risk involved in investments in traded securities, it plays a crucial role in portfolio management, risk management, and the pricing of derivative securities, including options and futures contracts. Volatility is therefore closely tracked by private investors, institutional investors such as pension funds, central bankers, and policy makers. For example, the Basel accords require banks to hold a certain amount of capital to cover the risks involved in their consumer loans, mortgages, and other assets; an estimate of the volatility of these assets is a crucial input for determining these capital requirements. In addition, the financial crisis of 2007-2008 demonstrated that the impact of financial market volatility is not limited to the financial industry: volatility may be costly for the economy as a whole. For example, extreme stock market volatility may negatively influence aggregate investment behavior, in particular because companies often require equity as a source of external financing. This thesis contributes to the volatility literature by investigating several relevant aspects of volatility. First, we focus on the parameter estimation of multivariate volatility models, which becomes problematic as the number of assets increases. Second, we consider the question of what exactly causes financial market volatility; in this context, we relate volatility to various types of information. Third, we turn to modeling volatility, adapting volatility models so that they can include exogenous variables. Finally, we consider techniques for forecasting volatility, with a focus on the combination of density forecasts.
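    As a point of reference for the notion of volatility used throughout (a standard textbook decomposition, not quoted from the thesis), asset returns can be written as a predictable component plus an unpredictable shock whose conditional standard deviation is the volatility:

    \[
      r_t = \mu_t + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad z_t \sim \text{i.i.d.}(0,1),
    \]

    where \(\sigma_t^2 = \operatorname{Var}(r_t \mid \mathcal{F}_{t-1})\) is the conditional variance given the information set \(\mathcal{F}_{t-1}\); "volatility" refers to \(\sigma_t\) (or its square), and GARCH-type models specify its dynamics.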

    The R Package MitISEM: Mixture of Student-t Distributions using Importance Sampling Weighted Expectation Maximization for Efficient and Robust Simulation

    This paper presents the R package MitISEM, which provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities, where only a kernel of the target density is required. The approximation can be used as a candidate density in Importance Sampling or Metropolis-Hastings methods for Bayesian inference on model parameters and probabilities. The package also provides an extended MitISEM algorithm, ‘sequential MitISEM’, which substantially decreases the computational time when the target density has to be approximated for increasing data samples. This occurs when the posterior distribution is updated with new observations and/or when one computes model probabilities using predictive likelihoods. We illustrate the MitISEM algorithm using three canonical statistical and econometric models that are characterized by several types of non-elliptical posterior shapes and that describe well-known data patterns in econometrics and finance. We show that the candidate distribution obtained by MitISEM outperforms those obtained by ‘naive’ approximations in terms of numerical efficiency. Further, the MitISEM approach can be used for Bayesian model comparison, using the predictive likelihoods.
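    To make the role of such a candidate density concrete, here is a minimal, self-contained Python sketch of importance sampling with a fixed mixture-of-Student-t candidate. This is not the R package's interface; the target kernel, mixture settings, and sample size are illustrative assumptions.

```python
# Sketch only: importance sampling with a two-component Student-t mixture
# candidate for a univariate target of which only an unnormalized kernel is
# known. All numbers below are illustrative, not taken from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def target_kernel(x):
    """Unnormalized, bimodal target kernel (made up for illustration)."""
    return np.exp(-0.5 * (x - 2.0) ** 2) + 0.7 * np.exp(-0.5 * (x + 2.0) ** 2)

# Candidate: mixture of two Student-t densities (weights, locations, scales, dof).
probs = np.array([0.5, 0.5])
locs = np.array([-2.0, 2.0])
scales = np.array([1.2, 1.2])
df = 5.0

def candidate_pdf(x):
    return sum(p * stats.t.pdf(x, df, loc=m, scale=s)
               for p, m, s in zip(probs, locs, scales))

def candidate_draw(n):
    comp = rng.choice(len(probs), size=n, p=probs)
    return locs[comp] + scales[comp] * rng.standard_t(df, size=n)

# Self-normalized importance sampling: weight = target kernel / candidate density.
n = 20_000
x = candidate_draw(n)
w = target_kernel(x) / candidate_pdf(x)
w /= w.sum()

post_mean = np.sum(w * x)          # IS estimate of the target mean
ess = 1.0 / np.sum(w ** 2)         # effective sample size, a rough efficiency check
print(f"estimated mean {post_mean:.3f}, effective sample size {ess:.0f} of {n}")
```

    A well-fitted mixture candidate (which MitISEM constructs adaptively) keeps the importance weights balanced, i.e., the effective sample size close to the number of draws.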

    A Class of Adaptive EM-based Importance Sampling Algorithms for Efficient and Robust Posterior and Predictive Simulation

    A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method uses sequences of importance-weighted Expectation Maximization steps to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel), in the sense that the Kullback-Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling and Expectation Maximization (MitISEM). We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner, so that the candidate distribution for posterior simulation is cleverly updated when new data become available; our results show that this reduces the computational effort enormously. This sequential approach can be combined with a tempering approach, which facilitates simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach for importance sampling from posterior distributions in mixture models, without the requirement of imposing identification restrictions on the parameters of the model's mixture regimes. Third, we propose a partial MitISEM approach, which aims at approximating the marginal and conditional posterior distributions of subsets of model parameters rather than the joint distribution. This division can substantially reduce the dimension of the approximation problem.
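    A hedged sketch of the core step (notation chosen here for illustration, not quoted from the paper): given draws \(\theta_i\) from the current candidate \(q(\cdot;\zeta^{\text{old}})\) and importance weights formed from the target kernel \(p\), minimizing the Kullback-Leibler divergence between target and mixture amounts, up to a constant, to a weighted maximum-likelihood problem that the EM steps solve:

    \[
      \hat\zeta = \arg\max_{\zeta} \sum_{i=1}^{N} w_i \log q(\theta_i;\zeta),
      \qquad
      w_i \propto \frac{p(\theta_i)}{q(\theta_i;\zeta^{\text{old}})},
    \]

    where \(\zeta\) collects the component means, scale matrices, degrees of freedom, and mixing weights of the Student-t mixture.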

    Exponential ReLU Neural Network Approximation Rates for Point and Edge Singularities

    We prove exponential expressivity with stable ReLU Neural Networks (ReLU NNs) in H¹(Ω) for weighted analytic function classes in certain polytopal domains Ω, in space dimension d = 2, 3. Functions in these classes are locally analytic on open subdomains D ⊂ Ω, but may exhibit isolated point singularities in the interior of Ω or corner and edge singularities at the boundary ∂Ω. The exponential expression rate bounds proved here imply uniform exponential expressivity by ReLU NNs of solution families for several elliptic boundary and eigenvalue problems with analytic data. The exponential approximation rates are shown to hold in space dimension d = 2 on Lipschitz polygons with straight sides, and in space dimension d = 3 on Fichera-type polyhedral domains with plane faces. The constructive proofs indicate in particular that NN depth and size increase poly-logarithmically with respect to the target NN approximation accuracy ε > 0 in H¹(Ω). The results cover in particular solution sets of linear, second-order elliptic PDEs with analytic data and certain nonlinear elliptic eigenvalue problems with analytic nonlinearities and singular, weighted analytic potentials, as arise in electron structure models. In the latter case, the functions correspond to electron densities that exhibit isolated point singularities at the positions of the nuclei. Our findings provide in particular a mathematical foundation for recently reported, successful uses of deep neural networks in variational electron structure algorithms. Comment: Found Comput Math (2022).
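    Schematically (the exponents below depend on the weighted analytic class and the dimension and are not specified in the abstract; this is an illustrative restatement, not a quoted theorem), the expressivity claim says that for each function u in the class and each target accuracy ε ∈ (0, 1) there is a ReLU NN realization u_ε with

    \[
      \| u - u_\varepsilon \|_{H^1(\Omega)} \le \varepsilon,
      \qquad
      \operatorname{depth}(u_\varepsilon) = O\big(\log^{c_1}(1/\varepsilon)\big),
      \qquad
      \operatorname{size}(u_\varepsilon) = O\big(\log^{c_2}(1/\varepsilon)\big),
    \]

    for some constants c₁, c₂ > 0, uniformly over the function class.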

    De Rham compatible Deep Neural Network FEM

    On general regular simplicial partitions 𝒯 of bounded polytopal domains Ω ⊂ ℝ^d, d ∈ {2, 3}, we construct exact neural network (NN) emulations of all lowest-order finite element spaces in the discrete de Rham complex. These include the spaces of piecewise constant functions, continuous piecewise linear (CPwL) functions, the classical "Raviart-Thomas element", and the "Nédélec edge element". For all but the CPwL case, our network architectures employ both ReLU (rectified linear unit) and BiSU (binary step unit) activations to capture discontinuities. In the important case of CPwL functions, we prove that it suffices to work with pure ReLU nets. Our construction and DNN architecture generalize previous results in that no geometric restrictions on the regular simplicial partitions 𝒯 of Ω are required for DNN emulation. In addition, for CPwL functions our DNN construction is valid in any dimension d ≥ 2. Our "FE-Nets" are required in the variationally correct, structure-preserving approximation of boundary value problems of electromagnetism in nonconvex polyhedra Ω ⊂ ℝ³. They are thus an essential ingredient in the application of, e.g., the methodology of "physics-informed NNs" or "deep Ritz methods" to electromagnetic field simulation via deep learning techniques. We indicate generalizations of our constructions to higher-order compatible spaces and to other, non-compatible classes of discretizations, in particular the "Crouzeix-Raviart" elements and Hybridized, Higher Order (HHO) methods.
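    As a toy illustration of why pure ReLU nets suffice for CPwL functions (a one-dimensional special case only; the paper's construction covers general simplicial meshes in d ≥ 2), the nodal "hat" basis function with nodes x0 < x1 < x2 is reproduced exactly by a one-hidden-layer ReLU network with three units:

```python
# Toy 1D check (illustrative only; not the paper's general construction).
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def hat(x, x0, x1, x2):
    """Nodal hat function: 0 at x0 and x2, 1 at x1, continuous piecewise linear."""
    left = np.clip((x - x0) / (x1 - x0), 0.0, 1.0)
    right = np.clip((x2 - x) / (x2 - x1), 0.0, 1.0)
    return np.minimum(left, right)

def hat_relu(x, x0, x1, x2):
    """The same hat function, written as a linear combination of three ReLU units."""
    return ((relu(x - x0) - relu(x - x1)) / (x1 - x0)
            - (relu(x - x1) - relu(x - x2)) / (x2 - x1))

x = np.linspace(-1.0, 3.0, 1001)
assert np.allclose(hat(x, 0.0, 1.0, 2.0), hat_relu(x, 0.0, 1.0, 2.0))
print("exact ReLU emulation of the 1D hat function verified")
```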

    Improving Density Forecasts and Value-at-Risk Estimates by Combining Densities

    We investigate the added value of combining density forecasts for asset return prediction in a specific region of support. We develop a new technique that takes into account model uncertainty by assigning weights to individual predictive densities using a scoring rule based on the censored likelihood. We apply this approach in the context of recently developed univariate volatility models (including HEAVY and Realized GARCH models), using daily returns from the S&P 500, DJIA, FTSE and Nikkei stock market indexes from 2000 until 2013. The results show that combined density forecasts based on the censored likelihood scoring rule significantly outperform pooling based on the log scoring rule as well as the individual density forecasts. The same result, albeit less strong, holds when compared to combined density forecasts based on equal weights. In addition, VaR estimates improve at the short horizon, in particular when compared to estimates based on equal weights or to the VaR estimates of the individual models.
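    A hedged sketch of the two ingredients (notation introduced here, not quoted from the paper): the combined forecast is a weighted linear pool of the individual predictive densities, and each density is evaluated on the region of interest A (for example the left tail relevant for VaR) with a censored likelihood score of the form

    \[
      f_{c,t}(y) = \sum_{m=1}^{M} w_m \,\hat f_{m,t}(y),
      \qquad
      S^{\mathrm{csl}}(\hat f;\, y_t) = \mathbf{1}\{y_t \in A\}\,\log \hat f(y_t)
        + \mathbf{1}\{y_t \notin A\}\,\log\Big(1 - \int_A \hat f(s)\,ds\Big),
    \]

    with the weights w_m chosen to maximize the average score over an evaluation window.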

    Predicting Covariance Matrices with Financial Conditions Indexes

    We model the impact of financial conditions on asset market volatility and correlation. We propose extensions of (factor-)GARCH models for volatility and DCC models for correlation that allow for the inclusion of indexes that measure financial conditions. In our empirical application we consider daily stock returns of US deposit banks during the period 1994-2011, and we proxy financial conditions by the Bloomberg Financial Conditions Index (FCI), which comprises the money, bond, and equity markets. We find that worse financial conditions are associated with both higher volatility and higher average correlations between stock returns. Especially during crises, the additional impact of the FCI indicator is considerable, with an increase in correlations of 0.15. Moreover, including the FCI in volatility and correlation modeling improves Value-at-Risk forecasts, particularly at short horizons.
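    One hedged illustration of how such an index can enter the conditional variance dynamics (a generic GARCH-X form; the paper's exact factor-GARCH and DCC specifications may differ):

    \[
      \sigma_{t}^{2} = \omega + \alpha\,\varepsilon_{t-1}^{2} + \beta\,\sigma_{t-1}^{2} + \gamma\,\mathrm{FCI}_{t-1},
    \]

    where γ measures the marginal impact of the lagged financial conditions index on next-period volatility; an analogous exogenous term can be added to the dynamics of the conditional correlations.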

    Forecasting Value-at-Risk under temporal and portfolio aggregation

    We examine the impact of temporal and portfolio aggregation on the quality of Value-at-Risk (VaR) forecasts over a horizon of 10 trading days for a well-diversified portfolio of stocks, bonds, and alternative investments. The VaR forecasts are constructed from daily, weekly, or biweekly returns, either of all constituent assets separately, of portfolios grouped by asset class, or of a single aggregate portfolio. We compare the impact of aggregation with that of choosing a model for the conditional volatilities and correlations, a distribution for the innovations, and a method of forecast construction. We find that the level of temporal aggregation is most important: daily returns form the best basis for VaR forecasts. Modeling the portfolio at the asset or asset-class level works better than complete portfolio aggregation, but the differences are smaller. The differences arising from the choice of model, distribution, and forecast construction method are also smaller than those due to temporal aggregation.
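    For reference (a standard definition, not specific to this paper), the 10-day VaR at tail probability α used in such comparisons is the loss quantile defined by

    \[
      \Pr\big( r_{t+1:t+10} \le -\mathrm{VaR}_{t}(\alpha) \,\big|\, \mathcal{F}_t \big) = \alpha,
    \]

    where r_{t+1:t+10} denotes the portfolio return over the next 10 trading days and \(\mathcal{F}_t\) the information available at time t.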