Bayesian Model Selection for Beta Autoregressive Processes
We deal with Bayesian inference for Beta autoregressive processes. We restrict our attention to the class of conditionally linear processes. These processes are particularly suitable for forecasting purposes, but are difficult to estimate due to the constraints on the parameter space. We provide a full Bayesian approach to the estimation and include the parameter restrictions in the inference problem through a suitable specification of the prior distributions. Moreover, in a Bayesian framework parameter estimation and model choice can be solved simultaneously. In particular, we suggest a Markov chain Monte Carlo (MCMC) procedure based on a Metropolis-Hastings within Gibbs algorithm and solve the model selection problem following a reversible jump MCMC approach.
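As a rough illustration of the estimation step (not the paper's exact specification), the sketch below runs a Metropolis-Hastings-within-Gibbs sampler for a conditionally linear Beta AR(1) under an assumed parametrization y_t | y_{t-1} ~ Beta(mu_t*nu, (1-mu_t)*nu) with mu_t = alpha + beta*y_{t-1}; the prior is taken flat on the region where every conditional mean stays in (0, 1), and the reversible jump model-selection step is omitted.

```python
# Minimal sketch of a Metropolis-Hastings-within-Gibbs sampler for a
# conditionally linear Beta AR(1). Parametrization, prior, starting values
# and tuning constants are illustrative assumptions, not the paper's.
import numpy as np
from scipy.stats import beta as beta_dist

def log_lik(y, alpha, beta_, nu):
    mu = alpha + beta_ * y[:-1]                 # conditional mean, must lie in (0, 1)
    if nu <= 0 or np.any(mu <= 0) or np.any(mu >= 1):
        return -np.inf                          # proposal violates the parameter constraints
    return beta_dist.logpdf(y[1:], mu * nu, (1 - mu) * nu).sum()

def mh_within_gibbs(y, n_iter=5000, step=0.05):
    rng = np.random.default_rng(0)
    params = [0.2, 0.5, 10.0]                   # alpha, beta, precision nu
    cur = log_lik(y, *params)
    draws = np.empty((n_iter, 3))
    for it in range(n_iter):
        # update each parameter in turn with a random-walk Metropolis step
        for j in range(3):
            prop = list(params)
            prop[j] += rng.normal(scale=step if j < 2 else 1.0)
            new = log_lik(y, *prop)
            if np.log(rng.uniform()) < new - cur:
                params, cur = prop, new
        draws[it] = params
    return draws

y = np.random.default_rng(1).beta(2.0, 2.0, size=500)   # toy data on (0, 1)
draws = mh_within_gibbs(y)
```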
A Stochastic Volatility Model With Realized Measures for Option Pricing
Based on the fact that realized measures of volatility are affected by measurement errors, we introduce a new family of discrete-time stochastic volatility models having two measurement equations relating both observed returns and realized measures to the latent conditional variance. A semi-analytical option pricing framework is developed for this class of models. In addition, we provide analytical filtering and smoothing recursions for the basic specification of the model and an effective MCMC algorithm for its richer variants. The empirical analysis shows the effectiveness of filtering and smoothing realized measures in inflating the latent volatility persistence, the crucial parameter in pricing Standard and Poor's 500 Index options.
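A minimal simulation sketch of the two-measurement-equation idea, assuming a log-AR(1) latent variance and a log-linear realized-measure equation with additive measurement error; the paper's exact specification and the option pricing layer are not reproduced here.

```python
# Illustrative simulation of an SV model where returns and a noisy realized
# measure both load on the same latent variance. The log-AR(1) dynamics and
# the log-linear realized-measure equation are assumptions for the sketch.
import numpy as np

def simulate_sv_rm(T=1000, omega=-0.1, phi=0.97, sigma_eta=0.2,
                   xi=-0.05, sigma_u=0.3, seed=0):
    rng = np.random.default_rng(seed)
    log_h = np.empty(T)
    log_h[0] = omega / (1 - phi)                       # start at the stationary mean
    for t in range(1, T):
        log_h[t] = omega + phi * log_h[t - 1] + sigma_eta * rng.normal()
    h = np.exp(log_h)
    r = np.sqrt(h) * rng.normal(size=T)                # return measurement equation
    log_rv = xi + log_h + sigma_u * rng.normal(size=T) # realized measure with measurement error
    return r, np.exp(log_rv), h

returns, realized_measure, latent_var = simulate_sv_rm()
```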
Monte Carlo within simulated annealing for integral constrained optimizations
For years, Value-at-Risk and Expected Shortfall have been well-established measures of market risk, and the Basel Committee on Banking Supervision recommends their use when controlling risk. But their computation might be intractable if we do not rely on simplifying assumptions, in particular on the distribution of returns. One of the difficulties is linked to the need for integral constrained optimizations. In this article, two new stochastic optimization-based simulated annealing algorithms are proposed for addressing problems associated with statistical methods that rely on extremizing a not necessarily differentiable criterion function and therefore face the computation of a non-analytically reducible integral constraint. We first provide an illustrative example, maximizing an integral constrained likelihood for stress-strength reliability, which confirms the effectiveness of the algorithms. Our results indicate no clear difference in convergence, but we favor the problem-approximation variant of the algorithm as it is less expensive in terms of computing time. Second, we turn to a classical financial problem, portfolio optimization, showing the potential of the proposed methods in financial applications.
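A minimal sketch of the general idea: simulated annealing in which a non-analytic integral constraint is re-estimated by Monte Carlo at every move and enforced through a penalty. The toy objective, constraint, penalty weight and cooling schedule are illustrative assumptions rather than the two algorithms proposed in the article.

```python
# Simulated annealing with a Monte Carlo estimate of an integral constraint.
# Everything below (objective, constraint, tuning) is a toy illustration.
import numpy as np

rng = np.random.default_rng(1)

def objective(theta):
    return -(theta[0] - 1.0) ** 2 - (theta[1] + 0.5) ** 2   # toy criterion to maximize

def mc_constraint(theta, n_draws=2000):
    # Monte Carlo estimate of E_x[c(x, theta)] with x ~ N(0, 1);
    # the constraint requires this expectation to be <= 0.
    x = rng.normal(size=n_draws)
    return np.mean(np.exp(theta[0] * x) + theta[1] - 1.0)

def penalized(theta, penalty=50.0):
    return objective(theta) - penalty * max(0.0, mc_constraint(theta))

def simulated_annealing(theta0, n_iter=5000, t0=1.0, cooling=0.999, step=0.1):
    theta = np.array(theta0, float)
    cur = penalized(theta)
    best, best_val, temp = theta.copy(), cur, t0
    for _ in range(n_iter):
        cand = theta + rng.normal(scale=step, size=theta.size)
        val = penalized(cand)
        # accept uphill moves always, downhill moves with Boltzmann probability
        if val > cur or rng.uniform() < np.exp((val - cur) / temp):
            theta, cur = cand, val
            if cur > best_val:
                best, best_val = theta.copy(), cur
        temp *= cooling
    return best, best_val

print(simulated_annealing([0.0, 0.0]))
```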
Hierarchical Species Sampling Models
This paper introduces a general class of hierarchical nonparametric prior distributions. The random probability measures are constructed by a hierarchy of generalized species sampling processes with possibly non-diffuse base measures. The proposed framework provides a general probabilistic foundation for hierarchical random measures with either atomic or mixed base measures and allows for studying their properties, such as the distribution of the marginal and total number of clusters. We show that hierarchical species sampling models have a Chinese Restaurant Franchise representation and can be used as prior distributions to undertake Bayesian nonparametric inference. We provide a method to sample from the posterior distribution together with some numerical illustrations. Our class of priors includes some new hierarchical mixture priors, such as the hierarchical Gnedin measures, as well as other well-known prior distributions, such as the hierarchical Pitman-Yor and the hierarchical normalized random measures.
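For intuition, the sketch below generates cluster labels from the Chinese Restaurant Franchise representation in its simplest form, the hierarchical Dirichlet process; the generalized species sampling weights studied in the paper are replaced here by plain CRP weights, and the concentration parameters are arbitrary.

```python
# Chinese Restaurant Franchise generative sampler for a hierarchical Dirichlet
# process (the simplest hierarchical species sampling model). Groups share
# dishes (cluster labels) through a franchise-level CRP.
import numpy as np

def crf_sample(group_sizes, alpha=1.0, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    dish_counts = []                      # franchise level: number of tables serving each dish
    labels = []
    for n_j in group_sizes:
        table_dish, table_counts, z = [], [], []
        for _ in range(n_j):
            # seat the customer at an existing table proportionally to its
            # occupancy, or open a new table with weight alpha
            w = np.array(table_counts + [alpha], float)
            t = rng.choice(len(w), p=w / w.sum())
            if t == len(table_counts):    # new table: draw its dish from the global CRP
                wd = np.array(dish_counts + [gamma], float)
                d = rng.choice(len(wd), p=wd / wd.sum())
                if d == len(dish_counts):
                    dish_counts.append(0)
                dish_counts[d] += 1
                table_dish.append(d)
                table_counts.append(0)
            table_counts[t] += 1
            z.append(table_dish[t])
        labels.append(z)
    return labels                         # dish labels are clusters shared across groups

print(crf_sample([10, 10]))
```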
Endogeneity in Interlocks and Performance Analysis: A Firm Size Perspective
This paper contributes to the literature on interlocking directorates (ID) by providing a new solution to the two econometric issues arising in the joint analysis of interlocks and firm performance: the endogenous nature of ID and the sample selection bias due to the exclusion of isolated firms. Some key determinants of ID network formation are identified and used to check for endogeneity. We analyze the impact of a firm's positioning in the network on its performance and inspect how this impact varies across firms of different sizes, drawing on information on 37,324 firms in the interlocking network, which, to our knowledge, is the largest dataset used so far in the study of ID. Our results, robust to endogeneity and sample selection bias, suggest that eigenvector centrality and the clustering coefficient have a positive and significant impact on all the performance measures and that this effect is more pronounced for small firms.
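The two network statistics linked to performance, eigenvector centrality and the clustering coefficient, can be computed with networkx as in the toy sketch below; the interlock graph and the downstream performance regression are hypothetical placeholders.

```python
# Toy board-interlock network: an edge means two firms share a director.
# Firm identifiers are hypothetical; the performance regression is omitted.
import networkx as nx

G = nx.Graph([("A", "B"), ("B", "C"), ("C", "A"), ("C", "D"), ("D", "E")])

eig = nx.eigenvector_centrality(G)   # positioning in the interlock network
clus = nx.clustering(G)              # local clustering coefficient
for firm in G.nodes:
    print(firm, round(eig[firm], 3), round(clus[firm], 3))
```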
Generalized Poisson difference autoregressive processes
This paper introduces a novel stochastic process with signed integer values. Its autoregressive dynamics effectively captures persistence in conditional moments, rendering it a valuable feature for forecasting applications. The increments follow a Generalized Poisson distribution, capable of accommodating over- and under-dispersion in the conditional distribution, thereby extending standard Poisson difference models. We derive key properties of the process, including stationarity conditions, the stationary distribution, and conditional and unconditional moments, which prove essential for accurate forecasting. We provide a Bayesian inference framework with an efficient posterior approximation based on Markov chain Monte Carlo. This approach seamlessly incorporates inherent parameter uncertainty into predictive distributions. The effectiveness of the proposed model is demonstrated through applications to benchmark datasets on car accidents and an original dataset on cyber threats, highlighting its superior fitting and forecasting capabilities compared to the standard Poisson model.
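A rough simulation sketch of such a signed-integer process: the conditional distribution is the difference of two Generalized Poisson draws, sampled by numerically inverting the pmf, while the autoregressive recursion for the conditional intensity (omega, alpha below) is a hypothetical stand-in for the paper's dynamics.

```python
# Difference of two Generalized Poisson variables with an autoregressive
# conditional intensity. The GP pmf is truncated at k_max and inverted
# numerically; the intensity recursion is an illustrative assumption.
import numpy as np
from scipy.special import gammaln

def gen_poisson_sample(theta, lam, rng, k_max=200):
    # pmf: P(X=k) = theta (theta + k lam)^(k-1) exp(-(theta + k lam)) / k!
    k = np.arange(k_max)
    logp = (np.log(theta) + (k - 1) * np.log(theta + k * lam)
            - (theta + k * lam) - gammaln(k + 1))
    p = np.exp(logp)
    return rng.choice(k, p=p / p.sum())

def simulate_gpd_ar(T=300, omega=1.0, alpha=0.4, theta2=1.5, lam=0.2, seed=0):
    rng = np.random.default_rng(seed)
    y = np.zeros(T, dtype=int)
    for t in range(1, T):
        theta1 = max(0.1, omega + alpha * y[t - 1])   # keep the intensity positive
        y[t] = (gen_poisson_sample(theta1, lam, rng)
                - gen_poisson_sample(theta2, lam, rng))
    return y

print(simulate_gpd_ar()[:20])
```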
COVID-19 spreading in financial networks: A semiparametric matrix regression model
Network models are a useful tool to describe the complex set of financial relationships among heterogeneous firms in the system. A new Bayesian semiparametric model for temporal multilayer networks with both intra- and inter-layer connectivity is proposed. A hierarchical mixture prior distribution is assumed to capture heterogeneity in the response of the network edges to a set of risk factors, including the number of COVID-19 cases in Europe. Two layers, defined by stock returns and volatilities, are considered, and within- and between-layer connectivity is investigated. The financial connectedness arising from the interactions between the two layers is measured. The model is applied to compare the topology of the network before and after the spread of COVID-19.
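As a purely illustrative stand-in for the matrix regression idea, the sketch below regresses each edge weight of a two-layer temporal network on a risk factor; the per-edge least-squares fit replaces the paper's Bayesian semiparametric specification with hierarchical mixture priors.

```python
# Two-layer temporal network regression on a risk factor (toy data).
# Layer 0 = return connections, layer 1 = volatility connections.
import numpy as np

rng = np.random.default_rng(2)
T, n = 100, 5                                         # time points, firms
covid = np.cumsum(rng.poisson(3, T)).astype(float)    # toy risk factor (case counts)
x = np.column_stack([np.ones(T), covid / covid.max()])

edges = rng.normal(size=(2, T, n, n))                 # toy edge-weight panels

betas = np.empty((2, n, n, 2))
for layer in range(2):
    for i in range(n):
        for j in range(n):
            y = edges[layer, :, i, j]
            betas[layer, i, j], *_ = np.linalg.lstsq(x, y, rcond=None)

print(betas[0, 0, 1])   # intercept and risk-factor loading for one return-layer edge
```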