Maximum Likelihood Estimation of the APARCH model with Skew Distribution for the Innovation Process
A method commonly used in empirical financial studies to estimate the parameters of a generalized autoregressive conditional heteroskedasticity model is quasi-maximum likelihood, which maximizes the likelihood function under the assumption of conditional normality, even though that assumption may be false. When a nonnormal error distribution can be assumed for this kind of model, it has been shown that quasi-maximum likelihood estimators suffer a loss of efficiency in finite samples relative to maximum likelihood estimators. In this paper we study, with an empirical application to the daily returns of the NASDAQ stock market index, maximum likelihood estimation of the parameters of the asymmetric power ARCH model, a generalization of the generalized autoregressive conditional heteroskedasticity model, with skew distributions for the innovation process. The distributions considered are the Student-t, the exponential power and the generalized secant hyperbolic, reparametrized by adding inverse scale factors in the positive and negative orthants to take the skewness into account. For comparison, we have also analyzed the daily returns with quasi-maximum and semiparametric maximum likelihood estimation procedures. We have used a quasi-Newton algorithm to optimize the average log-likelihood functions, with analytical derivatives with respect to the parameters obtained by MathStatica, a package for the computer algebra system Mathematica
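The estimation scheme described above can be sketched in a few lines. The following is a minimal illustration (not the authors' code): quasi-Newton maximization of the average log-likelihood of an APARCH(1,1) model, using a plain symmetric Student-t in place of the skew densities, numerical rather than analytical derivatives, and a synthetic return series. All parameter names, bounds and starting values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

def neg_avg_loglik(params, r):
    """Negative average log-likelihood of an APARCH(1,1) model with
    (non-skew) Student-t innovations -- a simplified stand-in for the
    skew densities used in the paper."""
    omega, alpha, gamma, beta, delta, nu = params
    T = len(r)
    sig_d = np.empty(T)                      # sigma_t ** delta
    sig_d[0] = np.mean(np.abs(r)) ** delta   # simple positive start value
    for t in range(1, T):
        # APARCH recursion: the gamma term creates the leverage asymmetry
        sig_d[t] = omega + alpha * (abs(r[t-1]) - gamma * r[t-1]) ** delta \
                   + beta * sig_d[t-1]
    sigma = sig_d ** (1.0 / delta)
    z = r / sigma
    # density of r_t = sigma_t * z_t, z_t ~ Student-t(nu)
    return -np.mean(student_t.logpdf(z, df=nu) - np.log(sigma))

rng = np.random.default_rng(0)
r = rng.standard_t(df=6, size=1000) * 0.01   # toy daily-return series

x0 = np.array([1e-5, 0.05, 0.0, 0.90, 2.0, 8.0])   # omega, alpha, gamma, beta, delta, nu
bounds = [(1e-8, None), (1e-6, 1.0), (-0.99, 0.99),
          (0.0, 0.999), (0.5, 3.0), (2.1, 50.0)]
# L-BFGS-B is a bounded quasi-Newton method, matching the quasi-Newton
# optimization mentioned in the abstract
res = minimize(neg_avg_loglik, x0, args=(r,), method="L-BFGS-B", bounds=bounds)
```

With `|gamma| < 1` the base `|r| - gamma*r` stays nonnegative, so the power `delta` is always well defined; that is why the bound on `gamma` is strict.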
Estimating distribution functions in Johnson translation system by the starship procedure with simulated annealing
The computer-intensive starship procedure of Owen makes it possible to obtain the best transformation to normality through the global optimization of some measure of non-normality. In this paper, we propose to apply the procedure to estimate a cumulative distribution function in the Johnson translation system by optimizing sampling statistics derived from the minimum distance and non-linear least squares methods. As the global optimization method we use a stochastic one, simulated annealing, as an alternative to the method proposed by Owen and Li, which is based on the Slifker and Shapiro criterion. Applying the starship procedure to a simulated sample shows that the simulated annealing algorithm embedded in the procedure yields better results than those obtained with the Slifker and Shapiro criterion. Moreover, the convergence problems that occur with traditional optimization methods do not arise
Copula Component Analysis for Dependence Modelling
A copula function can be employed to decompose the information content of a multivariate distribution into marginal and dependence components, with the latter quantified by the mutual information. This establishes a link between information theory and copula theory. On the basis of these results, in this paper we show how independent component analysis can be used to estimate the mutual information of a multivariate random sample and, then, to select the copula model that best captures the dependence in the sample data
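The ICA step can be illustrated with a small numpy implementation of deflationary FastICA with a tanh contrast. This is a generic stand-in, not the estimator used in the paper, and the subsequent mutual-information and copula-selection steps are omitted; the mixing matrix and Laplace sources are invented for the example.

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Deflationary FastICA with tanh contrast.
    X: (n_samples, n_features). Returns (components, unmixing matrix W)
    where the components are the rows of W applied to the whitened data."""
    X = X - X.mean(axis=0)
    # symmetric whitening via eigendecomposition of the covariance
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = X @ (vecs @ np.diag(vals ** -0.5) @ vecs.T)
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = np.zeros((d, d))
    for i in range(d):
        w = rng.standard_normal(d)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            # fixed-point update: w+ = E[Z g(w'Z)] - E[g'(w'Z)] w, g = tanh
            g = np.tanh(Z @ w)
            w_new = (Z * g[:, None]).mean(axis=0) - (1 - g ** 2).mean() * w
            # deflation: orthogonalize against previously found components
            w_new -= W[:i].T @ (W[:i] @ w_new)
            w_new /= np.linalg.norm(w_new)
            if abs(abs(w_new @ w) - 1) < 1e-8:
                w = w_new
                break
            w = w_new
        W[i] = w
    return Z @ W.T, W

# toy data: two independent heavy-tailed sources, linearly mixed
rng = np.random.default_rng(2)
S = rng.laplace(size=(2000, 2))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])      # hypothetical mixing matrix
X = S @ A.T
Y, W = fastica(X)               # recovered independent components
```

Once near-independent components are recovered, the mutual information of the original sample can be estimated from the marginal entropies, which is the quantity the paper uses to rank candidate copula models.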
GSH dependence modeling with an application to risk management
The generalized secant hyperbolic (GSH) distribution can be used to represent financial data with heavy tails as an alternative to the Student-t, because it guarantees the existence of all moments even at high kurtosis values. To obtain a multivariate extension of the GSH distribution, in this article we present two approaches to modelling the dependence: the copula approach and independent component analysis. Since both methodologies make it possible to simulate GSH dependence, we also show empirical results for the estimation of the risk of a financial portfolio by the Monte Carlo method
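The Monte Carlo risk estimation mentioned above can be sketched as follows. Since GSH sampling is not shown here, a multivariate Student-t serves as a heavy-tailed stand-in for the simulated dependent returns; the portfolio weights and scale matrix are illustrative assumptions.

```python
import numpy as np

def mc_var(mu, scale, weights, df=6, n_sims=100_000, alpha=0.99, seed=3):
    """Monte Carlo Value-at-Risk of a linear portfolio.
    Returns are drawn from a multivariate Student-t with the given
    scale matrix (a stand-in for the GSH-based dependence models)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(scale)
    z = rng.standard_normal((n_sims, len(mu))) @ L.T      # correlated normals
    chi = rng.chisquare(df, size=n_sims)
    returns = mu + z * np.sqrt(df / chi)[:, None]         # t mixing
    pnl = returns @ weights
    # VaR at level alpha: loss exceeded with probability 1 - alpha
    return -np.quantile(pnl, 1 - alpha)

mu = np.zeros(3)
scale = np.array([[1.0, 0.3, 0.2],
                  [0.3, 1.0, 0.4],
                  [0.2, 0.4, 1.0]]) * 1e-4   # daily-return scale (illustrative)
w = np.array([0.5, 0.3, 0.2])
var99 = mc_var(mu, scale, w)
```

In the paper's setting the simulated returns would instead come from GSH margins combined through a copula or an ICA mixing step; only the quantile step at the end would remain the same.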
Modelling Multivariate Volatility Processes Using Temporal Independent Component Analysis
Forecasting temporal dependence in the second-order moments of returns is a relevant problem in many areas of financial econometrics. It is commonly accepted that financial volatilities move together over time across assets and markets. For this reason, in this paper we propose an approach based on the analysis of independent temporal components to model multivariate volatility. We assume that the underlying factors, or sources, of the model are AR-APARCH processes with errors following the Meixner distribution. An application to two sets of real data illustrates the use of the model in the analysis of parallel financial series
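The assumed source dynamics can be sketched with a small simulator: an AR(1) mean equation with APARCH(1,1) volatility, using Gaussian errors as a stand-in for the Meixner innovations, and a hypothetical mixing matrix to produce the observed parallel series. All parameter values are illustrative.

```python
import numpy as np

def simulate_ar_aparch(T, phi=0.2, omega=1e-5, alpha=0.08, gamma=0.3,
                       beta=0.88, delta=1.5, seed=4):
    """Simulate one AR(1)-APARCH(1,1) source. Gaussian errors stand in
    for the Meixner innovations assumed in the paper."""
    rng = np.random.default_rng(seed)
    y = np.zeros(T)
    eps = np.zeros(T)
    sig_d = np.full(T, omega / (1 - beta))   # positive start for sigma_t**delta
    for t in range(1, T):
        # APARCH volatility recursion with leverage term gamma
        sig_d[t] = omega + alpha * (abs(eps[t-1]) - gamma * eps[t-1]) ** delta \
                   + beta * sig_d[t-1]
        eps[t] = sig_d[t] ** (1 / delta) * rng.standard_normal()
        y[t] = phi * y[t-1] + eps[t]         # AR(1) mean equation
    return y, eps

# two independent sources mixed into observed parallel series
sources = np.column_stack([simulate_ar_aparch(3000, seed=s)[0] for s in (4, 5)])
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])                   # hypothetical mixing matrix
X = sources @ A.T
```

In the model of the paper the direction is reversed: the observed series `X` are given, a temporal ICA step recovers the sources, and the AR-APARCH dynamics are then estimated source by source.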
The critical success factors of hospital food service in the ESTAV Tuscany wide area: customer satisfaction
Studies of customer satisfaction in the hospital food-service field began in the 1950s in the United States, and over time user satisfaction has increasingly been regarded as a genuine attribute of the quality of medical care and, only as a consequence, of the patient's level of satisfaction. In Italy, with the Brunetta reform, the provision of hospital food services has also become the subject of statistical and managerial analyses oriented toward customer satisfaction. The aim is "conscious relationality", that is, the attempt to establish a bond between the administration and the citizen founded on an interaction between equals. This makes it possible to create a proactive organization, one capable of modifying itself according to the input it receives from survey results, since the users' opinions gathered by the survey should be used as input to launch improvement actions aimed at aligning the quality standards of the service with users' expectations and, more generally, at building a model based on the capacity to make corrective interventions according to the needs expressed by patients. A customer satisfaction survey can indeed be the most appropriate means of collecting information both on customers' expectations (with respect to the services provided to them) and on their perceptions (with respect to the services received). For this reason we conducted this survey in the hospitals of the ESTAV Tuscany area (Arezzo, Grosseto and Siena), in order to provide the competent bodies with guidelines for improving the quality of hospital food service during the course of the contract, and to monitor over time any critical aspects of the food service on the basis of the judgements expressed by patients
Aggregation of Dependent Risk Using the Koehler-Symanowski Copula Function
This study examines the Koehler and Symanowski copula function with specific marginals, such as the skew Student-t, the skew generalized secant hyperbolic, and the skew generalized exponential power distributions, in modelling financial returns and measuring dependent risks. The copula function can be specified by adding interaction terms to the cumulative distribution function for the case of independence. It can also be derived through a particular transformation of independent gamma random variables. The advantage of using this distribution over others lies in its ability to model complex dependence structures among subsets of marginals, as we show for the aggregate dependent risks of some market indices
Maximum Likelihood Estimation of the APARCH Model with Skew Generalized Distribution for the Innovation Process