Outlier detection in GARCH models
We present a new procedure for detecting multiple additive outliers in GARCH(1,1) models at unknown dates. The outlier candidates are the observations with the largest standardized residuals. First, a likelihood-ratio based test determines the presence and timing of an outlier. Next, a second test determines the type of additive outlier (volatility or level). The tests are shown to be similar with respect to the GARCH parameters. Their null distribution can be easily approximated from an extreme value distribution, so that computation of p-values does not require simulation. The procedure outperforms alternative methods, especially in determining the date of the outlier. We apply the method to returns of the Dow Jones index, using monthly, weekly, and daily data. The procedure is extended and applied to GARCH models with Student-t distributed errors.
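As a rough illustration of the first step of such a procedure (flagging the observation with the largest absolute standardized residual from a fitted GARCH(1,1)), here is a minimal Python sketch using the third-party arch package. The likelihood-ratio test and its extreme-value critical values are not reproduced here, and the function name flag_outlier_candidate is illustrative, not from the paper.

```python
# Minimal sketch (not the paper's full procedure): fit a GARCH(1,1) and
# flag the observation with the largest absolute standardized residual,
# which serves as the outlier candidate. The LR test that determines
# presence, timing, and type of the outlier is omitted.
import numpy as np
from arch import arch_model  # third-party GARCH package, assumed installed

def flag_outlier_candidate(returns):
    """Return the index and value of the largest standardized residual."""
    res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
    std_resid = res.resid / res.conditional_volatility
    t = int(np.argmax(np.abs(std_resid)))
    return t, float(std_resid[t])

# Example with simulated data containing a planted level outlier:
rng = np.random.default_rng(0)
r = rng.standard_normal(500)
r[250] += 8.0
print(flag_outlier_candidate(r))  # should point near observation 250
```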
Statistical Algorithms for Models in State Space Using SsfPack 2.2
This paper discusses and documents the algorithms of SsfPack 2.2. SsfPack is a suite of C routines for carrying out computations involving the statistical analysis of univariate and multivariate models in state space form. The emphasis is on documenting the link we have made to the Ox computing environment. SsfPack allows for a full range of different state space forms: from a simple time-invariant model to a complicated time-varying model. Functions are provided which put standard models, such as ARIMA and cubic spline models, in state space form. Basic functions are available for filtering, moment smoothing and simulation smoothing. Ready-to-use functions are provided for standard tasks such as likelihood evaluation, forecasting and signal extraction. We show that SsfPack can be easily used for implementing, fitting and analysing Gaussian models relevant to many areas of econometrics and statistics. Some Gaussian illustrations are given. Keywords: Kalman filtering and smoothing; Markov chain Monte Carlo; Ox; simulation smoother; state space.
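SsfPack itself is a C library linked to Ox, so the sketch below is not its API; it is a language-neutral illustration, written in Python, of the basic Kalman filtering recursion that such packages implement, shown for the simplest state space form, the Gaussian local level model.

```python
# Minimal sketch of the Kalman filter for a local level model:
#   y_t = mu_t + eps_t,   mu_{t+1} = mu_t + xi_t,
# with eps_t ~ N(0, sigma2_eps) and xi_t ~ N(0, sigma2_xi).
# Illustrative Python only; SsfPack exposes this functionality
# through its own C/Ox routines.
import numpy as np

def kalman_filter_local_level(y, sigma2_eps, sigma2_xi, a0=0.0, p0=1e7):
    """Return filtered state means and the Gaussian log-likelihood."""
    a, p = a0, p0                      # state mean and variance (diffuse-ish prior)
    filtered, loglik = [], 0.0
    for yt in y:
        f = p + sigma2_eps             # prediction-error variance
        v = yt - a                     # one-step-ahead prediction error
        loglik += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p / f                      # Kalman gain
        a += k * v                     # updated state mean
        p = p * (1 - k) + sigma2_xi    # variance of next state prediction
        filtered.append(a)
    return np.array(filtered), loglik
```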
Position Bias in Best-Worst Scaling Surveys: A Case Study on Trust in Institutions
This paper investigates the effect of items' physical position in the best-worst scaling technique. Although the best-worst scaling technique has been widely used in many fields, the literature has largely overlooked consumers' adoption of processing strategies while making their best-worst choices. We examine this issue in the context of consumers' trust in institutions to provide information about a new food technology, nanotechnology, and its use in food processing. Our results show that approximately half of the consumers used position as a schematic cue when making choices. We find the position bias was particularly strong when consumers chose their most trusted institution compared with their least trusted institution. In light of our findings, we recommend that researchers in the field be aware of the possibility of position bias when designing best-worst scaling surveys. We also encourage researchers who have already collected best-worst data to investigate whether their data show such heuristics.
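One simple way to screen for the position effect described here (a sketch, not the paper's analysis): if respondents ignore position and the design rotates items over slots, the physical slot of the chosen "best" item should be roughly uniform across tasks, which can be checked with a chi-square test. The data layout below is hypothetical.

```python
# Screen for position bias in best-worst data: test whether the physical
# slot of the chosen item departs from uniformity across choice tasks.
# Assumes the design rotates items over positions, so under "no position
# bias" each slot is equally likely to hold the chosen item.
import numpy as np
from scipy.stats import chisquare

def position_bias_test(best_positions, n_slots):
    """Chi-square test of uniformity for the chosen item's physical slot."""
    counts = np.bincount(best_positions, minlength=n_slots)
    stat, pval = chisquare(counts)  # H0: all slots equally likely
    return counts, stat, pval

# Example: 4-item tasks where the top slot is chosen disproportionately often.
rng = np.random.default_rng(1)
picks = rng.choice(4, size=1000, p=[0.40, 0.20, 0.20, 0.20])
print(position_bias_test(picks, 4))
```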
Making use of respondent reported processing information to understand attribute importance: a latent variable scaling approach
In recent years we have seen an explosion of research seeking to understand the role that rules and heuristics might play in improving the predictive capability of discrete choice models, as well as delivering willingness to pay estimates for specific attributes that may (and often do) differ significantly from estimates based on a model specification that assumes all attributes are relevant. This paper adds to that literature in one important way: it explicitly recognises the endogeneity issues raised by typical attribute non-attendance treatments and conditions attribute parameters on underlying unobserved attribute importance ratings. We develop a hybrid model system involving attribute processing and outcome choice models in which latent variables are introduced as explanatory variables in both parts of the model, explaining the answers to attribute processing questions and explaining heterogeneity in marginal sensitivities in the choice model. The resulting empirical model explains how lower latent attribute importance leads to a higher probability of indicating that an attribute was ignored or ranked as less important, as well as raising the probability of a smaller value for the associated marginal utility coefficient in the choice model. The model does so by treating the answers to information processing questions as dependent rather than explanatory variables, hence avoiding the potential risks of endogeneity bias and measurement error.
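To make the hybrid structure concrete, here is a heavily simplified Python sketch (not the authors' specification): a single latent importance variable enters both a measurement equation for the stated-attendance answer and a scaling of the attribute's coefficient in a logit choice model, with the latent variable integrated out by simulation. All names, the one-attribute setup, and the one-task-per-respondent layout are simplifying assumptions.

```python
# Sketch of a hybrid choice model with one latent attribute importance LV:
#  (i) measurement: Pr(stated "attended") rises with LV (binary logit), and
# (ii) choice: the attribute coefficient beta * exp(lam * LV) grows with LV.
# LV is integrated out by simulation (simulated maximum likelihood).
import numpy as np

def sim_loglik(theta, X, choice, stated_attend, n_draws=200, seed=0):
    """Simulated log-likelihood; X[n] holds attribute levels per alternative."""
    beta, lam, alpha = theta                       # base utility, LV loading, intercept
    rng = np.random.default_rng(seed)
    lv = rng.standard_normal(n_draws)              # LV ~ N(0, 1) draws
    ll = 0.0
    for n in range(len(choice)):                   # one task per respondent here
        b = beta * np.exp(lam * lv)                # coefficient scaled by importance
        v = np.outer(b, X[n])                      # utilities, (n_draws, n_alts)
        p_choice = np.exp(v[:, choice[n]]) / np.exp(v).sum(axis=1)
        p_att = 1.0 / (1.0 + np.exp(-(alpha + lv)))
        p_meas = p_att if stated_attend[n] else 1.0 - p_att
        ll += np.log(np.mean(p_choice * p_meas))   # average over LV draws
    return ll
```

Treating the stated-attendance answer as an outcome of LV, jointly with the choice, is what lets the model avoid conditioning utilities directly on a self-reported (and error-prone) explanatory variable.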
Econometric computing
SIGLE. Available from British Library Document Supply Centre (BLDSC), DSC:D187488. United Kingdom.
- …