Using Simulation-based Inference with Panel Data in Health Economics
Panel datasets provide a rich source of information for health economists, offering the scope to control for individual heterogeneity and to model the dynamics of individual behaviour. However, the qualitative or categorical measures of outcome often used in health economics create special problems for estimating econometric models. Allowing a flexible specification of the autocorrelation induced by individual heterogeneity leads to models involving higher-order integrals that cannot be handled by conventional numerical methods. The dramatic growth in computing power over recent years has been accompanied by the development of simulation-based estimators that solve this problem. This review uses binary choice models to show what can be done with conventional methods and how the range of models can be expanded by using simulation methods. Practical applications of the methods are illustrated using data on health from the British Household Panel Survey (BHPS).
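The simulation approach the review describes can be illustrated with a simulated maximum likelihood estimator for a random-effects binary probit, in which the individual effect is integrated out by averaging the conditional panel likelihood over random draws. This is a minimal sketch on synthetic data, not the BHPS application; all names and parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def sml_loglik(beta, sigma_u, y, X, n_draws=200):
    """Simulated log-likelihood for a random-effects probit.

    The individual effect u_i is integrated out by Monte Carlo:
    each of the n_draws draws gives a panel likelihood conditional
    on u_i, and the contributions are averaged before taking logs.
    """
    n, T, k = X.shape
    u = sigma_u * rng.standard_normal((n, n_draws))   # draws of u_i
    xb = X @ beta                                     # (n, T) linear index
    idx = xb[:, :, None] + u[:, None, :]              # (n, T, n_draws)
    p = norm.cdf(idx)
    lik_t = np.where(y[:, :, None] == 1, p, 1.0 - p)  # P(y_it | u_i)
    lik_i = lik_t.prod(axis=1).mean(axis=1)           # average over draws
    return np.log(lik_i + 1e-300).sum()

# Tiny synthetic panel: 50 individuals, 4 waves, one regressor.
n, T = 50, 4
X = rng.standard_normal((n, T, 1))
u_true = 0.5 * rng.standard_normal(n)
y = ((X[:, :, 0] + u_true[:, None] + rng.standard_normal((n, T))) > 0).astype(int)
ll = sml_loglik(np.array([1.0]), 0.5, y, X)
```

In practice the simulated likelihood would be passed to a numerical optimiser over `beta` and `sigma_u`; with common random numbers held fixed across evaluations, the objective is smooth enough for standard quasi-Newton methods.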
Methods for generating variates from probability distributions
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Diverse probabilistic results are used in the design of random univariate generators. General methods based on these are classified and relevant theoretical properties derived. This is followed by a comparative review of specific algorithms currently available for continuous and discrete univariate distributions. A need for a Zeta generator is established, and two new methods, based respectively on inversion and on rejection with a truncated Pareto envelope, are developed and compared. The paucity of algorithms for multivariate generation motivates a classification of general methods and, in particular, a new method involving envelope rejection with a novel target distribution is proposed. A new method for generating first passage times in a Wiener process is constructed. This is based on the ratio of two random numbers, and its performance is compared to an existing method for generating inverse Gaussian variates. New "hybrid" algorithms for the Poisson and Negative Binomial distributions are constructed, using an Alias implementation together with a Geometric tail procedure. These are shown to be robust, exact and fast for a wide range of parameter values. Significant modifications are made to Atkinson's Poisson generator (PA), and the resulting algorithm is shown to be complementary to the hybrid method. A new method for Von Mises generation via a comparison of random numbers follows, and its performance is compared to that of Best and Fisher's Wrapped Cauchy rejection method. Finally, new methods are proposed for sampling from distribution tails, using optimally designed Exponential envelopes. Timings are given for Gamma and Normal tails, and in the latter case the performance is shown to be significantly better than Marsaglia's tail generation procedure. Governors of Dundee College of Technology.
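The tail-sampling idea, rejection from an exponential envelope with an optimally chosen rate, can be sketched for the standard Normal tail. This is the generic textbook construction behind such envelopes, not the author's exact algorithm; the rate below is the standard optimal choice for this proposal.

```python
import math
import random

def normal_tail(a, rng=random.random):
    """Sample X ~ N(0,1) conditioned on X > a (a > 0) by rejection
    from a shifted exponential envelope.  The rate lam minimises the
    expected number of rejections for this proposal family."""
    lam = 0.5 * (a + math.sqrt(a * a + 4.0))   # optimal exponential rate
    while True:
        x = a - math.log(rng()) / lam          # Exp(lam) shifted to start at a
        # Accept with probability exp(-(x - lam)^2 / 2), which is
        # proportional to target density / envelope density on (a, inf).
        if rng() <= math.exp(-0.5 * (x - lam) ** 2):
            return x

random.seed(1)
samples = [normal_tail(3.0) for _ in range(2000)]
```

Every accepted value lies beyond the threshold by construction, and the sample mean should sit near the theoretical tail mean phi(a)/(1 - Phi(a)), which is about 3.28 for a = 3.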
An Error in the Kinderman-Ramage Method and How to Fix It
An error in the Gaussian random variate generator by Kinderman and Ramage is described that results in the generation of random variates with an incorrect distribution. An additional statement that corrects the original algorithm is given. Series: Preprint Series / Department of Applied Statistics and Data Processing.
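For comparison purposes when checking a Gaussian generator, a simple exact alternative is useful. The sketch below is Marsaglia's polar method, shown only as a correct baseline; it is not the Kinderman-Ramage algorithm, nor the correction described in the paper.

```python
import math
import random

def polar_normal(rng=random.random):
    """Marsaglia's polar method: an exact N(0,1) generator.
    A point is drawn uniformly in the unit disc (excluding the
    origin) and transformed; one of the resulting pair is returned."""
    while True:
        u = 2.0 * rng() - 1.0
        v = 2.0 * rng() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:
            return u * math.sqrt(-2.0 * math.log(s) / s)

random.seed(2)
xs = [polar_normal() for _ in range(5000)]
```

A large sample from a correct generator should have mean near 0 and variance near 1, which gives a quick sanity check against a suspect implementation.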
Robust estimation for the mean of skewed distributions
Common estimators of the mean g(θ) = ∫ x dF_θ(x) in skewed distribution models may be sensitive to contamination by a few large observations. It is then desirable to consider robust estimators g(θ̂). The approach of Hampel (1968), who defines an estimator θ̂ of the parameter vector θ to be optimal B-robust if it is asymptotically efficient subject to a given upper bound on the norm of its influence function, is used to construct optimal robust estimators of g(θ). An estimator g(θ̂) is defined to be functional invariant when it preserves the robustness and optimality properties of a robust estimator θ̂. The invariance of the optimal B-robust estimators is used to construct optimal B-robust estimators for the mean of multi-parameter distributions. An algorithm for computing the optimal B-robust score function for any distribution is developed. An optimal B-robust L-estimator for the location-scale family is also constructed. Asymptotic relative efficiencies of the optimal B-robust estimators for the mean of the lognormal and Weibull distributions are computed and compared with those for several other robust and nonrobust estimators. Type II censoring is considered as a method to achieve B-robustness. The optimal proportion of trimming is defined as the proportion which produces the smallest asymptotic MSE in the class of censored-data estimators subject to some upper bound on the influence function. Several common estimators for censored data, including the maximum likelihood, a modified maximum likelihood [Tiku et al., 1986] and an L-estimator [Chernoff et al., 1967], are shown to have larger MSE than the optimal B-robust estimator with the same upper bound on the influence function. The optimal proportions of trimming are computed for the MLE and L-estimator of the mean of the lognormal and Weibull distributions.
A simulation study of nine estimators for the mean of a lognormal distribution shows that the optimal B-robust estimator has the smallest MSE for the sample sizes and contamination cases considered. All B-robust estimators considered are found to be better than the nonrobust ones with regard to both MSE and bias.
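The idea of bounding the influence function can be illustrated with the classical Huber M-estimator of location, whose score function is truncated at ±c so that no single observation can pull the estimate arbitrarily far. This is a generic illustration of bounded-influence estimation, not the thesis's optimal B-robust estimator for the lognormal or Weibull mean.

```python
import numpy as np

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimator of location with bounded influence:
    psi(r) = clip(r, -c, c), solved by iterative reweighting.
    Scale is fixed at the normalised MAD for simplicity."""
    x = np.asarray(x, dtype=float)
    s = np.median(np.abs(x - np.median(x))) / 0.6745
    if s == 0:
        s = 1.0
    mu = np.median(x)
    for _ in range(max_iter):
        r = (x - mu) / s
        # Observations inside [-c, c] get full weight; outliers are
        # down-weighted in proportion to their distance.
        w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(3)
clean = rng.normal(10.0, 1.0, 200)
contaminated = np.concatenate([clean, [1000.0, 1200.0]])
```

On the contaminated sample the arithmetic mean is dragged far above 10, while the bounded-influence estimate stays close to the centre of the clean data, which is exactly the trade-off the abstract's MSE comparisons quantify.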
Stochastic models of exchange-rate dynamics and their implications for the pricing of foreign-currency options.
The aim of this study is to find a suitable approach for modelling exchange-rate dynamics econometrically. In the first chapter, I examine the empirical properties of four exchange rates. The data used are daily, weekly, monthly and quarterly exchange rates of the German mark, the British pound, the Swiss franc, and the Japanese yen against the U.S. dollar from July 1974 to December 1987. I study the moment properties and time-series properties of these exchange rates and find leptokurtosis and heteroskedasticity in the daily and weekly data. On the other hand, the hypotheses of no serial correlation, of a constant mean of zero, and of a symmetric distribution cannot be rejected. The fact that the daily and weekly data are not strictly equi-distant does not have a strong impact on these empirical regularities. In chapter 2, static distributional models (mixture of distributions, compound Poisson process, Student distribution, and stable Paretian distributions) are estimated. Chi-squared goodness-of-fit tests reject these models. Direct inferential evidence against stable distributions is found by estimating the characteristic exponent by FFT and by estimating the exponent of regularly varying tails. In chapter 3, dynamic models of heteroskedasticity (ARCH and Markov-switching models) are introduced. Quite satisfactory results are obtained for the EGARCH model and the Markov-switching model, whereas the ARCH, GARCH and GARCH-t models are in conflict with stationarity conditions for the variance. Chapter 4 compares the static and dynamic models with respect to goodness-of-fit and forecasting performance. With respect to goodness-of-fit criteria, the dynamic models appear to be superior to the static models. Furthermore, the dynamic models outperform a naive model of constant variance with respect to unbiasedness but not with respect to precision. Chapter 5 studies the option-price implications of the static and dynamic models.
The spot-rate effects of the static models are rather small and they disappear, as expected, under temporal aggregation. GARCH and EGARCH models, on the other hand, imply higher option prices compared to Black-Scholes option prices along the whole spectrum of moneyness. Only the Markov-switching model is compatible with observed smile effects.
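The variance-stationarity condition the abstract refers to can be made concrete with a GARCH(1,1) simulation. The sketch below uses illustrative parameter values, not estimates from the study's exchange-rate data: covariance stationarity of the variance requires alpha + beta < 1, in which case the unconditional variance is omega / (1 - alpha - beta).

```python
import numpy as np

def simulate_garch11(omega, alpha, beta, n, seed=0):
    """Simulate a GARCH(1,1) return series r_t = sqrt(h_t) * z_t with
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}.  The variance
    process is started at its unconditional level, which exists only
    when alpha + beta < 1."""
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    r = np.empty(n)
    h[0] = omega / (1.0 - alpha - beta)
    for t in range(n):
        if t > 0:
            h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
        r[t] = np.sqrt(h[t]) * rng.standard_normal()
    return r, h

# alpha + beta = 0.9 < 1, so the unconditional variance is 0.1/0.1 = 1.
r, h = simulate_garch11(omega=0.1, alpha=0.1, beta=0.8, n=100_000)
```

When alpha + beta approaches or exceeds 1, as the abstract reports for some estimated ARCH/GARCH specifications, the unconditional variance diverges and the simulated sample variance no longer settles near a finite limit.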
Testing for location after transformation to normality
In the problem of testing the median using a random sample from a certain distribution, and if no other parametric family is suggested, the t-test is known to be the optimal procedure when this distribution is normal. If the sample appears to be non-normal, one has the choice either to consider a non-parametric approach or to try to correct for non-normality before applying the t-test.

In this thesis we investigate the effect of applying certain power transformations as an action to correct for non-normality before applying the t-test. We also investigate the effect of applying a power transformation and then trimming a certain proportion from each tail of the data as a double action to correct for non-normality. This problem was first considered by Doksum and Wong (1983), who apply the Box-Cox power transformations to positive, right-skewed data when testing for the equality of distributions of two independent samples.

In the present work we provide results for the one-sample case using two alternatives to the Box-Cox power family which are applicable to all data sets. Whenever it can be assumed that the data is a random sample from a symmetric distribution with heavy tails, it is shown that the John-Draper family of modulus power transformations, with the transformation parameter being positive and smaller than 1, is appropriate to correct for non-normality, and the t-test based on the transformed data is asymptotically more efficient and has better power properties than the t-test based on the data in its original scale. When the data is thought to have a skewed distribution and can assume negative as well as positive values, a new family of transformations, referred to as the two-domain family, is introduced. It is shown that the t-test based on the data after applying this new transformation is also asymptotically more efficient and has better power properties than the t-test in the original scale. A simulation study shows that trimming a certain proportion on each tail of the data transformed by one of the above two transformations and then applying the t-test to the trimmed samples yields a considerable gain in power compared to the t-test in the original scale.
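The transform-then-test workflow can be sketched with the John-Draper modulus family, which applies a power transformation to the magnitude of each observation while preserving its sign, so it handles data of either sign. The heavy-tailed example data and the choice of transformation parameter below are illustrative, not the thesis's simulation design.

```python
import numpy as np
from scipy import stats

def john_draper(x, lam):
    """John-Draper modulus power transformation:
    sign(x) * (((|x| + 1)^lam - 1) / lam)   for lam != 0,
    sign(x) * log(|x| + 1)                  for lam == 0.
    For 0 < lam < 1 it shrinks heavy tails toward normality."""
    x = np.asarray(x, dtype=float)
    if lam == 0:
        return np.sign(x) * np.log(np.abs(x) + 1.0)
    return np.sign(x) * ((np.abs(x) + 1.0) ** lam - 1.0) / lam

rng = np.random.default_rng(4)
x = rng.standard_t(df=3, size=500)   # heavy-tailed, symmetric about 0

# t-test on the original and the transformed scale.
t_raw = stats.ttest_1samp(x, 0.0)
t_transformed = stats.ttest_1samp(john_draper(x, 0.5), 0.0)
```

Because the transformation is monotone and odd, the null of a zero median is preserved on the transformed scale, while the tails are pulled in, which is the mechanism behind the efficiency gains the abstract reports.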
Comparative philology, French music, and the composition of Indo-Europeanism from Fétis to Messiaen.
This thesis argues that the disciplines of comparative philology and linguistics exerted significant force on the priorities and techniques of musicologists and composers in fin-de-siècle France, and examines how ideologies of Indo-Europeanism (or aryanism), concomitant with comparative philology, generated efforts to 'sound out' Indo-Europeanism in music. Using a relational approach, dense interdisciplinary networks of philologists/linguists, musicologists, and composers are reconstructed to demonstrate how musicological appropriations of linguistic research reverberated in musical composition right through the 1950s. These contexts reveal how wide-ranging repertories emerged from ethnic-nationalist projects of reclaiming Indo-European 'patrimony'.
The thesis is in two Parts. Part I, 'Philologie comparée, musicologie, and Indo-European hypotheses', is organised around four overlapping intellectual networks comprising comparative philologists and musicologists. Francophone musicologists' efforts to model their discipline on that of comparative philology are surveyed. Scholars discussed include Fétis, Gevaert, Bourgault-Ducoudray, Burnouf, Meillet, Aubry, Emmanuel, and Grosset. Arguments concerning the place of music between concepts of 'language' and 'race' are retraced, with special attention paid to musicologists' efforts to pinpoint quasi-morphological 'Indo-European' musical structures, in particular 'modes' and 'metres', construed as 'essential' and 'ancestral'.
Part II, 'Composing with philology: performances of authenticity and innovation', describes how the intellectual project elaborated in Part I infiltrated compositional practices. Close musical and paratextual readings show how composers legitimated experimentalism through 'performances' of philological 'authenticity'. Over time, musical parameters such as modes and metres are abstracted and assimilated into compositional lexicons. Composers discussed include Bourgault-Ducoudray, Saint-Saëns, Séverac, Roussel, and Emmanuel. This root system flourishes in the music of Olivier Messiaen, whose rhythmic technique is revisited in light of manuscript materials. From his borrowings of early Indian metres (deśītālas) through his hyperformalist 'Mode de valeurs et d'intensités', Messiaen's rhythmic style is radically reinterpreted as a logical extension of francophone musicology's disciplinary and epistemological inheritance from comparative philology. Gates Cambridge Scholarship.
Empirically derived methods for analysing simulation model output.
Often in simulation, procedures are not proposed unless they are supported by a strong mathematical background. As will be shown in this thesis, this approach does not always give good results when the procedures are applied to complex simulation models, especially in output analysis. For this reason we have used an empirical rather than a theoretical approach for dealing with some of the output problems of simulation. The research carried out has dealt mainly with queuing networks. The first problem we address is that of the identification of possible unstable queues. We also deal with the problem of the identification of queues that may require a long simulation run length to reach the steady state. The method of replications is used for the estimation of terminating and sometimes of steady-state parameters. In this thesis we study the relationship that exists between the number of replications used in the simulation and the simulation run length required for the parameter being estimated to reach the steady state. We also study the influence of the random number streams on the values of the mean estimates as a function of the number of replications. One of the most commonly discussed problems related to the estimation of steady-state parameters is that of the initialisation bias problem. Two methods are proposed in this thesis to deal with this problem. In one of the methods we propose an effective procedure that can be used for the estimation of the number of initial observations that are to be deleted. The second method is based on a basic forecasting technique called weighted averages and does not require the elimination of any of the initial observations. Another topic that has been studied in this thesis is the batch means method, which is employed for the estimation of steady-state parameters based on a single but very long simulation run.
We show how a new sampling method called Descriptive Sampling is well suited for the estimation of steady-state parameters with the batch means method. We also show how some of the procedures proposed in the literature for use in the batch means method do not work well in simulation models for which no analytical answer exists. The thesis demonstrates that empirically derived methods can be practically effective and could form the basis of future theoretical research.
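The batch means method itself is compact enough to sketch: one long, autocorrelated output series is split into contiguous batches, and the batch averages are treated as approximately independent observations from which a mean and standard error are computed. The AR(1) series below is only a stand-in for correlated simulation output; it is not Descriptive Sampling or any specific model from the thesis.

```python
import numpy as np

def batch_means(x, n_batches=20):
    """Batch-means estimate of the steady-state mean and its standard
    error from a single long run.  With batches much longer than the
    correlation length, the batch averages are nearly uncorrelated."""
    x = np.asarray(x, dtype=float)
    m = len(x) // n_batches                     # observations per batch
    means = x[: m * n_batches].reshape(n_batches, m).mean(axis=1)
    grand = means.mean()
    se = means.std(ddof=1) / np.sqrt(n_batches)
    return grand, se

# Correlated output: an AR(1) process with mean 0 and phi = 0.8.
rng = np.random.default_rng(5)
n, phi = 100_000, 0.8
e = rng.standard_normal(n)
y = np.empty(n)
y[0] = e[0]
for t in range(1, n):
    y[t] = phi * y[t - 1] + e[t]

mean_hat, se_hat = batch_means(y)
```

The naive standard error that ignores autocorrelation would be far too small here; the batch-means standard error accounts for the serial dependence as long as the batch length dominates the correlation length, which is the trade-off the batch-size procedures discussed in the thesis try to balance.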