Econometrics: A bird's eye view
As a unified discipline, econometrics is still relatively young and has been transforming and expanding very rapidly over the past few decades. Major advances have taken place in the analysis of cross-sectional data by means of semi-parametric and non-parametric techniques. Heterogeneity of economic relations across individuals, firms and industries is increasingly acknowledged, and attempts have been made to take it into account either by integrating out its effects or by modeling the sources of heterogeneity when suitable panel data exist. The counterfactual considerations that underlie policy analysis and treatment evaluation have been given a more satisfactory foundation. New time series econometric techniques have been developed and employed extensively in the areas of macroeconometrics and finance. Non-linear econometric techniques are used increasingly in the analysis of cross-section and time series observations. Applications of Bayesian techniques to econometric problems have been given new impetus, largely thanks to advances in computer power and computational techniques. The use of Bayesian techniques has in turn provided investigators with a unifying framework in which the tasks of forecasting, decision making, model evaluation and learning can be considered as parts of the same interactive and iterative process, thus paving the way for establishing the foundation of "real time econometrics". This paper attempts to provide an overview of some of these developments.
Parametric and semi-parametric modelling of vacation expenditures
Consumer Choice; Recreation
Joint modeling of longitudinal drug using pattern and time to first relapse in cocaine dependence treatment data
An important endpoint variable in a cocaine rehabilitation study is the time to first relapse of a patient after the treatment. We propose a joint modeling approach based on functional data analysis to study the relationship between the baseline longitudinal cocaine-use pattern and the interval-censored time to first relapse. For the baseline cocaine-use pattern, we consider both self-reported cocaine-use amount trajectories and dichotomized use trajectories. Variations within the generalized longitudinal trajectories are modeled through a latent Gaussian process, which is characterized by a few leading functional principal components. The association between the baseline longitudinal trajectories and the time to first relapse is built upon the latent principal component scores. The mean and the eigenfunctions of the latent Gaussian process, as well as the hazard function of time to first relapse, are modeled nonparametrically using penalized splines, and the parameters in the joint model are estimated by a Monte Carlo EM algorithm based on Metropolis-Hastings steps. An Akaike information criterion (AIC) based on effective degrees of freedom is proposed to choose the tuning parameters, and a modified empirical information is proposed to estimate the variance-covariance matrix of the estimators.
Comment: Published at http://dx.doi.org/10.1214/15-AOAS852 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
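The functional principal component step summarized in this abstract can be illustrated with a minimal empirical FPCA on densely observed trajectories. This is an illustrative sketch only, not the authors' penalized-spline MCEM implementation; all names and the toy data are hypothetical:

```python
import numpy as np

def empirical_fpca(curves, n_components=2):
    """Empirical functional PCA for densely observed trajectories.

    curves: (n_subjects, n_timepoints) array of use trajectories.
    Returns the mean curve, the leading eigenfunctions of the sample
    covariance, and each subject's principal component scores.
    """
    mean = curves.mean(axis=0)
    centered = curves - mean
    # Sample covariance over the common time grid
    cov = centered.T @ centered / (len(curves) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    eigenfunctions = eigvecs[:, order]              # (n_timepoints, n_components)
    scores = centered @ eigenfunctions              # (n_subjects, n_components)
    return mean, eigenfunctions, scores

# Toy trajectories: noisy, randomly scaled sinusoids on a shared grid
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
amp = rng.normal(1.0, 0.3, size=(40, 1))
curves = amp * np.sin(2 * np.pi * t) + rng.normal(0, 0.1, (40, 50))
mean, phi, xi = empirical_fpca(curves, n_components=2)
# In the joint model, low-dimensional scores like xi[:, 0] would enter
# the hazard model for the interval-censored time to first relapse.
```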
Exact and Asymptotic Weighted Logrank Tests for Interval Censored Data: The interval R Package
For right-censored data perhaps the most commonly used tests are weighted logrank tests, such as the logrank and Wilcoxon-type tests. In this paper we review several generalizations of those weighted logrank tests to interval-censored data and present an R package, interval, to implement many of them. The interval package depends on the perm package, also presented here, which performs exact and asymptotic linear permutation tests. The perm package performs many of the tests included in the already available coin package, and provides an independent validation of coin. We review analysis methods for interval-censored data, and we describe and show how to use the interval and perm packages.
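For the right-censored baseline case the abstract starts from, the unweighted two-sample logrank statistic can be sketched from scratch. This is a minimal illustrative implementation of the standard right-censored test, not the interval package's interval-censored generalizations; the function name and toy data are hypothetical:

```python
import numpy as np

def logrank_statistic(time1, event1, time2, event2):
    """Unweighted two-sample logrank statistic for right-censored data.

    time*: observed times; event*: 1 = event observed, 0 = right-censored.
    Returns the Z statistic, approximately standard normal under the null.
    """
    times = np.concatenate([time1, time2])
    events = np.concatenate([event1, event2])
    group = np.concatenate([np.zeros(len(time1)), np.ones(len(time2))])
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(times[events == 1]):       # distinct event times
        at_risk = times >= t
        n = at_risk.sum()                         # total at risk at t
        n1 = (at_risk & (group == 0)).sum()       # group-1 subjects at risk
        d = ((times == t) & (events == 1)).sum()  # events at t, both groups
        d1 = ((times == t) & (events == 1) & (group == 0)).sum()
        o_minus_e += d1 - d * n1 / n              # observed minus expected
        if n > 1:                                 # hypergeometric variance
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e / np.sqrt(var)

# Identical samples give Z = 0; well-separated samples give a large |Z|.
z_null = logrank_statistic([1, 2, 3, 4], [1, 1, 0, 1],
                           [1, 2, 3, 4], [1, 1, 0, 1])
z_sep = logrank_statistic([1, 2, 3], [1, 1, 1], [10, 11, 12], [1, 1, 1])
```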
Most Likely Transformations
We propose and study properties of maximum likelihood estimators in the class of conditional transformation models. Based on a suitable explicit parameterisation of the unconditional or conditional transformation function, we establish a cascade of increasingly complex transformation models that can be estimated, compared and analysed in the maximum likelihood framework. Models for the unconditional or conditional distribution function of any univariate response variable can be set up and estimated in the same theoretical and computational framework simply by choosing an appropriate transformation function and parameterisation thereof. The ability to evaluate the distribution function directly allows us to estimate models based on the exact likelihood, especially in the presence of random censoring or truncation. For discrete and continuous responses, we establish the asymptotic normality of the proposed estimators. A reference software implementation of maximum likelihood-based estimation for conditional transformation models allowing the same flexibility as the theory developed here was employed to illustrate the wide range of possible applications.
Comment: Accepted for publication by the Scandinavian Journal of Statistics, 2017-06-1
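The abstract's point that direct evaluation of the distribution function yields the exact likelihood under censoring can be illustrated for the simplest probit-type transformation model, F_Y(y) = Phi(h(y)) with a linear transformation h(y) = a*y + b. This is an illustrative sketch under those simplifying assumptions, not the reference implementation the abstract mentions:

```python
import math

def std_norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def log_likelihood(obs, a, b):
    """Exact log-likelihood for F_Y(y) = Phi(h(y)), h(y) = a*y + b, a > 0.

    obs is a list of (kind, value) pairs:
      'exact'    -> value y, density contribution phi(h(y)) * h'(y)
      'right'    -> value c, contribution 1 - Phi(h(c))
      'interval' -> value (l, u), contribution Phi(h(u)) - Phi(h(l))
    Censored observations contribute differences of the distribution
    function directly, so no approximation of the likelihood is needed.
    """
    h = lambda y: a * y + b
    ll = 0.0
    for kind, v in obs:
        if kind == 'exact':
            z = h(v)
            ll += -0.5 * z * z - 0.5 * math.log(2 * math.pi) + math.log(a)
        elif kind == 'right':
            ll += math.log(1.0 - std_norm_cdf(h(v)))
        elif kind == 'interval':
            l, u = v
            ll += math.log(std_norm_cdf(h(u)) - std_norm_cdf(h(l)))
    return ll
```

Maximising this function over (a, b), e.g. by a grid search or a general-purpose optimiser, gives the maximum likelihood estimator; richer parameterisations of h replace the linear form without changing the structure of the likelihood.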
Managing uncertainty: financial, actuarial and statistical modelling.
Present value; Value; Actuarial