
    Monte Carlo modified profile likelihood in models for clustered data

    The main focus of analysts who deal with clustered data is usually not on the clustering variables, and hence the group-specific parameters are treated as nuisance parameters. If a fixed-effects formulation is preferred and the total number of clusters is large relative to the single-group sizes, classical frequentist techniques relying on the profile likelihood are often misleading. The use of alternative tools, such as modifications of the profile likelihood or integrated likelihoods, for making accurate inference on a parameter of interest can be complicated by the presence of nonstandard modelling and/or sampling assumptions. We show here how to employ Monte Carlo simulation to approximate the modified profile likelihood in some of these unconventional frameworks. The proposed solution is widely applicable and is shown to retain the usual properties of the modified profile likelihood. The approach is examined in two instances particularly relevant in applications, namely missing-data models and survival models with unspecified censoring distribution. The effectiveness of the proposed solution is validated via simulation studies and two clinical trial applications.
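    The failure of the ordinary profile likelihood that motivates this work is easy to reproduce. The sketch below (our illustration, not the authors' Monte Carlo procedure) uses the classical Neyman-Scott setup: many clusters of size two, each with its own mean, where profiling out the cluster means makes the ML variance estimate converge to half the true value, while a degrees-of-freedom correction fixes it.

```python
import numpy as np

rng = np.random.default_rng(0)
G, m, sigma = 2000, 2, 1.0                    # many clusters, tiny cluster size
# cluster-specific means (nuisance parameters), one per cluster
y = rng.normal(loc=rng.normal(size=(G, 1)), scale=sigma, size=(G, m))

# Profile (ML) estimate of sigma^2 after profiling out the G cluster means:
resid = y - y.mean(axis=1, keepdims=True)
sigma2_profile = (resid ** 2).sum() / (G * m)      # converges to sigma^2 * (m-1)/m = 0.5

# A degrees-of-freedom (REML-type) correction removes the bias:
sigma2_corrected = (resid ** 2).sum() / (G * (m - 1))
```

    With clusters of size two the profile estimate is inconsistent no matter how many clusters are added, which is exactly the regime where modified profile likelihoods are needed.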

    Set identification with Tobin regressors

    We give semiparametric identification and estimation results for econometric models with a regressor that is endogenous, bound censored, and selected, called a Tobin regressor. First, we show that the true parameter value is set identified and characterize the identification sets. Second, we propose novel estimation and inference methods for this true value. These estimation and inference methods are of independent interest and apply to any problem where the true parameter value is point identified conditional on some nuisance parameter values that are set identified. By fixing the nuisance parameter value in some suitable region, we can proceed with regular point and interval estimation. Then, we take the union over nuisance parameter values of the point and interval estimates to form the final set estimates and confidence set estimates. The initial point or interval estimates can be frequentist or Bayesian. The final set estimates are set-consistent for the true parameter value, and confidence set estimates have frequentist validity in the sense of covering this value with at least a prespecified probability in large samples. We apply our identification, estimation, and inference procedures to study the effects of changes in housing wealth on household consumption. Our set estimates fall in plausible ranges, significantly above low OLS estimates and below high IV estimates that do not account for the Tobin regressor structure.
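    The union-over-nuisance-values construction described above can be sketched generically. In this illustrative snippet (all function forms and numbers are hypothetical, not from the paper), the conditional interval for each fixed nuisance value is a standard Wald interval, and the final confidence set is the union over a grid of nuisance values.

```python
import numpy as np

def union_confidence_set(theta_hat, se, gamma_grid, z=1.96):
    """Union-over-nuisance confidence set: for each nuisance value gamma,
    form the conditional interval theta_hat(gamma) +/- z * se(gamma),
    then take the union over the grid. Here the union is reported as its
    convex hull, assuming the conditional intervals overlap across the grid."""
    lows = np.array([theta_hat(g) - z * se(g) for g in gamma_grid])
    highs = np.array([theta_hat(g) + z * se(g) for g in gamma_grid])
    return lows.min(), highs.max()

# Hypothetical example: theta(gamma) = 1 + 0.5 * gamma, constant s.e. 0.1,
# with the nuisance parameter gamma set-identified in [0, 1].
grid = np.linspace(0.0, 1.0, 21)
lo, hi = union_confidence_set(lambda g: 1 + 0.5 * g, lambda g: 0.1, grid)
```

    The resulting set is wider than any single conditional interval, which is the price of the nuisance parameter being only set identified.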

    Designing incentives in local public utilities, an international comparison of the drinking water sector

    Direct and indirect standardization procedures aim at comparing differences in health or differences in health care expenditures between subgroups of the population after controlling for observable morbidity differences. There is a close analogy between this problem and the issue of risk adjustment in health insurance. We analyse this analogy within the theoretical framework proposed in the recent social choice literature on responsibility and compensation. Traditional methods of risk adjustment are analogous to indirect standardization. They are equivalent to the so-called conditional egalitarian mechanism in social choice. In general, they do not remove incentives for risk selection, even if the effect of non-morbidity variables is correctly taken into account. A method of risk adjustment based on direct standardization (as proposed for Ireland) does remove the incentives for risk selection, but at the cost of violating a neutrality condition stating that insurers should receive the same premium subsidy for all members of the same risk group. Direct standardization is equivalent to the egalitarian-equivalent (or proportional) mechanism in social choice. The conflict between removing incentives for risk selection and neutrality is unavoidable if the health expenditure function is not additively separable in the morbidity and efficiency variables.
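    The distinction between the two standardization schemes is easy to state in code. The snippet below (a toy illustration with made-up strata and rates, not the paper's data) computes a directly standardized expenditure rate, which weights the group's stratum-specific rates by a reference population, and an indirectly standardized ratio, which compares observed expenditure to what reference rates would predict for the group's own population.

```python
import numpy as np

# Hypothetical data: mean expenditure per person by morbidity stratum.
std_pop = np.array([600.0, 300.0, 100.0])   # reference population per stratum
ref_rate = np.array([1.0, 3.0, 8.0])        # reference mean expenditure per stratum

grp_pop = np.array([200.0, 150.0, 50.0])    # group's population per stratum
grp_rate = np.array([1.2, 2.5, 9.0])        # group's mean expenditure per stratum

# Direct standardization: the group's rates weighted by the *reference* population.
direct = (grp_rate * std_pop).sum() / std_pop.sum()

# Indirect standardization: reference rates applied to the *group's* population
# give an expected total; the ratio observed/expected is the standardized ratio.
observed = (grp_rate * grp_pop).sum()
expected = (ref_rate * grp_pop).sum()
indirect_ratio = observed / expected
```

    Risk adjustment in the conditional-egalitarian sense corresponds to the indirect calculation; the direct calculation fixes the morbidity distribution instead, which is what breaks the neutrality condition discussed above.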

    The Importance of Being Clustered: Uncluttering the Trends of Statistics from 1970 to 2015

    In this paper we retrace the recent history of statistics by analyzing all the papers published in five prestigious statistical journals since 1970, namely: Annals of Statistics, Biometrika, Journal of the American Statistical Association, Journal of the Royal Statistical Society, Series B, and Statistical Science. The aim is to construct a kind of "taxonomy" of the statistical papers by organizing and clustering them into main themes. In this sense, being identified in a cluster means being important enough to be uncluttered in the vast and interconnected world of statistical research. Since the main statistical research topics are naturally born, evolve, or die over time, we also develop a dynamic clustering strategy, in which a group in one time period is allowed to migrate or to merge into different groups in the following one. Results show that statistics is a very dynamic and evolving science, stimulated by the rise of new research questions and types of data.

    Designing incentives in local public utilities, an international comparison of the drinking water sector.

    Cross-country comparisons avoid the unsteady equilibrium in which regulators have to balance between economies of scale and a sufficient number of remaining comparable utilities. Using Data Envelopment Analysis (DEA), we compare the efficiency of the drinking water sector in the Netherlands, England and Wales, Australia, Portugal and Belgium. After introducing a procedure to measure the homogeneity of an industry, robust order-m partial frontiers are used to detect outlying observations. By applying bootstrapping algorithms, bias-corrected first- and second-stage results are estimated. Our results suggest that incentive regulation, in the sense of regulatory and benchmark incentive schemes, has a significant positive effect on efficiency. By suitably adapting the conditional efficiency measures to the bias-corrected estimates, we incorporate environmental variables directly into the efficiency estimates. We first equalize the social, physical and institutional environment and, second, deduce the effect of incentive schemes on utilities as they would work under similar conditions. The analysis demonstrates that in the absence of clear and structural incentives the average efficiency of the utilities falls in comparison with utilities which are encouraged by incentives.
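    For readers unfamiliar with DEA, an input-oriented efficiency score is the solution of a small linear program. Below is a minimal constant-returns-to-scale (CCR) sketch with hypothetical utilities; the paper's robust order-m frontiers, bootstrap bias corrections, and conditional measures are refinements not shown here.

```python
import numpy as np
from scipy.optimize import linprog

def dea_crs_input_efficiency(X, Y, o):
    """Input-oriented CRS (CCR) DEA efficiency of unit `o`.
    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Solves: min theta  s.t.  Y' lam >= Y[o],  X' lam <= theta * X[o],  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                   # minimize theta
    # output constraints, rewritten as  -Y' lam <= -Y[o]
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    # input constraints, rewritten as  X' lam - theta * X[o] <= 0
    A_in = np.hstack([-X[[o]].T, X.T])
    b_in = np.zeros(m)
    res = linprog(c, A_ub=np.vstack([A_out, A_in]),
                  b_ub=np.concatenate([b_out, b_in]),
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

# Hypothetical utilities: one input (cost), one output (water delivered).
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[2.0], [2.0], [3.0]])
scores = [dea_crs_input_efficiency(X, Y, o) for o in range(3)]
```

    In this toy example the second utility uses twice the input of the first for the same output, so its score is half that of the frontier units.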

    Two-step semiparametric empirical likelihood inference

    In both parametric and certain nonparametric statistical models, the empirical likelihood ratio satisfies a nonparametric version of Wilks’ theorem. For many semiparametric models, however, the commonly used two-step (plug-in) empirical likelihood ratio is not asymptotically distribution-free; that is, its asymptotic distribution contains unknown quantities and hence Wilks’ theorem breaks down. This article suggests a general approach to restore Wilks’ phenomenon in two-step semiparametric empirical likelihood inference. The main insight consists in using as the moment function in the estimating equation the influence function of the plug-in sample moment. The proposed method is general; it leads to a chi-squared limiting distribution with known degrees of freedom; it is efficient; it does not require undersmoothing; and it is less sensitive to the first step than alternative methods, which is particularly appealing for high-dimensional settings. Several examples and simulation studies illustrate the general applicability of the procedure and its excellent finite-sample performance relative to competing methods.
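    The Wilks phenomenon that the article sets out to restore can be illustrated in the simplest one-step case: empirical likelihood for a mean. This sketch (our illustration, not the paper's influence-function construction) profiles out the implicit observation weights via the standard Lagrange-multiplier equation, so that the resulting statistic is asymptotically chi-squared with one degree of freedom.

```python
import numpy as np
from scipy.optimize import brentq

def el_test_statistic(x, mu):
    """Empirical likelihood ratio statistic -2 log R(mu) for the mean.
    Solves for the Lagrange multiplier lam in
        sum_i (x_i - mu) / (1 + lam * (x_i - mu)) = 0,
    which yields weights p_i = 1 / (n * (1 + lam * (x_i - mu)))."""
    z = np.asarray(x, float) - mu
    n = len(z)
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                 # mu outside the convex hull: R(mu) = 0
    g = lambda lam: np.sum(z / (1.0 + lam * z))
    # bracket keeping all weights strictly positive (p_i < 1 for every i)
    lo = (1.0 / n - 1.0) / z.max()
    hi = (1.0 / n - 1.0) / z.min()
    lam = brentq(g, lo, hi)           # g is strictly decreasing on (lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

x = [1.0, 2.0, 3.0, 4.0, 5.0]
stat = el_test_statistic(x, 2.5)      # compare to a chi-squared(1) quantile
```

    The statistic is zero at the sample mean and grows as the hypothesized mean moves away from it; the article's contribution is to recover exactly this pivotal behavior when a first-step nuisance estimate is plugged in.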

    Determinants of Heritage Authorities’ Performance: An exploratory study with DEA bootstrapping approach

    Government regulation plays a significant role in the field of heritage conservation. Namely, regulation is aimed at controlling the stock of heritage, restricting or modifying the activities of public as well as private actors. Surprisingly, the literature has extensively investigated neither the performance of the heritage authorities involved in the implementation of conservation policies nor its determinants. In this paper we address this issue, from a theoretical as well as an empirical perspective, using Sicily as a case study. More precisely, we analyze the determinants of the differences in the efficiency levels of conservation activity of the nine Sicilian heritage authorities over the period 1993-2005. Economic and managerial variables are used to distinguish non-discretionary from discretionary causes. The results show that the efficiency scores seem to be affected only by economic factors, whereas the managerial variables do not affect the performance of heritage authorities. Keywords: heritage regulation, cultural policy, efficiency analysis.

    The single-index hazards model

    We first propose the single-index hazards model for right-censored survival data. As an extension of the Cox model, this model allows nonparametric modeling of covariate effects in a parsimonious way via a single index. In addition, the relative importance of covariates can be assessed via this model. We consider the conventional profile-kernel method based on the local likelihood for model estimation. It is shown that this method may give consistent estimation under certain restrictive conditions, but in general it can yield biased estimation. Simulation studies are conducted to demonstrate the bias phenomena. The existence and nature of the failure of this commonly used approach are somewhat surprising. The interpretation of covariate effects in the aforementioned single-index hazards model is difficult. Thus, we further propose the partly proportional single-index hazards model, in which the effect of covariates of primary interest is represented by the regression parameter while "nuisance" covariates can have a nonparametric effect on the survival time. We again consider the conventional profile-kernel method, and it leads to biased estimation as well. A bias correction method is then proposed, and the corrected profile local likelihood estimators are shown to be consistent, asymptotically normal, and semiparametrically efficient. We evaluate the finite-sample properties of our estimators through simulation studies and illustrate the proposed model and method with an application to a dataset from the Multicenter AIDS Cohort Study (MACS). Besides the profile-kernel method, we also study the profile stratified likelihood method based on stratification of the single index. In the single-index hazards model, this method may give consistent estimation under the restrictive "independent censoring" condition, but in general it can yield biased estimation. Simulation studies are conducted to demonstrate the situations in which the bias phenomena do (or do not) exist. In the partly proportional single-index hazards model, we demonstrate numerically the existence of the bias and then propose a bias correction method. The estimators from the corrected profile stratified likelihood method are shown to be consistent. Their finite-sample properties are evaluated through simulation studies. The corrected profile stratified method is applied to the aforementioned MACS study for illustration.