
    The Importance of Being Clustered: Uncluttering the Trends of Statistics from 1970 to 2015

    In this paper we retrace the recent history of statistics by analyzing all the papers published since 1970 in five prestigious statistical journals: Annals of Statistics, Biometrika, Journal of the American Statistical Association, Journal of the Royal Statistical Society Series B, and Statistical Science. The aim is to construct a kind of "taxonomy" of statistical papers by organizing and clustering them into main themes. In this sense, being identified in a cluster means being important enough to stand out, uncluttered, in the vast and interconnected world of statistical research. Since the main statistical research topics naturally arise, evolve, or die over time, we also develop a dynamic clustering strategy, in which a group in one time period is allowed to migrate or to merge into different groups in the following one. Results show that statistics is a very dynamic and evolving science, stimulated by the rise of new research questions and types of data.
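    The dynamic clustering idea described above can be illustrated with a toy sketch (not the paper's actual algorithm): clusters from adjacent time periods are linked by keyword overlap, so a group can "migrate" to its best match, and two groups mapping to the same target would constitute a merge. All cluster names and keywords below are invented for illustration.

```python
def jaccard(a, b):
    """Jaccard similarity between two keyword sets."""
    return len(a & b) / len(a | b)

# Hypothetical topic clusters in two adjacent time periods.
period_1 = {
    "bayes":  {"prior", "posterior", "mcmc"},
    "survey": {"sampling", "stratified", "design"},
}
period_2 = {
    "bayes_comp":    {"prior", "mcmc", "gibbs"},
    "survey_design": {"sampling", "design", "weights"},
}

# Each period-1 cluster migrates to its best-matching period-2 cluster.
links = {
    name: max(period_2, key=lambda t: jaccard(kw, period_2[t]))
    for name, kw in period_1.items()
}
print(links)  # {'bayes': 'bayes_comp', 'survey': 'survey_design'}
```

    A real implementation would cluster papers from their full text or citations; the linking-by-overlap step across periods is the part sketched here.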

    Spike-and-Slab Priors for Function Selection in Structured Additive Regression Models

    Structured additive regression provides a general framework for complex Gaussian and non-Gaussian regression models, with predictors comprising arbitrary combinations of nonlinear functions and surfaces, spatial effects, varying coefficients, random effects and further regression terms. The large flexibility of structured additive regression makes function selection a challenging and important task, aiming at (1) selecting the relevant covariates, (2) choosing an appropriate and parsimonious representation of the impact of covariates on the predictor and (3) determining the required interactions. We propose a spike-and-slab prior structure for function selection that allows the inclusion or exclusion of single coefficients as well as blocks of coefficients representing specific model terms. A novel multiplicative parameter expansion is required to obtain good mixing and convergence properties in a Markov chain Monte Carlo simulation approach and is shown to induce desirable shrinkage properties. In simulation studies and with (real) benchmark classification data, we investigate sensitivity to hyperparameter settings and compare performance to competitors. The flexibility and applicability of our approach are demonstrated in an additive piecewise exponential model with time-varying effects for right-censored survival times of intensive care patients with sepsis. Geoadditive and additive mixed logit model applications are discussed in an extensive appendix.
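    The core spike-and-slab idea can be sketched in a few lines: a binary indicator decides whether a coefficient is drawn from a diffuse "slab" (term included) or a normal tightly concentrated at zero (term effectively excluded). This is only a minimal illustration of the prior itself; the paper's contributions (blockwise selection, the multiplicative parameter expansion, and the MCMC sampler) are not reproduced, and all parameter values are invented.

```python
import random

random.seed(0)

def spike_and_slab_draw(w=0.3, slab_sd=2.0, spike_sd=0.01):
    """Draw one coefficient from a continuous spike-and-slab prior:
    with probability w use the slab (diffuse normal), otherwise the
    spike (normal concentrated at zero). The indicator plays the
    role of including or excluding the model term."""
    included = random.random() < w
    sd = slab_sd if included else spike_sd
    return included, random.gauss(0.0, sd)

draws = [spike_and_slab_draw() for _ in range(1000)]
frac_included = sum(inc for inc, _ in draws) / len(draws)
print(round(frac_included, 2))  # close to the prior inclusion probability 0.3
```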

    Nonparametric frontier estimation from noisy data

    A new nonparametric estimator of production frontiers is defined and studied when the data set of production units is contaminated by measurement error. The measurement error is assumed to be an additive normal random variable on the input variable, but its variance is unknown. The estimator is a modification of the m-frontier, which necessitates the computation of a consistent estimator of the conditional survival function of the input variable given the output variable. In this paper, the identification and the consistency of a new estimator of the survival function are proved in the presence of additive noise with unknown variance. The performance of the estimator is also studied through simulated data.
    Keywords: production frontier, deconvolution, measurement error, efficiency analysis
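    For context, a noise-free order-m input frontier can be sketched by Monte Carlo: at output level y0, it is the expected minimum input over m units drawn from those producing at least y0. The paper's actual contribution, handling measurement error via deconvolution, is not attempted here, and the production data below are simulated purely for illustration.

```python
import random

random.seed(1)

def order_m_input_frontier(data, y0, m=25, reps=500):
    """Monte Carlo order-m input frontier at output level y0:
    average, over reps resamples, of the minimum input among m
    units drawn (with replacement) from those with output >= y0."""
    eligible = [x for x, y in data if y >= y0]
    if not eligible:
        return float("nan")
    return sum(
        min(random.choices(eligible, k=m)) for _ in range(reps)
    ) / reps

# Invented production data: (input, output) pairs.
data = [(random.uniform(1, 10), random.uniform(0, 5)) for _ in range(200)]
frontier = order_m_input_frontier(data, y0=2.0)
print(round(frontier, 2))
```

    Larger m pushes the estimate toward the full-frontier minimum input, which is why the order-m frontier is robust to extreme observations.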

    Computational Models for Transplant Biomarker Discovery.

    Translational medicine offers a rich promise for improved diagnostics and drug discovery for biomedical research in the field of transplantation, where continued unmet diagnostic and therapeutic needs persist. The recent advent of genomics and proteomics profiling, collectively called "omics", provides new resources for developing novel biomarkers for clinical routine. Establishing such a marker system depends heavily on appropriate applications of computational algorithms and software, which are themselves grounded in mathematical theories and models. Understanding these theories helps in applying appropriate algorithms to ensure that biomarker systems are successful. Here, we review the key advances in theories and mathematical models relevant to transplant biomarker development. The advantages and limitations inherent in these models are discussed. The principles of key computational approaches for efficiently selecting the best subset of biomarkers from high-dimensional omics data are highlighted. Prediction models are also introduced, and the integration of data from multiple microarray studies is discussed. Appreciating these key advances would help to accelerate the development of clinically reliable biomarker systems.

    Mortality modelling and forecasting: a review of methods


    Hedonic Price Indexes for New Passenger Cars in Portugal (1997-2001)

    This paper evaluates the effects of quality change on the price index for new passenger cars in Portugal for the years 1997-2001. Hedonic regression models are studied, giving particular emphasis to the relation between the form of the price index and the specification of the hedonic equation and estimation method used. It is argued that when log-linear hedonic functions are used, the effects of quality change should be evaluated using a method akin to the Oaxaca decomposition (Oaxaca R., 1973, "Male-Female Wage Differentials in Urban Labor Markets", International Economic Review, 14, 693-709), rather than the traditional dummy variables method. The results of the empirical part of the paper indicate that the CPI component corresponding to the sales of new passenger cars may have been overestimated by as much as 2.2 percentage points per year. This corresponds to an overestimation of the overall CPI by about 0.15 percentage points per year. As a by-product of this analysis it is also possible to conclude that the quality of new cars sold in Portugal increased on average 4.8 percent per year during this period.
    Keywords: CPI bias; heteroskedasticity; Oaxaca decomposition
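    The Oaxaca-style split of a log-linear hedonic regression, ln(price) = a_t + b_t · x, can be sketched numerically: the total log price change decomposes into a quality component (change in mean characteristics valued at comparison-year coefficients) and a residual pure-price component. All coefficient and characteristic values below are invented; the paper's estimates are not reproduced.

```python
import math

a0, b0 = 9.50, [0.020, 0.15]   # base-year intercept and slopes (invented)
a1, b1 = 9.55, [0.018, 0.16]   # comparison-year intercept and slopes (invented)
x0 = [100.0, 1.2]              # mean characteristics, base year
x1 = [105.0, 1.3]              # mean characteristics, comparison year

dot = lambda b, x: sum(bi * xi for bi, xi in zip(b, x))

total = (a1 + dot(b1, x1)) - (a0 + dot(b0, x0))     # total log price change
quality = dot(b1, [u - v for u, v in zip(x1, x0)])  # quality component
pure_price = total - quality                        # pure price component

print(round(math.expm1(pure_price) * 100, 2), "% pure price change")
```

    Ignoring the quality component (as a naive dummy-variable comparison of average prices would) overstates the pure price change, which is the mechanism behind the CPI overestimation the paper documents.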

    Competition and Firm Performance: Lessons from Russia

    JEL codes: D24, D4, J42, L1, L33, P23, P31
    The "big-bang" liberalization of the inefficient Russian economy in 1992 provides a fruitful setting for analyzing the impact of several dimensions of market competition and other factors on enterprise efficiency. We analyze 1992-1998 panel data on 14,961 enterprises covering 75 percent of industrial employment, emphasizing the varied sources, geographic scope, intensity, time path, and survival effects of competitive pressures. We find large, positive effects on TFP from competition in domestic product and local labor markets, and from imports and better transportation infrastructure, although the first effect appears only gradually. Non-state firms outperform state enterprises, even after correction for selection bias.
    http://deepblue.lib.umich.edu/bitstream/2027.42/39680/3/wp296.pd