66,846 research outputs found

    A Simple Test for the Absence of Covariate Dependence in Hazard Regression Models

    Get PDF
    This paper extends commonly used tests for equality of hazard rates in a two-sample or k-sample setup to a situation where the covariate under study is continuous. In other words, we test the hypothesis that the conditional hazard rate is the same for all covariate values, against the omnibus alternative as well as more specific alternatives, when the covariate is continuous. The tests developed are particularly useful for detecting trend in the underlying conditional hazard rates or changepoint trend alternatives. Asymptotic distributions of the test statistics are established and small-sample properties of the tests are studied. An application to the effect of aggregate Q on corporate failure in the UK shows evidence of trend in the covariate effect, whereas a Cox regression model failed to detect evidence of any covariate effect. Finally, we discuss an important extension to testing for proportionality of hazards in the presence of individual-level frailty with arbitrary distribution. Keywords: covariate dependence; continuous covariate; two-sample tests; trend tests; proportional hazards; frailty/unobserved heterogeneity; linear transformation model.
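
    In compact notation (a sketch based only on the abstract; the symbol λ(t | x) for the conditional hazard at time t given covariate value x is notation assumed here, not taken from the paper), the hypotheses being tested are roughly:

    ```latex
    % Null: the conditional hazard does not depend on the covariate value x.
    H_0:\ \lambda(t \mid x) = \lambda(t) \quad \text{for all } t \text{ and all covariate values } x,
    % tested against the omnibus alternative that equality fails somewhere, or against
    % ordered (trend) alternatives of the form
    H_1^{\mathrm{trend}}:\ x_1 < x_2 \ \Rightarrow\ \lambda(t \mid x_1) \le \lambda(t \mid x_2) \ \text{for all } t,
    % with strict inequality for some (t, x_1, x_2).
    ```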

    Testing for monotone increasing hazard rate

    Full text link
    A test of the null hypothesis that a hazard rate is monotone nondecreasing, versus the alternative that it is not, is proposed. Both the test statistic and the means of calibrating it are new. Unlike previous approaches, neither is based on the assumption that the null distribution is exponential. Instead, empirical information is used to effectively identify and eliminate from further consideration parts of the line where the hazard rate is clearly increasing; and to confine subsequent attention only to those parts that remain. This produces a test with greater apparent power, without the excessive conservatism of exponential-based tests. Our approach to calibration borrows from ideas used in certain tests for unimodality of a density, in that a bandwidth is increased until a distribution with the desired properties is obtained. However, the test statistic does not involve any smoothing, and is, in fact, based directly on an assessment of convexity of the distribution function, using the conventional empirical distribution. The test is shown to have optimal power properties in difficult cases, where it is called upon to detect a small departure, in the form of a bump, from monotonicity. More general theoretical properties of the test and its numerical performance are explored. Comment: Published at http://dx.doi.org/10.1214/009053605000000039 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
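
    The role of convexity here can be motivated by a standard identity (textbook survival analysis, not specific to this paper): with distribution function F, density f, hazard λ and cumulative hazard Λ,

    ```latex
    \lambda(t) = \frac{f(t)}{1 - F(t)}, \qquad
    \Lambda(t) = \int_0^t \lambda(s)\, ds = -\log\{1 - F(t)\},
    % and since \Lambda'(t) = \lambda(t), the hazard is monotone nondecreasing on the
    % support of F exactly when the cumulative hazard \Lambda is convex there.
    ```

    The test statistic itself, as the abstract notes, is computed directly from the conventional empirical distribution function.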

    Testing for the Monotone Likelihood Ratio Assumption

    Get PDF
    Monotonicity of the likelihood ratio for conditioned densities is a common technical assumption in economic models. But we have found no empirical tests for its plausibility. This paper develops such a test based on the theory of order-restricted inference, which is robust with respect to the correlation structure of the distributions being compared. We apply the test to study the technology revealed by agricultural production experiments. For the data under scrutiny, the results support the assumption of the monotone likelihood ratio. In a second application, we find some support for the assumption of affiliation among bids cast in a multiple-round Vickrey auction for a consumption good. Keywords: affiliation, auction, likelihood ratio, order-restricted inference, stochastic order.
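
    For reference, the property under test can be written as follows (the standard definition of the monotone likelihood ratio property; the notation f(· | θ) for the conditioned densities is assumed here):

    ```latex
    % The family \{ f(\cdot \mid \theta) \} has the monotone likelihood ratio property if,
    % for every pair \theta_1 < \theta_2,
    x \ \mapsto\ \frac{f(x \mid \theta_2)}{f(x \mid \theta_1)}
    \quad \text{is nondecreasing in } x \ \text{(wherever the denominator is positive).}
    ```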

    Learning mixtures of structured distributions over discrete domains

    Full text link
    Let C be a class of probability distributions over the discrete domain [n] = {1,...,n}. We show that if C satisfies a rather general condition -- essentially, that each distribution in C can be well-approximated by a variable-width histogram with few bins -- then there is a highly efficient (both in terms of running time and sample complexity) algorithm that can learn any mixture of k unknown distributions from C. We analyze several natural types of distributions over [n], including log-concave, monotone hazard rate and unimodal distributions, and show that they have the required structural property of being well-approximated by a histogram with few bins. Applying our general algorithm, we obtain near-optimally efficient algorithms for all these mixture learning problems. Comment: preliminary full version of SODA'13 paper.
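
    The structural property driving the result, that each distribution in the class is well-approximated by a variable-width histogram with few bins, can be illustrated with a small sketch. This is a toy illustration only, not the paper's learning algorithm; the function names and the equal-mass choice of bin edges are assumptions:

    ```python
    import numpy as np

    def flatten_within_bins(p, edges):
        """Histogram approximation of p for the given bin edges: within each bin,
        spread the bin's total mass uniformly over its points."""
        q = np.empty_like(p)
        for lo, hi in zip(edges[:-1], edges[1:]):   # each bin covers indices [lo, hi)
            q[lo:hi] = p[lo:hi].sum() / (hi - lo)
        return q

    def equal_mass_edges(p, num_bins):
        """One heuristic for variable-width bins: cut at approximate mass quantiles."""
        cdf = np.cumsum(p)
        targets = np.linspace(0.0, 1.0, num_bins + 1)[1:-1]
        cuts = np.searchsorted(cdf, targets) + 1
        return np.unique(np.concatenate(([0], cuts, [len(p)])))

    if __name__ == "__main__":
        n = 1000
        x = np.arange(n)
        p = np.exp(-0.5 * ((x - 400) / 80.0) ** 2)  # a unimodal distribution over [n]
        p /= p.sum()
        q = flatten_within_bins(p, equal_mass_edges(p, num_bins=10))
        print("total variation distance with 10 bins:", 0.5 * np.abs(p - q).sum())
    ```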

    Combining estimates of interest in prognostic modelling studies after multiple imputation: current practice and guidelines

    Get PDF
    Background: Multiple imputation (MI) provides an effective approach to handle missing covariate data within prognostic modelling studies, as it can properly account for the missing data uncertainty. The multiply imputed datasets are each analysed using standard prognostic modelling techniques to obtain the estimates of interest. The estimates from each imputed dataset are then combined into one overall estimate and variance, incorporating both the within- and between-imputation variability. Rubin's rules for combining these multiply imputed estimates are based on asymptotic theory. The resulting combined estimates may be more accurate if the posterior distribution of the population parameter of interest is better approximated by the normal distribution. However, the normality assumption may not be appropriate for all the parameters of interest when analysing prognostic modelling studies, such as predicted survival probabilities and model performance measures. Methods: Guidelines for combining the estimates of interest when analysing prognostic modelling studies are provided. A literature review is performed to identify current practice for combining such estimates in prognostic modelling studies. Results: Methods for combining all reported estimates after MI were not well reported in the current literature. Rubin's rules without applying any transformations were the standard approach used, when any method was stated. Conclusion: The proposed simple guidelines for combining estimates after MI may lead to a wider and more appropriate use of MI in future prognostic modelling studies.
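
    As background for the combining step these guidelines address, Rubin's rules themselves take a simple form. The sketch below pools m estimates and their within-imputation variances, and shows how a bounded quantity such as a survival probability could be pooled on a transformed scale; the complementary log-log transform is chosen purely for illustration and is an assumption, not a recommendation taken from the paper:

    ```python
    import numpy as np

    def rubins_rules(estimates, variances):
        """Pool m estimates and their within-imputation variances via Rubin's rules.

        Returns the pooled estimate and the total variance T = W + (1 + 1/m) * B,
        where W is the mean within-imputation variance and B is the
        between-imputation variance of the estimates."""
        est = np.asarray(estimates, dtype=float)
        var = np.asarray(variances, dtype=float)
        m = len(est)
        qbar = est.mean()                      # pooled point estimate
        w = var.mean()                         # within-imputation variance
        b = est.var(ddof=1)                    # between-imputation variance
        return qbar, w + (1.0 + 1.0 / m) * b

    def pool_probability_cloglog(probs, variances):
        """Illustrative only: pool probabilities on the complementary log-log scale
        (an assumed transform), using the delta method for the variances, then
        back-transform the pooled value."""
        p = np.asarray(probs, dtype=float)
        z = np.log(-np.log(p))                             # transform
        dz_dp = 1.0 / (p * np.log(p))                      # derivative of the transform
        var_z = np.asarray(variances, dtype=float) * dz_dp ** 2
        zbar, _ = rubins_rules(z, var_z)
        return np.exp(-np.exp(zbar))                       # back-transformed pooled estimate

    if __name__ == "__main__":
        surv = [0.82, 0.79, 0.85, 0.80, 0.83]              # survival estimates from 5 imputations
        se2 = [0.0009, 0.0011, 0.0008, 0.0010, 0.0009]     # their squared standard errors
        print(rubins_rules(surv, se2))
        print(pool_probability_cloglog(surv, se2))
    ```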

    Software timing analysis for complex hardware with survivability and risk analysis

    Get PDF
    The increasing automation of safety-critical real-time systems, such as those in cars and planes, leads to more complex and performance-demanding on-board software and the subsequent adoption of multicores and accelerators. This causes software's execution time dispersion to increase due to variable-latency resources such as caches, NoCs, advanced memory controllers and the like. Statistical analysis has been proposed to model the Worst-Case Execution Time (WCET) of software running on such complex systems by providing reliable probabilistic WCET (pWCET) estimates. However, statistical models used so far, which are based on risk analysis, are overly pessimistic by construction. In this paper we prove that statistical survivability and risk analyses are equivalent in terms of tail analysis and, building upon survivability analysis theory, we show that Weibull tail models can be used to estimate pWCET distributions reliably and tightly. In particular, our methodology proves the correctness-by-construction of the approach, and our evaluation provides evidence about the tightness of the pWCET estimates obtained, which allow decreasing them reliably by 40% for a railway case study w.r.t. state-of-the-art exponential tails. This work is a collaboration between Argonne National Laboratory and the Barcelona Supercomputing Center within the Joint Laboratory for Extreme-Scale Computing. This research is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under contract number DE-AC02-06CH11357, program manager Laura Biven, and by the Spanish Government (SEV2015-0493), by the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P), and by Generalitat de Catalunya (contract 2014-SGR-1051).
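
    As a generic illustration of tail fitting for pWCET estimation (not the survivability-analysis-based methodology proved correct in this paper), the sketch below fits a Weibull model to execution-time exceedances over a high threshold and reads off a bound at a target per-run exceedance probability; the threshold choice, the 1e-9 target and the synthetic data are all assumptions:

    ```python
    import numpy as np
    from scipy import stats

    def pwcet_weibull_tail(exec_times_ns, threshold_quantile=0.95, exceed_prob=1e-9):
        """Illustrative peaks-over-threshold sketch: fit a Weibull model to
        execution-time exceedances above a high threshold and return a bound whose
        per-run exceedance probability, under the fitted model, is exceed_prob."""
        x = np.sort(np.asarray(exec_times_ns, dtype=float))
        u = np.quantile(x, threshold_quantile)              # high threshold (assumed choice)
        excess = x[x > u] - u                               # exceedances over the threshold
        p_exceed_threshold = excess.size / x.size           # chance of exceeding the threshold at all
        # Fit a two-parameter Weibull (location pinned at 0) to the excesses.
        shape, _, scale = stats.weibull_min.fit(excess, floc=0.0)
        # Conditional tail quantile so that the unconditional exceedance
        # probability equals exceed_prob.
        p_cond = exceed_prob / p_exceed_threshold
        return u + stats.weibull_min.isf(p_cond, shape, loc=0.0, scale=scale)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        # Synthetic execution times (ns): a constant bulk plus a heavier right tail.
        times = 50_000 + rng.gamma(shape=4.0, scale=2_000.0, size=20_000)
        print("pWCET estimate (ns) at 1e-9:", round(pwcet_weibull_tail(times)))
    ```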