
    One stage multiple comparisons with the average for exponential location parameters under heteroscedasticity

    Two-stage multiple comparisons with the average for location parameters of two-parameter exponential distributions under heteroscedasticity were proposed by Wu and Wu [Wu, S.F., Wu, C.C., 2005. Two-stage multiple comparisons with the average for exponential location parameters under heteroscedasticity. Journal of Statistical Planning and Inference 134, 392–408]. When the additional sample for the second stage may not be available, one-stage procedures, including one-sided and two-sided confidence intervals, are proposed in this paper. These intervals can be used to identify a subset that includes all no-worse-than-the-average treatments in an experimental design, and to identify better-than-the-average, worse-than-the-average, and not-much-different-from-the-average products in agriculture, the stock market, and the pharmaceutical industry. Tables of upper limits of critical values are obtained using the technique given in Lam [Lam, K., 1987. Subset selection of normal populations under heteroscedasticity. In: Proceedings of the Second International Advanced Seminar/Workshop on Inference Procedures Associated with Statistical Ranking and Selection, Sydney, Australia, August 1987; Lam, K., 1988. An improved two-stage selection procedure. Communications in Statistics—Simulation and Computation 17 (3), 995–1006]. An example of comparing four drugs in the treatment of leukemia is given to demonstrate the proposed procedures. The relationship between the one-stage and the two-stage procedures is also elaborated in this paper.
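    As a rough illustration of the type of comparison involved, the sketch below (Python) estimates each exponential location by its sample minimum and each scale by the mean excess over the minimum, and forms naive simultaneous two-sided intervals for each location minus the average of all locations. It is not the paper's procedure: the common critical value is calibrated by a parametric bootstrap rather than taken from the tables computed via Lam's technique, and the standardisation used is a simplifying assumption.

        import numpy as np

        rng = np.random.default_rng(0)

        def exp_mles(x):
            """Location = sample minimum, scale = mean excess over the minimum."""
            theta_hat = x.min()
            beta_hat = (x - theta_hat).sum() / (len(x) - 1)
            return theta_hat, beta_hat

        def simultaneous_cis(samples, alpha=0.05, n_boot=5_000, rng=rng):
            """Naive simultaneous two-sided intervals for theta_i - mean(theta),
            with a common critical value calibrated by parametric bootstrap
            (illustrative only, not the tabulated critical values of the paper)."""
            est = [exp_mles(x) for x in samples]
            thetas = np.array([t for t, _ in est])
            betas = np.array([b for _, b in est])
            ns = np.array([len(x) for x in samples])
            diffs = thetas - thetas.mean()

            # Bootstrap the maximum standardized deviation of the plug-in estimator.
            max_dev = np.empty(n_boot)
            for b in range(n_boot):
                th_star = np.array([
                    exp_mles(t + rng.exponential(be, n))[0]
                    for t, be, n in zip(thetas, betas, ns)
                ])
                dev = (th_star - th_star.mean()) - diffs
                max_dev[b] = np.abs(dev / (betas / ns)).max()
            c = np.quantile(max_dev, 1 - alpha)
            half = c * betas / ns
            return diffs - half, diffs + half

        # Example: four treatments with unequal scales (heteroscedasticity).
        data = [loc + rng.exponential(scale, 15)
                for loc, scale in [(1.0, 0.5), (1.2, 1.0), (0.8, 0.7), (1.1, 1.5)]]
        lo, hi = simultaneous_cis(data)
        print(np.round(lo, 3), np.round(hi, 3))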

    Multiple comparisons for exponential median lifetimes with the control based on doubly censored samples

    Under double censoring, one-stage multiple comparison procedures with the control in terms of exponential median lifetimes are presented. The uniformly minimum variance unbiased estimator of the median lifetime is found. Upper bounds, lower bounds, and two-sided confidence intervals for the difference between each median lifetime and the median lifetime of the control population are developed. Statistical tables of critical values are constructed for the practical use of the proposed procedures. Users can apply these simultaneous confidence intervals to determine whether treatment populations perform better or worse than the control population in agriculture and the pharmaceutical industry. Finally, a practical example is provided to illustrate the proposed procedures.
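    For orientation, the median of a two-parameter exponential distribution with location θ and scale β is θ + β ln 2, so median comparisons reduce to comparisons of these two parameters. The sketch below forms a crude plug-in estimate of the median from a type-II doubly censored sample; it is illustrative only and is not the uniformly minimum variance unbiased estimator developed in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def doubly_censor(x, r, s):
            """Type-II double censoring: drop the r smallest and s largest of n values."""
            x = np.sort(x)
            return x[r:len(x) - s]

        def median_plugin(obs, s):
            """Crude plug-in estimate of the exponential median theta + beta*ln(2)
            from a doubly censored sample (NOT the UMVUE of the paper)."""
            theta_hat = obs[0]                     # smallest observed value
            # total-time-on-test style scale estimate: the s right-censored units
            # contribute exposure only up to the largest observation, and the
            # r left-censored units are ignored.
            ttt = (obs - theta_hat).sum() + s * (obs[-1] - theta_hat)
            beta_hat = ttt / len(obs)
            return theta_hat + beta_hat * np.log(2)

        theta, beta, n, r, s = 2.0, 1.5, 60, 5, 10
        x = theta + rng.exponential(beta, n)
        obs = doubly_censor(x, r, s)
        print("true median:    ", theta + beta * np.log(2))
        print("plug-in estimate:", median_plugin(obs, s))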

    A modified multiple comparisons with a control for exponential location parameters based on doubly censored sample under heteroscedasticity

    In this paper, a modified one-stage multiple comparison procedure with a control for exponential location parameters, based on doubly censored samples under heteroscedasticity, is proposed. A simulation study shows that the proposed procedure has shorter confidence lengths, with coverage probabilities closer to the nominal ones, than the procedure proposed in Wu (2017). Finally, an example of comparing the duration of remission for four drugs in the treatment of leukemia is given to demonstrate the proposed procedure.
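    A simulation study of this kind typically records empirical coverage and average interval length over many replications. The sketch below shows the mechanics for a single exponential location parameter, comparing an exact pivot-based interval (using n(X_(1) − θ)/β̂ ~ F(2, 2n−2)) with a naive normal-approximation interval; both intervals are stand-ins, not the procedures actually compared in the paper.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)

        def exact_ci(x, alpha=0.05):
            """Exact interval for the exponential location theta from the pivot
            n*(min - theta)/beta_hat ~ F(2, 2n-2)."""
            n = len(x)
            m = x.min()
            beta_hat = (x - m).sum() / (n - 1)
            lo_q, hi_q = stats.f.ppf([alpha / 2, 1 - alpha / 2], 2, 2 * n - 2)
            return m - beta_hat * hi_q / n, m - beta_hat * lo_q / n

        def naive_ci(x, alpha=0.05):
            """Naive normal-approximation interval around the sample minimum."""
            n = len(x)
            m = x.min()
            beta_hat = (x - m).sum() / (n - 1)
            z = stats.norm.ppf(1 - alpha / 2)
            return m - z * beta_hat / n, m + z * beta_hat / n

        def study(ci_fn, theta=1.0, beta=2.0, n=10, reps=10_000):
            """Record empirical coverage and mean length over many replications."""
            cover, length = 0, 0.0
            for _ in range(reps):
                x = theta + rng.exponential(beta, n)
                lo, hi = ci_fn(x)
                cover += lo <= theta <= hi
                length += hi - lo
            return cover / reps, length / reps

        for name, fn in [("exact pivot", exact_ci), ("naive normal", naive_ci)]:
            cov, ln = study(fn)
            print(f"{name}: coverage={cov:.3f}, mean length={ln:.3f}")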

    One stage multiple comparisons with the control for exponential median lifetimes under heteroscedasticity

    When the additional sample for the second stage may not be available, one-stage multiple comparisons of exponential median lifetimes with the control under heteroscedasticity, including one-sided and two-sided confidence intervals, are proposed in this paper, since the median is a more robust measure of central tendency than the mean. These intervals can be used to identify treatment populations that are better or worse than the control in terms of median lifetimes in agriculture, the stock market, and the pharmaceutical industry. Tables of critical values are obtained for practical use. An example of comparing the survival days for four categories of lung cancer under a standard chemotherapeutic agent is given to demonstrate the proposed procedures.

    Generalizing Multistage Partition Procedures for Two-parameter Exponential Populations

    Analysis of variance (ANOVA) is a classic tool for multiple comparisons and has been widely used in numerous disciplines due to its simplicity and convenience. The ANOVA procedure is designed to test whether a number of populations all have the same mean; it is followed by the usual multiple comparison tests to rank the populations. However, the ANOVA procedure does not guarantee that the probability of selecting the best population exceeds some desired prespecified level. This shortcoming of the ANOVA procedure was overcome by researchers in the early 1950s by designing experiments with the goal of selecting the best population. In this dissertation, a single-stage procedure is introduced to partition k treatments into good and bad groups with respect to a control population, assuming some key parameters are known. Next, the proposed partition procedure is generalized to the case where the parameters are unknown, and a purely sequential procedure and a two-stage procedure are derived. First-order and second-order asymptotic properties of the proposed procedures are derived to document their efficiency. These theoretical properties are also studied via Monte Carlo simulations to document the performance of the procedures for small and moderate sample sizes.
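    The single-stage partition idea can be sketched with a midpoint-threshold rule in the spirit of classical indifference-zone partitioning: a treatment joins the good set if its location estimate exceeds the control's estimate by at least (δ1 + δ2)/2. The Python sketch below assumes two-parameter exponential populations with a common known scale and makes no attempt to reproduce the dissertation's sample-size constants or its purely sequential and two-stage extensions.

        import numpy as np

        rng = np.random.default_rng(3)

        def partition_single_stage(control, treatments, delta1, delta2):
            """Illustrative single-stage partition rule (midpoint threshold):
            treatment i goes to the 'good' set if its location estimate exceeds
            the control's estimate by at least (delta1 + delta2) / 2."""
            cut = control.min() + 0.5 * (delta1 + delta2)
            good = [i for i, x in enumerate(treatments) if x.min() >= cut]
            bad = [i for i, x in enumerate(treatments) if x.min() < cut]
            return good, bad

        # k = 4 exponential treatments vs. a control, common known scale beta.
        beta, n = 1.0, 20
        theta0, thetas = 1.0, [0.9, 1.05, 1.6, 2.0]
        control = theta0 + rng.exponential(beta, n)
        treatments = [t + rng.exponential(beta, n) for t in thetas]
        print(partition_single_stage(control, treatments, delta1=0.1, delta2=0.5))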

    The Impact of the CIMMYT Wheat Breeding Program on Wheat Yields in Mexico's Yaqui Valley, 1990-2002: Implications for the Future of Public Wheat Breeding

    CIMMYT has invested substantial public funds in wheat breeding research each year for several decades. Estimates of the impact of the wheat breeding program on wheat yield increases provide scientists, administrators, and policy makers with information on the efficacy of, and the rate of return to, these investments, which is important for future funding decisions. Using CIMMYT test plot data from the Yaqui Valley in Mexico from 1990-2002, regression results indicate that the release of modern CIMMYT varieties has contributed approximately 53.77 kg/ha to yield annually. The growing conditions of the experimental fields in the Yaqui Valley approximate 40% of the developing world's wheat growing conditions. A rough estimate of the gains attributable to CIMMYT's wheat breeding program on a global scale is 304 million (2002) USD annually during the period 1990-2002. CIMMYT's total wheat breeding cost in 2002 was approximately 6 million dollars, making the benefit-cost ratio approximately 50 to 1.

    Volatility in high-frequency intensive care mortality time series: application of univariate and multivariate GARCH models

    Mortality time series display time-varying volatility. The utility of statistical estimators from the financial time-series paradigm, which account for this characteristic, has not been addressed for high-frequency mortality series. Using the daily mean-mortality series of an exemplar intensive care unit (ICU) from the Australian and New Zealand Intensive Care Society adult patient database, joint estimation of a mean and conditional variance (volatility) model for a stationary series was undertaken via a univariate autoregressive moving average (ARMA(p, q)) mean model and a GARCH(p, q) (generalised autoregressive conditional heteroscedasticity) variance model. The temporal dynamics of the conditional variances and correlations of multiple provider series, from rural/regional, metropolitan, tertiary and private ICUs, were estimated using multivariate GARCH (MGARCH) models. For the stationary first-differenced series, an asymmetric power GARCH(1, 1) model with a t distribution (11.6 degrees of freedom) and an ARMA(7, 0) mean model was the best fitting. The four multivariate component series demonstrated varying trends of mortality decline and persistent autocorrelation. Within each MGARCH series no model specification dominated. The conditional correlations were surprisingly low (<0.1) between tertiary series and substantial (0.4-0.6) between rural/regional and private series. The conditional variances of both the univariate and multivariate series demonstrated a slow rate of decline over time from periods of early volatility and volatility spikes.
    John L. Moran, Patricia J. Solomon
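    A model in this family can be fitted in Python with the arch package. The sketch below simulates a stand-in first-differenced series with volatility clustering (the ANZICS data are not reproduced here) and fits an AR(7) mean with an asymmetric power-1 GARCH(1, 1) variance and t-distributed errors, loosely mirroring the specification reported above; the study's asymmetric power GARCH also estimates the power term, which this sketch fixes.

        import numpy as np
        from arch import arch_model

        rng = np.random.default_rng(4)

        # Stand-in for a first-differenced daily mortality series: an AR(1) signal
        # with volatility clustering (the real ANZICS series is not reproduced here).
        n = 1500
        vol = np.empty(n); y = np.empty(n)
        vol[0], y[0] = 0.5, 0.0
        for t in range(1, n):
            vol[t] = np.sqrt(0.05 + 0.10 * y[t - 1] ** 2 + 0.85 * vol[t - 1] ** 2)
            y[t] = 0.3 * y[t - 1] + vol[t] * rng.standard_t(8)

        # AR(7) mean with an asymmetric power-1 GARCH(1,1) variance and t errors.
        model = arch_model(y, mean="AR", lags=7, vol="GARCH",
                           p=1, o=1, q=1, power=1.0, dist="t")
        res = model.fit(disp="off")
        print(res.summary())
        cond_vol = res.conditional_volatility   # fitted time-varying volatility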

    Adapting image processing and clustering methods to productive efficiency analysis and benchmarking: A cross disciplinary approach

    This dissertation explores interdisciplinary applications of computational methods in quantitative economics. In particular, it focuses on problems in productive efficiency analysis and benchmarking that are hardly approachable or solvable using conventional methods. In productive efficiency analysis, null or zero efficiency estimates are often produced when the residuals exhibit the wrong skewness or low kurtosis relative to the distributional assumption on the inefficiency term. This thesis uses deconvolution, a technique traditionally used in image processing for noise removal, to develop a fully non-parametric method for efficiency estimation. Publications 1 and 2 are devoted to this topic, focusing on the cross-sectional and panel cases, respectively. Monte Carlo simulations and empirical applications to Finnish electricity distribution network data and Finnish banking data show that the Richardson-Lucy blind deconvolution method is insensitive to distributional assumptions and robust to data noise levels and heteroscedasticity in efficiency estimation. In benchmarking, which can be the next step after productive efficiency analysis, the 'best practice' target may not operate under the same environment as the DMU under study. This renders the benchmarks impractical to follow and hinders managers from making correct decisions on the performance improvement of a DMU. This dissertation proposes a clustering-based benchmarking framework in Publication 3. The empirical study on the Finnish electricity distribution network reveals that the novelty of the proposed framework lies not only in its consideration of differences in the operational environment among DMUs, but also in its flexibility. A comparison of different combinations of clustering and efficiency estimation techniques is conducted using computational simulations and empirical applications to Finnish electricity distribution network data, based on which Publication 4 identifies an efficient combination for benchmarking in energy regulation.
    This dissertation endeavors to solve problems in quantitative economics using interdisciplinary approaches. The methods developed benefit this field, and the way we approach the problems opens a new perspective.
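    The Richardson-Lucy iteration underlying Publications 1 and 2 can be written in a few lines for a one-dimensional signal. The toy Python sketch below deconvolves a known symmetric noise kernel from a composed-error density; the dissertation's estimator is blind (the kernel is not assumed known) and fully non-parametric, so this illustrates the iteration only.

        import numpy as np

        def richardson_lucy_1d(observed, psf, n_iter=200, eps=1e-12):
            """Plain Richardson-Lucy iterations for a 1-D signal with a known kernel."""
            estimate = np.full_like(observed, observed.mean())
            psf_mirror = psf[::-1]
            for _ in range(n_iter):
                blurred = np.convolve(estimate, psf, mode="same")
                ratio = observed / np.maximum(blurred, eps)
                estimate *= np.convolve(ratio, psf_mirror, mode="same")
            return estimate

        # Toy composed-error setup: an inefficiency density (the target) observed
        # only after convolution with a symmetric noise kernel, as in stochastic
        # frontier models.
        grid = np.linspace(0, 5, 501)
        true_ineff = np.exp(-grid)                                  # exponential inefficiency
        noise_kernel = np.exp(-0.5 * (np.linspace(-2, 2, 101) / 0.3) ** 2)
        noise_kernel /= noise_kernel.sum()
        observed = np.convolve(true_ineff, noise_kernel, mode="same")
        recovered = richardson_lucy_1d(observed, noise_kernel)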

    A Deep Learning Approach to Analyzing Continuous-Time Systems

    Scientists often use observational time-series data to study complex natural processes, but regression analyses often assume simplistic dynamics. Recent advances in deep learning have yielded startling improvements to the performance of models of complex processes, but deep learning is generally not used for scientific analysis. Here we show that deep learning can be used to analyze complex processes, providing flexible function approximation while preserving interpretability. Our approach relaxes standard simplifying assumptions (e.g., linearity, stationarity, and homoscedasticity) that are implausible for many natural systems and may critically affect the interpretation of data. We evaluate our model on incremental human language processing, a domain with complex continuous dynamics. We demonstrate substantial improvements on behavioral and neuroimaging data, and we show that our model enables discovery of novel patterns in exploratory analyses, controls for diverse confounds in confirmatory analyses, and opens up research questions that are otherwise hard to study.
    Comment: Main article: 12 pages, 1 table, 3 figures; Supplementary Information: 54 pages, 6 tables, 30 figures
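    One of the relaxed assumptions, homoscedasticity, can be illustrated with a small network that outputs a conditional mean and a conditional log-variance and is trained with a Gaussian negative log-likelihood. The PyTorch sketch below is schematic only and is not the authors' continuous-time model; the toy data, architecture, and hyperparameters are all hypothetical.

        import torch
        from torch import nn

        torch.manual_seed(0)

        # Toy data whose noise level depends on the input (heteroscedastic).
        x = torch.linspace(-3, 3, 512).unsqueeze(1)
        y = torch.sin(x) + (0.05 + 0.25 * x.abs()) * torch.randn_like(x)

        class HeteroNet(nn.Module):
            """Predicts a conditional mean and a conditional log-variance."""
            def __init__(self, hidden=64):
                super().__init__()
                self.body = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                          nn.Linear(hidden, hidden), nn.Tanh())
                self.mean = nn.Linear(hidden, 1)
                self.log_var = nn.Linear(hidden, 1)

            def forward(self, x):
                h = self.body(x)
                return self.mean(h), self.log_var(h)

        model = HeteroNet()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for step in range(2000):
            mu, log_var = model(x)
            # Gaussian negative log-likelihood: the variance is learned per input,
            # not assumed constant.
            loss = (0.5 * (log_var + (y - mu) ** 2 / log_var.exp())).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()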