155 research outputs found

    Exact Simulation for Diffusion Bridges: An Adaptive Approach

    Exact simulation approaches for a class of diffusion bridges have recently been proposed based on rejection sampling techniques. The existing rejection sampling methods may not be practical owing to small acceptance probabilities. In this paper we propose an adaptive approach that significantly improves on the existing methods under certain scenarios. The new method is based on a layered process, which can be simulated from a layered Brownian motion with reweighted layer probabilities. We show, both theoretically and via simulation, that the new exact simulation method is more efficient than existing methods.
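
    The abstract describes the idea but not the algorithm. For orientation, the following Python sketch shows the basic (non-adaptive) retrospective rejection sampler that this line of work builds on, for an illustrative unit-diffusion SDE with drift sin(x); the drift, the bound M, and all names are assumptions of mine, and the paper's layered reweighting is not implemented.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    # Rejection rate for the illustrative SDE dX = sin(X) dt + dW:
    # phi = (alpha^2 + alpha')/2 shifted to be non-negative, alpha = sin.
    return 0.5 * (np.sin(x) ** 2 + np.cos(x)) + 0.5  # values in [0, 1.125]

M = 1.125  # upper bound on phi (assumption for this illustrative drift)

def attempt_bridge(x0, xT, T):
    """One retrospective rejection-sampling attempt: propose Brownian-bridge
    values only at Poisson thinning points and accept iff no mark falls
    below phi. Returns a skeleton (times, values) or None on rejection."""
    n = rng.poisson(M * T)
    times = np.sort(rng.uniform(0.0, T, size=n))
    marks = rng.uniform(0.0, M, size=n)
    vals = np.empty(n)
    t_prev, x_prev = 0.0, x0
    for i, t in enumerate(times):
        # Brownian bridge pinned at (T, xT), conditioned on (t_prev, x_prev)
        mean = x_prev + (t - t_prev) / (T - t_prev) * (xT - x_prev)
        var = (t - t_prev) * (T - t) / (T - t_prev)
        vals[i] = rng.normal(mean, np.sqrt(var))
        t_prev, x_prev = t, vals[i]
    return (times, vals) if np.all(marks > phi(vals)) else None

def exact_bridge(x0, xT, T):
    """Repeat attempts until one is accepted."""
    while True:
        out = attempt_bridge(x0, xT, T)
        if out is not None:
            return out
```

    The acceptance probability of each attempt, E[exp(-integral of phi)], can be exponentially small in T, which is exactly the impracticality that the adaptive layered approach targets.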

    A review on the exact Monte Carlo simulation

    Perfect Monte Carlo sampling refers to sampling random realizations exactly from the target distributions (without any statistical error). Although many methods have been developed and applied in this area, perfect sampling is most often identified by researchers with coupling from the past (CFTP), which removes the statistical error from samples generated by Markov chain Monte Carlo (MCMC) algorithms. This paper provides a brief review of recent developments and applications in CFTP and other perfect Monte Carlo sampling methods.
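
    As a concrete instance of CFTP, here is a minimal Python sketch for a monotone reflecting random walk on {0, ..., N}; the chain, parameters, and names are illustrative choices of mine, not taken from the review.

```python
import numpy as np

def cftp_monotone(N=10, p=0.5, seed=1):
    """Coupling from the past for a monotone reflecting random walk on
    {0, ..., N} with up-probability p. Returns one draw distributed
    exactly according to the stationary law (no burn-in error)."""
    rng = np.random.default_rng(seed)
    us = []      # us[k] drives the update at time -(k+1); fixed once drawn
    T = 1
    while True:
        while len(us) < T:                 # extend randomness into the past
            us.append(rng.uniform())
        lo, hi = 0, N                      # chains from minimal/maximal states
        for k in range(T - 1, -1, -1):     # run from time -T up to time 0
            step = 1 if us[k] < p else -1  # same randomness for both chains
            lo = min(max(lo + step, 0), N)
            hi = min(max(hi + step, 0), N)
        if lo == hi:                       # all start states have coalesced
            return lo                      # exact stationary sample
        T *= 2                             # not coalesced: look further back
```

    With p = 0.5 the stationary law of this walk is uniform on {0, ..., N}, so repeated calls with different seeds give an easy sanity check.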

    Unbiased Estimation for Linear Regression When n < v

    A class of nonparametric bivariate survival function estimators for randomly censored and truncated data

    This paper proposes a class of nonparametric estimators of the bivariate survival function under both random truncation and random censoring. In practice, the pair of random variables under consideration may have a certain parametric relationship. The proposed class of nonparametric estimators uses such parametric information via a data transformation approach and thus provides more accurate estimates than existing methods that ignore this information. The large-sample properties of the new class of estimators are established, and general guidance on how to find a good data transformation is given. The proposed method is also justified via a simulation study and an application to an economic data set.
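
    The abstract does not reproduce the estimator itself. As the one-dimensional building block that bivariate survival estimators generalize, here is a minimal product-limit (Kaplan-Meier) sketch for right-censored data; it is illustrative only and ignores tie conventions, truncation, and the paper's transformation step.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate for right-censored data.
    times: observed times; events: 1 = event observed, 0 = censored.
    Minimal sketch: processes observations one by one in time order."""
    order = np.argsort(times)
    t = np.asarray(times, dtype=float)[order]
    d = np.asarray(events, dtype=float)[order]
    at_risk = len(t) - np.arange(len(t))   # risk-set size at each ordered time
    surv = np.cumprod(1.0 - d / at_risk)   # censored points contribute factor 1
    return t, surv

# Example: the estimate drops only at event times
t, s = kaplan_meier([3, 1, 4, 2], [1, 0, 1, 1])
```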

    Mean Empirical Likelihood

    Empirical likelihood methods are widely used in different settings to construct confidence regions for parameters that satisfy moment constraints. However, empirical likelihood ratio confidence regions may have poor accuracy, especially for small sample sizes and in multi-dimensional situations. A novel Mean Empirical Likelihood (MEL) method is proposed: a new pseudo dataset built from means of the observed values is used to define the empirical likelihood ratio, and it is proved that this MEL ratio satisfies Wilks’ theorem. Simulations on different examples assess its finite-sample performance and show that the confidence regions constructed by Mean Empirical Likelihood are much more accurate than those of other empirical likelihood methods.
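
    The abstract names the ingredients (a pseudo dataset of means, an EL ratio, a Wilks calibration) without formulas. The sketch below uses a standard Owen-style scalar EL solver; reading "means of observation values" as pairwise means is my assumption, as is the chi-squared(1) cutoff implied by the Wilks result the abstract asserts.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_el(x, mu):
    """-2 log empirical likelihood ratio for a scalar mean mu (Owen's EL)."""
    z = np.asarray(x, dtype=float) - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                     # mu outside the convex hull
    g = lambda lam: np.sum(z / (1 + lam * z))
    # the Lagrange multiplier must keep all weights positive: 1 + lam*z > 0
    lam = brentq(g, -1 / z.max() + 1e-8, -1 / z.min() - 1e-8)
    return 2 * np.sum(np.log1p(lam * z))

def mel_interval(x, level=0.95, gridsize=400):
    """MEL sketch: run EL on the pseudo dataset of pairwise means
    (assumed construction), inverting the chi^2_1 cutoff over a grid."""
    x = np.asarray(x, dtype=float)
    pseudo = np.array([(a + b) / 2 for a, b in combinations(x, 2)])
    cut = chi2.ppf(level, df=1)
    grid = np.linspace(x.min(), x.max(), gridsize)
    inside = [mu for mu in grid if neg2_log_el(pseudo, mu) <= cut]
    return min(inside), max(inside)
```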

    Perfect sampling methods for random forests

    Robust inference for the unification of confidence intervals in meta-analysis

    Traditional meta-analysis assumes that the effect sizes estimated in individual studies follow a Gaussian distribution. However, this distributional assumption is not always satisfied in practice, leading to potentially biased results. When the number of studies, denoted by K, is large, the cumulative Gaussian approximation errors from the individual studies can make the final estimate unreliable. When K is small, it is not realistic to assume that the random effects follow a Gaussian distribution. In this paper, we present a novel empirical likelihood method for combining confidence intervals under the meta-analysis framework. This method is free of the Gaussian assumption both for the effect-size estimates from individual studies and for the random effects. We establish the large-sample properties of the nonparametric estimator and introduce a criterion governing the relationship between the number of studies, K, and the sample size of each study, n_i. Our methodology surpasses conventional meta-analysis techniques in both theoretical robustness and computational efficiency. We assess the performance of the proposed methods in simulation studies and apply them to two examples.
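
    The abstract does not give the EL combination rule. For contrast, here is a short Python sketch of the conventional Gaussian-based fixed-effect pooling that the paper argues against; the function name and interface are mine.

```python
import numpy as np
from scipy.stats import norm

def inverse_variance_meta(effects, ses, level=0.95):
    """Fixed-effect inverse-variance pooling: the Gaussian-based
    convention that the paper's EL method is designed to avoid."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2   # weights = 1/variance
    est = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    z = norm.ppf(0.5 + level / 2)                 # Gaussian critical value
    return est, (est - z * se, est + z * se)
```

    Both the point estimate and the interval here lean on per-study Gaussian approximations, which is precisely the assumption the EL-based combination drops.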

    Efficient Empirical Likelihood Inference for recovery rate of COVID-19 under Double-Censoring

    Doubly censored data are very common in epidemiological studies. Ignoring the censoring in the analysis may lead to biased parameter estimates. In this paper, we highlight that the publicly available COVID-19 data may involve a high percentage of double censoring and point out the importance of dealing with such missing information in order to achieve better forecasting results. Existing statistical methods for doubly censored data may suffer from convergence problems of the EM algorithms or may not perform well for small sample sizes. This paper develops a new empirical likelihood method to analyse the recovery rate of COVID-19 based on a doubly censored dataset. The efficient influence function of the parameter of interest is used to define the empirical likelihood (EL) ratio. We prove that $-2\log(\text{EL-ratio})$ asymptotically follows a standard $\chi^2$ distribution. This new method does not require any scale-parameter adjustment of the log-likelihood ratio and thus does not suffer from the convergence problems of traditional EM-type algorithms. Finite-sample simulation results show that this method provides much less biased estimates than existing methods when the censoring percentage is large. Applying the method to the COVID-19 data will help researchers in other fields achieve better estimates and forecasting results.
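
    A hedged sketch of the confidence-set inversion the abstract implies: given influence-function values psi(X_i; theta) with mean zero at the true theta, invert the chi-squared calibration of -2 log(EL-ratio). The efficient influence function under double censoring is not spelled out in the abstract, so psi_of_theta is left as user input; the toy check at the end uses the uncensored case. This reuses neg2_log_el from the MEL sketch above.

```python
import numpy as np
from scipy.stats import chi2

def el_confidence_set(psi_of_theta, grid, level=0.95):
    """Invert the chi^2_1 calibration of -2 log(EL-ratio) over a grid of
    candidate theta values. psi_of_theta(theta) must return the influence
    values psi(X_i; theta); for the paper this would be the efficient
    influence function under double censoring, which the abstract does
    not spell out. Calls neg2_log_el (defined in the MEL sketch above)
    with mu = 0, since the moment constraint is E[psi] = 0 at the truth."""
    cut = chi2.ppf(level, df=1)
    return [th for th in grid if neg2_log_el(psi_of_theta(th), 0.0) <= cut]

# Toy check with no censoring: recovery indicators x in {0, 1} and
# psi(x; theta) = x - theta recover a binomial-proportion interval.
x = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1], dtype=float)
ci = el_confidence_set(lambda th: x - th, np.linspace(0.01, 0.99, 99))
```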