    Robust Modeling Using Non-Elliptically Contoured Multivariate t Distributions

    Models based on multivariate t distributions are widely applied to analyze data with heavy tails. However, all the marginal distributions of a multivariate t distribution are restricted to have the same degrees of freedom, making these models unable to describe different marginal heavy-tailedness. We generalize the traditional multivariate t distributions to non-elliptically contoured multivariate t distributions, allowing for different marginal degrees of freedom. We apply the non-elliptically contoured multivariate t distributions to three widely used models: the Heckman selection model with different degrees of freedom for the selection and outcome equations, the multivariate Robit model with different degrees of freedom for the marginal responses, and the linear mixed-effects model with different degrees of freedom for the random effects and within-subject errors. Using the normal mixture representation of our t distribution, we propose efficient Bayesian inferential procedures for the model parameters via data augmentation and parameter expansion. We show via simulation studies and real examples that the conclusions are sensitive to the presence of different marginal heavy-tailedness.
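    A minimal sketch of this kind of construction, assuming the common normal mixture form in which each coordinate gets its own independent gamma mixing variable (the function name `sample_nec_t` and the exact dependence structure are illustrative assumptions, not the paper's code):

```python
import numpy as np

def sample_nec_t(n, mu, Sigma, nu, rng=None):
    """Sample from a non-elliptically contoured multivariate t in which each
    margin j has its own degrees of freedom nu[j]. Assumed construction:
    X_j = mu_j + Z_j / sqrt(W_j), Z ~ N(0, Sigma),
    W_j ~ Gamma(nu_j / 2, rate = nu_j / 2), independently per coordinate."""
    rng = np.random.default_rng(rng)
    mu = np.asarray(mu, dtype=float)
    nu = np.asarray(nu, dtype=float)
    d = mu.size
    z = rng.multivariate_normal(np.zeros(d), np.asarray(Sigma, dtype=float), size=n)
    # per-coordinate mixing variables; a single shared W would give the usual elliptical t
    w = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=(n, d))
    return mu + z / np.sqrt(w)

# Example: margin 0 heavy-tailed (nu = 3), margin 1 close to Gaussian (nu = 30),
# something a classical multivariate t with a single nu cannot express
x = sample_nec_t(10_000, mu=[0.0, 0.0],
                 Sigma=[[1.0, 0.5], [0.5, 1.0]], nu=[3.0, 30.0])
```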

    A Sparse Graph-Structured Lasso Mixed Model for Genetic Association with Confounding Correction

    While the linear mixed model (LMM) has shown competitive performance in correcting spurious associations raised by population stratification, family structures, and cryptic relatedness, more challenges remain to be addressed regarding the complex structure of genotypic and phenotypic data. For example, geneticists have discovered that some clusters of phenotypes are more co-expressed than others. Hence, a joint analysis that can utilize such relatedness information in a heterogeneous data set is crucial for genetic modeling. We propose the sparse graph-structured linear mixed model (sGLMM), which can incorporate the relatedness information from traits while correcting for confounding. Our method is capable of uncovering the genetic associations of a large number of phenotypes together while considering the relatedness of these phenotypes. Through extensive simulation experiments, we show that the proposed model outperforms other existing approaches and can model correlation from both population structure and shared signals. Further, we validate the effectiveness of sGLMM on real-world genomic datasets from two different species, a plant (Arabidopsis thaliana) and humans. In the Arabidopsis thaliana data, sGLMM performs better than all other baseline models for 63.4% of the traits. We also discuss the potential causal genetic variation of human Alzheimer's disease discovered by our model and justify some of the most important genetic loci. Comment: Code available at https://github.com/YeWenting/sGLM
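    For illustration, one plausible form of such a graph-structured, confounding-corrected objective is sketched below: a squared loss on corrected data plus an L1 sparsity penalty and a graph-guided fusion penalty coupling the coefficients of related traits. The function name, the fusion penalty, and the assumption that X and Y have already been rotated by the mixed-model (kinship) covariance are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sglmm_objective(B, X, Y, edges, lam1, lam2):
    """Illustrative penalized objective for multi-trait association with a
    trait-relatedness graph. B is (n_snps, n_traits); X (n_samples, n_snps)
    and Y (n_samples, n_traits) are assumed already rotated/whitened by the
    mixed-model covariance so confounding is corrected before the sparse fit.
    `edges` lists trait pairs (j, k, r_jk) with their correlation r_jk."""
    loss = 0.5 * np.sum((Y - X @ B) ** 2)              # data fit on corrected data
    sparsity = lam1 * np.sum(np.abs(B))                # lasso penalty for sparse effects
    fusion = lam2 * sum(abs(r) * np.sum(np.abs(B[:, j] - np.sign(r) * B[:, k]))
                        for j, k, r in edges)          # graph-guided fusion of related traits
    return loss + sparsity + fusion
```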

    Virtual noiseless amplification and Gaussian post-selection in continuous-variable quantum key distribution

    Noiseless amplification and attenuation are two heralded filtering operations that respectively increase or decrease the mean field of any quantum state of light with no added noise, at the cost of a small success probability. We show that inserting such noiseless operations in a transmission line improves the performance of continuous-variable quantum key distribution over this line. Remarkably, these noiseless operations do not need to be physically implemented but can simply be simulated in the data post-processing stage. Hence, virtual noiseless amplification or attenuation amounts to performing a Gaussian post-selection, which enhances the secure range or the tolerable excess noise while keeping the benefits of Gaussian security proofs. Comment: 8 pages, 5 figures
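    The "virtual" character of these operations can be illustrated with a small post-processing sketch: a Gaussian acceptance probability applied to each recorded heterodyne outcome emulates amplification (favouring large amplitudes) or attenuation (favouring small ones) with no physical device in the line. The specific weighting and the parameter `lam` below are assumptions for illustration, not the paper's exact filter.

```python
import numpy as np

def gaussian_postselect(x_quad, p_quad, lam, rng=None):
    """Keep each heterodyne outcome (x, p) with a Gaussian acceptance
    probability in the measured amplitude: lam > 0 favours large amplitudes
    (virtual noiseless amplification), lam < 0 favours small ones (virtual
    attenuation). The filter form is an illustrative assumption."""
    rng = np.random.default_rng(rng)
    r2 = np.asarray(x_quad) ** 2 + np.asarray(p_quad) ** 2
    accept = np.exp(lam * r2)
    # rescale so acceptance probabilities stay in [0, 1]; the overall scale only
    # affects the success probability, not the post-selected statistics
    accept = accept / accept.max() if lam > 0 else np.minimum(accept, 1.0)
    return rng.random(r2.shape) < accept  # boolean mask of kept data points
```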

    Continuous Variable Entanglement Distillation of Non-Gaussian Mixed States

    Many different quantum information communication protocols, such as teleportation, dense coding and entanglement-based quantum key distribution, rely on the faithful transmission of entanglement between distant locations in an optical network. The distribution of entanglement in such a network is, however, hampered by the loss and noise inherent in all practical quantum channels. Thus, to enable faithful transmission, one must resort to the protocol of entanglement distillation. In this paper we present a detailed theoretical analysis and an experimental realization of continuous variable entanglement distillation in a channel affected by different kinds of non-Gaussian noise. The continuous variable entangled states are generated by exploiting the third-order non-linearity in optical fibers, and the states are sent through a free-space laboratory channel in which the losses are altered to simulate a free-space atmospheric channel with varying losses. We use linear optical components, homodyne measurements and classical communication to distill the entanglement, and we find that by using this method the entanglement can be probabilistically increased for some specific non-Gaussian noise channels.
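    A toy sketch of the probabilistic, post-selection flavour of such a distillation step is given below: a tap measurement on each transmitted run heralds whether the fluctuating non-Gaussian channel (e.g. intermittent loss) was in a good state, and only heralded runs are kept. The heralding rule, threshold and function name are illustrative assumptions, not the experimental procedure itself.

```python
import numpy as np

def distill_by_heralding(x_alice, x_bob, x_tap, threshold):
    """Keep only the runs in which a small tap measurement indicates that the
    fluctuating (non-Gaussian) channel was in a low-noise state; the witness
    |x_tap| < threshold is an illustrative assumption."""
    x_alice, x_bob, x_tap = map(np.asarray, (x_alice, x_bob, x_tap))
    keep = np.abs(x_tap) < threshold          # herald on quiet tap outcomes
    return x_alice[keep], x_bob[keep]         # post-selected data for estimating entanglement
```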

    Fixed effects selection in the linear mixed-effects model using adaptive ridge procedure for L0 penalty performance

    This paper is concerned with the selection of fixed effects along with the estimation of fixed effects, random effects and variance components in the linear mixed-effects model. We introduce a selection procedure based on an adaptive ridge (AR) penalty of the profiled likelihood, where the covariance matrix of the random effects is Cholesky factorized. The procedure is intended for both low- and high-dimensional settings, where the number of fixed effects is allowed to grow exponentially with the total sample size, yielding technical difficulties due to the non-convex optimization problem induced by L0 penalties. Through extensive simulation studies, the procedure is compared to LASSO selection and appears to enjoy model selection consistency as well as estimation consistency.
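    The adaptive ridge idea can be sketched on a plain linear model: a weighted ridge problem is solved repeatedly, with each coefficient's weight reset to 1/(beta_j^2 + delta^2) so that the penalty approaches an L0 count of nonzero effects. Applying this to the profiled mixed-model likelihood, as in the paper, is more involved; the function below, including its final selection rule, is only an illustration of the iteration.

```python
import numpy as np

def adaptive_ridge(X, y, lam, delta=1e-5, n_iter=100):
    """Adaptive ridge iteration approximating an L0 penalty on a plain linear
    model (the paper applies the same idea to the profiled likelihood of a
    linear mixed model with a Cholesky-factorized random-effects covariance)."""
    p = X.shape[1]
    w = np.ones(p)                                    # ridge weights, updated each pass
    for _ in range(n_iter):
        # weighted ridge solve: (X'X + lam * diag(w)) beta = X'y
        beta = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)
        w = 1.0 / (beta ** 2 + delta ** 2)            # reweighting that mimics an L0 penalty
    return np.where(beta ** 2 > delta, beta, 0.0)     # crude final selection rule (assumption)
```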

    Regularization for Generalized Additive Mixed Models by Likelihood-Based Boosting

    With the emergence of semi- and nonparametric regression, the generalized linear mixed model has been extended to account for additive predictors. In the present paper, an approach to variable selection is proposed that works for generalized additive mixed models. In contrast to common procedures, it can be used in high-dimensional settings where many covariates are available and the form of their influence is unknown. It is constructed as a componentwise boosting method and hence is able to perform variable selection. The complexity of the resulting estimator is determined by information criteria. The method is investigated in simulation studies for binary and Poisson responses and is illustrated using real data sets.
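    The componentwise selection mechanism can be sketched as follows: at every boosting step each covariate is fitted to the current negative gradient, and only the single best-fitting covariate receives a small update, so covariates that are never selected keep a zero coefficient. Likelihood-based boosting as used in the paper works with Fisher scoring steps, smooth base learners and information criteria for stopping; the sketch below, with its hypothetical function names, only conveys the componentwise idea.

```python
import numpy as np

def componentwise_boost(X, y, grad_fn, n_steps=200, step=0.1):
    """Componentwise boosting sketch: each covariate is fitted by least squares
    to the current negative gradient and only the best one is updated, so
    unused covariates keep a zero coefficient (implicit variable selection)."""
    p = X.shape[1]
    beta = np.zeros(p)
    for _ in range(n_steps):
        u = -grad_fn(X @ beta, y)                                  # negative gradient (working response)
        coef = np.array([X[:, j] @ u / (X[:, j] @ X[:, j]) for j in range(p)])
        sse = np.array([np.sum((u - coef[j] * X[:, j]) ** 2) for j in range(p)])
        j_best = int(np.argmin(sse))
        beta[j_best] += step * coef[j_best]                        # small update for the best component
    return beta

# Gradient of the negative Bernoulli log-likelihood w.r.t. the linear predictor
# (example loss for a binary response; an assumption, not the paper's exact setup)
logistic_grad = lambda eta, y: 1.0 / (1.0 + np.exp(-eta)) - y
```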