161 research outputs found

    Accelerated Parallel Non-conjugate Sampling for Bayesian Non-parametric Models

    Inference of latent feature models in the Bayesian nonparametric setting is generally difficult, especially in high-dimensional settings, because it usually requires proposing features from some prior distribution. In special cases where the integration is tractable, we can sample new feature assignments according to a predictive likelihood. However, even this may not be efficient in high dimensions. We present a novel method to accelerate the mixing of latent variable model inference by proposing feature locations from the data rather than from the prior. First, we introduce our accelerated feature proposal mechanism and show that it yields a valid Bayesian inference algorithm; we then propose an approximate inference strategy to perform accelerated inference in parallel. The resulting sampler mixes the Markov chain Monte Carlo chain efficiently, is computationally attractive, and is theoretically guaranteed to converge to the posterior distribution as its limiting distribution.
    Comment: previously known as "Accelerated Inference for Latent Variable Models"
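The core idea, proposing feature locations from the data rather than from the prior, can be illustrated with a toy one-dimensional independence Metropolis-Hastings sampler. Everything below (the Gaussian model, the mixture proposal centred on the observed data points) is an illustrative simplification, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_lik(data, f):
    """Toy Gaussian likelihood of the data given a single feature location f."""
    return -0.5 * np.sum((data - f) ** 2)

def log_prior(f, scale=10.0):
    """Broad Gaussian prior over feature locations."""
    return -0.5 * f ** 2 / scale ** 2

def log_q(f, data, s):
    # Log density of the data-driven proposal: an equal-weight Gaussian
    # mixture centred at the observed data points (computed stably).
    logs = -0.5 * (data - f) ** 2 / s ** 2 - np.log(s * np.sqrt(2 * np.pi))
    m = logs.max()
    return m + np.log(np.mean(np.exp(logs - m)))

def mh_step(data, current, s=0.3):
    # Propose near a random data point instead of from the broad prior;
    # the proposal is state-independent, so the Metropolis-Hastings ratio
    # includes q(current) / q(candidate).
    candidate = data[rng.integers(len(data))] + s * rng.normal()
    log_accept = (log_lik(data, candidate) + log_prior(candidate) + log_q(current, data, s)
                  - log_lik(data, current) - log_prior(current) - log_q(candidate, data, s))
    return candidate if np.log(rng.random()) < log_accept else current

data = rng.normal(2.0, 1.0, size=50)
chain = [10.0]  # deliberately start far from the data
for _ in range(500):
    chain.append(mh_step(data, chain[-1]))
```

Because candidates come from the neighbourhood of the data, the chain jumps into the high-likelihood region on the first accepted move instead of waiting for a lucky draw from the broad prior.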

    Simulating broken $\mathcal{PT}$-symmetric Hamiltonian systems by weak measurement

    By embedding a $\mathcal{PT}$-symmetric (pseudo-Hermitian) system into a larger Hermitian one, we disclose the relations between $\mathcal{PT}$-symmetric Hamiltonians and weak measurement theory. We show that the amplification effect of weak measurement on a conventional quantum system can be used to effectively simulate a local broken $\mathcal{PT}$-symmetric Hamiltonian system, with the pre-selected state in the $\mathcal{PT}$-symmetric Hamiltonian system and its post-selected state resident in the dilated Hamiltonian system.
    Comment: 4 pages; with Supplemental Material
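The amplification effect referred to here is weak-value amplification: for a pre-selected state |pre⟩ and post-selected state |post⟩, the weak value A_w = ⟨post|A|pre⟩ / ⟨post|pre⟩ can lie far outside the eigenvalue spectrum of A when the two states are nearly orthogonal. A minimal numerical sketch (the states below are illustrative choices, not the paper's construction):

```python
import numpy as np

def weak_value(pre, post, A):
    """Weak value A_w = <post|A|pre> / <post|pre> for normalized states."""
    pre = pre / np.linalg.norm(pre)
    post = post / np.linalg.norm(post)
    return (post.conj() @ A @ pre) / (post.conj() @ pre)

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

pre = np.array([np.cos(0.1), np.sin(0.1)], dtype=complex)      # close to |0>
post = np.array([np.sin(0.05), -np.cos(0.05)], dtype=complex)  # nearly orthogonal to pre

w = weak_value(pre, post, sigma_z)
# |A_w| exceeds the eigenvalue range [-1, 1] of sigma_z: the amplification effect.
```

Shrinking the overlap ⟨post|pre⟩ further makes |A_w| grow without bound, which is what lets a conventional Hermitian system mimic the non-Hermitian dynamics of a broken $\mathcal{PT}$-symmetric Hamiltonian.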

    A Semi-Bayesian Nonparametric Estimator of the Maximum Mean Discrepancy Measure: Applications in Goodness-of-Fit Testing and Generative Adversarial Networks

    A classic inferential statistical problem is the goodness-of-fit (GOF) test. Such a test can be challenging when the hypothesized parametric model has an intractable likelihood and its distributional form is not available. Bayesian methods for GOF are appealing because they can incorporate expert knowledge through prior distributions. However, standard Bayesian methods for this test often require strong distributional assumptions on the data and their relevant parameters. To address this issue, we propose a semi-Bayesian nonparametric (semi-BNP) procedure based on the maximum mean discrepancy (MMD) measure that can be applied to the GOF test. Our method introduces a novel Bayesian estimator of the MMD, enabling a measure-based hypothesis test for intractable models. Through extensive experiments, we demonstrate that the proposed test outperforms frequentist MMD-based methods by achieving lower false rejection and false acceptance rates for the null hypothesis. Furthermore, we showcase the versatility of our approach by embedding the proposed estimator within a generative adversarial network (GAN) framework, yielding a robust BNP learning approach as another significant application of our method. With our BNP procedure, this new GAN approach can enhance sample diversity and improve inferential accuracy compared to traditional techniques.
    Comment: typos corrected; secondary (simulation and theoretical) results added; additional discussion added; references added
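The MMD at the heart of this approach compares two samples through kernel mean embeddings. For orientation, here is a minimal sketch of the standard frequentist unbiased MMD² estimator with an RBF kernel; the paper's contribution is a Bayesian estimator of this quantity, and the function names below are illustrative:

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    """Gaussian RBF kernel matrix between two sample sets."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2_unbiased(x, y, bandwidth=1.0):
    """Unbiased estimate of squared MMD between samples x ~ P and y ~ Q."""
    m, n = len(x), len(y)
    kxx = rbf_kernel(x, x, bandwidth)
    kyy = rbf_kernel(y, y, bandwidth)
    kxy = rbf_kernel(x, y, bandwidth)
    # Drop diagonal terms so the within-sample averages are unbiased.
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * kxy.mean()

rng = np.random.default_rng(0)
same = mmd2_unbiased(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2_unbiased(rng.normal(size=(200, 2)), rng.normal(3.0, 1.0, size=(200, 2)))
# The shifted sample pair yields a much larger MMD^2 estimate.
```

A permutation test on this statistic gives the frequentist MMD-based GOF baseline that the abstract compares against.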

    CIEM: Contrastive Instruction Evaluation Method for Better Instruction Tuning

    Research on Large Vision-Language Models (LVLMs) has advanced rapidly thanks to the success of Large Language Models (LLMs). Nevertheless, these Vision-Language Models (VLMs) suffer from hallucination: owing to an insufficient understanding of the vision and language modalities, VLMs may generate incorrect perceptual information in downstream applications, for example, captioning a non-existent entity. To address the hallucination phenomenon, on the one hand we introduce the Contrastive Instruction Evaluation Method (CIEM), an automatic pipeline that leverages an annotated image-text dataset coupled with an LLM to generate factual/contrastive question-answer pairs for evaluating the hallucination of VLMs. On the other hand, based on CIEM, we further propose a new instruction tuning method called CIT (short for Contrastive Instruction Tuning) to alleviate the hallucination of VLMs by automatically producing high-quality factual/contrastive question-answer pairs and corresponding justifications for model tuning. Through extensive experiments on CIEM and CIT, we pinpoint the hallucination issues commonly present in existing VLMs, the inability of current instruction-tuning datasets to handle the hallucination phenomenon, and the superiority of CIT-tuned VLMs on both CIEM and public datasets.
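A rough sketch of the kind of factual/contrastive pair construction CIEM automates: from the objects annotated in an image, emit a factual question (answer "yes") and a contrastive question about an absent object (answer "no"). In the actual pipeline an LLM generates the pairs from the image-text annotations; `make_qa_pairs` and its inputs are hypothetical names for illustration only:

```python
import random

def make_qa_pairs(annotated_objects, distractor_pool, rng=random.Random(0)):
    """Build (question, answer) pairs from image annotations.

    Factual pairs ask about annotated objects; contrastive pairs ask about
    objects drawn from a distractor pool that are absent from the image.
    """
    pairs = []
    for obj in annotated_objects:
        pairs.append((f"Is there a {obj} in the image?", "yes"))  # factual
    absent = [d for d in distractor_pool if d not in annotated_objects]
    for obj in rng.sample(absent, k=min(len(absent), len(annotated_objects))):
        pairs.append((f"Is there a {obj} in the image?", "no"))   # contrastive
    return pairs

pairs = make_qa_pairs(["dog", "frisbee"], ["cat", "dog", "car", "frisbee"])
```

A model that hallucinates will answer "yes" to the contrastive questions, which is exactly the failure mode such a benchmark measures.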