8 research outputs found

    BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling

    With the introduction of the variational autoencoder (VAE), probabilistic latent variable models have received renewed attention as powerful generative models. However, their performance in terms of test likelihood and quality of generated samples has been surpassed by autoregressive models without stochastic units. Furthermore, flow-based models have recently been shown to be an attractive alternative that scales well to high-dimensional data. In this paper we close the performance gap by constructing VAE models that can effectively utilize a deep hierarchy of stochastic variables and model complex covariance structures. We introduce the Bidirectional-Inference Variational Autoencoder (BIVA), characterized by a skip-connected generative model and an inference network formed by a bidirectional stochastic inference path. We show that BIVA reaches state-of-the-art test likelihoods, generates sharp and coherent natural images, and uses the hierarchy of latent variables to capture different aspects of the data distribution. We observe that BIVA, in contrast to recent results, can be used for anomaly detection. We attribute this to the hierarchy of latent variables, which is able to extract high-level semantic features. Finally, we extend BIVA to semi-supervised classification tasks and show that it performs comparably to state-of-the-art results obtained by generative adversarial networks.
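    The objective behind such hierarchical VAEs is an ELBO with one KL term per stochastic layer. As a minimal sketch (not BIVA's actual architecture, which adds skip connections and a bidirectional inference path), a two-layer hierarchy x ← z1 ← z2 contributes a reconstruction term and two diagonal-Gaussian KL terms:

    ```python
    import numpy as np

    def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
        # KL(N(mu_q, var_q) || N(mu_p, var_p)) for diagonal Gaussians, summed over dims
        return 0.5 * np.sum(
            logvar_p - logvar_q
            + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
            - 1.0
        )

    def two_layer_elbo(log_px_given_z1, q1, p1, q2):
        """ELBO for a 2-layer hierarchy x <- z1 <- z2 with a standard-normal
        prior on the top layer z2. Each argument q*/p* is a (mean, log-variance)
        pair; p1 is the conditional prior p(z1|z2) predicted from the layer above."""
        mu_q1, lv_q1 = q1
        mu_p1, lv_p1 = p1
        mu_q2, lv_q2 = q2
        kl1 = gaussian_kl(mu_q1, lv_q1, mu_p1, lv_p1)
        kl2 = gaussian_kl(mu_q2, lv_q2, np.zeros_like(mu_q2), np.zeros_like(lv_q2))
        return log_px_given_z1 - kl1 - kl2
    ```

    When the posterior of a layer matches its prior, that layer's KL term vanishes and the bound reduces to the reconstruction term alone; a deep hierarchy simply adds one such KL term per extra stochastic layer.
    
    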

    Optimal Variance Control of the Score Function Gradient Estimator for Importance Weighted Bounds

    This paper introduces novel results for the score function gradient estimator of the importance weighted variational bound (IWAE). We prove that in the limit of large K (the number of importance samples) one can choose the control variate such that the signal-to-noise ratio (SNR) of the estimator grows as √K. This is in contrast to the standard pathwise gradient estimator, where the SNR decreases as 1/√K. Based on our theoretical findings we develop a novel control variate that extends on VIMCO. Empirically, for the training of both continuous and discrete generative models, the proposed method yields superior variance reduction, resulting in an SNR for IWAE that increases with K without relying on the reparameterization trick. The novel estimator is competitive with state-of-the-art reparameterization-free gradient estimators such as Reweighted Wake-Sleep (RWS) and the thermodynamic variational objective (TVO) when training generative models.
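    To make the objective concrete: the IWAE bound averages K importance weights inside a log, and a VIMCO-style control variate builds one leave-one-out baseline per sample by replacing that sample's log-weight with a surrogate computed from the others. The sketch below is illustrative (not the paper's extended estimator) and uses a simple mean of the remaining log-weights as the surrogate:

    ```python
    import numpy as np

    def iwae_bound(log_w):
        # log (1/K) sum_k exp(log_w_k), computed stably via log-sum-exp
        K = log_w.shape[-1]
        m = log_w.max(axis=-1, keepdims=True)
        lse = m + np.log(np.exp(log_w - m).sum(axis=-1, keepdims=True))
        return lse.squeeze(-1) - np.log(K)

    def vimco_baselines(log_w):
        """Leave-one-out control variates for a 1-D vector of K log-weights:
        for each k, replace log_w_k by the mean of the other K-1 log-weights
        and recompute the bound (VIMCO-style surrogate)."""
        K = log_w.shape[-1]
        baselines = np.empty(K)
        for k in range(K):
            others = np.delete(log_w, k)
            lw = log_w.copy()
            lw[k] = others.mean()  # surrogate for the held-out log-weight
            baselines[k] = iwae_bound(lw)
        return baselines
    ```

    The score-function gradient then weights each sample's score by the bound minus its baseline, which removes most of the shared noise across the K samples.
    
    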

    Can large language models reason about medical questions?

    Although large language models often produce impressive outputs, it remains unclear how they perform in real-world scenarios requiring strong reasoning skills and expert domain knowledge. We set out to investigate whether closed- and open-source models (GPT-3.5, Llama 2, etc.) can be applied to answer and reason about difficult real-world-based questions. We focus on three popular medical benchmarks (MedQA-US Medical Licensing Examination [USMLE], MedMCQA, and PubMedQA) and multiple prompting scenarios: chain of thought (CoT; think step by step), few shot, and retrieval augmentation. Based on an expert annotation of the generated CoTs, we found that InstructGPT can often read, reason, and recall expert knowledge. Finally, by leveraging advances in prompt engineering (few-shot and ensemble methods), we demonstrated that GPT-3.5 not only yields calibrated predictive distributions but also reaches the passing score on three datasets: MedQA-USMLE (60.2%), MedMCQA (62.7%), and PubMedQA (78.2%). Open-source models are closing the gap: Llama 2 70B also passed the MedQA-USMLE with 62.5% accuracy.
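    A few-shot chain-of-thought prompt of the kind evaluated here can be assembled by concatenating worked exemplars (question, reasoning chain, answer) before the target question. The field names and trigger phrase below are illustrative assumptions, not the paper's exact templates:

    ```python
    def build_cot_prompt(examples, question):
        """Assemble a few-shot CoT prompt: each exemplar shows a question,
        a worked reasoning chain, and a final answer; the target question
        ends with the reasoning trigger so the model continues from there."""
        parts = []
        for ex in examples:
            parts.append(
                f"Question: {ex['question']}\n"
                f"Answer: Let's think step by step. {ex['cot']}\n"
                f"Therefore, the answer is {ex['answer']}."
            )
        parts.append(f"Question: {question}\nAnswer: Let's think step by step.")
        return "\n\n".join(parts)
    ```

    Ensembling, as used in the paper, would sample several completions of such a prompt and take a majority vote over the extracted answers.
    
    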

    Image Super-Resolution with Deep Variational Autoencoders

    Image super-resolution (SR) techniques are used to generate a high-resolution image from a low-resolution image. Until now, deep generative models such as autoregressive models and Generative Adversarial Networks (GANs) have proven to be effective at modelling high-resolution images. Models based on Variational Autoencoders (VAEs) have often been criticized for their feeble generative performance, but with new advancements such as VDVAE (very deep VAE), there is now strong evidence that deep VAEs have the potential to outperform current state-of-the-art models for high-resolution image generation. In this paper, we introduce VDVAE-SR, a new model that aims to exploit the most recent deep VAE methodologies to improve upon image super-resolution using transfer learning on pretrained VDVAEs. Through qualitative and quantitative evaluations, we show that the proposed model is competitive with other state-of-the-art methods.
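    A standard quantitative SR evaluation compares the reconstruction against the ground-truth high-resolution image via peak signal-to-noise ratio (PSNR); the abstract does not name its exact metrics, so this is a generic sketch rather than the paper's evaluation code:

    ```python
    import numpy as np

    def psnr(reference, reconstruction, peak=1.0):
        """Peak signal-to-noise ratio in dB between a ground-truth image and
        a super-resolved reconstruction, both with pixel values in [0, peak]."""
        mse = np.mean((reference - reconstruction) ** 2)
        if mse == 0:
            return float("inf")  # identical images
        return 10.0 * np.log10(peak ** 2 / mse)
    ```
    
    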

    ThoughtSource: A central hub for large language model reasoning data

    Large language models (LLMs) such as GPT-4 have recently demonstrated impressive results across a wide range of tasks. LLMs are still limited, however, in that they frequently fail at complex reasoning, their reasoning processes are opaque, they are prone to 'hallucinate' facts, and there are concerns about their underlying biases. Letting models verbalize reasoning steps as natural language, a technique known as chain-of-thought prompting, has recently been proposed as a way to address some of these issues. Here we present ThoughtSource, a meta-dataset and software library for chain-of-thought (CoT) reasoning. The goal of ThoughtSource is to improve future artificial intelligence systems by facilitating qualitative understanding of CoTs, enabling empirical evaluations, and providing training data. This first release of ThoughtSource integrates seven scientific/medical, three general-domain and five math word question answering datasets.
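    One typical use of such a meta-dataset is flattening each CoT record into (prompt, target) pairs for fine-tuning or evaluation. The field names below are hypothetical illustrations, not ThoughtSource's actual schema:

    ```python
    def to_training_pair(record):
        """Flatten one CoT record into a (prompt, target) pair: the prompt ends
        with a reasoning trigger, and the target is the verbalized chain of
        thought followed by the final answer."""
        prompt = f"{record['question']}\nLet's think step by step."
        target = " ".join(record["cot_steps"]) + f" The answer is {record['answer']}."
        return prompt, target
    ```

    Keeping the reasoning steps as a separate list, rather than one opaque string, is what makes the qualitative analyses and empirical evaluations mentioned above possible.
    
    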

    FindZebra online search delving into rare disease case reports using natural language processing.

    Early diagnosis is crucial for the well-being and quality of life of the rare disease patient. Access to the most complete knowledge about diseases through intelligent user interfaces can play an important role in supporting the physician in reaching the correct diagnosis. Case reports may offer information about heterogeneous phenotypes, which often further complicate rare disease diagnosis. The rare disease search engine FindZebra.com is extended to also access case report abstracts extracted from PubMed for several diseases. A search index for each disease is built in Apache Solr, adding age, sex and clinical features extracted using text segmentation to enhance the specificity of search. Clinical experts performed retrospective validation of the search engine, utilising real-world Outcomes Survey data on Gaucher and Fabry patients. Medical experts evaluated the search results as being clinically relevant for the Fabry patients and less clinically relevant for the Gaucher patients. The shortcomings for Gaucher patients mainly reflect a mismatch between the current understanding and treatment of the disease and how it is reported in PubMed, notably in the older case reports. In response to this observation, a filter for the publication date was added in the final version of the tool, available from deep.findzebra.com/<disease> with <disease> = gaucher, fabry, hae (Hereditary angioedema).
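    Indexing a case report with the extra demographic and clinical fields can be sketched as building one Solr-style JSON update document per abstract; the field names here are illustrative assumptions, not FindZebra's actual schema:

    ```python
    import json

    def case_report_doc(pmid, title, abstract, age=None, sex=None, features=()):
        """Build one search-index document per case report. Age, sex and
        extracted clinical features become extra fields so queries can be
        narrowed beyond plain full-text matching."""
        doc = {
            "id": pmid,
            "title": title,
            "abstract": abstract,
            "clinical_features": list(features),
        }
        if age is not None:
            doc["age"] = age
        if sex is not None:
            doc["sex"] = sex
        # Solr's JSON update format accepts a list of documents per request
        return json.dumps([doc])
    ```

    The publication-date filter described above would simply add a date field to each document and a range clause to the query.
    
    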