74 research outputs found

    Normalizing Flow with Variational Latent Representation

    Normalizing flow (NF) has gained popularity over traditional maximum likelihood based methods due to its strong capability to model complex data distributions. However, the standard approach, which maps the observed data to a normal distribution, has difficulty handling data distributions with multiple relatively isolated modes. To overcome this issue, we propose a new framework based on variational latent representation to improve the practical performance of NF. The idea is to replace the standard normal latent variable with a more general latent representation, jointly learned via Variational Bayes. For example, by taking the latent representation to be a discrete sequence, our framework can learn a Transformer model that generates the latent sequence and an NF model that generates the continuous data distribution conditioned on that sequence. The resulting method is significantly more powerful than the standard normalizing flow approach for generating data distributions with multiple modes. Extensive experiments have shown the advantages of NF with variational latent representation. Comment: 24 pages, 7 figures
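
    The core construction can be sketched in a few lines of PyTorch: a flow layer whose scale and shift are conditioned on a latent code, with the code itself drawn from a learned (e.g., Transformer) prior. This is a minimal illustration under our own naming; ConditionalAffineFlow and the toy shapes below are hypothetical stand-ins, not the paper's code.

        import torch
        import torch.nn as nn

        class ConditionalAffineFlow(nn.Module):
            """One affine flow layer whose scale/shift depend on a latent code."""
            def __init__(self, dim, code_dim):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(code_dim, 64), nn.ReLU(),
                                         nn.Linear(64, 2 * dim))

            def forward(self, x, code):
                scale, shift = self.net(code).chunk(2, dim=-1)
                z = x * torch.exp(scale) + shift      # invertible affine map
                return z, scale.sum(-1)               # z and log |det Jacobian|

        # A Transformer (or any autoregressive model) would supply p(code);
        # the flow models p(x | code).  Here `code` is a random stand-in for
        # an embedded latent sequence.
        flow = ConditionalAffineFlow(dim=2, code_dim=16)
        x, code = torch.randn(8, 2), torch.randn(8, 16)
        z, log_det = flow(x, code)
        log_px = torch.distributions.Normal(0., 1.).log_prob(z).sum(-1) + log_det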

    Particle-based Variational Inference with Preconditioned Functional Gradient Flow

    Particle-based variational inference (VI) minimizes the KL divergence between model samples and the target posterior using gradient flow estimates. With the popularity of Stein variational gradient descent (SVGD), particle-based VI algorithms have focused on functions in a Reproducing Kernel Hilbert Space (RKHS) to approximate the gradient flow. However, the RKHS requirement restricts the function class and algorithmic flexibility. This paper remedies the problem by proposing a general framework for obtaining tractable functional gradient flow estimates. The functional gradient flow in our framework can be defined by a general functional regularization term that includes the RKHS norm as a special case. Using this framework, we propose a new particle-based VI algorithm: preconditioned functional gradient flow (PFG). Compared with SVGD, the proposed method has several advantages: a larger function class; greater scalability with large particle sizes; better adaptation to ill-conditioned distributions; and provable continuous-time convergence in KL divergence. Non-linear function classes such as neural networks can be incorporated to estimate the gradient flow. Both theory and experiments demonstrate the effectiveness of our framework. Comment: 34 pages, 8 figures
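
    To make the idea concrete, here is a minimal, self-contained PyTorch sketch of a functional-gradient-flow particle update on a toy Gaussian target. The plain L2 penalty below stands in for the paper's general functional regularizer (SVGD's RKHS norm is one special case), and all names are our own, not the authors' code.

        import torch
        import torch.nn as nn

        def log_prob(x):                       # toy target: a standard 2-D Gaussian
            return -0.5 * (x ** 2).sum(-1)

        def divergence(f, x):
            # Exact divergence sum_i d f_i / d x_i (fine in low dimension).
            out = f(x)
            div = torch.zeros(x.shape[0])
            for i in range(x.shape[-1]):
                div = div + torch.autograd.grad(out[:, i].sum(), x,
                                                create_graph=True)[0][:, i]
            return div

        f = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
        opt = torch.optim.Adam(f.parameters(), lr=1e-3)
        particles = torch.randn(256, 2) + 4.0  # start far from the target mode

        for step in range(500):
            x = particles.clone().requires_grad_(True)
            score = torch.autograd.grad(log_prob(x).sum(), x)[0].detach()
            x = particles.clone().requires_grad_(True)
            v = f(x)
            # Maximize E[v . score + div v] - 0.5 E||v||^2, whose maximizer is
            # the gradient flow direction grad log p - grad log q.
            obj = (v * score).sum(-1) + divergence(f, x) - 0.5 * (v ** 2).sum(-1)
            loss = -obj.mean()
            opt.zero_grad(); loss.backward(); opt.step()
            with torch.no_grad():              # move particles along the flow
                particles = particles + 0.05 * f(particles)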

    Disentangled Generative Causal Representation Learning

    This paper proposes a Disentangled gEnerative cAusal Representation (DEAR) learning method. Unlike existing disentanglement methods that enforce independence of the latent variables, we consider the general case where the underlying factors of interest can be causally correlated. We show that previous methods with independent priors fail to disentangle causally correlated factors. Motivated by this finding, we propose a new disentangled learning method, DEAR, that enables causal controllable generation and causal representation learning. The key ingredient of this new formulation is a structural causal model (SCM) used as the prior for a bidirectional generative model. The prior is then trained jointly with a generator and an encoder using a suitable GAN loss that incorporates supervision. We provide theoretical justification for the identifiability and asymptotic consistency of the proposed method, which guarantees disentangled causal representation learning under appropriate conditions. We conduct extensive experiments on both synthetic and real data sets to demonstrate the effectiveness of DEAR in causal controllable generation, and the benefits of the learned representations for downstream tasks in terms of sample efficiency and distributional robustness.
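
    As a toy illustration of an SCM prior, the PyTorch sketch below uses a linear SCM with a strictly lower-triangular (hence acyclic) adjacency matrix, so independent exogenous noise is transformed into causally correlated latent factors before being passed to a generator. The class and layer sizes are hypothetical simplifications, not DEAR's actual architecture.

        import torch
        import torch.nn as nn

        class LinearSCMPrior(nn.Module):
            # Latent z solves z = z A + eps for strictly lower-triangular A
            # (acyclic), so independent exogenous noise becomes causally
            # correlated latent factors.
            def __init__(self, k):
                super().__init__()
                self.A = nn.Parameter(torch.zeros(k, k))  # learned adjacency

            def forward(self, n):
                k = self.A.shape[0]
                A = torch.tril(self.A, diagonal=-1)  # mask keeps the SCM acyclic
                eps = torch.randn(n, k)              # independent exogenous noise
                return eps @ torch.inverse(torch.eye(k) - A)

        prior = LinearSCMPrior(k=4)
        z = prior(16)                                # causally structured codes
        generator = nn.Sequential(nn.Linear(4, 128), nn.ReLU(),
                                  nn.Linear(128, 784))
        x_fake = generator(z)  # in DEAR this would feed a GAN loss, trained
                               # jointly with an encoder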

    RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment

    Generative foundation models are susceptible to implicit biases that can arise from extensive unsupervised training data. Such biases can produce suboptimal samples, skewed outcomes, and unfairness, with potentially significant repercussions. Consequently, aligning these models with human ethics and preferences is an essential step toward ensuring their responsible and effective deployment in real-world applications. Prior research has primarily employed Reinforcement Learning from Human Feedback (RLHF) to address this problem, wherein generative models are fine-tuned using RL algorithms guided by a human-feedback-informed reward model. However, the inefficiencies and instabilities associated with RL algorithms frequently present substantial obstacles to successful alignment, necessitating a more robust and streamlined approach. To this end, we introduce a new framework, Reward rAnked FineTuning (RAFT), designed to align generative models more effectively. Using a reward model and a sufficient number of samples, our approach selects high-quality samples, discards those that exhibit undesired behavior, and assembles a streaming dataset. This dataset serves as the basis for aligning the generative model and can be employed in both offline and online settings. Notably, the sample generation process within RAFT is gradient-free, making it compatible with black-box generators. Through extensive experiments, we demonstrate that the proposed algorithm performs strongly on both large language models and diffusion models.
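
    The sample-rank-finetune loop can be written down in a few lines. In the sketch below, model.generate, model.finetune, and reward_model are hypothetical placeholders for a black-box sampler, a standard supervised fine-tuning step, and a scalar reward model; the loop structure follows the abstract's description, not the authors' code.

        def raft_iteration(model, prompts, reward_model, k=8, keep=1):
            """One RAFT-style iteration: sample k candidates per prompt,
            keep the top-ranked ones, and fine-tune on them."""
            batch = []
            for p in prompts:
                samples = [model.generate(p) for _ in range(k)]  # gradient-free
                best = sorted(samples, key=reward_model, reverse=True)[:keep]
                batch.extend((p, s) for s in best)   # keep high-reward samples
            model.finetune(batch)  # ordinary supervised fine-tuning on the subset
            return model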

    25-Hydroxyvitamin D Levels and the Risk of Dementia and Alzheimer's Disease: A Dose–Response Meta-Analysis

    Background and Purpose: Conclusions of previous cohort studies on the relationship between 25-hydroxyvitamin D level and the risk of dementia and Alzheimer's disease have been inconsistent. We therefore performed a dose–response meta-analysis of cohort studies to evaluate this relationship.
    Methods: The PubMed, Embase, Cochrane, and Web of Science databases were searched for relevant studies. Cohort studies on the association between 25-hydroxyvitamin D level and dementia or Alzheimer's disease were included. Study results were pooled, and the dose–response relationship was determined using a random-effects model.
    Results: Ten cohort studies with 28,640 participants were included. A significant inverse relationship was found between 25-hydroxyvitamin D level and the risk of dementia and Alzheimer's disease. In addition, we found a linear dose–response relationship: a 10 nmol/L increase in 25-hydroxyvitamin D level may lead to a 5% decrease in the risk of dementia (relative risk, 0.95; 95% confidence interval, 0.93–0.98) and a 7% decrease in the risk of Alzheimer's disease (relative risk, 0.93; 95% confidence interval, 0.89–0.97).
    Conclusion: Plasma or serum 25-hydroxyvitamin D concentration was inversely related to the risk of dementia and Alzheimer's disease, consistent with a linear dose–response relationship.
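
    For concreteness, the reported per-10-nmol/L relative risks can be compounded under the stated dose–response model (linear on the log-risk scale). The 30 nmol/L numbers below are our extrapolation from the abstract's figures, not results reported by the study.

        def rr_for_increase(rr_per10, delta_nmol_l):
            """Compound a per-10-nmol/L relative risk over a larger increase,
            assuming the log-linear dose-response holds across the range."""
            return rr_per10 ** (delta_nmol_l / 10)

        print(rr_for_increase(0.95, 30))  # dementia: 0.95**3 ~= 0.857 (~14% lower)
        print(rr_for_increase(0.93, 30))  # Alzheimer's: 0.93**3 ~= 0.804 (~20% lower)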