
    Cooperative Distribution Alignment via JSD Upper Bound

    Unsupervised distribution alignment estimates a transformation that maps two or more source distributions to a shared aligned distribution given only samples from each distribution. This task has many applications including generative modeling, unsupervised domain adaptation, and socially aware learning. Most prior works use adversarial learning (i.e., min-max optimization), which can be challenging to optimize and evaluate. A few recent works explore non-adversarial flow-based (i.e., invertible) approaches, but they lack a unified perspective and are limited in efficiently aligning multiple distributions. Therefore, we propose to unify and generalize previous flow-based approaches under a single non-adversarial framework, which we prove is equivalent to minimizing an upper bound on the Jensen-Shannon Divergence (JSD). Importantly, our problem reduces to a min-min, i.e., cooperative, problem and can provide a natural evaluation metric for unsupervised distribution alignment. We show empirical results on both simulated and real-world datasets to demonstrate the benefits of our approach. Code is available at https://github.com/inouye-lab/alignment-upper-bound. Comment: Accepted for publication in Advances in Neural Information Processing Systems 36 (NeurIPS 2022).
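    The quantity this framework upper-bounds, the Jensen-Shannon Divergence, is easy to illustrate directly on discrete distributions. The following standalone NumPy sketch (function names are mine, not from the paper's repository) computes it as the average KL divergence of each distribution to their mixture:

    ```python
    import numpy as np

    def kl(p, q):
        """KL divergence KL(p || q) between discrete distributions (0*log0 := 0)."""
        p, q = np.asarray(p, float), np.asarray(q, float)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    def jsd(p, q):
        """Jensen-Shannon Divergence: symmetric and bounded above by log 2."""
        m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    # two half-overlapping distributions: JSD strictly between 0 and log 2
    print(jsd([0.5, 0.5, 0.0], [0.0, 0.5, 0.5]))
    ```

    The paper's contribution is that the min-min (cooperative) flow-based objective provably upper-bounds this divergence, so driving the objective down drives the JSD between the aligned distributions down as well.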

    Towards Practical Non-Adversarial Distribution Alignment via Variational Bounds

    Distribution alignment can be used to learn invariant representations with applications in fairness and robustness. Most prior works resort to adversarial alignment methods, but the resulting minimax problems are unstable and challenging to optimize. Non-adversarial likelihood-based approaches either require model invertibility, impose constraints on the latent prior, or lack a generic framework for alignment. To overcome these limitations, we propose a non-adversarial VAE-based alignment method that can be applied to any model pipeline. We develop a set of alignment upper bounds (including a noisy bound) that have VAE-like objectives but with a different perspective. We carefully compare our method to prior VAE-based alignment approaches both theoretically and empirically. Finally, we demonstrate that our novel alignment losses can replace adversarial losses in standard invariant representation learning pipelines without modifying the original architectures -- thereby significantly broadening the applicability of non-adversarial alignment methods.
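    The core idea of replacing a discriminator with a likelihood-style term can be sketched in a few lines. This is not the paper's actual bound; it is a simplified, hypothetical illustration (function names are mine) of a non-adversarial alignment penalty: each domain's Gaussian latent posterior is pulled toward one shared standard-normal prior via a closed-form KL term, so the objective is a plain minimization rather than a min-max game:

    ```python
    import numpy as np

    def kl_gauss_to_std_normal(mu, logvar):
        """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dims."""
        mu, logvar = np.asarray(mu, float), np.asarray(logvar, float)
        return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

    def alignment_penalty(mus, logvars):
        """Average per-domain KL to a shared N(0, I) prior.

        Pushing every domain's latent codes toward the same prior aligns the
        domains without a discriminator (cooperative min, not adversarial min-max).
        """
        return float(np.mean([kl_gauss_to_std_normal(m, lv).mean()
                              for m, lv in zip(mus, logvars)]))

    # encodings that already match the prior incur zero penalty;
    # a shifted domain (mu != 0) incurs a positive one
    ```

    In a real pipeline this term would simply be added to the task loss in place of an adversarial domain-classifier loss, which is the sense in which such bounds "drop in" without architectural changes.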

    Identification of the First Functional Toxin-Antitoxin System in Streptomyces

    Toxin-antitoxin (TA) systems are widespread among the plasmids and genomes of bacteria and archaea. This work reports the first description of a functional TA system in Streptomyces that is identical in two species routinely used in the laboratory: Streptomyces lividans and S. coelicolor. The described system belongs to the YefM/YoeB family and has considerable similarity to Escherichia coli YefM/YoeB (about 53% identity and 73% similarity). A lethal effect of the S. lividans putative toxin (YoeBsl) was observed when it was expressed alone in E. coli SC36 (MG1655 ΔyefM-yoeB). However, no toxicity was observed when the antitoxin and toxin (YefM/YoeBsl) were co-expressed. The toxic effect was also observed when yoeBsl was cloned in multicopy in wild-type S. lividans, or in single copy in an S. lividans mutant in which this TA system had been deleted. The S. lividans YefM/YoeBsl complex, purified from E. coli, binds with high affinity to its own promoter region but not to three other randomly selected Streptomyces promoters. In vivo experiments demonstrated that expression of yoeBsl in E. coli blocks translation initiation by processing the mRNA three bases downstream of the initiation codon within 2 minutes of induction. These results indicate that its mechanism of action is identical to that of E. coli YoeB.

    Towards Characterizing Domain Counterfactuals For Invertible Latent Causal Models

    Answering counterfactual queries has many important applications such as knowledge discovery and explainability, but is challenging when causal variables are unobserved and we only see a projection onto an observation space, for instance, image pixels. One approach is to recover the latent Structural Causal Model (SCM), but this typically needs unrealistic assumptions, such as linearity of the causal mechanisms. Another approach is to use naïve ML approximations, such as generative models, to generate counterfactual samples; however, these lack guarantees of accuracy. In this work, we strive to strike a balance between practicality and theoretical guarantees by focusing on a specific type of causal query called domain counterfactuals, which hypothesizes what a sample would have looked like if it had been generated in a different domain (or environment). Concretely, by only assuming invertibility, sparse domain interventions, and access to observational data from different domains, we aim to improve domain counterfactual estimation both theoretically and practically with less restrictive assumptions. We define domain counterfactually equivalent models and prove necessary and sufficient properties for equivalent models that provide a tight characterization of the domain counterfactual equivalence classes. Building upon this result, we prove that every equivalence class contains a model where all intervened variables are at the end when topologically sorted by the causal DAG. This surprising result suggests that a model design that only allows intervention in the last k latent variables may improve model estimation for counterfactuals. We then test this model design in extensive simulated and image-based experiments, which show that the sparse canonical model indeed improves counterfactual estimation over baseline non-sparse models.
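    The ordering idea behind the canonical model can be made concrete with a small sketch. This is illustrative only (the paper's result concerns reparameterizing equivalent SCMs, not re-sorting a fixed one): a greedy Kahn-style topological sort that emits non-intervened variables first, so intervened variables land as late as the DAG permits:

    ```python
    def topo_order_intervened_last(nodes, edges, intervened):
        """Topological sort of a DAG that defers intervened variables.

        Greedily emits any ready non-intervened node before any ready
        intervened one, pushing intervened variables toward the end.
        """
        indeg = {v: 0 for v in nodes}
        adj = {v: [] for v in nodes}
        for u, v in edges:
            adj[u].append(v)
            indeg[v] += 1
        order, ready = [], {v for v in nodes if indeg[v] == 0}
        while ready:
            plain = [v for v in ready if v not in intervened]
            v = min(plain) if plain else min(ready)  # deterministic tie-break
            ready.remove(v)
            order.append(v)
            for w in adj[v]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    ready.add(w)
        return order

    # diamond DAG a->b, a->c, b->d, c->d with d intervened: d sorts last
    print(topo_order_intervened_last(
        ['a', 'b', 'c', 'd'],
        [('a', 'b'), ('a', 'c'), ('b', 'd'), ('c', 'd')],
        {'d'}))
    ```

    When the intervened variables have no non-intervened descendants, as in this example, they end up exactly in the last positions, which is the structure the last-k model design exploits.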