
    T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional Text-to-image Generation

    Despite the stunning ability of recent text-to-image models to generate high-quality images, current approaches often struggle to effectively compose objects with different attributes and relationships into a complex and coherent scene. We propose T2I-CompBench, a comprehensive benchmark for open-world compositional text-to-image generation, consisting of 6,000 compositional text prompts from 3 categories (attribute binding, object relationships, and complex compositions) and 6 sub-categories (color binding, shape binding, texture binding, spatial relationships, non-spatial relationships, and complex compositions). We further propose several evaluation metrics specifically designed to evaluate compositional text-to-image generation. We introduce a new approach, Generative mOdel fine-tuning with Reward-driven Sample selection (GORS), to boost the compositional text-to-image generation abilities of pretrained text-to-image models. Extensive experiments and evaluations are conducted to benchmark previous methods on T2I-CompBench and to validate the effectiveness of our proposed evaluation metrics and the GORS approach. The project page is available at https://karine-h.github.io/T2I-CompBench/.
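    The abstract names GORS (fine-tuning with reward-driven sample selection) but does not spell out the procedure. As a minimal, hedged sketch of the general idea only — generate images, score them with a compositional-alignment reward, keep the high-reward pairs, and fine-tune on that subset — the Python snippet below uses hypothetical placeholders (generate, reward_fn, finetune) rather than the authors' released code.

        # Illustrative sketch of reward-driven sample selection for fine-tuning.
        # `generate`, `reward_fn`, and `finetune` are hypothetical placeholders,
        # not the authors' API.
        def gors_finetune(model, prompts, generate, reward_fn, finetune, threshold=0.8):
            selected = []
            for prompt in prompts:
                image = generate(model, prompt)    # sample an image from the pretrained model
                score = reward_fn(image, prompt)   # e.g. a compositional-alignment score
                if score >= threshold:             # keep only high-reward samples
                    selected.append((prompt, image, score))
            # Fine-tune on the selected (prompt, image) pairs, optionally
            # weighting the loss by the reward score.
            return finetune(model, selected)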

    New Interpretations of Normalization Methods in Deep Learning

    In recent years, a variety of normalization methods have been proposed to help train neural networks, such as batch normalization (BN), layer normalization (LN), weight normalization (WN), and group normalization (GN). However, mathematical tools to analyze all these normalization methods have been lacking. In this paper, we first propose a lemma to define some necessary tools. Then, we use these tools to conduct a deep analysis of popular normalization methods and obtain the following conclusions: 1) most of the normalization methods can be interpreted in a unified framework, namely normalizing pre-activations or weights onto a sphere; 2) since most of the existing normalization methods are scaling invariant, optimization can be conducted on a sphere with the scaling symmetry removed, which helps stabilize the training of the network; 3) we prove that training with these normalization methods makes the norm of the weights increase, which could cause adversarial vulnerability as it amplifies the attack. Finally, a series of experiments is conducted to verify these claims. Comment: Accepted by AAAI 202
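    The scaling-invariance point can be made concrete with a short sketch (illustrative only, not the paper's code): for batch normalization, rescaling a layer's weights by any positive constant leaves the normalized pre-activations unchanged, so only the direction of the weights matters and optimization can be viewed as taking place on a sphere.

        import numpy as np

        def batch_norm(z, eps=1e-5):
            # Normalize pre-activations across the batch dimension.
            return (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)

        rng = np.random.default_rng(0)
        x = rng.normal(size=(128, 16))   # a batch of inputs
        w = rng.normal(size=(16, 8))     # weights of one linear layer

        out = batch_norm(x @ w)
        out_scaled = batch_norm(x @ (3.0 * w))  # rescale the weights by a positive constant

        # Scaling invariance: the normalized pre-activations agree (up to
        # floating-point error), so the effective weights live on a sphere.
        print(np.allclose(out, out_scaled))      # True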

    Autophagy Protects the Blood-Brain Barrier Through Regulating the Dynamic of Claudin-5 in Short-Term Starvation

    The blood-brain barrier (BBB) is essential for the exchange of nutrients and ions to maintain the homeostasis of the central nervous system (CNS). BBB dysfunction is commonly associated with the disruption of endothelial tight junctions and excess permeability, which results in various CNS diseases. Therefore, maintaining the structural integrity and proper function of the BBB is essential for the homeostasis and physiological function of the CNS. Here, we showed that serum starvation disrupted the function of the endothelial barrier, as evidenced by decreased trans-endothelial electrical resistance, increased permeability, and redistribution of tight junction proteins such as Claudin-5 (Cldn5). Further analyses revealed that autophagy was activated and protected the integrity of the endothelial barrier by scavenging ROS and inhibiting the redistribution of Cldn5 under starvation, as evidenced by the accumulation of autophagic vacuoles and increased expression of LC3II/I, ATG5, and LAMP1. In addition, autophagosomes were observed to package and eliminate aggregated Cldn5 in the cytosol, as detected by immunoelectron microscopy (IEM) and stimulated emission depletion (STED) microscopy. Moreover, the Akt-mTOR-p70S6K pathway was found to be involved in the protective autophagy induced by starvation. Our data demonstrate that autophagy plays an essential role in maintaining the integrity of the endothelial barrier by regulating the localization of Cldn5 under starvation.

    Global Positive Periodic Solutions of Generalized n-Species Lotka-Volterra Type and Gilpin-Ayala Type Competition Systems with Multiple Delays and Impulses

    We consider the following generalized n-species Lotka-Volterra type and Gilpin-Ayala type competition systems with multiple delays and impulses: $x_i'(t) = x_i(t)\big[a_i(t) - b_i(t)x_i(t) - \sum_{j=1}^{n} c_{ij}(t)\,x_j^{\alpha_{ij}}(t-\rho_{ij}(t)) - \sum_{j=1}^{n} d_{ij}(t)\,x_j^{\beta_{ij}}(t-\tau_{ij}(t)) - \sum_{j=1}^{n} e_{ij}(t)\int_{-\eta_{ij}}^{0} k_{ij}(s)\,x_j^{\gamma_{ij}}(t+s)\,ds - \sum_{j=1}^{n} f_{ij}(t)\int_{-\theta_{ij}}^{0} K_{ij}(\xi)\,x_i^{\delta_{ij}}(t+\xi)\,x_j^{\sigma_{ij}}(t+\xi)\,d\xi\big]$, a.e. $t>0$, $t\neq t_k$; $x_i(t_k^+) - x_i(t_k^-) = h_{ik}x_i(t_k)$, $i=1,2,\dots,n$, $k\in\mathbb{Z}_+$. By applying the Krasnoselskii fixed-point theorem in a cone of a Banach space, we derive verifiable necessary and sufficient conditions for the existence of positive periodic solutions of the system above. As applications, some special cases of the system are examined, and some earlier results are extended and improved.
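    As one illustrative special case (a sketch only; the particular special cases worked out in the paper may differ), dropping the delay, distributed, and impulse terms and taking all exponents equal to one reduces the system to the classical periodic n-species Lotka-Volterra competition model:

        % Special case: no delays, no distributed terms, no impulses, all exponents = 1.
        \[
          x_i'(t) = x_i(t)\Big[a_i(t) - b_i(t)\,x_i(t) - \sum_{j=1}^{n} c_{ij}(t)\,x_j(t)\Big],
          \qquad i = 1, 2, \dots, n .
        \]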

    Graphene-wrapped reversible reaction for advanced hydrogen storage

    Here, we report the fabrication of a graphene-wrapped nanostructured reactive hydride composite, i.e., 2LiBH4-MgH2, made by adopting graphene-supported MgH2 nanoparticles (NPs) as the nanoreactor and heterogeneous nucleation sites. The porous structure, the uniform distribution of MgH2 NPs, and the steric confinement by flexible graphene induced a homogeneous distribution of the 2LiBH4-MgH2 nanocomposite on graphene with an extremely high loading capacity (80 wt%) and energy density. The well-defined structural features, including even distribution, uniform particle size, excellent thermal stability, and robust architecture, endow this composite with significant improvements in its hydrogen storage performance. For instance, at a temperature as low as 350 °C, a reversible storage capacity of up to 8.9 wt% H2, without degradation after 25 complete cycles, was achieved for the 2LiBH4-MgH2 anchored on graphene. The design of this three-dimensional architecture offers a new concept for obtaining high-performance materials in the energy storage field.
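    For context, the reversible reaction commonly cited for the 2LiBH4-MgH2 reactive hydride composite is the mutual-destabilization reaction below; the abstract does not state the reaction explicitly, so it is given here only as the standard assumption for this system.

        % Commonly cited reversible dehydrogenation of the 2LiBH4-MgH2 composite
        % (assumed for illustration; not stated explicitly in the abstract).
        \[
          2\,\mathrm{LiBH_4} + \mathrm{MgH_2} \;\rightleftharpoons\; 2\,\mathrm{LiH} + \mathrm{MgB_2} + 4\,\mathrm{H_2}
        \]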

    A Causal Framework to Unify Common Domain Generalization Approaches

    Domain generalization (DG) is about learning models that generalize well to new domains that are related to, but different from, the training domain(s). It is a fundamental problem in machine learning and has attracted much attention in recent years. A large number of approaches have been proposed, motivated from different perspectives, making it difficult to gain an overall understanding of the area. In this paper, we propose a causal framework for domain generalization and present an understanding of common DG approaches within that framework. Our work sheds new light on the following questions: (1) What are the key ideas behind each DG method? (2) Why is it expected, in theory, to improve generalization to new domains? (3) How are different DG methods related to each other, and what are their relative advantages and limitations? By providing a unified perspective on DG, we hope to help researchers better understand the underlying principles and develop more effective approaches for this critical problem in machine learning.