
    The Gratuitous Repair on Undamaged DNA Misfold


    Transcription of AAT•ATT Triplet Repeats in Escherichia coli Is Silenced by H-NS and IS1E Transposition

    The trinucleotide repeats AAT•ATT are simple DNA sequences that can potentially form different types of non-B DNA secondary structures and cause genomic instability in vivo. The molecular mechanism underlying the maintenance of a 24-triplet AAT•ATT repeat was examined in E. coli by cloning the repeats into the EcoRI site of plasmid pUC18 and into the attB site on the E. coli genome, such that either the AAT or the ATT strand acted as the lagging-strand template at the replication fork. Propagation of the repeats in either orientation on plasmids did not affect colony morphology when transcription of the triplet repeats from the lacZ promoter was repressed, either by supplying LacI(Q) in trans or by adding glucose to the medium. In contrast, inducing transcription of the repeats produced transparent colonies, suggesting that transcription of AAT•ATT repeats was toxic to cell growth. Meanwhile, significant IS1E transposition events were observed into the promoter-proximal side of the triplet repeat region, into the promoter region of the lacZ gene, and into the AAT•ATT region itself; transposition reversed the transparent colony phenotype back into healthy, convex colonies. In contrast, transcription of an 8-triplet AAT•ATT repeat in either orientation on plasmids did not produce significant changes in cell morphology and did not promote IS1E transposition. We further found that one role of IS1E transposition into plasmids was to inhibit transcription through the repeats, and that this was influenced by the presence of the H-NS protein but not of its paralogue StpA. Our findings thus suggest that longer AAT•ATT triplet repeats in E. coli become vulnerable once transcribed, and that H-NS and the IS1E transposition it facilitates can silence transcription of long triplet repeats, preserving cell growth and survival.

    A Knowledge-Guided Framework for Frame Identification


    Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models

    This paper presents a controllable text-to-video (T2V) diffusion model, named Video-ControlNet, that generates videos conditioned on a sequence of control signals such as edge or depth maps. Video-ControlNet is built on a pre-trained conditional text-to-image (T2I) diffusion model by incorporating a spatial-temporal self-attention mechanism and trainable temporal layers for efficient cross-frame modeling. A first-frame conditioning strategy is proposed to enable the model to generate videos transferred from the image domain as well as arbitrary-length videos in an auto-regressive manner. Moreover, Video-ControlNet employs a novel residual-based noise initialization strategy to introduce a motion prior from an input video, producing more coherent videos. With the proposed architecture and strategies, Video-ControlNet achieves resource-efficient convergence and generates consistent videos of superior quality with fine-grained control. Extensive experiments demonstrate its success in various video generation tasks such as video editing and video style transfer, outperforming previous methods in terms of consistency and quality. Project Page: https://controlavideo.github.io
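    The first-frame conditioning strategy described above suggests a simple auto-regressive loop for long videos: generate one clip conditioned on a given first frame, then reuse the last generated frame as the first-frame condition of the next clip. The sketch below illustrates only that loop; generate_clip is a hypothetical stand-in for the actual Video-ControlNet sampler (here it just returns random frames so the code runs end to end), and the chunk length and control-signal shapes are assumptions rather than values from the paper.

```python
import numpy as np

# Hypothetical stand-in for the Video-ControlNet sampler. The real model would run
# the T2V diffusion process conditioned on the text prompt, the per-frame control
# maps (e.g. edge or depth), and the given first frame; here we only return
# noise-shaped frames so that the auto-regressive loop is runnable.
def generate_clip(prompt, controls, first_frame, rng):
    num_frames = len(controls)
    clip = rng.random((num_frames,) + first_frame.shape).astype(np.float32)
    clip[0] = first_frame  # the first frame is fixed by the conditioning strategy
    return clip

def generate_long_video(prompt, controls, first_frame, chunk_len=16, seed=0):
    """Auto-regressive long-video generation: each chunk is conditioned on the last
    frame of the previous chunk (an assumed chunking scheme, not the paper's exact one)."""
    rng = np.random.default_rng(seed)
    frames, cond_frame = [], first_frame
    for start in range(0, len(controls), chunk_len):
        clip = generate_clip(prompt, controls[start:start + chunk_len], cond_frame, rng)
        # Keep the conditioning frame only once (it repeats at the start of later chunks).
        frames.extend(clip if start == 0 else clip[1:])
        cond_frame = clip[-1]  # becomes the first-frame condition of the next chunk
    return np.stack(frames)

if __name__ == "__main__":
    depth_maps = [np.zeros((64, 64), dtype=np.float32) for _ in range(40)]  # dummy control maps
    first = np.zeros((3, 64, 64), dtype=np.float32)                         # dummy first frame (C, H, W)
    video = generate_long_video("a cat walking on the beach", depth_maps, first)
    print(video.shape)  # (generated frames, 3, 64, 64)
```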

    UGC: Unified GAN Compression for Efficient Image-to-Image Translation

    Recent years have witnessed the prevailing progress of Generative Adversarial Networks (GANs) in image-to-image translation. However, the success of these GAN models comes at the price of ponderous computational costs and labor-intensive training data. Current efficient GAN learning techniques often fall into two orthogonal aspects: i) model slimming via reduced calculation costs; ii) data/label-efficient learning with fewer training data/labels. To combine the best of both worlds, we propose a new learning paradigm, Unified GAN Compression (UGC), with a unified optimization objective to seamlessly promote the synergy of model-efficient and label-efficient learning. UGC sets up a semi-supervised-driven network architecture search stage followed by an adaptive online semi-supervised distillation stage, which together form a heterogeneous mutual learning scheme that yields an architecture-flexible, label-efficient, and high-performing model.
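    As a rough illustration of the semi-supervised distillation idea behind UGC (not its actual objective or architecture-search machinery), the sketch below combines a supervised reconstruction loss on a few labeled source-target pairs with an online distillation loss that pulls a slim student generator toward a large pre-trained teacher on unlabeled inputs. The module definitions, L1 losses, and loss weighting are assumptions made for this sketch.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy image-to-image generator used as a stand-in for both teacher and student."""
    def __init__(self, width):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

teacher = TinyGenerator(width=64).eval()  # large pre-trained generator (weights assumed given)
student = TinyGenerator(width=16)         # slimmed student, e.g. found by an (omitted) architecture search
opt = torch.optim.Adam(student.parameters(), lr=2e-4)
l1 = nn.L1Loss()
lambda_distill = 1.0                      # assumed weighting between the two loss terms

def train_step(labeled_src, labeled_tgt, unlabeled_src):
    """One semi-supervised step: supervised loss on the scarce labeled pairs plus
    distillation toward the teacher's outputs on plentiful unlabeled images."""
    opt.zero_grad()
    sup_loss = l1(student(labeled_src), labeled_tgt)       # label-efficient supervised term
    with torch.no_grad():
        pseudo_tgt = teacher(unlabeled_src)                # teacher provides pseudo targets
    distill_loss = l1(student(unlabeled_src), pseudo_tgt)  # online distillation term
    loss = sup_loss + lambda_distill * distill_loss
    loss.backward()
    opt.step()
    return loss.item()

# Dummy batches standing in for the labeled pairs and the unlabeled source images.
print(train_step(torch.randn(2, 3, 32, 32), torch.randn(2, 3, 32, 32), torch.randn(4, 3, 32, 32)))
```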