
    Evidence against the energetic cost hypothesis for the short introns in highly expressed genes

    Abstract. Background: In animals, the moss Physcomitrella patens, and the pollen of Arabidopsis thaliana, highly expressed genes have shorter introns than weakly expressed genes. A popular explanation for this is selection for transcription efficiency, which includes two sub-hypotheses: minimizing the energetic cost and minimizing the time cost. Results: In an individual human, different organs may differ by up to hundreds of times in cell number (for example, a liver versus a hypothalamus). Considered at the individual level, a gene specifically expressed in a large organ is actually transcribed tens or hundreds of times more than a gene with a similar expression level (a measure of mRNA abundance per cell) specifically expressed in a small organ. According to the energetic cost hypothesis, the former should therefore have shorter introns than the latter. However, in humans and mice we found no significant differences in intron length between large-tissue/organ-specific genes and small-tissue/organ-specific genes with similar expression levels. Qualitative estimation shows that the deleterious effect (that is, the energetic burden) of long introns in highly expressed genes is too small to be efficiently selected against in mammals. Conclusion: The short introns in highly expressed genes should not be attributed to energy constraint. We also evaluated evidence for the time cost hypothesis and other alternatives.
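    To make the per-individual scaling argument in the Results concrete, the following back-of-envelope sketch in Python compares the total transcriptional cost of one extra intron in a large-organ-specific versus a small-organ-specific gene at equal per-cell expression. All numbers (cell counts, intron length, ATP per nucleotide) are illustrative assumptions, not values taken from the paper.

        # Back-of-envelope sketch: per-individual cost of transcribing one extra intron.
        # All constants below are illustrative assumptions, not data from the paper.
        ATP_PER_NT = 2            # assumed ATP cost per transcribed nucleotide
        INTRON_LENGTH = 5_000     # assumed extra intron length (nt)
        MRNA_PER_CELL = 10        # identical per-cell expression level in both organs

        CELLS_LARGE_ORGAN = 2e11  # liver-scale cell number (assumed)
        CELLS_SMALL_ORGAN = 1e9   # hypothalamus-scale cell number (assumed)

        def per_individual_cost(cells):
            """Total ATP spent transcribing the extra intron across the whole organ."""
            return cells * MRNA_PER_CELL * INTRON_LENGTH * ATP_PER_NT

        large = per_individual_cost(CELLS_LARGE_ORGAN)
        small = per_individual_cost(CELLS_SMALL_ORGAN)
        print(f"large-organ gene: {large:.2e} ATP, small-organ gene: {small:.2e} ATP")
        print(f"fold difference driven purely by cell number: {large / small:.0f}x")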

    Stability of Strutinsky Shell Correction Energy in Relativistic Mean Field Theory

    The single-particle spectrum obtained from relativistic mean field (RMF) theory is used to extract the shell correction energy with the Strutinsky method. Considering the delicate balance between the plateau condition in the Strutinsky smoothing procedure and the convergence of the total binding energy, the proper space sizes used in solving the RMF equations are investigated in detail, taking 208Pb as an example. With the proper space sizes, almost the same shell correction energies are obtained by solving the RMF equations either in basis space or in coordinate space. (Comment: 9 pages, 4 figures)
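    For reference, the Strutinsky procedure referred to above separates the fluctuating (shell) part of the single-particle energy sum from a smoothed average. A standard textbook formulation, not specific to this paper's RMF implementation, is:

        % Strutinsky shell correction (standard formulation; notation assumed here)
        \[
          \delta E_{\mathrm{shell}} \;=\; \sum_{i\,\in\,\mathrm{occ}} e_i \;-\; \tilde{E},
          \qquad
          \tilde{E} \;=\; \int_{-\infty}^{\tilde{\lambda}} e\,\tilde{g}(e)\,\mathrm{d}e,
        \]
        \[
          \tilde{g}(e) \;=\; \frac{1}{\gamma\sqrt{\pi}}
          \sum_i f_{M}\!\left(\frac{e-e_i}{\gamma}\right)
          \exp\!\left[-\left(\frac{e-e_i}{\gamma}\right)^{2}\right],
        \]
        % gamma: smoothing width; f_M: curvature-correction polynomial of order M;
        % lambda-tilde: smoothed Fermi energy fixed by particle-number conservation.

    The plateau condition mentioned in the abstract is the requirement that the resulting shell correction be insensitive to the choice of the smoothing width and the order of the curvature correction.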

    Mixed State Entanglement for Holographic Axion Model

    We study mixed state entanglement in a holographic axion model. We find that the holographic entanglement entropy (HEE), mutual information (MI), and entanglement of purification (EoP) exhibit very distinct behaviors as the system parameters vary. The HEE exhibits universal monotonic behavior with the system parameters, while the behaviors of MI and EoP depend on the specific parameters and configurations. MI and EoP characterize mixed state entanglement better than HEE because they are less affected by thermal effects. Moreover, we argue that EoP is more suitable than MI for describing mixed state entanglement, because the MI of large configurations is still dictated by the thermal entropy, whereas the EoP is never controlled by thermal effects alone. (Comment: 20 pages, 13 figures)
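    For context, the quantities compared in this abstract have standard definitions, given here in general form rather than in the specific axion-model setup of the paper: the mutual information of two subregions A and B, and the entanglement of purification of their reduced state.

        % Standard definitions (general form, not specific to this paper)
        \[
          I(A{:}B) \;=\; S(A) + S(B) - S(A \cup B),
        \]
        \[
          E_{P}(\rho_{AB}) \;=\; \min_{|\psi\rangle_{AA'BB'}} S(\rho_{AA'}),
        \]
        % S: entanglement entropy (holographically computed from Ryu-Takayanagi surfaces);
        % the minimum runs over purifications of rho_AB; holographically E_P is
        % conjectured to equal the entanglement wedge cross section E_W.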

    E2-AEN: End-to-End Incremental Learning with Adaptively Expandable Network

    Expandable networks have demonstrated their advantages in dealing with the catastrophic forgetting problem in incremental learning. Considering that different tasks may need different structures, recent methods design dynamic structures adapted to different tasks via sophisticated techniques. Their routine is to search for expandable structures first and then train on the new tasks, which, however, breaks the process into multiple training stages and leads to suboptimal or excessive computational cost. In this paper, we propose an end-to-end trainable adaptively expandable network named E2-AEN, which dynamically generates lightweight structures for new tasks without any accuracy drop on previous tasks. Specifically, the network contains a series of powerful feature adapters for adapting the previously learned representations to new tasks while avoiding task interference. These adapters are controlled by an adaptive gate-based pruning strategy that decides whether the expanded structures can be pruned, making the network structure dynamically changeable according to the complexity of the new tasks. Moreover, we introduce a novel sparsity-activation regularization to encourage the model to learn discriminative features with limited parameters. E2-AEN reduces cost and can be built upon any feed-forward architecture in an end-to-end manner. Extensive experiments on both classification (i.e., CIFAR and VDD) and detection (i.e., COCO, VOC and the ICCV2021 SSLAD challenge) benchmarks demonstrate the effectiveness of the proposed method, which achieves remarkable new results.
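    The gated-adapter idea described in this abstract can be sketched in a few lines of PyTorch. The module below is an illustrative assumption of how such an adapter might look, not the authors' E2-AEN implementation; the layer sizes, the scalar sigmoid gate, and the sparsity surrogate are all hypothetical choices.

        # Minimal sketch of a gated feature adapter (illustrative only, not the
        # authors' E2-AEN code). A frozen backbone feature map is augmented by a
        # lightweight adapter whose contribution is scaled by a learnable gate;
        # a gate driven toward zero signals that the expansion can be pruned.
        import torch
        import torch.nn as nn

        class GatedAdapter(nn.Module):
            def __init__(self, channels: int, bottleneck: int = 16):
                super().__init__()
                # Lightweight bottleneck adapter built from 1x1 convolutions.
                self.adapter = nn.Sequential(
                    nn.Conv2d(channels, bottleneck, kernel_size=1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(bottleneck, channels, kernel_size=1),
                )
                # Learnable scalar gate; penalizing it encourages pruning.
                self.gate = nn.Parameter(torch.zeros(1))

            def forward(self, frozen_features: torch.Tensor) -> torch.Tensor:
                # Old-task features pass through unchanged; the gated adapter adds
                # a task-specific residual, so previous-task behavior is preserved.
                return frozen_features + torch.sigmoid(self.gate) * self.adapter(frozen_features)

        # Toy usage: one adapter on top of a frozen feature map.
        features = torch.randn(2, 64, 32, 32)          # assumed backbone output
        adapter = GatedAdapter(channels=64)
        out = adapter(features)
        sparsity_loss = torch.sigmoid(adapter.gate).abs().mean()  # crude sparsity surrogate
        print(out.shape, float(sparsity_loss))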