
    A systems biology approach identifies a regulator, BplERF1, of cold tolerance in Betula platyphylla

    Cold is an abiotic stress that can greatly affect the growth and survival of plants. Here, we report that an AP2/ERF family gene, BplERF1, isolated from Betula platyphylla, plays a contributing role in cold stress tolerance. Overexpression of BplERF1 in B. platyphylla transgenic lines enhanced cold stress tolerance by increasing the reactive oxygen species (ROS) scavenging capability and reducing the H2O2 and malondialdehyde (MDA) content of transgenic plants. Construction of a BplERF1-mediated multilayered hierarchical gene regulatory network (ML-hGRN), using the Top-down GGM algorithm and the transcriptomic data of BplERF1 overexpression lines, led to the identification of five candidate target genes of BplERF1: MPK20, ERF9, WRKY53, WRKY70, and GIA1. All five were then verified to be true target genes of BplERF1 by chromatin immunoprecipitation PCR (ChIP-PCR) assay. Our results indicate that BplERF1 is a positive regulator of cold tolerance that regulates the expression of cold signaling and regulatory genes, thereby mitigating reactive oxygen species accumulation.
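    As a hedged illustration of the network-construction step: the abstract builds the ML-hGRN with a top-down GGM (graphical Gaussian model) algorithm, whose core idea is to connect genes with strong partial correlations, i.e., large rescaled off-diagonal entries of the inverse covariance (precision) matrix of expression data. The sketch below shows only that core idea on synthetic data; the gene names, the 0.3 cutoff, and the data are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch of the graphical Gaussian model (GGM) idea behind gene
# regulatory network inference: connect genes whose partial correlation
# (rescaled off-diagonal entries of the inverse covariance matrix)
# exceeds a threshold. Synthetic data and the 0.3 cutoff are
# illustrative assumptions, not the paper's pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_genes = 60, 8
genes = [f"gene{i}" for i in range(n_genes)]
X = rng.normal(size=(n_samples, n_genes))
X[:, 1] += 0.8 * X[:, 0]          # plant a dependency: gene1 follows gene0

prec = np.linalg.inv(np.cov(X, rowvar=False))   # precision matrix
d = np.sqrt(np.diag(prec))
partial_corr = -prec / np.outer(d, d)           # rescale to partial correlations
np.fill_diagonal(partial_corr, 1.0)

edges = [(genes[i], genes[j], partial_corr[i, j])
         for i in range(n_genes) for j in range(i + 1, n_genes)
         if abs(partial_corr[i, j]) > 0.3]       # assumed cutoff
for a, b, w in edges:
    print(f"{a} -- {b}: partial corr = {w:.2f}")
```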

    Mining and Predicting Smart Device User Behavior

    Three types of user behavior are mined in this paper: application usage, smart-device usage, and the periodicity of user behavior. For application usage, we analyze application installation, the most frequently used applications, and application correlation; application usage is long-tailed. For device usage, the mean, variance, and autocorrelation are calculated for both the usage duration and the interval between usages. Both the duration and the interval are long-tailed, but only the duration follows a power-law distribution. Meanwhile, the autocorrelation of both duration and interval is weak, which calls into question the common practice in related work of predicting user behavior from the immediately preceding behavior. The Discrete Fourier Transform (DFT) is then used to analyze the periodicity of user behavior; the results show that the most prominent period is 24 hours, in agreement with related work. Based on these findings, an improved user-behavior prediction model based on Chebyshev's inequality is proposed. Experimental results show good accuracy and recall.
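    As a hedged sketch of the two analysis tools named above, the snippet below (1) recovers the dominant period of a synthetic hourly usage signal with the DFT and (2) checks Chebyshev's inequality, P(|X - mu| >= k*sigma) <= 1/k^2, which underlies the proposed prediction model. The 24-hour synthetic signal and the choice k = 2 are illustrative assumptions, not the paper's dataset.

```python
# Sketch of two steps from the abstract: (1) find the dominant period of
# hourly device-usage counts with the DFT, (2) check Chebyshev's
# inequality as a bound on large deviations from mean usage.
# The synthetic 24-hour signal is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(24 * 14)                       # two weeks of hourly counts
usage = 5 + 3 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.5, hours.size)

# (1) DFT: dominant nonzero frequency -> period in hours
spectrum = np.abs(np.fft.rfft(usage - usage.mean()))
freqs = np.fft.rfftfreq(usage.size, d=1.0)       # cycles per hour
peak = freqs[1:][spectrum[1:].argmax()]
print(f"dominant period ~= {1 / peak:.1f} hours")  # ~24

# (2) Chebyshev: P(|X - mu| >= k*sigma) <= 1/k^2
mu, sigma, k = usage.mean(), usage.std(), 2.0
bound = 1 / k**2
empirical = np.mean(np.abs(usage - mu) >= k * sigma)
print(f"Chebyshev bound: {bound:.2f}, empirical rate: {empirical:.2f}")
```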

    The diverse roles of cytokinins in regulating leaf development

    Leaves provide energy for plants, and consequently for animals, through photosynthesis. Despite their important functions, plant leaf developmental processes and their underlying mechanisms have not been well characterized. Here, we provide a holistic description of leaf developmental processes that is centered on cytokinins and their signaling functions. Cytokinins maintain the growth potential (pluripotency) of shoot apical meristems, which provide stem cells for the generation of leaf primordia during the initial stage of leaf formation; cytokinins and auxins, as well as their interaction, determine the phyllotaxis pattern. The activities of cytokinins in various regions of the leaf, especially at the margins, collectively determine the final leaf morphology (e.g., simple or compound). The area of a leaf is generally determined by the number and size of the cells in the leaf. Cytokinins promote cell division and increase cell expansion during the proliferation and expansion stages of leaf cell development, respectively. During leaf senescence, cytokinins reduce sugar accumulation, increase chlorophyll synthesis, and prolong the leaf photosynthetic period. We also briefly describe the roles of other hormones, including auxin and ethylene, during the whole leaf developmental process. In this study, we review the regulatory roles of cytokinins in various leaf developmental stages, with a focus on cytokinin metabolism and signal transduction processes, in order to shed light on the molecular mechanisms underlying leaf development

    Hierarchical Pruning of Deep Ensembles with Focal Diversity

    Deep neural network ensembles combine the wisdom of multiple deep neural networks to improve generalizability and robustness over individual networks, and studying deep ensemble techniques has gained increasing popularity in the deep learning community. Some mission-critical applications utilize a large number of deep neural networks to form deep ensembles that achieve the desired accuracy and resilience, which introduces high time and space costs for ensemble execution. However, it remains a critical challenge to determine whether a small subset of the entire deep ensemble can achieve the same or better generalizability, and how to effectively identify such small deep ensembles to improve the space and time efficiency of ensemble execution. This paper presents a novel deep ensemble pruning approach that can efficiently identify smaller deep ensembles which provide higher ensemble accuracy than the entire deep ensemble of many member networks. Our hierarchical ensemble pruning approach (HQ) leverages three novel techniques. First, we show that focal diversity metrics can accurately capture the complementary capacity of the member networks of an ensemble, which can guide ensemble pruning. Second, we design a focal-diversity-based hierarchical pruning approach that iteratively finds high-quality deep ensembles with low cost and high accuracy. Third, we develop a focal diversity consensus method that integrates multiple focal diversity metrics to refine pruning results, so that smaller deep ensembles offering high accuracy, robustness, and efficiency can be effectively identified. Evaluated on popular benchmark datasets, the proposed hierarchical ensemble pruning approach effectively identifies high-quality deep ensembles with better generalizability while being more time- and space-efficient in ensemble decision making.
    Comment: To appear in ACM Transactions on Intelligent Systems and Technology.
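    The focal diversity metrics are defined in the paper itself, so the following is only a hedged stand-in: it scores every small candidate subset of a simulated ensemble by majority-vote accuracy plus a simple pairwise-disagreement term (a crude diversity proxy, not the paper's metric) to show how a diversity score can guide pruning toward a small, accurate ensemble. The simulated predictions and the 0.1 weighting are assumptions.

```python
# Illustrative sketch (not the paper's HQ algorithm): score every small
# candidate subset of a simulated ensemble by accuracy plus pairwise
# disagreement, a crude stand-in for the paper's focal diversity metrics.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n_models, n_samples, n_classes = 6, 500, 10
labels = rng.integers(0, n_classes, n_samples)
# Simulate member predictions that are right ~70% of the time.
preds = np.where(rng.random((n_models, n_samples)) < 0.7,
                 labels, rng.integers(0, n_classes, (n_models, n_samples)))

def majority_vote_acc(members):
    votes = preds[list(members)]
    maj = np.array([np.bincount(v, minlength=n_classes).argmax() for v in votes.T])
    return (maj == labels).mean()

def disagreement(members):
    pairs = list(combinations(members, 2))
    return np.mean([(preds[i] != preds[j]).mean() for i, j in pairs]) if pairs else 0.0

# Keep the best ensemble of size <= 3; weights are assumed, not the paper's.
best = max((frozenset(s) for r in (2, 3) for s in combinations(range(n_models), r)),
           key=lambda s: majority_vote_acc(s) + 0.1 * disagreement(s))
print(f"selected members {sorted(best)}: acc={majority_vote_acc(best):.3f} "
      f"vs full ensemble {majority_vote_acc(range(n_models)):.3f}")
```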

    Metric-aligned Sample Selection and Critical Feature Sampling for Oriented Object Detection

    Arbitrary-oriented object detection is a relatively new but challenging task. Although remarkable progress has been made, many issues remain unsolved due to the large diversity of patterns in the orientation, scale, aspect ratio, and visual appearance of objects in aerial images. Most existing methods adopt a coarse-grained, fixed label assignment strategy and suffer from inconsistency between the classification score and localization accuracy. First, to align the metric inconsistency between sample selection and regression loss calculation caused by the fixed IoU strategy, we introduce an affine transformation to evaluate the quality of samples and propose a distance-based label assignment strategy. The proposed metric-aligned selection (MAS) strategy dynamically selects samples according to the shape and rotation characteristics of objects. Second, to further address the inconsistency between classification and localization, we propose a critical feature sampling (CFS) module, which refines the sampling locations used by the classification task so that critical features are extracted accurately. Third, we present a scale-controlled smooth L1 loss (SC-Loss) that adaptively selects high-quality samples by changing the form of the regression loss function based on the statistics of proposals during training. Extensive experiments are conducted on four challenging rotated object detection datasets: DOTA, FAIR1M-1.0, HRSC2016, and UCAS-AOD. The results demonstrate the state-of-the-art accuracy of the proposed detector.
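    The abstract does not give the exact form of SC-Loss, so the sketch below only shows the standard smooth L1 loss with a tunable transition point beta; setting beta from the statistics of the regression residuals (here, their median absolute value) is our assumption about what "scale-controlled" could look like, not the paper's formula.

```python
# Hedged sketch: standard smooth L1 with a tunable transition point `beta`.
# The paper's SC-Loss adapts the regression loss form from proposal
# statistics during training; the exact rule isn't given in the abstract,
# so deriving beta from the median residual is our assumption.
import numpy as np

def smooth_l1(err, beta=1.0):
    """Quadratic for |err| < beta, linear beyond; continuous at beta."""
    err = np.abs(err)
    return np.where(err < beta, 0.5 * err**2 / beta, err - 0.5 * beta)

rng = np.random.default_rng(3)
errors = rng.normal(0, 0.4, 1000)           # stand-in regression residuals
beta = np.median(np.abs(errors))            # assumed scale-control rule
print(f"beta={beta:.3f}, mean loss={smooth_l1(errors, beta).mean():.4f}")
```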

    Rethinking Learning Rate Tuning in the Era of Large Language Models

    Large Language Models (LLMs) represent the recent success of deep learning in achieving remarkable human-like predictive performance. It has become a mainstream strategy to leverage fine-tuning to adapt LLMs for various real-world applications due to the prohibitive expenses associated with LLM training. The learning rate is one of the most important hyperparameters in LLM fine-tuning with direct impacts on both fine-tuning efficiency and fine-tuned LLM quality. Existing learning rate policies are primarily designed for training traditional deep neural networks (DNNs), which may not work well for LLM fine-tuning. We reassess the research challenges and opportunities of learning rate tuning in the coming era of Large Language Models. This paper makes three original contributions. First, we revisit existing learning rate policies to analyze the critical challenges of learning rate tuning in the era of LLMs. Second, we present LRBench++ to benchmark learning rate policies and facilitate learning rate tuning for both traditional DNNs and LLMs. Third, our experimental analysis with LRBench++ demonstrates the key differences between LLM fine-tuning and traditional DNN training and validates our analysis
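    LRBench++ itself is not quoted in the abstract, so as a minimal illustration of what benchmarking learning rate policies means, the toy sketch below compares a constant rate against cosine decay by running gradient descent on a simple quadratic; the objective, step count, and schedules are all illustrative assumptions, not LRBench++ code.

```python
# Toy illustration (not LRBench++ itself): compare two learning rate
# policies, constant vs. cosine decay, by gradient descent on a simple
# quadratic. The objective and schedules are illustrative assumptions.
import math

def run(lr_policy, steps=100):
    w = 5.0                                   # minimize f(w) = w^2
    for t in range(steps):
        grad = 2 * w
        w -= lr_policy(t, steps) * grad
    return w * w                              # final loss

constant = lambda t, T: 0.05
cosine = lambda t, T: 0.05 * 0.5 * (1 + math.cos(math.pi * t / T))

print(f"constant lr final loss: {run(constant):.2e}")
print(f"cosine decay final loss: {run(cosine):.2e}")
```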