
    Federated Learning with Classifier Shift for Class Imbalance

    Federated learning aims to learn a global model collaboratively while the training data belong to different clients and are not allowed to be exchanged. However, statistical heterogeneity across non-IID data, such as class imbalance in classification, causes client drift and significantly reduces the performance of the global model. This paper proposes a simple and effective approach named FedShift, which adds a shift to the classifier output during the local training phase to alleviate the negative impact of class imbalance. We theoretically prove that the classifier shift in FedShift makes the local optimum consistent with the global optimum and ensures the convergence of the algorithm. Moreover, our experiments indicate that FedShift significantly outperforms other state-of-the-art federated learning approaches on various datasets in terms of accuracy and communication efficiency.
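    As a rough illustration of the classifier-shift idea, the sketch below adds the log of the client's local class prior to the logits before computing the local loss, so that locally rare classes are not suppressed. This is only a minimal sketch: the exact form of the shift used by FedShift may differ, and the model, data, and optimizer names are placeholders.

```python
import torch
import torch.nn.functional as F

def local_training_step(model, x, y, local_class_counts, optimizer):
    # Hypothetical illustration of a classifier shift: logits are offset by the
    # log of the client's local class prior before the loss is computed.
    # The exact shift defined in FedShift may differ from this sketch.
    prior = local_class_counts.float() / local_class_counts.sum()
    shift = torch.log(prior + 1e-12)           # (num_classes,) log-prior shift
    logits = model(x)                          # raw classifier output, (batch, num_classes)
    loss = F.cross_entropy(logits + shift, y)  # shift broadcasts across the batch
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```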

    Overexpression of BplERD15 enhances drought tolerance in Betula platyphylla Suk

    In this study, we report the cloning and functional characterization of BplERD15, an early responsive to dehydration gene from Betula platyphylla Suk. BplERD15 is located in the same branch as Morus indica Linnaeus ERD15 and Arabidopsis Heynh ERD15 in a phylogenetic tree built with ERD family protein sequences. The tissue-specific expression patterns of BplERD15 were characterized using qRT-PCR, and the transcript levels in six tissues ranked from highest to lowest as follows: mature leaves (ML) > young leaves (YL) > roots (R) > buds (B) > young stems (YS) > mature stems (MS). Drought was simulated by adding various osmotica, including polyethylene glycol, mannitol, and NaCl, to the growth media to decrease their water potentials; the expression of BplERD15 was induced 12-, 9-, and 10-fold, respectively, within a 48 h period. However, the expression of BplERD15 was inhibited by the plant hormone abscisic acid in the early response and then restored to the control level. BplERD15 overexpression (OE) transgenic birch lines were developed and did not exhibit any phenotypic anomalies or growth deficiency under normal conditions. Under drought conditions, BplERD15-OE1, 3, and 4 all displayed drought-tolerant characteristics and survived the drought, while the wild-type (WT) plants withered and died. Analysis showed that all BplERD15-OE lines had significantly lower electrolyte leakage than WT. Our study suggests that BplERD15 is a drought-responsive gene that can reduce mortality under stress conditions.

    A systems biology approach identifies a regulator, BplERF1, of cold tolerance in Betula platyphylla

    Cold is an abiotic stress that can greatly affect the growth and survival of plants. Here, we report that an AP2/ERF family gene, BplERF1, isolated from Betula platyphylla plays a contributing role in cold stress tolerance. Overexpression of BplERF1 in B. platyphylla transgenic lines enhanced cold stress tolerance by increasing reactive oxygen species scavenging capability and reducing H2O2 and malondialdehyde (MDA) content in transgenic plants. Construction of a BplERF1-mediated multilayered hierarchical gene regulatory network (ML-hGRN), using the top-down GGM algorithm and transcriptomic data from the BplERF1 overexpression lines, led to the identification of five candidate target genes of BplERF1: MPK20, ERF9, WRKY53, WRKY70, and GIA1. All of them were then verified to be true target genes of BplERF1 by chromatin immunoprecipitation PCR (ChIP-PCR) assay. Our results indicate that BplERF1 is a positive regulator of cold tolerance and is capable of regulating the expression of cold signaling and regulatory genes, thereby mitigating reactive oxygen species accumulation.
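    For readers unfamiliar with graphical Gaussian models, the sketch below shows the generic idea behind GGM-style network inference: partial correlations are derived from the precision (inverse covariance) matrix of expression profiles, and gene pairs with strong partial correlation are connected. This is not the authors' top-down GGM implementation; the function name, threshold, and data layout are assumptions.

```python
import numpy as np

def ggm_edges(expr, gene_names, threshold=0.3):
    # expr: (samples x genes) expression matrix; gene_names: list of gene IDs.
    # Partial correlations come from the precision (inverse covariance) matrix;
    # an edge is kept when |partial correlation| exceeds the threshold.
    prec = np.linalg.pinv(np.cov(expr, rowvar=False))
    d = np.sqrt(np.abs(np.diag(prec)))
    pcor = -prec / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return [(gene_names[i], gene_names[j], round(float(pcor[i, j]), 3))
            for i in range(len(gene_names))
            for j in range(i + 1, len(gene_names))
            if abs(pcor[i, j]) > threshold]
```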

    Growth-regulating factor 5 (GRF5)-mediated gene regulatory network promotes leaf growth and expansion in poplar

    Although polyploid plants have larger leaves than their diploid counterparts, the molecular mechanisms underlying this trait remain elusive. Differentially expressed genes (DEGs) between triploid and full-sib diploid poplar trees were identified from two transcriptomic data sets, followed by a gene association study among the DEGs to identify key leaf growth regulators. A yeast one-hybrid system, electrophoretic mobility shift assays, and dual-luciferase assays were employed to substantiate that PpnGRF5-1 directly regulates PpnCKX1. The interactions between PpnGRF5-1 and growth-regulating factor (GRF)-interacting factors (GIFs) were experimentally validated, and a multilayered hierarchical regulatory network (ML-hGRN) mediated by PpnGRF5-1 was constructed with the top-down graphical Gaussian model (GGM) algorithm by combining RNA-sequencing data from its overexpression lines with DAP-sequencing data. PpnGRF5-1 is a negative regulator of PpnCKX1. Overexpression of PpnGRF5-1 in diploid transgenic lines resulted in larger leaves resembling those of triploids and significantly increased zeatin and isopentenyladenine in the apical buds and third leaves. PpnGRF5-1 also interacted with GIFs to increase its regulatory diversity and capacity. The ML-hGRN mediated by PpnGRF5-1 largely explains the larger leaves, and together they underlie leaf growth and development.
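    As a simple illustration of how DEGs between triploid and diploid samples could be flagged, the sketch below applies a log2 fold-change and t-test cutoff; the authors' actual DEG pipeline (tools, normalization, and multiple-testing correction) is not described in the abstract, so every name and threshold here is an assumption.

```python
import numpy as np
from scipy import stats

def simple_degs(triploid, diploid, gene_names, lfc_cut=1.0, p_cut=0.05):
    # triploid, diploid: (genes x replicates) expression arrays.
    # A gene is flagged when |log2 fold change| >= lfc_cut and the two-sample
    # t-test p-value < p_cut (no multiple-testing correction in this sketch).
    degs = []
    for i, gene in enumerate(gene_names):
        lfc = np.log2(triploid[i].mean() + 1.0) - np.log2(diploid[i].mean() + 1.0)
        _, p = stats.ttest_ind(triploid[i], diploid[i])
        if abs(lfc) >= lfc_cut and p < p_cut:
            degs.append((gene, float(lfc), float(p)))
    return degs
```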

    Xylitol production from xylose mother liquor: a novel strategy that combines the use of recombinant Bacillus subtilis and Candida maltosa

    Background: Xylose mother liquor has a high concentration of xylose (35%-40%) as well as other sugars such as L-arabinose (10%-15%), galactose (8%-10%), glucose (8%-10%), and other minor sugars. Due to the complexity of this mother liquor, further isolation of xylose by simple methods is not possible. In China, more than 50,000 metric tons of xylose mother liquor were produced in 2009, and the management of sugars such as xylose present in this low-cost liquor is a problem.
    Results: We designed a novel strategy in which Bacillus subtilis and Candida maltosa were combined and used to convert the xylose in this mother liquor to xylitol, a product of higher value. First, the xylose mother liquor was detoxified with the yeast C. maltosa to remove furfural and 5-hydroxymethylfurfural (HMF), which are inhibitors of B. subtilis growth. The glucose present in the mother liquor was also depleted by this yeast, which was an added advantage because glucose causes carbon catabolite repression in B. subtilis. This detoxification treatment resulted in an inhibitor-free mother liquor, and the C. maltosa cells could be reused as biocatalysts at a later stage to reduce xylose to xylitol. In the second step, a recombinant B. subtilis strain with a disrupted xylose isomerase gene was constructed. The detoxified xylose mother liquor was used as the medium for recombinant B. subtilis cultivation, which led to L-arabinose depletion and xylose enrichment of the medium. In the third step, the xylose was further reduced to xylitol by C. maltosa cells, and crystallized xylitol was obtained from this yeast transformation medium. C. maltosa transformation of the xylose-enriched medium produced xylitol with a volumetric productivity of 4.25 g L^-1 h^-1 and a specific productivity of 0.85 g xylitol/g xylose.
    Conclusion: In this study, we developed a biological method for the purification of xylose from xylose mother liquor and the subsequent preparation of xylitol by C. maltosa-mediated biohydrogenation of xylose.
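    To make the reported productivity figures concrete, the tiny sketch below shows how volumetric productivity (g L^-1 h^-1) and yield (g xylitol per g xylose) are computed; the titer, consumed xylose, and fermentation time in the example are hypothetical values chosen only to reproduce the reported 4.25 g L^-1 h^-1 and 0.85 g/g.

```python
def productivities(xylitol_g_per_l, xylose_consumed_g_per_l, hours):
    # Volumetric productivity in g L^-1 h^-1 and yield in g xylitol per g xylose.
    volumetric = xylitol_g_per_l / hours
    yield_g_per_g = xylitol_g_per_l / xylose_consumed_g_per_l
    return volumetric, yield_g_per_g

# Hypothetical numbers chosen only to match the reported figures:
# 85 g/L xylitol from 100 g/L xylose consumed over 20 h.
print(productivities(85.0, 100.0, 20.0))  # -> (4.25, 0.85)
```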

    Learning to Decompose Visual Features with Latent Textual Prompts

    Recent advances in pre-training vision-language models like CLIP have shown great potential in learning transferable visual representations. Nonetheless, for downstream inference, CLIP-like models suffer from either 1) degraded accuracy and robustness in the case of inaccurate text descriptions during retrieval-based inference (the challenge for the zero-shot protocol); or 2) breaking the well-established vision-language alignment (the challenge for linear probing). To address these issues, we propose Decomposed Feature Prompting (DeFo). DeFo leverages a flexible number of learnable embeddings as textual input while maintaining the vision-language dual-model architecture, which enables the model to learn decomposed visual features with the help of feature-level textual prompts. We further use an additional linear layer to perform classification, allowing a scalable size of language inputs. Our empirical study demonstrates DeFo's effectiveness in improving vision-language models. For example, DeFo obtains 73.2% test accuracy on ImageNet with a ResNet-50 backbone without tuning any pretrained weights of either the vision or the language encoder, outperforming zero-shot CLIP by a large margin of 15.0% and outperforming a state-of-the-art vision-language prompt tuning method by 7.6%.
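    A minimal sketch of the architecture described in the abstract follows: a frozen dual-encoder model in which hand-written class descriptions are replaced by learnable prompt embeddings fed to the text encoder, with a linear layer mapping image-to-prompt similarities to class logits. The encoder interfaces are placeholders, not the real CLIP API, and details such as how prompts are tokenized inside the text encoder are simplified.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeFoSketch(nn.Module):
    # Sketch of the DeFo idea: learnable prompt embeddings replace text
    # descriptions, encoders stay frozen, and a linear head classifies the
    # image-to-prompt similarity vector. Placeholder interfaces only.
    def __init__(self, image_encoder, text_encoder, n_prompts, prompt_dim, n_classes):
        super().__init__()
        self.image_encoder = image_encoder                      # assumed frozen
        self.text_encoder = text_encoder                        # assumed frozen
        self.prompts = nn.Parameter(0.02 * torch.randn(n_prompts, prompt_dim))
        self.classifier = nn.Linear(n_prompts, n_classes)

    def forward(self, images):
        with torch.no_grad():
            img = F.normalize(self.image_encoder(images), dim=-1)   # (B, D)
        txt = F.normalize(self.text_encoder(self.prompts), dim=-1)  # (n_prompts, D)
        sims = img @ txt.t()                                        # (B, n_prompts)
        return self.classifier(sims)                                # class logits
```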

    Label Informed Contrastive Pretraining for Node Importance Estimation on Knowledge Graphs

    Node Importance Estimation (NIE) is the task of inferring importance scores for the nodes in a graph. Due to the availability of richer data and knowledge, recent NIE research has focused on knowledge graphs for predicting future or missing node importance scores. Existing state-of-the-art NIE methods train the model on the available labels and treat every node of interest equally before training. However, nodes with higher importance often require or receive more attention in real-world scenarios; e.g., people may care more about the movies or webpages with higher importance. To this end, we introduce Label Informed ContrAstive Pretraining (LICAP) to the NIE problem to better attend to nodes with high importance scores. Specifically, LICAP is a novel contrastive learning framework that aims to fully utilize the continuous labels to generate contrastive samples for pretraining embeddings. For the NIE problem, LICAP adopts a novel sampling strategy called top nodes preferred hierarchical sampling, which first groups all nodes of interest into a top bin and a non-top bin based on node importance scores, and then divides the nodes within the top bin into several finer bins, also based on the scores. The contrastive samples are generated from these bins and are then used to pretrain node embeddings of knowledge graphs via the newly proposed Predicate-aware Graph Attention Networks (PreGAT), so as to better separate the top nodes from non-top nodes and to distinguish the top nodes within the top bin by keeping the relative order among the finer bins. Extensive experiments demonstrate that LICAP-pretrained embeddings can further boost the performance of existing NIE methods and achieve new state-of-the-art performance in terms of both regression and ranking metrics. The source code for reproducibility is available at https://github.com/zhangtia16/LICAP
    Comment: Accepted by IEEE TNNLS
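    The sketch below illustrates the top-nodes-preferred hierarchical binning step described in the abstract: nodes are split into a top bin and a non-top bin by importance score, and the top bin is further divided into finer bins by score rank. The ratio and number of fine bins are assumptions rather than the paper's settings, and the contrastive-pair construction is only hinted at in a comment.

```python
import numpy as np

def top_preferred_bins(scores, top_ratio=0.2, n_fine_bins=5):
    # Split nodes into a top bin and a non-top bin by importance score, then
    # divide the top bin into finer bins by score rank. top_ratio and
    # n_fine_bins are illustrative assumptions.
    order = np.argsort(scores)[::-1]              # node indices, highest score first
    n_top = max(1, int(len(scores) * top_ratio))
    top, non_top = order[:n_top], order[n_top:]
    fine_bins = np.array_split(top, n_fine_bins)  # finer bins inside the top bin
    return fine_bins, non_top

# Contrastive pairs for pretraining could then treat nodes from the same
# (or a higher) fine bin as positives relative to the non-top bin (sketch only).
```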

    Fossil Image Identification using Deep Learning Ensembles of Data Augmented Multiviews

    Identification of fossil species is crucial to evolutionary studies. Recent advances in deep learning have shown promising prospects for fossil image identification. However, the quantity and quality of labeled fossil images are often limited due to fossil preservation, conditioned sampling, and expensive and inconsistent label annotation by domain experts, which poses great challenges to the training of deep learning based image classification models. To address these challenges, we follow the idea of the wisdom of crowds and propose a novel multiview ensemble framework, which collects multiple views of each fossil specimen image reflecting its different characteristics to train multiple base deep learning models and then makes final decisions via soft voting. We further develop the OGS method, which integrates original, gray, and skeleton views under this framework, to demonstrate the framework's effectiveness. Experimental results on a fusulinid fossil dataset over five deep learning based milestone models show that OGS using three base models consistently outperforms the baseline using a single base model, and an ablation study verifies the usefulness of each selected view. Besides, OGS obtains superior or comparable performance compared to the method under the well-known bagging framework. Moreover, as the available training data decreases, the proposed framework achieves larger performance gains over the baseline. Furthermore, a consistency test with two human experts shows that OGS obtains the highest agreement with both the dataset labels and the two experts. Notably, this methodology is designed for general fossil identification, and we expect it to see applications on other fossil datasets. The results suggest its potential when the quantity and quality of labeled data are particularly restricted, e.g., for identifying rare fossil images.
    Comment: preprint submitted to Methods in Ecology and Evolution
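    As a rough sketch of the soft-voting step described in the abstract, the function below averages per-view class probabilities and takes the argmax; the variable names for the three view-specific models' outputs are hypothetical.

```python
import numpy as np

def soft_vote(view_probs):
    # view_probs: list of (n_samples x n_classes) probability arrays, one per
    # view-specific base model (e.g., original, gray, skeleton views).
    # The ensemble averages the probabilities and takes the argmax per sample.
    avg = np.mean(np.stack(view_probs, axis=0), axis=0)
    return avg.argmax(axis=1)

# Usage with hypothetical arrays: preds = soft_vote([p_original, p_gray, p_skeleton])
```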