
    Isolation, characterization, and expression analysis of β-1,3-glucanase genes from strawberry plants

    Plant beta-1,3-glucanases are pathogenesis-related proteins implicated in plant defense responses against pathogen infection. As an initial step toward understanding the roles of beta-1,3-glucanases in the strawberry plant defense system, genome walking and 3′ and 5′ RACE were performed to isolate beta-1,3-glucanase genomic and cDNA clones. In addition, real-time PCR was performed to determine the expression levels of two of the isolated beta-1,3-glucanase genes in healthy and fungus-infected plants. Two genomic clones, FaBG2-1 and FaBG2-2, and a cDNA clone, FaBG2-3, encoding three different beta-1,3-glucanases, were isolated. FaBG2-1 comprises two exons and one intron; the first exon encodes the major part of a signal peptide. Southern blotting indicated that the strawberry genome contains several copies of FaBG2-1 or related genes. FaBG2-2 appears to be an intronless gene and does not encode a signal peptide. FaBG2-3, like FaBG2-1, encodes a signal peptide but differs from FaBG2-1 in its 3′ and 5′ non-coding regions. The proteins encoded by these three genes share a high degree of sequence homology with plant class II beta-1,3-glucanases. The expression of FaBG2-1 and FaBG2-3 was examined in strawberry plants infected with Colletotrichum fragariae and Colletotrichum acutatum, two important fungal pathogens of strawberry. Both genes were strongly induced in plants infected with C. fragariae, whereas lower levels of induction were observed in plants infected with C. acutatum. Moreover, the expression of FaBG2-3 was much greater than that of FaBG2-1 in both uninfected and infected plants. The expression of FaBG2-1 and FaBG2-3 in leaves, crowns, and roots was examined at different time points during a 7-month growth period; different organs showed different expression patterns for the two genes. Furthermore, total beta-1,3-glucanase activity and isozyme patterns were analyzed. The isozyme patterns differed between uninfected and infected plants, and differences were also observed between young and older plants. This research shows that beta-1,3-glucanases in the strawberry plant may play roles in both plant defense and plant development.
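
    The abstract does not state how real-time PCR readings were converted into expression levels; a common choice is the Livak 2^(-ΔΔCt) method, sketched below. The gene roles, reference gene, and Ct values are purely illustrative assumptions, not data from the study.

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Livak 2^(-ddCt) relative quantification (assumed method; the
    abstract does not specify one). All Ct inputs are hypothetical."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values for an FaBG2-3-like target vs. a reference gene:
# infected plant (22.0 target, 18.0 reference), healthy plant (26.0, 18.0)
fold_change = relative_expression(22.0, 18.0, 26.0, 18.0)
print(fold_change)  # -> 16.0, i.e. a 16-fold induction in the infected plant
```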

    The Style and the Theme of Loss in Hemingway's Hills Like White Elephants

    Hemingway's short story Hills Like White Elephants presents a simple story of an American man and a young woman. Beneath the simple plot lies a strong conflict between the protagonists. By probing its language techniques, its repetition, its documentary style, and the girl's loss of her unborn child, her love, and her future, this paper aims to give an in-depth analysis of the story's style and its theme of loss. Key words: Repetition; Documentary style; Loss

    Fast Fourier Intrinsic Network

    We address the problem of decomposing an image into albedo and shading. We propose the Fast Fourier Intrinsic Network, FFI-Net for short, which operates in the spectral domain, splitting the input into several spectral bands. Weights in FFI-Net are optimized in the spectral domain, allowing faster convergence to a lower error. FFI-Net is lightweight and does not need auxiliary networks for training. The network is trained end-to-end with a novel spectral loss that measures the global distance between the network prediction and the corresponding ground truth. FFI-Net achieves state-of-the-art performance on the MPI-Sintel, MIT Intrinsic, and IIW datasets. Comment: WACV 2021 camera-ready.
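
    The abstract does not quote the loss itself, but the idea of measuring a global distance in the frequency domain can be sketched as below. This is a hedged PyTorch illustration, not FFI-Net's actual spectral loss; the band splitting and weighting the paper describes are omitted.

```python
import torch

def spectral_loss(pred, target):
    """Toy frequency-domain loss: compare the 2-D FFTs of the prediction
    and the ground truth. Each frequency bin aggregates information from
    the whole image, so the distance is global rather than per-pixel."""
    pred_f = torch.fft.rfft2(pred)
    target_f = torch.fft.rfft2(target)
    return (pred_f - target_f).abs().mean()  # mean magnitude of the complex error

# Usage on a batch of single-channel images (batch, channel, height, width):
pred = torch.rand(4, 1, 64, 64, requires_grad=True)
target = torch.rand(4, 1, 64, 64)
loss = spectral_loss(pred, target)
loss.backward()  # gradients flow through the FFT back to the prediction
```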

    Towards Efficient Fine-tuning of Pre-trained Code Models: An Experimental Study and Beyond

    Recently, fine-tuning pre-trained code models such as CodeBERT on downstream tasks has achieved great success in many software testing and analysis tasks. While effective and prevalent, fine-tuning all pre-trained parameters incurs a large computational cost. In this paper, we conduct an extensive experimental study to explore what happens to layer-wise pre-trained representations and the code knowledge they encode during fine-tuning, and we then propose efficient alternatives for fine-tuning large pre-trained code models based on these findings. Our study shows that (1) lexical, syntactic, and structural properties of source code are encoded in the lower, intermediate, and higher layers, respectively, while the semantic property spans the entire model; (2) fine-tuning preserves most code properties: the basic properties captured by the lower and intermediate layers are retained, and only the representations of the top two layers change substantially across various downstream tasks; and (3) based on these findings, we propose Telly, which efficiently fine-tunes pre-trained code models via layer freezing. Extensive experiments on five diverse downstream tasks demonstrate that the number of training parameters and the corresponding time cost are greatly reduced, while performance remains similar or better. A replication package including source code, datasets, and an online appendix is available at https://github.com/DeepSoftwareAnalytics/Telly. Comment: Accepted by ISSTA 2023 (the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis).
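
    Telly's exact layer-selection rules are in the linked replication package; as a rough illustration of the freezing idea, the sketch below trains only the top two layers of CodeBERT using Hugging Face Transformers, matching the finding that only those layers change much during fine-tuning. Treat the specific setup as an assumption, not the paper's implementation.

```python
from transformers import AutoModel

# CodeBERT is a 12-layer RoBERTa-style encoder.
model = AutoModel.from_pretrained("microsoft/codebert-base")

# Freeze the embeddings and all but the top two encoder layers.
for param in model.embeddings.parameters():
    param.requires_grad = False
for layer in model.encoder.layer[:-2]:
    for param in layer.parameters():
        param.requires_grad = False

# Only the unfrozen parameters are updated by the optimizer, cutting
# the trainable-parameter count and backward-pass cost substantially.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```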

    Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation

    Code search aims to retrieve the most semantically relevant code snippet for a given natural language query. Recently, large-scale code pre-trained models such as CodeBERT and GraphCodeBERT learn generic representations of source code and have achieved substantial improvements on the code search task. However, high-quality sequence-level representations of code snippets have not been sufficiently explored. In this paper, we propose a new approach combining multimodal contrastive learning and soft data augmentation for code search. Multimodal contrastive learning pulls together the representations of paired code and queries and pushes apart unpaired code snippets and queries. Data augmentation is critical in contrastive learning for obtaining high-quality representations, yet existing work considers only semantics-preserving augmentations of source code. We instead perform soft data augmentation by dynamically masking and replacing some tokens in code sequences, generating code snippets that are similar but not necessarily semantics-preserving, to serve as positive samples for paired queries. We conduct extensive experiments on a large-scale dataset spanning six programming languages. The results show that our approach significantly outperforms state-of-the-art methods. We also adapt our techniques to several pre-trained models, such as RoBERTa and CodeBERT, and significantly boost their performance on the code search task.
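
    The abstract does not give the loss formulation, but in-batch multimodal contrastive learning over code-query pairs is commonly implemented as an InfoNCE-style objective; the sketch below illustrates that pattern, together with a toy version of the soft token masking and replacement described above. The temperature, masking rates, and function names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(code_emb, query_emb, temperature=0.05):
    """In-batch InfoNCE (assumed formulation): each (code, query) pair in
    the batch is a positive; every other pairing serves as a negative."""
    code_emb = F.normalize(code_emb, dim=-1)
    query_emb = F.normalize(query_emb, dim=-1)
    logits = query_emb @ code_emb.t() / temperature  # (batch, batch) similarities
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)  # diagonal entries are the positives

def soft_augment(token_ids, mask_id, vocab_size, p_mask=0.10, p_replace=0.05):
    """Toy soft augmentation: randomly mask or replace tokens, yielding a
    similar but not necessarily semantics-preserving positive sample."""
    ids = token_ids.clone()
    r = torch.rand(ids.shape)
    ids[r < p_mask] = mask_id                          # mask some tokens
    replace = (r >= p_mask) & (r < p_mask + p_replace)
    ids[replace] = torch.randint(0, vocab_size, ids.shape)[replace]  # replace others
    return ids
```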

    Make Heterophily Graphs Better Fit GNN: A Graph Rewiring Approach

    Graph Neural Networks (GNNs) are popular machine learning methods for modeling graph data. Many GNNs perform well on homophily graphs but have unsatisfactory performance on heterophily graphs. Recently, some researchers have turned their attention to designing GNNs for heterophily graphs by adjusting the message passing mechanism or enlarging its receptive field. In contrast to existing works that mitigate the issues of heterophily from the model-design perspective, we study heterophily graphs from an orthogonal angle: rewiring the graph structure to reduce heterophily so that traditional GNNs perform better. Through comprehensive empirical studies and analysis, we verify the potential of such rewiring methods. To fully exploit this potential, we propose Deep Heterophily Graph Rewiring (DHGR), which rewires graphs by adding homophilic edges and pruning heterophilic edges; which edges to change is determined by comparing the similarity of the label/feature distributions of node neighbors. We also design a scalable implementation of DHGR to guarantee high efficiency. DHGR can easily be used as a plug-in module, i.e., a graph pre-processing step, for any GNN, including GNNs designed for homophily as well as for heterophily, to boost performance on the node classification task. To the best of our knowledge, this is the first work studying graph rewiring for heterophily graphs. Extensive experiments on 11 public graph datasets demonstrate the superiority of our proposed method. Comment: 11 pages.
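
    DHGR's full procedure is in the paper; a toy dense-matrix sketch of the core idea, comparing neighbor label distributions, adding edges between similar nodes, and pruning edges between dissimilar ones, might look like the following. The thresholds, the dense adjacency representation, and the use of labels alone (rather than label/feature distributions) are illustrative assumptions.

```python
import numpy as np

def neighbor_label_distribution(adj, labels, num_classes):
    """Row-normalized histogram of each node's neighbor labels."""
    one_hot = np.eye(num_classes)[labels]                  # (n, c)
    counts = adj @ one_hot                                 # neighbor label counts
    return counts / np.clip(counts.sum(axis=1, keepdims=True), 1, None)

def rewire(adj, labels, num_classes, add_thr=0.95, prune_thr=0.05):
    """Add homophilic edges and prune heterophilic ones by cosine similarity
    of neighbor label distributions (toy version of the DHGR idea)."""
    dist = neighbor_label_distribution(adj, labels, num_classes)
    unit = dist / np.clip(np.linalg.norm(dist, axis=1, keepdims=True), 1e-12, None)
    sim = unit @ unit.T                                    # pairwise cosine similarity
    new_adj = adj.copy()
    new_adj[sim >= add_thr] = 1                            # add homophilic edges
    new_adj[(adj == 1) & (sim <= prune_thr)] = 0           # prune heterophilic edges
    np.fill_diagonal(new_adj, 0)                           # no self-loops
    return new_adj  # pre-processed graph, usable by any downstream GNN
```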