
    Genetic analysis and QTL mapping of aroma volatile compounds in the apple progeny ‘Fuji’ × ‘Cripps Pink’

    Aroma is an essential trait for apple fruit quality, but the understanding of the biochemical mechanisms underlying aroma formation is still limited. To better characterize and assess the genetic potential for improving aroma quality in breeding, many efforts have been made to map quantitative trait loci (QTLs) using saturated molecular linkage maps. In the present study, aroma profiles in ripe fruit of an F1 population derived from ‘Fuji’ × ‘Cripps Pink’ were evaluated by gas chromatography-mass spectrometry (GC-MS) in 2019 and 2020, and the genetics of the volatile compounds were dissected. In total, 38 volatile compounds were identified in the ‘Fuji’ × ‘Cripps Pink’ population, including 23 esters, 3 alcohols, 7 aldehydes and 5 others. By combining the aroma phenotypic data with the constructed genetic linkage map, 87 QTLs were detected for 15 volatile compounds on 14 linkage groups (LGs). Among them, a set of QTLs associated with ester production was identified and confirmed on LG 6. A candidate gene, MdAAT6, was detected in the QTL mapping interval. Over-expression of MdAAT6 in tomato and apple fruits led to significantly higher ester accumulation than in the controls, indicating that it is critical for ester production. Our results shed light on the mode of inheritance of the apple volatilome and provide new insights for future apple flavor improvement.

    Effect of glaucocalyxin B on the protein expressions of PTEN, Beclin1 and LC3 in a mouse model of transplanted cervical cancer, and its significance

    Purpose: To determine the effect of glaucocalyxin B (GLB) on the protein expression of PTEN, Beclin-1 and LC3 in a mouse model of transplanted cervical cancer, and its significance. Methods: A mouse model of transplanted cervical cancer was established in female BALB/c mice. The model mice were divided into a control group, a low-dose GLB group and a high-dose GLB group. Mice in the low-dose and high-dose groups were given intraperitoneal injections of low-dose and high-dose GLB, respectively. The volume and weight of the transplanted tumors were measured and compared among the groups. Serum levels of CEA and CA125 were assayed by enzyme-linked immunosorbent assay (ELISA). The expression of phosphatase and tensin homolog (PTEN), the autophagy-related protein Beclin-1, microtubule-associated protein 1 light chain 3 (LC3), and the apoptosis-related proteins p53 and Bax was determined using the SABC immunohistochemical method. Results: On days 5, 10 and 15, the volume and weight of the transplanted tumors and the levels of CA125 and CEA in the low- and high-dose GLB groups were significantly and dose-dependently lower than those in the control group (p < 0.05). Immunohistochemistry showed that the protein expression levels of PTEN, Beclin-1, LC3, p53 and Bax were significantly and dose-dependently higher in the low- and high-dose GLB groups than in the control group (p < 0.05). Conclusion: Glaucocalyxin B significantly and dose-dependently induces apoptosis of cervical cancer cells and inhibits their growth by regulating the protein expression of PTEN, Beclin-1 and LC3. Thus, glaucocalyxin B is a potential adjunct therapy in the management of cervical cancer.

    Eliminating Gradient Conflict in Reference-based Line-Art Colorization

    Reference-based line-art colorization is a challenging task in computer vision. Color, texture, and shading are rendered from an abstract sketch, which relies heavily on precise long-range dependency modeling between the sketch and the reference. Popular techniques for bridging the cross-modal information and modeling the long-range dependency employ the attention mechanism. However, in the context of reference-based line-art colorization, several factors intensify the existing training difficulty of attention, for instance the self-supervised training protocol and GAN-based losses. To understand the instability in training, we inspect the gradient flow of attention and observe gradient conflict among the attention branches. This observation motivates us to alleviate the gradient issue by preserving the dominant gradient branch while removing the conflicting ones. We propose a novel attention mechanism built on this training strategy, Stop-Gradient Attention (SGA), which outperforms the attention baseline by a large margin with better training stability. Compared with state-of-the-art modules for line-art colorization, our approach demonstrates significant improvements in Fréchet Inception Distance (FID, up to 27.21%) and structural similarity index measure (SSIM, up to 25.67%) on several benchmarks. The code of SGA is available at https://github.com/kunkun0w0/SGA. Comment: Accepted by ECCV 2022.
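    The core idea described in the abstract — keeping the dominant gradient branch in the attention computation and cutting the conflicting ones with a stop-gradient — can be illustrated with a short PyTorch sketch. This is a minimal illustration under assumptions, not the authors' released implementation: the module name StopGradientCrossAttention and the choice to detach the key path of the sketch-reference cross-attention are placeholders for demonstration only.

    import torch
    import torch.nn as nn

    class StopGradientCrossAttention(nn.Module):
        # Minimal sketch: gradients through one attention branch are cut with
        # .detach(), so only the dominant branch backpropagates through the
        # attention weights. Which branch to detach is an illustrative
        # assumption, not the paper's exact design.
        def __init__(self, dim, heads=8):
            super().__init__()
            self.heads = heads
            self.scale = (dim // heads) ** -0.5
            self.to_q = nn.Linear(dim, dim, bias=False)
            self.to_k = nn.Linear(dim, dim, bias=False)
            self.to_v = nn.Linear(dim, dim, bias=False)
            self.proj = nn.Linear(dim, dim)

        def forward(self, sketch_feat, ref_feat):
            # sketch_feat: (batch, n_sketch, dim), ref_feat: (batch, n_ref, dim)
            b, n, d = sketch_feat.shape
            h = self.heads

            q = self.to_q(sketch_feat)        # dominant branch keeps its gradient
            k = self.to_k(ref_feat.detach())  # conflicting branch: gradient stopped
            v = self.to_v(ref_feat)           # values still update the reference encoder

            q = q.view(b, n, h, d // h).transpose(1, 2)
            k = k.view(b, -1, h, d // h).transpose(1, 2)
            v = v.view(b, -1, h, d // h).transpose(1, 2)

            attn = (q @ k.transpose(-2, -1)) * self.scale
            attn = attn.softmax(dim=-1)
            out = (attn @ v).transpose(1, 2).reshape(b, n, d)
            return self.proj(out)

    # Usage: fuse reference colour features into sketch features.
    # sga = StopGradientCrossAttention(dim=256)
    # fused = sga(torch.randn(2, 64, 256), torch.randn(2, 64, 256))

    In the paper the branch to preserve is chosen by inspecting the gradient flow of attention; the sketch simply fixes one choice for clarity.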

    LightXML: Transformer with Dynamic Negative Sampling for High-Performance Extreme Multi-label Text Classification

    Extreme Multi-label text Classification (XMC) is the task of finding the most relevant labels from a large label set. Deep learning-based methods have recently shown significant success in XMC. However, the existing methods (e.g., AttentionXML and X-Transformer) still suffer from 1) combining several models to train and predict for one dataset, and 2) sampling negative labels statically while training the label ranking model, which reduces both the efficiency and accuracy of the model. To address these problems, we propose LightXML, which adopts end-to-end training and dynamic negative label sampling. In LightXML, we use generative cooperative networks to recall and rank labels: the label recall part generates negative and positive labels, and the label ranking part distinguishes the positive labels from them. Through these networks, negative labels are sampled dynamically during the training of the label ranking part by feeding both parts the same text representation. Extensive experiments show that LightXML outperforms state-of-the-art methods on five extreme multi-label datasets with a much smaller model size and lower computational complexity. In particular, on the Amazon dataset with 670K labels, LightXML reduces the model size by up to 72% compared to AttentionXML.
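    To make the dynamic negative sampling idea concrete, the following PyTorch sketch shows a recall head and a ranking head driven by one shared text representation, with the ranker's negatives re-drawn from the recall head's current top-k predictions at every step. It is an illustrative simplification rather than the released LightXML code; the class name, the top-k size, and the dot-product ranking score are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RecallRankHeads(nn.Module):
        # Sketch of dynamic negative sampling: the recall head scores all labels
        # from the shared text representation; its current top-k predictions
        # (positives plus hard negatives) are handed to the ranking head, which
        # learns to separate the positives from those sampled negatives.
        def __init__(self, hidden_dim, num_labels, top_k=100):
            super().__init__()
            self.recall_head = nn.Linear(hidden_dim, num_labels)
            self.label_emb = nn.Embedding(num_labels, hidden_dim)
            self.top_k = top_k

        def forward(self, text_repr, target):
            # text_repr: (batch, hidden_dim) shared encoder output, e.g. a [CLS] vector
            # target:    (batch, num_labels) multi-hot ground-truth labels
            recall_logits = self.recall_head(text_repr)
            recall_loss = F.binary_cross_entropy_with_logits(recall_logits, target)

            # Candidates are re-drawn from the recall head every step, so the
            # negatives seen by the ranker change dynamically during training.
            cand = recall_logits.topk(self.top_k, dim=-1).indices          # (batch, k)
            cand_emb = self.label_emb(cand)                                # (batch, k, hidden_dim)
            rank_logits = torch.einsum("bd,bkd->bk", text_repr, cand_emb)  # (batch, k)
            rank_target = torch.gather(target, 1, cand)
            rank_loss = F.binary_cross_entropy_with_logits(rank_logits, rank_target)
            return recall_loss + rank_loss

    # Usage with toy data (a small label space stands in for the real 670K labels):
    # model = RecallRankHeads(hidden_dim=768, num_labels=5000, top_k=50)
    # loss = model(torch.randn(4, 768), torch.zeros(4, 5000).bernoulli_(0.001))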