
    Dysregulation of protein succinylation and disease development

    Succinylation, a novel post-translational modification of proteins, is widely present in both prokaryotes and eukaryotes. By regulating protein translocation and activity, and in particular gene expression, succinylation participates in diverse biological processes such as cell proliferation, differentiation and metabolism. Dysregulation of succinylation is closely related to many diseases and has therefore attracted increasing attention from basic and clinical researchers. To support a thorough understanding of succinylation dysregulation and its implications for diseases such as inflammation, tumors, and cardiovascular and neurological disorders, this paper provides a comprehensive review of research progress on abnormal succinylation. Clarifying how dysregulated succinylation relates to pathological processes will provide valuable directions for disease prevention and treatment strategies as well as for drug development.

    Virus-induced host genomic remodeling dysregulates gene expression, triggering tumorigenesis

    Virus-induced genomic remodeling and altered gene expression contribute significantly to cancer development. Oncogenic viruses such as human papillomavirus (HPV) trigger certain cancers by integrating into the host's DNA and disrupting gene regulation linked to cell growth and migration. This effect can arise either from direct integration of viral genomes into the host genome or from indirect modulation of host cell pathways and proteins by viral proteins. Viral proteins also disrupt key cellular processes such as apoptosis and DNA repair by interacting with host molecules and affecting signaling pathways. These disruptions lead to mutation accumulation and tumorigenesis. This review focuses on recent studies exploring virus-mediated changes to genomic structure, gene expression, and epigenetic modifications in tumorigenesis.

    Functional Requirements for Bibliographic Records: Final Report

    This is a Chinese translation of the FRBR (Functional Requirements for Bibliographic Records) final report.

    Data of expression and purification of recombinant Taq DNA polymerase

    The polymerase chain reaction (PCR) is widely used under many experimental conditions, and Taq DNA polymerase is critical to the PCR process. In this article, the Taq DNA polymerase expression plasmid is reconstructed and the protein product is obtained by rapid purification (“Rapid purification of high-activity Taq DNA polymerase” (Pluthero, 1993 [1]); “Single-step purification of a thermostable DNA polymerase expressed in Escherichia coli” (Desai and Pfaffle, 1995 [2])). We present the production data from protein expression and the analysis of the products obtained from two different vectors. Purification data are also provided to show the purity of the protein product.

    Boosting Noise Reduction Effect via Unsupervised Fine-Tuning Strategy

    Over the last decade, supervised denoising models trained on extensive datasets have exhibited remarkable performance in image denoising. However, these models offer limited flexibility and suffer varying degrees of degradation in noise reduction capability when applied in practical scenarios, particularly when the noise distribution of a given noisy image deviates from that of the training images. To tackle this problem, we propose a two-stage denoising model that attaches an unsupervised fine-tuning phase after a supervised denoising model has processed the input noisy image and produced a denoised image (regarded as a preprocessed image). More specifically, in the first stage we replace the convolution block of the U-shaped network framework (used in the deep image prior method) with a Transformer module; the resulting model is referred to as the U-Transformer. The U-Transformer is trained on noisy images and their labels to preprocess the input noisy images. In the second stage, we condense the supervised U-Transformer into a simplified version containing only one Transformer module with fewer parameters and switch its training mode to unsupervised training, following an approach similar to that of the deep image prior method. This stage further eliminates minor residual noise and artifacts present in the preprocessed image, yielding clearer and more realistic output images. Experimental results show that the proposed method achieves significant noise reduction in both synthetic and real images, surpassing state-of-the-art methods. This superiority stems from the supervised model's ability to rapidly process the given noisy image, while the unsupervised model leverages its flexibility to generate a fine-tuned network that enhances noise reduction capability. Moreover, because the supervised model provides higher-quality preprocessed images, the proposed unsupervised fine-tuning model requires fewer parameters, enabling rapid training and convergence and resulting in high overall execution efficiency.
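
    The second, unsupervised stage can be illustrated with a minimal sketch. The code below is not the authors' implementation: `LightRefiner` is a hypothetical stand-in for the condensed single-Transformer module, and the output of the supervised stage is assumed to be available as a preprocessed image tensor.

```python
# Minimal sketch of the unsupervised fine-tuning stage described above.
# `LightRefiner` is a hypothetical stand-in for the paper's condensed
# single-Transformer module; the supervised output is assumed to be a
# B x C x H x W tensor in [0, 1].
import torch
import torch.nn as nn

class LightRefiner(nn.Module):
    def __init__(self, channels=3, dim=32, heads=4):
        super().__init__()
        self.embed = nn.Conv2d(channels, dim, 3, padding=1)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Conv2d(dim, channels, 3, padding=1)

    def forward(self, x):
        feat = self.embed(x)                      # B x dim x H x W
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)  # B x (H*W) x dim
        attended, _ = self.attn(tokens, tokens, tokens)
        feat = attended.transpose(1, 2).reshape(b, c, h, w)
        return x + self.out(feat)                 # residual refinement

def finetune(preprocessed, steps=200, lr=1e-3):
    """DIP-style fitting to a single preprocessed image (no external labels).

    As in the deep image prior, early stopping (a modest `steps` value) is
    what keeps the network from re-fitting residual noise.
    """
    model = LightRefiner(channels=preprocessed.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    target = preprocessed.detach()
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(target), target)
        loss.backward()
        opt.step()
    return model(target).detach()

# Illustrative usage with a random stand-in for the preprocessed image:
refined = finetune(torch.rand(1, 3, 32, 32), steps=50)
```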

    CUL1-Mediated Organelle Fission Pathway Inhibits the Development of Chronic Obstructive Pulmonary Disease

    Chronic obstructive pulmonary disease (COPD) is a chronic airway inflammation disease with a high global incidence. Its progression leads to more serious lung lesions and even lung cancer. Therefore, it is urgent to determine the pathogenesis of COPD and find potential therapeutic targets. The purpose of this study is to reveal the molecular mechanism of COPD development through in-depth analysis of transcription factors and ncRNA-driven pathogenic modules of COPD. We obtained the expression profiles of COPD-related microRNAs from the NCBI-GEO database and analyzed the differences among groups to identify the microRNAs significantly associated with COPD. Their target genes were then predicted and mapped to a protein-protein interaction (PPI) network. Finally, key transcription factors and ncRNAs of the regulatory modules were identified based on the hypergeometric test. The results showed that CUL1 was the most interactive gene in the highly interactive module, so it was recognized as a dysfunctional molecule of COPD. Enrichment analysis also showed that it was strongly involved in the biological process of organelle fission, which was associated with the highest number of regulatory modules. In addition, ncRNAs (mainly miR-590-3p, miR-495-3p, and miR-186-5p) and transcription factors such as MYC, BRCA1, and CDX2 significantly regulate the COPD dysfunction modules. In summary, we revealed that the COPD-related target gene CUL1 plays a key role in the potential dysfunction of the disease. It promotes the proliferation of fibroblasts in COPD patients by mediating functional signals of organelle fission and thus participates in the progression of the disease. Our research helps biologists to further understand the etiology and progression of COPD.
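
    The module-enrichment step can be illustrated with a short sketch of the hypergeometric test. This is not the study's code; the gene identifiers and set sizes below are placeholders used only to show how a regulator's target set is tested for over-representation in a module.

```python
# Minimal sketch (not the study's code) of a hypergeometric enrichment test:
# is a regulator's target set over-represented among a module's genes?
from scipy.stats import hypergeom

def regulator_enrichment_p(module_genes, regulator_targets, background_genes):
    """P(overlap >= observed) under random sampling from the background."""
    background = set(background_genes)
    module = set(module_genes) & background
    targets = set(regulator_targets) & background
    N = len(background)          # population size
    K = len(targets)             # "successes" in the population
    n = len(module)              # draws (module size)
    k = len(module & targets)    # observed overlap
    # sf(k - 1) gives P(X >= k) for the hypergeometric distribution.
    return hypergeom.sf(k - 1, N, K, n)

# Illustrative call with made-up identifiers (CUL1 kept only as an example):
background = [f"GENE_{i}" for i in range(1000)] + ["CUL1"]
p = regulator_enrichment_p(
    module_genes=["CUL1", "GENE_1", "GENE_2", "GENE_3"],
    regulator_targets=["CUL1", "GENE_2", "GENE_10", "GENE_11"],
    background_genes=background,
)
print(f"enrichment p-value: {p:.3g}")
```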

    Six2 Plays an Intrinsic Role in Regulating Proliferation of Mesenchymal Cells in the Developing Palate

    Cleft palate is a common congenital abnormality that results from defective secondary palate (SP) formation. The Sine oculis-related homeobox 2 (Six2) gene has been linked to abnormalities of craniofacial and kidney development. Our current study examined, for the first time, the specific role of Six2 in embryonic mouse SP development. Six2 mRNA and protein expression were identified in the palatal shelves from embryonic days (E)12.5 to E15.5, with peak levels during early stages of palatal shelf outgrowth. Immunohistochemical staining (IHC) showed that Six2 protein is abundant throughout the mesenchyme in the oral half of each palatal shelf, whereas there is a pronounced decline in Six2 expression by mesenchyme cells in the nasal half of the palatal shelf by stages E14.5–15.5. An opposite pattern was observed in the surface epithelium of the palatal shelf. Six2 expression was prominent at all stages in the epithelial cell layer located on the nasal side of each palatal shelf but absent from the epithelium located on the oral side of the palatal shelf. Six2 is a putative downstream target of transcription factor Hoxa2 and we previously demonstrated that Hoxa2 plays an intrinsic role in embryonic palate formation. We therefore investigated whether Six2 expression was altered in the developing SP of Hoxa2 null mice. Reverse transcriptase PCR and Western blot analyses revealed that Six2 mRNA and protein levels were upregulated in Hoxa2−/− palatal shelves at stages E12.5–14.5. Moreover, the domain of Six2 protein expression in the palatal mesenchyme of Hoxa2−/− embryos was expanded to include the entire nasal half of the palatal shelf in addition to the oral half. The palatal shelves of Hoxa2−/− embryos displayed a higher density of proliferating, Ki-67 positive palatal mesenchyme cells, as well as a higher density of Six2/Ki-67 double-positive cells. Furthermore, Hoxa2−/− palatal mesenchyme cells in culture displayed both increased proliferation and elevated Cyclin D1 expression relative to wild-type cultures. Conversely, siRNA-mediated Six2 knockdown restored proliferation and Cyclin D1 expression in Hoxa2−/− palatal mesenchyme cultures to near wild-type levels. Our findings demonstrate that Six2 functions downstream of Hoxa2 as a positive regulator of mesenchymal cell proliferation during SP development.

    Boosting the Performance of LLIE Methods via Unsupervised Weight Map Generation Network

    Over the past decade, significant advancements have been made in low-light image enhancement (LLIE) methods due to the robust capabilities of deep learning in non-linear mapping, feature extraction, and representation. However, a universally superior method that consistently outperforms others across diverse scenarios remains elusive. This challenge primarily arises from the inherent data bias of deep learning-based approaches, stemming from disparities in image statistical distributions between training and testing datasets. To tackle this problem, we propose an unsupervised weight map generation network that effectively integrates pre-enhanced images generated by carefully selected, complementary LLIE methods, structuring the enhancement workflow as a dual-stage execution paradigm. More specifically, in the preprocessing stage we employ two distinct LLIE methods, Night and PairLIE, chosen for their complementary enhancement characteristics, to process the given low-light input image. The resulting outputs, termed pre-enhanced images, serve as the dual target images for the subsequent fusion stage. In the fusion stage, we utilize an unsupervised UNet architecture to determine the optimal pixel-level weight maps for merging the pre-enhanced images. This process is guided by a specially formulated loss function in conjunction with a no-reference image quality metric, the naturalness image quality evaluator (NIQE). Finally, based on a mixed weighting mechanism that combines the generated pixel-level local weights with image-level global empirical weights, the pre-enhanced images are fused to produce the final enhanced image. Our experimental findings demonstrate exceptional performance across a range of datasets, surpassing the various state-of-the-art methods involved in the comparison, including the two pre-enhancement methods. This performance is attributed to the harmonious integration of diverse LLIE methods, which yields robust and high-quality enhancement outcomes across various scenarios. Furthermore, our approach is scalable and adaptable, ensuring compatibility with future advancements in enhancement technologies while maintaining superior performance in this rapidly evolving field.
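
    The fusion step can be illustrated with a minimal sketch. The code below is not the paper's implementation: the tiny `WeightMapNet` stands in for the unsupervised UNet, and the global weight and mixing ratio are illustrative values rather than the paper's empirical settings.

```python
# Minimal sketch of fusing two pre-enhanced images with a predicted
# pixel-level weight map plus an image-level global weight. `WeightMapNet`
# is a toy stand-in for the unsupervised UNet described above.
import torch
import torch.nn as nn

class WeightMapNet(nn.Module):
    def __init__(self, in_channels=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # weights in [0, 1]
        )

    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=1))  # B x 1 x H x W weight map

def fuse(pre_a, pre_b, weight_net, global_w=0.5, mix=0.7):
    """Mixed weighting: blend learned local weights with a global weight.

    `global_w` and `mix` are illustrative hyper-parameters only.
    """
    local_w = weight_net(pre_a, pre_b)          # pixel-level learned weights
    w = mix * local_w + (1.0 - mix) * global_w  # mixed weighting mechanism
    return w * pre_a + (1.0 - w) * pre_b

# Illustrative usage with random stand-ins for the two pre-enhanced images:
a, b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
fused = fuse(a, b, WeightMapNet())
```

    In the paper, the weight network is optimized per image under the NIQE-guided loss; that optimization loop is omitted from this sketch.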

    A Masked-Pre-Training-Based Fast Deep Image Prior Denoising Model

    Compared to supervised denoising models based on deep learning, the unsupervised Deep Image Prior (DIP) denoising approach offers greater flexibility and practicality by operating solely on the given noisy image. However, the random initialization of the network input and network parameters in the DIP leads to slow convergence during iterative training, severely affecting execution efficiency. To address this issue, we propose the Masked-Pre-Training-Based Fast DIP (MPFDIP) denoising model. We enhance the classical Restormer framework by improving its core Transformer module and incorporating sampling, residual learning, and refinement techniques, resulting in a fast network called FRformer (Fast Restormer). The FRformer model is pre-trained offline with supervised learning using a masked processing technique. For a specific noisy image, the pre-trained FRformer network, with its learned parameters, replaces the UNet network used in the original DIP model. The online iterative training of the replaced model follows the DIP unsupervised training approach, utilizing multi-target images and an adaptive loss function; this strategy further improves the denoising effectiveness of the pre-trained model. Extensive experiments demonstrate that the MPFDIP model outperforms existing mainstream deep-learning-based denoising models in reducing Gaussian noise, mixed Gaussian–Poisson noise, and low-dose CT noise, and it significantly improves execution efficiency compared to the original DIP model. This improvement is mainly attributed to the FRformer network's initialization parameters obtained through masked pre-training, which generalize well to various types and intensities of noise and already provide some denoising effect. Using them as initialization parameters greatly improves the convergence speed of the unsupervised iterative training in the DIP. Additionally, the multi-target images and the adaptive loss function further enhance the denoising process.
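
    The masked pre-training idea can be illustrated with a short sketch. This is not the paper's code: a toy convolutional network stands in for FRformer, and the patch size and masking ratio are illustrative values.

```python
# Minimal sketch of masked supervised pre-training: random patches of the
# noisy input are zeroed and the network learns to reconstruct the clean
# target. The toy model below stands in for FRformer; hyper-parameters are
# illustrative, not the paper's.
import torch
import torch.nn as nn

def random_patch_mask(x, patch=8, ratio=0.5):
    """Zero a random subset of non-overlapping patches (H, W divisible by patch)."""
    b, _, h, w = x.shape
    keep = (torch.rand(b, 1, h // patch, w // patch) > ratio).float()
    keep = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return x * keep

def pretrain_step(model, opt, noisy, clean):
    """One supervised pre-training step on masked noisy inputs."""
    opt.zero_grad()
    loss = nn.functional.l1_loss(model(random_patch_mask(noisy)), clean)
    loss.backward()
    opt.step()
    return loss.item()

# Illustrative usage with a toy model standing in for FRformer:
toy = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 3, 3, padding=1))
opt = torch.optim.Adam(toy.parameters(), lr=1e-3)
loss = pretrain_step(toy, opt, torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
```

    After pre-training, the learned weights would serve as the initialization for the DIP-style online stage described above.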