
    Overexpression of candidate tumor suppressor ECRG4 inhibits glioma proliferation and invasion

    Background: ECRG4 has been shown to be a candidate tumor suppressor in several tumors, but its role in glioma remains poorly understood. In this study, we examined the mRNA expression of ECRG4 and investigated its biological role in glioma cells. Methods: Real-time PCR was used to examine expression of ECRG4 in gliomas and their matched brain tissues. The effect of ECRG4 expression on cell proliferation, invasion, and migration was investigated in human U251 glioma cells. Finally, the regulation of the transcription factor NF-κB by ECRG4 was evaluated by western blotting. Results: Of the 10 paired samples analyzed, 9 glioma tissues displayed decreased expression of ECRG4 compared to matched normal brain tissues. Cells transfected with ECRG4 showed significantly decreased proliferation as evaluated by MTT and colony formation assays. Furthermore, ECRG4 overexpression inhibited cell migration and invasion in transwell and Boyden chamber experiments and retarded cell cycle progression from G1 to S phase as measured by FACSCalibur flow cytometry. Protein levels of the nuclear transcription factor NF-κB, which is involved in cell proliferation, inversely correlated with ECRG4 expression. Conclusion: Our data suggest that ECRG4 serves as a tumor suppressor in glioma.

    Seeing What You Miss: Vision-Language Pre-training with Semantic Completion Learning

    Cross-modal alignment is essential for vision-language pre-training (VLP) models to learn the correct corresponding information across different modalities. For this purpose, inspired by the success of masked language modeling (MLM) tasks in NLP pre-training, numerous masked modeling tasks have been proposed for VLP to further promote cross-modal interactions. The core idea of previous masked modeling tasks is to reconstruct the masked tokens from the visible context, thereby learning local-to-local alignment. However, most of them pay little attention to the global semantic features generated for the masked data, which limits the cross-modal alignment ability of global representations. Therefore, in this paper, we propose a novel Semantic Completion Learning (SCL) task, complementary to existing masked modeling tasks, to facilitate global-to-local alignment. Specifically, the SCL task completes the missing semantics of masked data by capturing the corresponding information from the other modality, promoting the learning of more representative global features, which have a great impact on the performance of downstream tasks. Moreover, we present a flexible vision encoder, which enables our model to perform image-text and video-text multimodal tasks simultaneously. Experimental results show that our proposed method obtains state-of-the-art performance on various vision-language benchmarks, such as visual question answering, image-text retrieval, and video-text retrieval.
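    To make the global-to-local idea above concrete, the hypothetical PyTorch sketch below pulls the fused global feature of a masked input toward the global feature of its complete counterpart. The function name and the cosine-distance objective are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def semantic_completion_loss(global_masked, global_full):
        """Hypothetical sketch: align the global feature of the masked view
        with the (detached) global feature of the complete view."""
        g_m = F.normalize(global_masked, dim=-1)
        g_f = F.normalize(global_full.detach(), dim=-1)
        # cosine distance between the two global representations
        return (1.0 - (g_m * g_f).sum(dim=-1)).mean()

    # Usage sketch: global_masked would come from the multimodal encoder run on
    # masked tokens plus the other (unmasked) modality; global_full from the
    # unmasked input. Random tensors stand in for both here.
    loss = semantic_completion_loss(torch.randn(8, 256), torch.randn(8, 256))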

    MAP: Multimodal Uncertainty-Aware Vision-Language Pre-training Model

    Multimodal semantic understanding often has to deal with uncertainty: an observed message may refer to multiple targets. Such uncertainty, both inter- and intra-modal, is problematic for interpretation. Little effort has been devoted to modeling this uncertainty, particularly during pre-training on unlabeled datasets and fine-tuning on task-specific downstream datasets. In this paper, we project the representations of all modalities as probabilistic distributions via a Probability Distribution Encoder (PDE) that exploits sequence-level interactions. Compared to existing deterministic methods, such uncertainty modeling can convey richer multimodal semantic information and more complex relationships. Furthermore, we integrate uncertainty modeling with popular pre-training frameworks and propose suitable pre-training tasks: Distribution-based Vision-Language Contrastive learning (D-VLC), Distribution-based Masked Language Modeling (D-MLM), and Distribution-based Image-Text Matching (D-ITM). The fine-tuned models are applied to challenging downstream tasks, including image-text retrieval, visual question answering, visual reasoning, and visual entailment, and achieve state-of-the-art results. Comment: Accepted by CVPR 2023.
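    As a rough illustration of the distributional idea, the hypothetical sketch below maps a deterministic feature to a diagonal Gaussian and scores a pair of distributions with a squared 2-Wasserstein distance. The class name, the Gaussian parameterization, and the choice of distance are assumptions made for illustration and do not reproduce the paper's PDE.

    import torch
    import torch.nn as nn

    class GaussianHead(nn.Module):
        """Hypothetical head: map a feature to a diagonal Gaussian (mu, log-variance)."""
        def __init__(self, dim):
            super().__init__()
            self.mu = nn.Linear(dim, dim)
            self.logvar = nn.Linear(dim, dim)

        def forward(self, x):
            return self.mu(x), self.logvar(x)

    def wasserstein2_sq(mu1, logvar1, mu2, logvar2):
        """Squared 2-Wasserstein distance between two diagonal Gaussians."""
        s1, s2 = (0.5 * logvar1).exp(), (0.5 * logvar2).exp()
        return ((mu1 - mu2) ** 2).sum(-1) + ((s1 - s2) ** 2).sum(-1)

    # A distribution-based contrastive objective could then use the negative
    # distance -wasserstein2_sq(...) as the image-text similarity score.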

    LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment

    Video-language (VL) pretraining has achieved remarkable improvement in multiple downstream tasks. However, the current VL pretraining framework is hard to extend to multiple modalities (N modalities, N>=3) beyond vision and language. We thus propose LanguageBind, taking language as the bind across different modalities, because the language modality is well explored and contains rich semantics. Specifically, we freeze the language encoder acquired by VL pretraining, then train encoders for other modalities with contrastive learning. As a result, all modalities are mapped to a shared feature space, implementing multi-modal semantic alignment. While LanguageBind ensures that we can extend VL modalities to N modalities, we also need a high-quality dataset with alignment pairs centered on language. We thus propose VIDAL-10M, a dataset of Video, Infrared, Depth, Audio, and their corresponding Language. In VIDAL-10M, all videos are from short-video platforms with complete semantics rather than truncated segments from long videos, and all the video, depth, infrared, and audio modalities are aligned to their textual descriptions. LanguageBind has achieved superior performance on a wide range of 15 benchmarks covering video, audio, depth, and infrared. Moreover, multiple experiments have provided evidence for the effectiveness of LanguageBind in achieving indirect alignment and complementarity among diverse modalities. Code address: https://github.com/PKU-YuanGroup/LanguageBind. Comment: Accepted by ICLR 2024.
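    A minimal sketch of the language-centered contrastive step described above is given below: a trainable encoder for a new modality is aligned to embeddings from a frozen language encoder with a symmetric InfoNCE loss. The function name, the detached text embeddings, and the default temperature are assumptions for illustration and are not the released LanguageBind API.

    import torch
    import torch.nn.functional as F

    def bind_loss(modality_emb, text_emb, temperature=0.07):
        """Hypothetical sketch: symmetric InfoNCE between a trainable modality
        branch and a frozen language branch."""
        m = F.normalize(modality_emb, dim=-1)
        t = F.normalize(text_emb.detach(), dim=-1)   # frozen language anchor
        logits = m @ t.t() / temperature
        labels = torch.arange(len(m), device=m.device)
        # match each modality sample to its paired caption, and vice versa
        return 0.5 * (F.cross_entropy(logits, labels) +
                      F.cross_entropy(logits.t(), labels))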

    5-Fluorouracil targets thymidylate synthase in the selective suppression of TH17 cell differentiation

    While it is well established that treatment of cancer patients with 5-Fluorouracil (5-FU) can result in immune suppression, the exact function of 5-FU in the modulation of immune cells has not been fully established. We found that low-dose 5-FU selectively suppresses TH17 and TH1 cell differentiation without apparent effect on Treg or TH2 cells, and that it significantly suppresses thymidylate synthase (TS) expression in TH17 and TH1 cells while having a lesser effect in tumor cells and macrophages. Interestingly, the basal expression of TS varies significantly between T helper phenotypes, and knockdown of TS significantly impairs TH17 and TH1 cell differentiation without affecting the differentiation of either Treg or TH2 cells. Finally, low-dose 5-FU is effective in ameliorating colitis development by suppressing TH17 and TH1 cell development in a T cell transfer colitis model. Taken together, the results highlight the importance of the anti-inflammatory functions of low-dose 5-FU, which acts by selectively suppressing TH17 and TH1 immune responses.

    Effect of staurosporine on the mobility and invasiveness of lung adenocarcinoma A549 cells: an in vitro study

    Background: Lung cancer is one of the most malignant tumors, representing a significant threat to human health. Lung cancer patients often exhibit tumor cell invasion and metastasis before diagnosis, which often renders current treatments ineffective. Here, we investigated the effect of staurosporine, a potent protein kinase C (PKC) inhibitor, on the mobility and invasiveness of human lung adenocarcinoma A549 cells. Methods: All experiments were conducted using human lung adenocarcinoma A549 cells that were either untreated or treated with 1 nmol/L, 10 nmol/L, or 100 nmol/L staurosporine. Electron microscopy was performed to study ultrastructural differences between untreated A549 cells and A549 cells treated with staurosporine. The effect of staurosporine on the mobility and invasiveness of A549 cells was tested using Transwell chambers. Western blot analyses were performed to study the effect of staurosporine on the levels of PKC-α, integrin β1, E-cadherin, and LnR. Changes in MMP-9 and uPA levels were identified by fluorescence microscopy. Results: We demonstrated that treatment of A549 cells with staurosporine caused alterations in cell shape and morphology. Untreated cells were primarily short spindle- and triangle-shaped, in contrast to staurosporine-treated cells, which were retracted and round-shaped. The latter showed signs of apoptosis, including vacuole fragmentation, chromatin degeneration, and a decrease in the number of microvilli at the cell surface. A549 cell adhesion, mobility, and invasiveness significantly decreased with higher staurosporine concentrations. E-cadherin, integrin β1, and LnR levels changed by a factor of 1.5, 0.74, and 0.73, respectively, compared to untreated cells. In addition, the levels of MMP-9 and uPA decreased in cells treated with staurosporine. Conclusion: In summary, this study demonstrates that staurosporine inhibits cell adhesion, mobility, and invasion of A549 cells. The staurosporine-mediated inhibition of PKC-α, induction of E-cadherin expression, and decreased integrin β1, LnR, MMP-9, and uPA levels could all possibly contribute to this biological process. These results represent a significant step forward in the ongoing effort to understand the development of lung carcinoma and to design novel strategies to inhibit metastasis by targeting the adhesion, mobility, and invasion of tumor cells.