
    Curcumin inhibits epithelial-mesenchymal transition in colorectal cancer cells by regulating miR-206/SNAI2 pathway

    Purpose: To examine the effects of curcumin on epithelial-mesenchymal transition (EMT) via regulation of miR-206 and SNAI2 in colorectal cancer (CRC) cells. The relationship between SNAI2 and miR-206 and the effects of curcumin on the related mechanisms were also identified. Methods: Transwell assays were used to analyze cellular migration and invasion. Changes in protein and mRNA expression of the associated genes were evaluated by western blotting and quantitative reverse transcription PCR, respectively. The relationship between SNAI2 and miR-206 was determined using a dual luciferase assay. Results: Curcumin inhibited cell metastasis, upregulated miR-206 expression, and decreased SNAI2 levels. Furthermore, miR-206 directly targeted SNAI2 and inhibited EMT via downregulation of SNAI2 expression. Curcumin inhibited EMT in CRC cells by upregulating miR-206. Conclusion: This study is the first to show that curcumin suppresses the epithelial-mesenchymal transition process in colorectal cancer cells by modulating the miR-206/SNAI2 axis. These findings suggest that curcumin may be useful as a novel therapeutic agent to inhibit CRC metastasis.

    CTP: Towards Vision-Language Continual Pretraining via Compatible Momentum Contrast and Topology Preservation

    Vision-Language Pretraining (VLP) has shown impressive results on diverse downstream tasks by offline training on large-scale datasets. Given the ever-growing nature of real-world data, such an offline training paradigm on ever-expanding data is unsustainable, because models lack the continual-learning ability to accumulate knowledge incrementally. However, most continual learning studies are limited to uni-modal classification, and existing multi-modal datasets cannot simulate continual non-stationary data stream scenarios. To support the study of Vision-Language Continual Pretraining (VLCP), we first contribute a comprehensive and unified benchmark dataset, P9D, which contains over one million product image-text pairs from 9 industries. The data from each industry serve as an independent task to support continual learning and conform to the real-world long-tail distribution, simulating pretraining on web data. We comprehensively study the characteristics and challenges of VLCP and propose a new algorithm: Compatible momentum contrast with Topology Preservation, dubbed CTP. The compatible momentum model absorbs the knowledge of the current- and previous-task models to flexibly update the modal features. Moreover, Topology Preservation transfers the knowledge of embeddings across tasks while preserving the flexibility of feature adjustment. The experimental results demonstrate that our method not only achieves superior performance compared with other baselines but also does not incur an expensive training burden. Dataset and code are available at https://github.com/KevinLight831/CTP. (Accepted by ICCV 2023.)
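    The core update behind momentum-contrast style methods is an exponential moving average (EMA) of the student weights. A minimal sketch follows; the function and parameter names are hypothetical, and the "compatible" variant is only an illustrative guess at how a frozen previous-task model might be absorbed before the EMA step, not the exact CTP rule.

```python
import numpy as np

def momentum_update(student_w, momentum_w, m=0.999):
    """Plain EMA update used in momentum-contrast style methods:
    momentum <- m * momentum + (1 - m) * student, per weight tensor."""
    return [m * mw + (1.0 - m) * sw for sw, mw in zip(student_w, momentum_w)]

def compatible_momentum_update(student_w, momentum_w, prev_task_w,
                               m=0.999, alpha=0.5):
    """Hypothetical 'compatible' variant: blend the current student with a
    frozen previous-task model, then apply the EMA step (illustrative only)."""
    blended = [alpha * sw + (1.0 - alpha) * pw
               for sw, pw in zip(student_w, prev_task_w)]
    return [m * mw + (1.0 - m) * bw for bw, mw in zip(blended, momentum_w)]
```

    In practice the momentum model is the one that produces the contrastive keys, so it evolves slowly and stays compatible with embeddings from earlier tasks.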

    DLCA-Recon: Dynamic Loose Clothing Avatar Reconstruction from Monocular Videos

    Reconstructing a dynamic human with loose clothing is an important but difficult task. To address this challenge, we propose a method named DLCA-Recon to create human avatars from monocular videos. The distance from loose clothing to the underlying body changes rapidly in every frame when the human moves and acts freely. Previous methods lack effective geometric initialization and constraints for guiding the optimization of deformation to explain this dramatic change, resulting in discontinuous and incomplete reconstructed surfaces. To model the deformation more accurately, we propose to initialize an estimated 3D clothed human in the canonical space, as it is easier for deformation fields to learn from the clothed human than from SMPL. With both representations of explicit mesh and implicit SDF, we utilize the physical connection information between consecutive frames and propose a dynamic deformation field (DDF) to optimize the deformation fields. DDF accounts for contributive forces on loose clothing to enhance the interpretability of deformations and effectively capture the free movement of loose clothing. Moreover, we propagate SMPL skinning weights to each individual and refine the pose and skinning weights during optimization to improve the skinning transformation. Based on the more reasonable initialization and DDF, we can simulate real-world physics more accurately. Extensive experiments on public and our own datasets validate that our method produces superior results for humans with loose clothing compared to SOTA methods.
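    The propagated skinning weights ultimately drive a linear blend skinning (LBS) transform. A minimal numpy sketch of standard LBS (the textbook formulation, not the paper's code; all names are hypothetical):

```python
import numpy as np

def linear_blend_skinning(verts, weights, joint_transforms):
    """Standard linear blend skinning.
    verts: (V, 3) rest-pose vertices; weights: (V, J) skinning weights
    (rows sum to 1); joint_transforms: (J, 4, 4) per-joint rigid transforms.
    Returns the (V, 3) deformed vertices."""
    V = verts.shape[0]
    homo = np.concatenate([verts, np.ones((V, 1))], axis=1)        # (V, 4)
    # Blend the per-joint transforms with the skinning weights: (V, 4, 4)
    blended = np.einsum('vj,jab->vab', weights, joint_transforms)
    out = np.einsum('vab,vb->va', blended, homo)                   # (V, 4)
    return out[:, :3]
```

    Refining the pose and the weight matrix jointly, as the abstract describes, amounts to optimizing both `joint_transforms` and `weights` against the observed frames.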

    Progressive Semantic-Visual Mutual Adaption for Generalized Zero-Shot Learning

    Generalized Zero-Shot Learning (GZSL) identifies unseen categories by knowledge transferred from the seen domain, relying on the intrinsic interactions between visual and semantic information. Prior works mainly localize regions corresponding to the shared attributes. When various visual appearances correspond to the same attribute, the shared attributes inevitably introduce semantic ambiguity, hampering the exploration of accurate semantic-visual interactions. In this paper, we deploy a dual semantic-visual transformer module (DSVTM) to progressively model the correspondences between attribute prototypes and visual features, constituting a progressive semantic-visual mutual adaption (PSVMA) network for semantic disambiguation and improved knowledge transferability. Specifically, DSVTM devises an instance-motivated semantic encoder that learns instance-centric prototypes to adapt to different images, enabling the recast of an unmatched semantic-visual pair into a matched one. Then, a semantic-motivated instance decoder strengthens accurate cross-domain interactions between the matched pair for semantic-related instance adaption, encouraging the generation of unambiguous visual representations. Moreover, to mitigate the bias towards seen classes in GZSL, a debiasing loss is proposed to pursue response consistency between seen and unseen predictions. PSVMA consistently yields superior performance against other state-of-the-art methods. Code will be available at: https://github.com/ManLiuCoder/PSVMA. (Accepted by CVPR 2023.)
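    The debiasing idea, pursuing response consistency between seen and unseen predictions, can be illustrated with a hedged sketch: penalize the gap between the softmax probability mass assigned to seen versus unseen classes. The exact PSVMA loss may differ, and all names here are hypothetical.

```python
import numpy as np

def debiasing_loss(logits, seen_mask):
    """Illustrative seen/unseen debiasing term.
    logits: (B, C) class scores; seen_mask: (C,) boolean, True for seen classes.
    Returns the mean squared gap between total seen and unseen probability mass."""
    # Numerically stable softmax over classes
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    seen_p = probs[:, seen_mask].sum(axis=1)      # mass on seen classes
    unseen_p = probs[:, ~seen_mask].sum(axis=1)   # mass on unseen classes
    return float(np.mean((seen_p - unseen_p) ** 2))
```

    Minimizing such a term discourages the classifier from systematically inflating responses on seen classes, which is the dominant failure mode in GZSL.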

    Structural dynamic model updating based on Kriging model using frequency response data

    Metamodel techniques are attracting increasing attention in structural dynamic model updating. In this paper, an attempt is made to explore the effectiveness of the Kriging method for acceleration frequency response function based model updating. A Kriging model is constructed from input variables selected by the F-test method, which is applied to the results of a design of experiments. The response of the design of experiments is obtained from the errors between the acceleration response curves of the analytical model and the experimental model. Two examples of representative structures are discussed; a comparison of the updated results from different metamodels shows that a smaller error in the updated results is obtained with the Kriging model, and the updated analytical model has good prediction capability. It can be concluded that the Kriging model is suitable for frequency response function based model updating.
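    A Kriging surrogate of the kind used in such updating schemes can be sketched as a Gaussian-process predictor with a Gaussian (RBF) correlation model. This is the generic textbook formulation (zero trend, fixed length scale, small nugget), not the paper's implementation:

```python
import numpy as np

def rbf_kernel(A, B, length=1.0):
    """Gaussian correlation between two point sets A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def kriging_predict(X_train, y_train, X_test, length=1.0, nugget=1e-8):
    """Simple-kriging mean prediction: k(X*, X) K^-1 y.
    The nugget regularizes the correlation matrix for numerical stability."""
    K = rbf_kernel(X_train, X_train, length) + nugget * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train, length)
    return Ks @ np.linalg.solve(K, y_train)
```

    In the model-updating loop, `X_train` would hold the design-of-experiments parameter samples and `y_train` the FRF error metric, so candidate parameter updates can be evaluated on the cheap surrogate instead of the full finite-element model.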

    Bright solitons in a spin-orbit-coupled dipolar Bose-Einstein condensate trapped within a double-lattice

    By effectively controlling the dipole-dipole interaction, we investigate the characteristics of the ground state of bright solitons in a spin-orbit coupled dipolar Bose-Einstein condensate. The dipolar atoms are trapped within a double-lattice consisting of a linear and a nonlinear lattice. We derive the equations of motion for the different spin components, taking the controlling mechanisms of the dipole-dipole interaction into account, and obtain an analytical expression for the dipole-dipole interaction. By adjusting the dipole polarization angle, the interaction can be tuned from attractive to repulsive. On this basis, we study the generation and manipulation of bright solitons using both the analytical variational method and numerical imaginary-time evolution. We also analyze the stability of the bright solitons and map out the stability phase diagram. By adjusting the long-range dipole-dipole interaction, one can manipulate the bright solitons in all aspects, including their existence, width, nodes, and stability. Given the complexity of our system, these results have potential applications in the quantum simulation of complex systems.
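    The tunability from attraction to repulsion follows from the standard angular dependence of the dipole-dipole interaction (the textbook form, not a result specific to this paper):

```latex
U_{\mathrm{dd}}(\mathbf{r}) \;=\; \frac{C_{\mathrm{dd}}}{4\pi}\,
\frac{1 - 3\cos^{2}\theta}{r^{3}},
```

    where \(\theta\) is the angle between the polarization direction and the interparticle vector. Head-to-tail dipoles (\(\theta = 0\)) give the factor \(-2\), i.e. attraction; side-by-side dipoles (\(\theta = \pi/2\)) give \(+1\), i.e. repulsion; and the interaction vanishes at the magic angle \(\theta_{m} = \arccos(1/\sqrt{3}) \approx 54.7^{\circ}\), which is why sweeping the polarization angle switches the sign of the effective interaction.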

    You Can Mask More For Extremely Low-Bitrate Image Compression

    Learned image compression (LIC) methods have experienced significant progress in recent years. However, these methods are primarily dedicated to optimizing rate-distortion (R-D) performance at medium and high bitrates (> 0.1 bits per pixel (bpp)), while research on extremely low bitrates is limited. Besides, existing methods fail to explicitly explore the image structure and texture components crucial for image compression, treating them equally alongside uninformative components in the network. This can cause severe perceptual quality degradation, especially in low-bitrate scenarios. In this work, inspired by the success of pre-trained masked autoencoders (MAE) on many downstream tasks, we propose to rethink the mask sampling strategy from structure and texture perspectives for high redundancy reduction and discriminative feature representation, further unleashing the potential of LIC methods. Accordingly, we present a dual-adaptive masking approach (DA-Mask) that samples visible patches based on the structure and texture distributions of the original image. We combine DA-Mask and a pre-trained MAE in masked image modeling (MIM) as an initial compressor that abstracts informative semantic context and texture representations. This pipeline cooperates well with LIC networks to achieve further secondary compression while preserving promising reconstruction quality. Consequently, we propose a simple yet effective masked compression model (MCM), the first framework that unifies MIM and LIC end-to-end for extremely low-bitrate image compression. Extensive experiments demonstrate that our approach outperforms recent state-of-the-art methods in R-D performance, visual quality, and downstream applications at very low bitrates. Our code is available at https://github.com/lianqi1008/MCM.git. (Under review.)
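    Importance-weighted patch sampling of the kind DA-Mask builds on can be sketched as follows. This is a hedged illustration: the paper's actual sampling rule, score definition, and names may differ, and `scores` here is any nonnegative per-patch structure/texture measure.

```python
import numpy as np

def dual_adaptive_mask(scores, keep_ratio=0.25, rng=None):
    """Keep a fraction of patches, sampled without replacement with
    probability proportional to a per-patch importance score.
    scores: (N,) nonnegative importance values; returns an (N,) boolean
    mask with round(keep_ratio * N) visible (True) patches."""
    rng = np.random.default_rng(rng)
    n_keep = max(1, int(round(keep_ratio * len(scores))))
    p = scores / scores.sum()
    keep = rng.choice(len(scores), size=n_keep, replace=False, p=p)
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return mask
```

    Patches with high structure or texture scores are kept more often, so the MAE encoder sees the informative content while redundant regions are dropped before compression.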