175 research outputs found

    Anti-cancer molecular mechanism of Actinidia chinensis Planch in gastric cancer based on network pharmacology and molecular docking

    Purpose: To determine the anti-tumor effects of Actinidia chinensis Planch (ACP) root extract, as well as its mechanism of action against gastric cancer (GC), using network pharmacology. Methods: The bioactive compounds and targets of ACP, as well as GC-related genes, were identified from a series of public databases. Functional enrichment analysis was conducted to find relevant biological processes and pathways. Survival analysis was conducted using the GEPIA tool. AutoDock was used to carry out molecular docking between the ingredients and their targets. Results: A total of 20 bioactive compounds with 209 corresponding targets were identified for ACP, and a total of 871 GC-related genes were obtained. Forty-nine (49) targets of ACP were identified as candidate genes for the prevention of GC, and a PPI network with 584 interactions among these genes was constructed. The data demonstrated that the candidate targets were involved in multiple biological processes, such as the oxidative stress response, apoptosis, and proliferation. Moreover, these candidate targets were significantly associated with cancer-related pathways and signal transduction pathways. A compound-target-pathway network containing 16 bioactive compounds, 49 targets, and 10 pathways was constructed and visualized; the three targets with the highest degree values were AKT1, MYC, and JUN. Survival analysis revealed significant associations between GC prognosis and several targets (PREP, PTGS1, AR, and PTGS2). Molecular docking further revealed good binding affinities between the bioactive compounds and the prognosis-related targets, indicating potential roles for these ingredient-target interactions in protection against GC. Conclusion: Taken together, this study provides novel clues for determining the anti-gastric-cancer mechanism of ACP.
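    The degree-based ranking of hub targets described above can be sketched as follows. The edge list here is a hypothetical toy network, not the paper's actual 49-target / 584-interaction PPI network; it only illustrates how degree values single out hubs such as AKT1, MYC, and JUN.

```python
# Rank candidate targets in a protein-protein interaction (PPI) network
# by degree (number of interactions). Toy edges for illustration only.
from collections import Counter

edges = [
    ("AKT1", "MYC"), ("AKT1", "JUN"), ("AKT1", "PTGS2"),
    ("MYC", "JUN"), ("MYC", "AR"), ("JUN", "PTGS1"),
]

degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# The highest-degree nodes are treated as hub candidates.
top3 = [name for name, _ in degree.most_common(3)]
print(top3)  # ['AKT1', 'MYC', 'JUN']
```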

    Residual Mixture of Experts

    Mixture of Experts (MoE) can scale up vision transformers effectively. However, training a large MoE transformer requires prohibitive computational resources. In this paper, we propose Residual Mixture of Experts (RMoE), an efficient training pipeline for MoE vision transformers on downstream tasks such as segmentation and detection. RMoE achieves results comparable to upper-bound MoE training, while introducing only minor additional training cost over lower-bound non-MoE training pipelines. The efficiency rests on our key observation: the weights of an MoE transformer can be factored into an input-independent core and an input-dependent residual. Compared with the weight core, the weight residual can be trained efficiently with much less computation, e.g., by finetuning on the downstream data. We show that, compared with the current MoE training pipeline, we obtain comparable results while saving over 30% of the training cost. Compared with state-of-the-art non-MoE transformers such as Swin-T / CvT-13 / Swin-L, we obtain +1.1 / 0.9 / 1.0 mIoU gains on ADE20K segmentation and +1.4 / 1.6 / 0.6 AP gains on the MS-COCO object detection task, with less than 3% additional training cost.
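    The core/residual factorization can be sketched in miniature. This is a hypothetical toy with 2x2 weights, not the paper's implementation: each expert's effective weight is a shared, frozen, input-independent core plus a small per-expert residual, and only the cheap residuals would be updated during downstream finetuning.

```python
# Toy sketch of weight factorization W_expert = W_core + W_residual.
# The core is shared across experts and frozen; residuals are per-expert.
core = [[0.5, -0.2], [0.1, 0.4]]  # frozen, shared, input-independent

def make_expert(residual):
    """Effective expert weight = core + residual (elementwise)."""
    return [[c + r for c, r in zip(crow, rrow)]
            for crow, rrow in zip(core, residual)]

# Two experts differ only in their small, cheap-to-train residuals.
expert_a = make_expert([[0.01, 0.0], [0.0, -0.02]])
expert_b = make_expert([[-0.03, 0.02], [0.01, 0.0]])

def apply(weight, x):
    """y = W x for a 2x2 weight matrix and a length-2 input."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weight]

x = [1.0, 2.0]
print(apply(expert_a, x), apply(expert_b, x))
```

    Because the core dominates each weight, most of the capacity is trained once and reused, while per-expert adaptation touches only the residuals.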

    Truncated eigenvalue equation and long wavelength behavior of lattice gauge theory

    We review our new method, which may be the most direct and efficient way to approach continuum physics from Hamiltonian lattice gauge theory. It consists of solving the eigenvalue equation with a truncation scheme that preserves the continuum limit. Its efficiency has been confirmed by the observed scaling behaviors of the long-wavelength vacuum wave functions and mass gaps in (2+1)-dimensional models and the (1+1)-dimensional σ model, even at very low truncation orders. Most of these results show rapid convergence to the available Monte Carlo data, supporting the reliability of our method. Comment: LaTeX file, 4 pages, plus 4 figures encoded with uufile.
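    The idea of solving an eigenvalue equation under a truncation scheme can be illustrated with a toy problem. The Hamiltonian below (nearest-neighbour hopping plus a linear potential) is a hypothetical stand-in, not the lattice gauge theory of the paper; it only shows the generic pattern that the ground-state energy estimate converges as the truncation order grows, and (by eigenvalue interlacing) decreases monotonically.

```python
# Toy truncated eigenvalue problem: the ground energy of an n x n
# truncation converges quickly as the truncation order n increases.
import math

def hamiltonian(n):
    """Toy Hamiltonian: hopping -1 between neighbours plus a linear
    potential on the diagonal, truncated to an n x n matrix."""
    h = [[0.0] * n for _ in range(n)]
    for i in range(n):
        h[i][i] = 2.0 + 0.5 * i        # potential grows with the index
        if i + 1 < n:
            h[i][i + 1] = h[i + 1][i] = -1.0
    return h

def ground_energy(h, shift=20.0, iters=2000):
    """Smallest eigenvalue of h via power iteration on shift*I - h
    (shift chosen above the spectrum), then a Rayleigh quotient."""
    n = len(h)
    v = [1.0] * n
    for _ in range(iters):
        w = [shift * v[i] - sum(h[i][j] * v[j] for j in range(n))
             for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    hv = [sum(h[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(v[i] * hv[i] for i in range(n))

# Estimates at increasing truncation order: monotone and converging.
estimates = [ground_energy(hamiltonian(n)) for n in (4, 8, 12)]
print(estimates)
```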

    Deep Learning in Predicting Real Estate Property Prices: A Comparative Study

    The dominant methods for real estate property price prediction or valuation are based on multiple regression. Regression-based methods are, however, imperfect because they suffer from issues such as multicollinearity and heteroscedasticity. Recent years have witnessed the use of machine learning methods, but the results are mixed. This paper introduces a new approach that applies deep learning models to real estate property price prediction, aiming to improve accuracy on data representing sales transactions in a large metropolitan area. Three deep learning models, LSTM, GRU, and Transformer, are created and compared with other machine learning and traditional models, including random forest (RF). The results obtained on the data set with all features clearly show that the RF and Transformer models outperformed the others. The LSTM and GRU models produced the worst results, suggesting that they are perhaps not suitable for predicting real estate prices. Furthermore, running the Transformer and RF on a data set with feature reduction produced even more accurate predictions. In conclusion, our research shows that the performance of the Transformer model is close to that of the RF model; both produce significantly more accurate predictions than existing approaches.
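    The evaluation loop behind such a comparison can be sketched with two simple models scored by RMSE. The data below are hypothetical toy pairs of (floor area, price), and the models shown (a mean-price baseline and closed-form least-squares regression) stand in for the paper's actual LSTM / GRU / Transformer / RF comparison.

```python
# Compare two price predictors on held-out data via RMSE.
import math

# Hypothetical toy data: (floor area in m^2, sale price in $1000s).
train = [(50, 150), (70, 210), (90, 260), (110, 330), (130, 380)]
test = [(60, 180), (100, 300)]

def rmse(model, data):
    """Root mean squared error of a predictor over (x, y) pairs."""
    return math.sqrt(sum((model(x) - y) ** 2 for x, y in data) / len(data))

# Baseline: always predict the mean training price.
mean_price = sum(y for _, y in train) / len(train)
baseline = lambda x: mean_price

# One-feature linear regression, fitted with the closed-form
# least-squares solution (no iterative training needed).
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
slope = (sum((x - mx) * (y - my) for x, y in train)
         / sum((x - mx) ** 2 for x, _ in train))
linear = lambda x: my + slope * (x - mx)

print(rmse(baseline, test), rmse(linear, test))
```

    Any richer model (RF, Transformer, etc.) slots into the same `rmse` comparison, which is what makes per-model accuracy directly comparable.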

    On the Hidden Waves of Image

    In this paper, we introduce an intriguing phenomenon: the successful reconstruction of images using a set of one-way wave equations with hidden and learnable speeds. Each image corresponds to a solution with a unique initial condition, which can be computed from the original image by a visual encoder (e.g., a convolutional neural network). Furthermore, the solution for each image exhibits two noteworthy mathematical properties: (a) it can be decomposed into a collection of special solutions of the same one-way wave equations that are first-order autoregressive, with shared coefficient matrices for the autoregression, and (b) the product of these coefficient matrices forms a diagonal matrix with the speeds of the wave equations as its diagonal elements. We term this phenomenon hidden waves: although the speeds of the wave equations and the autoregressive coefficient matrices are latent, they are both learnable and shared across images. This represents a mathematical invariance across images, providing a new mathematical perspective from which to understand them.
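    The link between a one-way wave equation and a first-order autoregressive update can be made concrete with a standard discretization. This is an illustrative sketch, not the paper's construction: the upwind scheme for u_t + c u_x = 0 gives u(t+1) = A u(t), and with Courant number c*dt/dx = 1 the update A is an exact shift, so a pulse travels one grid cell per step.

```python
# One-way wave equation u_t + c u_x = 0 as a first-order
# autoregressive update via the upwind scheme on a periodic grid.
n = 16
courant = 1.0  # c * dt / dx; exactly 1 makes the update a pure shift

def step(u):
    """One upwind step: u_i <- u_i - courant * (u_i - u_{i-1}).
    Python's u[-1] wraps around, giving periodic boundaries."""
    return [u[i] - courant * (u[i] - u[i - 1]) for i in range(n)]

u = [0.0] * n
u[3] = 1.0                 # initial condition: a unit pulse at cell 3
for _ in range(5):
    u = step(u)
print(u.index(max(u)))     # the pulse has moved to cell 8
```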