1,589 research outputs found

    Investigation of nedaplatin and CpG oligodeoxynucleotide combination therapy in a mouse model of lung cancer

    Purpose: To investigate the anti-tumor effects of nedaplatin (NDP) and CpG oligodeoxynucleotide (CpG-ODN) combination therapy in a mouse model of lung cancer.
    Methods: To evaluate the anti-tumor effects of NDP and CpG-ODN combination therapy, a lung cancer xenograft mouse model was established by subcutaneous injection of LA-795 cells. BALB/c mice were divided into four groups: NDP, CpG-ODN, NDP + CpG-ODN, and untreated controls. Sections of lung cancer tissue were stained with hematoxylin and eosin (H&E) and examined morphologically. Spleen weight, body weight, and spleen index were measured. Flow cytometry was used to determine the proportions of CD3+, CD8+, and CD4+ cells and the CD4+/CD8+ ratio in mouse blood. Serum levels of interferon-γ (IFN-γ) and interleukin-12 (IL-12) were measured by enzyme-linked immunosorbent assay (ELISA).
    Results: NDP + CpG-ODN therapy significantly reduced tumor volume and prolonged the survival time of tumor-bearing mice. NDP + CpG-ODN induced changes in cancer cell morphology, including large areas of necrosis that correlated with the reduction in tumor size. NDP + CpG-ODN significantly increased spleen weight and spleen index and markedly enhanced immune cell activation, as evidenced by increased serum levels of IFN-γ and IL-12.
    Conclusion: NDP and CpG-ODN combination therapy inhibits the growth of lung cancer and prolongs the survival time of tumor-bearing mice, possibly through the activation of immune cells and increased expression of IFN-γ and IL-12.
    Keywords: CpG-ODN, NDP, Lung cancer, Combination therapy

    SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization

    Transfer learning has fundamentally changed the landscape of natural language processing (NLP) research. Many existing state-of-the-art models are first pre-trained on a large text corpus and then fine-tuned on downstream tasks. However, due to limited data resources from downstream tasks and the extremely large capacity of pre-trained models, aggressive fine-tuning often causes the adapted model to overfit the data of downstream tasks and forget the knowledge of the pre-trained model. To address the above issue in a more principled manner, we propose a new computational framework for robust and efficient fine-tuning of pre-trained language models. Specifically, our proposed framework contains two important ingredients: 1. smoothness-inducing regularization, which effectively manages the capacity of the model; 2. Bregman proximal point optimization, which is a class of trust-region methods and can prevent knowledge forgetting. Our experiments demonstrate that our proposed method achieves state-of-the-art performance on multiple NLP benchmarks.
    Comment: The 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020)
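    The smoothness-inducing ingredient can be illustrated with a short sketch. The snippet below is a minimal PyTorch-style illustration, not the authors' implementation: it assumes `f` is a callable mapping input embeddings to classification logits, and the helper name `smoothness_penalty`, the single adversarial ascent step, and the hyperparameters `epsilon` and `step_size` are illustrative choices; the Bregman proximal point update is omitted.

```python
import torch
import torch.nn.functional as F

def smoothness_penalty(f, embeddings, epsilon=1e-5, step_size=1e-3):
    """Estimate a smoothness-inducing regularizer: the KL divergence between
    f(x) and f(x + delta) for a small perturbation delta found by one ascent step.
    `f` is assumed to map input embeddings to classification logits."""
    logits = f(embeddings).detach()                        # reference predictions
    noise = (torch.randn_like(embeddings) * epsilon).requires_grad_()
    kl = F.kl_div(F.log_softmax(f(embeddings + noise), dim=-1),
                  F.softmax(logits, dim=-1), reduction="batchmean")
    (grad,) = torch.autograd.grad(kl, noise)               # direction that increases the KL
    delta = (noise + step_size * grad.sign()).detach()     # one sign-gradient ascent step
    return F.kl_div(F.log_softmax(f(embeddings + delta), dim=-1),
                    F.softmax(logits, dim=-1), reduction="batchmean")

# Sketch of use inside a fine-tuning step (lambda_s is a regularization weight):
#   loss = task_loss + lambda_s * smoothness_penalty(model_head, input_embeddings)
```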

    Design and Stiffness Analysis of a Bio-inspired Soft Actuator with Bi-direction Tunable Stiffness Property

    The ability to modulate the stiffness of soft actuators plays a vital role in improving the efficiency of interacting with the environment. However, with a unidirectional stiffness modulation mechanism, high lateral stiffness and a wide range of bending stiffness cannot be guaranteed at the same time. Therefore, we draw inspiration from the anatomical structure of the finger and propose a soft actuator with a bi-direction tunable stiffness property (BTSA). BTSA is composed of air-tendon hybrid actuation (ATA) and a bone-like structure (BLS). The bending stiffness can be tuned by ATA from 0.2 N/mm to 0.7 N/mm, a magnification of about 3.5 times. The lateral stiffness with BLS is enhanced up to 4.2 times compared to the actuator without BLS. Meanwhile, the lateral stiffness can be tuned in a decoupled manner within a certain range (e.g., from 0.35 N/mm to 0.46 N/mm when the bending angle is 45 deg). The BLS is designed according to a simplified stiffness analysis model, and a lost-wax-based fabrication method is proposed to ensure airtightness. Experiments on fingertip force, bending stiffness, and lateral stiffness are conducted to verify these properties.
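    As a quick sanity check on the quoted figures, the magnifications follow directly from the end-points of the reported stiffness ranges under a simple linear stiffness model k = F / δ (N/mm); the short sketch below only reproduces that arithmetic and is not the paper's analysis code.

```python
# Quick arithmetic check on the stiffness figures quoted in the abstract.
# A linear stiffness model k = F / delta (N/mm) is assumed; only the range
# end-points below come from the abstract itself.

k_bend_min, k_bend_max = 0.2, 0.7      # N/mm, bending stiffness range tuned via ATA
print(k_bend_max / k_bend_min)         # 3.5 -> the reported ~3.5x magnification

k_lat_min, k_lat_max = 0.35, 0.46      # N/mm, lateral stiffness at a 45 deg bending angle
print(k_lat_max / k_lat_min)           # ~1.3x decoupled tuning range with BLS
```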

    Deep Network Approximation: Beyond ReLU to Diverse Activation Functions

    This paper explores the expressive power of deep neural networks for a diverse range of activation functions. An activation function set $\mathscr{A}$ is defined to encompass the majority of commonly used activation functions, such as $\mathtt{ReLU}$, $\mathtt{LeakyReLU}$, $\mathtt{ReLU}^2$, $\mathtt{ELU}$, $\mathtt{SELU}$, $\mathtt{Softplus}$, $\mathtt{GELU}$, $\mathtt{SiLU}$, $\mathtt{Swish}$, $\mathtt{Mish}$, $\mathtt{Sigmoid}$, $\mathtt{Tanh}$, $\mathtt{Arctan}$, $\mathtt{Softsign}$, $\mathtt{dSiLU}$, and $\mathtt{SRS}$. We demonstrate that for any activation function $\varrho \in \mathscr{A}$, a $\mathtt{ReLU}$ network of width $N$ and depth $L$ can be approximated to arbitrary precision by a $\varrho$-activated network of width $6N$ and depth $2L$ on any bounded set. This finding enables the extension of most approximation results achieved with $\mathtt{ReLU}$ networks to a wide variety of other activation functions, at the cost of slightly larger constants.
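    A one-unit illustration of the direction of this result (not the paper's width-$6N$, depth-$2L$ construction): a scaled Softplus unit converges uniformly to ReLU, with maximum error $\log(2)/k$ at $x = 0$, which is the flavor of emulation that lets a $\varrho$-activated network reproduce a ReLU network to arbitrary precision. The NumPy snippet below just checks that standalone fact numerically.

```python
import numpy as np

# Softplus(k*x)/k approximates ReLU(x) uniformly, with max error log(2)/k at x = 0.
# Single-unit check for illustration only; the paper's construction emulates a whole
# ReLU network of width N and depth L with a rho-network of width 6N and depth 2L.

def relu(x):
    return np.maximum(x, 0.0)

def scaled_softplus(x, k):
    # numerically stable log(1 + exp(k*x)) / k
    return np.logaddexp(0.0, k * x) / k

x = np.linspace(-2.0, 2.0, 10_001)
for k in (1, 10, 100):
    err = np.max(np.abs(scaled_softplus(x, k) - relu(x)))
    print(f"k = {k:>3}: max |Softplus_k - ReLU| = {err:.5f}   (log(2)/k = {np.log(2)/k:.5f})")
```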

    On Enhancing Expressive Power via Compositions of Single Fixed-Size ReLU Network

    This paper explores the expressive power of deep neural networks through the framework of function compositions. We demonstrate that the repeated compositions of a single fixed-size ReLU network exhibit surprising expressive power, despite the limited expressive capabilities of the individual network itself. Specifically, we prove by construction that $\mathcal{L}_2 \circ \boldsymbol{g}^{\circ r} \circ \boldsymbol{\mathcal{L}}_1$ can approximate $1$-Lipschitz continuous functions on $[0,1]^d$ with an error $\mathcal{O}(r^{-1/d})$, where $\boldsymbol{g}$ is realized by a fixed-size ReLU network, $\boldsymbol{\mathcal{L}}_1$ and $\mathcal{L}_2$ are two affine linear maps matching the dimensions, and $\boldsymbol{g}^{\circ r}$ denotes the $r$-times composition of $\boldsymbol{g}$. Furthermore, we extend such a result to generic continuous functions on $[0,1]^d$, with the approximation error characterized by the modulus of continuity. Our results reveal that a continuous-depth network generated via a dynamical system has immense approximation power even if its dynamics function is time-independent and realized by a fixed-size ReLU network.
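    A minimal sketch of the architecture in question, $\mathcal{L}_2 \circ \boldsymbol{g}^{\circ r} \circ \boldsymbol{\mathcal{L}}_1$: a single fixed-size one-hidden-layer ReLU block composed with itself $r$ times, sandwiched between two affine maps. The parameters below are random placeholders assumed for illustration, so the snippet shows only the structure whose effective depth is controlled by $r$, not the constructive approximation itself.

```python
import numpy as np

# Structure of L2 ∘ g^{∘ r} ∘ L1: a fixed-size ReLU block g applied r times,
# between two affine maps. Shapes and random parameters are illustrative only.

rng = np.random.default_rng(0)
d, width = 3, 8                       # input dimension and the fixed width of g

# affine maps L1: R^d -> R^width and L2: R^width -> R
A1, b1 = rng.standard_normal((width, d)), rng.standard_normal(width)
A2, b2 = rng.standard_normal((1, width)), rng.standard_normal(1)

# fixed-size ReLU block g: R^width -> R^width (one hidden ReLU layer)
W, c = rng.standard_normal((width, width)) / np.sqrt(width), rng.standard_normal(width)

def g(z):
    return np.maximum(W @ z + c, 0.0)

def composed_network(x, r):
    """Evaluate L2(g^{∘ r}(L1(x))) for a single input x in [0,1]^d."""
    z = A1 @ x + b1
    for _ in range(r):                # r controls the effective depth
        z = g(z)
    return A2 @ z + b2

x = rng.uniform(0.0, 1.0, size=d)
print(composed_network(x, r=5))
```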