147 research outputs found

    Fully-coupled hydro-mechanical analysis of water saturated porous geomaterials under complex loading conditions

    Get PDF
In this study, we integrate a novel stabilized enhanced-strain mixed finite element procedure for poromechanics with an elasto-plastic geomodel to simulate the hydro-mechanical responses of water-saturated porous geomaterials such as porous rocks and sands. We present a quantitative analysis of how the macroscopic plastic response affects the seepage of pore fluid, and vice versa. We are particularly interested in hydro-mechanical coupling effects on the shear failure behavior of porous geomaterials, as well as their potential regularization effect on pathological mesh dependence. Finite element simulations of shear failure problems in water-saturated porous geomaterials are presented to study the effect of pore pressure diffusion on the stress path and plastic response of the material.
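
    For readers unfamiliar with the coupling involved, a standard u-p (Biot) poromechanics formulation pairs the balance of linear momentum with a fluid mass balance. The sketch below is that generic system with standard symbols, not the authors' specific stabilized enhanced-strain discretization:

    ```latex
    % Balance of linear momentum: effective stress sigma', Biot coefficient b, pore pressure p
    \nabla \cdot \left( \boldsymbol{\sigma}' - b\,p\,\mathbf{1} \right) + \rho\,\mathbf{g} = \mathbf{0}
    % Fluid mass balance: Biot modulus M, volumetric strain rate, Darcy flow with
    % intrinsic permeability k and fluid viscosity mu_f
    \frac{1}{M}\,\dot{p} + b\,\dot{\varepsilon}_v - \nabla \cdot \left( \frac{k}{\mu_f}\,\nabla p \right) = 0
    ```

    Plastic volumetric strain enters through the rate term involving the volumetric strain, which is how the shear and compaction response discussed above feeds back into the pore pressure field.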

    Modeling the hydro-mechanical responses of strip and circular punch loadings on water-saturated collapsible geomaterials

    Get PDF
A stabilized enhanced-strain finite element procedure for poromechanics is fully integrated with an elasto-plastic cap model to simulate the hydro-mechanical interactions of fluid-infiltrating porous rocks with associative and non-associative plastic flow. We present a quantitative analysis of how the macroscopic plastic volumetric response caused by pore collapse and grain rearrangement affects the seepage of pore fluid, and vice versa. Results of finite element simulations imply that the dissipation of excess pore pressure may significantly affect the stress path and thus alter the volumetric plastic response.
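
    As a point of reference for the pore-pressure dissipation discussed above, here is a minimal Python sketch of uncoupled 1D excess pore pressure diffusion (classical Terzaghi consolidation). All parameter values are assumed for illustration; the paper's coupled elasto-plastic cap model is not reproduced.

    ```python
    import numpy as np

    # Explicit finite-difference solution of 1D Terzaghi consolidation:
    # excess pore pressure dissipates as the layer drains through its top.
    cv = 1e-6                      # consolidation coefficient [m^2/s] (assumed)
    H = 1.0                        # layer thickness / drainage path [m]
    nz, nt = 101, 20000
    dz = H / (nz - 1)
    dt = 0.4 * dz**2 / cv          # satisfies the explicit stability limit dt <= dz^2 / (2 cv)

    p = np.full(nz, 100e3)         # initial excess pore pressure [Pa]
    p[0] = 0.0                     # drained boundary at the top

    for _ in range(nt):
        p[1:-1] += cv * dt / dz**2 * (p[2:] - 2.0 * p[1:-1] + p[:-2])
        p[0] = 0.0                 # drained top
        p[-1] = p[-2]              # impervious base (zero flux)

    print(f"max excess pore pressure after {nt * dt:.0f} s: {p.max() / 1e3:.1f} kPa")
    ```

    In the coupled setting described above, the rate of this dissipation shifts the effective stress path, which is what drives the change in volumetric plastic response.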

    Glycyrrhizin could reduce ocular hypertension induced by triamcinolone acetonide in rabbits

    Get PDF
Purpose: To evaluate the hypotensive effects of glycyrrhizin (GL) in a rabbit model of ocular hypertension (OH) induced by triamcinolone acetonide (TA). Methods: Forty New Zealand White rabbits were divided as follows: control (intravitreal injection of sterile saline solution); GL (intravitreal injection of sterile saline solution, then fed 25 mg GL/day); TA (intravitreal TA injection); TA+GL (intravitreal TA injection, then fed GL); and GL+TA (pre-treated with GL for 3 days, then given a TA injection followed by continued GL treatment). Intraocular pressure (IOP), flash electroretinogram (flash ERG), and flash visual evoked potential (flash VEP) were measured during the 28-day follow-up. The aqueous humor was analyzed using ¹H nuclear magnetic resonance spectroscopy and principal components analysis (PCA). Results: IOP elevation was observed in the TA group during the follow-up compared with the controls (p<0.01). IOP was decreased in the TA+GL and GL+TA groups compared with the TA group (p<0.05). In both flash ERG and flash VEP, amplitudes were decreased and implicit times were prolonged in the TA group compared with the controls (p<0.05); these parameters improved after GL intervention compared with the TA group (p<0.05). PCA results indicated that TA could affect ocular metabolism (especially sugar metabolism) and that GL could inhibit this effect. Conclusions: The administration of GL could suppress TA-induced OH in rabbits and improve their electrophysiological parameters. Metabolomics is a useful tool in ophthalmology research. Our results indicate that TA-induced changes in ocular metabolism could be compensated by GL.
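
    For illustration, the PCA step described in the Methods might look like the following Python sketch. The array shapes and values are random stand-ins, not the study's spectra.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical metabolomics PCA: rows are aqueous humor 1H-NMR spectra
    # (one per sample), columns are binned chemical-shift intensities.
    rng = np.random.default_rng(0)
    spectra = rng.random((40, 250))   # 40 samples x 250 spectral bins (assumed shape)

    # Standardize each bin, then project onto the first two principal components.
    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(spectra))
    print(scores.shape)               # (40, 2): PC1/PC2 coordinates used to compare groups
    ```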

    Boosting Language Models Reasoning with Chain-of-Knowledge Prompting

    Full text link
Recently, Chain-of-Thought (CoT) prompting has delivered success on complex reasoning tasks. It designs a simple prompt like ``Let's think step by step'' or multiple in-context exemplars with well-designed rationales to elicit intermediate reasoning steps from Large Language Models (LLMs). However, the generated rationales often contain mistakes, producing unfactual and unfaithful reasoning chains. To mitigate this brittleness, we propose Chain-of-Knowledge (CoK) prompting, which elicits LLMs to generate explicit pieces of knowledge evidence in the form of structured triples. This is inspired by human behavior: we can draw a mind map or knowledge map as reasoning evidence before answering a complex question. Building on CoK, we additionally introduce an F^2-Verification method to estimate the reliability of the reasoning chains in terms of factuality and faithfulness. For unreliable responses, the wrong evidence can be pointed out to prompt the LLM to rethink. Extensive experiments demonstrate that our method further improves performance on commonsense, factual, symbolic, and arithmetic reasoning tasks.
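
    To make the prompt format concrete, here is a hypothetical Python sketch of a CoK-style prompt whose exemplars carry evidence triples before the rationale. The exemplar wording and triple format are assumptions, not the paper's actual template.

    ```python
    # Build a Chain-of-Knowledge-style prompt: each exemplar lists explicit
    # (subject, relation, object) evidence triples before its rationale and answer.
    def cok_prompt(question: str, exemplars: list[dict]) -> str:
        parts = []
        for ex in exemplars:
            triples = "; ".join(f"({s}, {r}, {o})" for s, r, o in ex["triples"])
            parts.append(
                f"Q: {ex['question']}\n"
                f"Evidence triples: {triples}\n"
                f"Explanation: {ex['rationale']}\n"
                f"A: {ex['answer']}\n"
            )
        # Ask the model to produce triples first for the new question.
        parts.append(f"Q: {question}\nEvidence triples:")
        return "\n".join(parts)

    demo = [{
        "question": "Do hammers work underwater?",
        "triples": [("hammer", "is a", "mechanical tool"),
                    ("mechanical tool", "does not require", "air")],
        "rationale": "Hammering is purely mechanical and does not depend on air.",
        "answer": "yes",
    }]
    print(cok_prompt("Can a penguin fly?", demo))
    ```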

    TransCoder: Towards Unified Transferable Code Representation Learning Inspired by Human Skills

    Full text link
Code pre-trained models (CodePTMs) have recently demonstrated a solid capacity to handle various software intelligence tasks, e.g., code clone detection, code translation, and code summarization. The current mainstream way of deploying these models on downstream tasks is to fine-tune them on individual tasks, which is generally costly and requires sufficient data for large models. To tackle this issue, we present TransCoder, a unified Transferable fine-tuning strategy for Code representation learning. Inspired by the inherent human skill of knowledge generalization, TransCoder drives the model to learn better code-related meta-knowledge, much like human programmers do. Specifically, we employ a tunable prefix encoder as a meta-learner to capture cross-task and cross-language transferable knowledge. Moreover, tasks with small training sets and languages with small corpora benefit remarkably from our approach. Extensive experiments on benchmark datasets demonstrate that our method leads to superior performance on various code-related tasks and encourages mutual reinforcement. We also show that TransCoder is applicable in low-resource scenarios.
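
    A minimal PyTorch sketch of the tunable-prefix idea follows; the sizes, initialization, and wiring are assumptions, not TransCoder's actual implementation.

    ```python
    import torch
    import torch.nn as nn

    # A small set of trainable prefix vectors is prepended to the (frozen)
    # encoder's input embeddings; only the prefix is updated during tuning.
    class PrefixEncoder(nn.Module):
        def __init__(self, prefix_len: int = 16, hidden: int = 768):
            super().__init__()
            self.prefix = nn.Parameter(torch.randn(prefix_len, hidden) * 0.02)

        def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
            # token_embeddings: (batch, seq_len, hidden)
            batch = token_embeddings.size(0)
            prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
            return torch.cat([prefix, token_embeddings], dim=1)

    x = torch.randn(4, 128, 768)      # stand-in for a frozen CodePTM's embeddings
    print(PrefixEncoder()(x).shape)   # torch.Size([4, 144, 768])
    ```

    Because only the prefix parameters are trained, the same frozen backbone can be shared across tasks and languages, which is what makes the captured knowledge transferable.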

    CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure

    Full text link
Code pre-trained models (CodePTMs) have recently demonstrated significant success in code intelligence. To interpret these models, some probing methods have been applied; however, they fail to consider the inherent characteristics of code. To address this problem, we propose a novel probing method, CAT-probing, to quantitatively interpret how CodePTMs attend to code structure. We first denoise the input code sequences based on the token types pre-defined by the compilers, filtering out tokens whose attention scores are too small. We then define a new metric, the CAT-score, to measure the commonality between the token-level attention scores generated by CodePTMs and the pairwise distances between the corresponding AST nodes. The higher the CAT-score, the stronger the ability of CodePTMs to capture code structure. We conduct extensive experiments integrating CAT-probing with representative CodePTMs for different programming languages. Experimental results show the effectiveness of CAT-probing in CodePTM interpretation. Our code and data are publicly available at https://github.com/nchen909/CodeAttention.
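
    One plausible way to instantiate such a commonality measure is to rank-correlate token-pair attention scores with negated AST distances, as in the Python sketch below; the paper's exact CAT-score definition may differ.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    # Stand-in data: an attention map from one CodePTM head and the pairwise
    # AST node distances for the tokens kept after type-based denoising.
    rng = np.random.default_rng(0)
    n = 12
    attention = rng.random((n, n))
    ast_dist = rng.integers(1, 8, (n, n))

    # Rank-correlate over unique token pairs; negate distance so that
    # "more attention to AST-closer tokens" yields a higher score.
    iu = np.triu_indices(n, k=1)
    score, _ = spearmanr(attention[iu], -ast_dist[iu])
    print(f"commonality score: {score:.3f}")   # higher => attention tracks AST proximity
    ```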

    Do Large Language Models Know What They Don't Know?

    Full text link
Large language models (LLMs) have a wealth of knowledge that allows them to excel in various Natural Language Processing (NLP) tasks. Current research focuses on enhancing their performance within their existing knowledge. Despite this vast knowledge, LLMs are still limited by the amount of information they can accommodate and comprehend. The ability to understand their own limitations on the unknowns, referred to as self-knowledge, is therefore of paramount importance. This study evaluates LLMs' self-knowledge by assessing their ability to identify unanswerable or unknowable questions. We introduce an automated methodology to detect uncertainty in the responses of these models, providing a novel measure of their self-knowledge. We further introduce a unique dataset, SelfAware, consisting of unanswerable questions from five diverse categories and their answerable counterparts. Our extensive analysis, involving 20 LLMs including GPT-3, InstructGPT, and LLaMA, reveals an intrinsic capacity for self-knowledge within these models. Moreover, we demonstrate that in-context learning and instruction tuning can further enhance this self-knowledge. Despite this promising insight, our findings also highlight a considerable gap between the capabilities of these models and human proficiency in recognizing the limits of their knowledge.
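
    As a simplified illustration of the uncertainty-detection step, the Python sketch below flags hedging templates in a response. The paper's actual method is similarity-based, so this string-matching variant is only an assumption-laden stand-in.

    ```python
    import re

    # Hedging templates that mark a response as "I don't know"-like (assumed list).
    UNCERTAIN_PATTERNS = [
        r"\bi (do not|don't) know\b",
        r"\bno (one|body) knows\b",
        r"\bimpossible to (say|know|answer)\b",
        r"\bnot (known|certain|clear)\b",
    ]

    def is_uncertain(response: str) -> bool:
        """Return True if the response expresses uncertainty about the answer."""
        text = response.lower()
        return any(re.search(p, text) for p in UNCERTAIN_PATTERNS)

    print(is_uncertain("It is impossible to say who will win in 2100."))  # True
    print(is_uncertain("The capital of France is Paris."))                # False
    ```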

    Exchanging-based Multimodal Fusion with Transformer

    Full text link
We study the problem of multimodal fusion in this paper. Recent exchanging-based methods have been proposed for vision-vision fusion, which aim to exchange embeddings learned from one modality with the other. However, most of them project inputs from multiple modalities into different low-dimensional spaces and cannot be applied to sequential input data. To solve these issues, we propose MuSE, a novel exchanging-based multimodal fusion model for text-vision fusion based on the Transformer. We first use two encoders to separately map the multimodal inputs into different low-dimensional spaces. Then we employ two decoders to regularize the embeddings and pull them into the same space. The two decoders capture the correlations between texts and images via the image captioning task and the text-to-image generation task, respectively. Further, based on the regularized embeddings, we present CrossTransformer, which uses two Transformer encoders with shared parameters as the backbone model to exchange knowledge between modalities. Specifically, CrossTransformer first learns the global contextual information of the inputs in its shallow layers. After that, it performs inter-modal exchange by selecting a proportion of tokens in one modality and replacing their embeddings with the average of the embeddings in the other modality. We conduct extensive experiments to evaluate the performance of MuSE on the Multimodal Named Entity Recognition and Multimodal Sentiment Analysis tasks. Our results show the superiority of MuSE over other competitors. Our code and data are provided at https://github.com/RecklessRonan/MuSE
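
    The inter-modal exchange step lends itself to a short PyTorch sketch: replace a fraction of one modality's token embeddings with the other modality's average embedding. The tokens below are chosen at random, which is an assumption; MuSE's actual selection criterion is not reproduced.

    ```python
    import torch

    def exchange(a: torch.Tensor, b: torch.Tensor, ratio: float = 0.25) -> torch.Tensor:
        # a, b: (batch, seq_len, hidden) regularized embeddings of two modalities.
        batch, seq_len, _ = a.shape
        k = max(1, int(ratio * seq_len))
        idx = torch.randperm(seq_len)[:k]             # tokens to replace (random here)
        out = a.clone()
        out[:, idx, :] = b.mean(dim=1, keepdim=True)  # average embedding of modality b
        return out

    text = torch.randn(2, 32, 768)
    image = torch.randn(2, 32, 768)
    print(exchange(text, image).shape)                # torch.Size([2, 32, 768])
    ```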

    Tea polyphenols induced apoptosis of breast cancer cells by suppressing the expression of Survivin

    Get PDF
To study the mechanism of tea polyphenols (TP)-induced apoptosis of breast cancer cells, proliferation of MCF-7 and SK-BR-3 cells was evaluated by MTT assays. Cellular ultrastructure was examined by electron microscopy. Apoptosis was detected by TUNEL. PCNA, Cyclin D1, Cyclin E, and Survivin expression was measured by Western blot. Cell proliferation was significantly inhibited by TP. After TP treatment, spindle-shaped and round cells were loosely distributed, with increased intracellular particles. Increased cell size, frequent nuclear atypia, and apoptotic collapse were observed. The nucleus was pushed to one side, while the cytoplasm was rich in free ribosomes. The mitochondrial membranes were thickened, and apoptotic bodies were observed. TP-treated cells showed significantly enhanced apoptosis compared with the 5-Fu-treated and control groups. The expression of Survivin was downregulated by TP. In conclusion, TP can inhibit cell growth and induce apoptosis in breast cancer by downregulating the expression of Survivin.