214 research outputs found

    In-Context Learning Functions with Varying Number of Minima

    Large Language Models (LLMs) have proven effective at In-Context Learning (ICL), an ability that allows them to create predictors from labeled examples. Few studies have explored the interplay between ICL and specific properties of the functions it attempts to approximate. In our study, we use a formal framework to explore ICL and propose a new task of approximating functions with a varying number of minima. We implement a method for producing functions that have a given set of inputs as minima. We find that increasing the number of minima degrades ICL performance. At the same time, our evaluation shows that ICL outperforms a 2-layer Neural Network (2NN) model. Furthermore, ICL learns faster than the 2NN in all settings. We validate these findings through a set of few-shot experiments across various hyperparameter configurations.
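    The abstract does not give the construction used to place minima at chosen inputs. One simple way to do it, shown below as a minimal sketch (the polynomial form is an assumption for illustration, not the paper's method), is to take the product of squared distances to the chosen points: the result is non-negative and equals zero exactly at those points, so each chosen input is a global minimizer.

        import numpy as np

        def function_with_minima(minima):
            """Return f(x) = prod_i (x - m_i)^2, which attains its global
            minimum value of 0 at every point in `minima`."""
            minima = np.asarray(minima, dtype=float)
            return lambda x: np.prod((np.asarray(x)[..., None] - minima) ** 2, axis=-1)

        # Example: three prescribed minima; sample (x, f(x)) pairs that could
        # serve as labeled in-context examples for a regression-style prompt.
        f = function_with_minima([-1.0, 0.5, 2.0])
        xs = np.linspace(-2, 3, 8)
        print(list(zip(xs.round(2), f(xs).round(3))))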

    Assertion Detection Large Language Model In-context Learning LoRA Fine-tuning

    In this study, we address the task of assertion detection when extracting medical concepts from clinical notes, a key process in clinical natural language processing (NLP). Assertion detection in clinical NLP usually involves identifying assertion types for medical concepts in the clinical text, namely certainty (whether the medical concept is positive, negated, possible, or hypothetical), temporality (whether the medical concept refers to the present or to past history), and experiencer (whether the medical concept is described for the patient or for a family member). These assertion types are essential for healthcare professionals to quickly and clearly understand the context of medical conditions from unstructured clinical texts, directly influencing the quality and outcomes of patient care. Although widely used, traditional methods, particularly rule-based NLP systems and machine learning or deep learning models, demand intensive manual effort to create patterns and tend to overlook less common assertion types, leading to an incomplete understanding of the context. To address this challenge, our research introduces a novel methodology that utilizes Large Language Models (LLMs) pre-trained on a vast array of medical data for assertion detection. We enhanced the current method with advanced reasoning techniques, including Tree of Thought (ToT), Chain of Thought (CoT), and Self-Consistency (SC), and refined it further with Low-Rank Adaptation (LoRA) fine-tuning. We first evaluated the model on the i2b2 2010 assertion dataset. Our method achieved a micro-averaged F-1 of 0.89, a 0.11 improvement over previous work. To further assess the generalizability of our approach, we extended our evaluation to a local dataset focused on sleep concept extraction. Our approach achieved an F-1 of 0.74, which is 0.31 higher than the previous method.
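    The abstract names LoRA fine-tuning but not its configuration. The sketch below shows a generic LoRA setup with the Hugging Face peft library; the base model name, rank, and target modules are illustrative placeholders, not the paper's actual choices.

        # Minimal LoRA wrapping with peft; hyperparameters are placeholders.
        from transformers import AutoModelForCausalLM, AutoTokenizer
        from peft import LoraConfig, TaskType, get_peft_model

        base = "gpt2"  # placeholder base model, not the paper's LLM
        tokenizer = AutoTokenizer.from_pretrained(base)
        model = AutoModelForCausalLM.from_pretrained(base)

        lora_cfg = LoraConfig(
            task_type=TaskType.CAUSAL_LM,
            r=8,                # low-rank adapter dimension
            lora_alpha=16,      # scaling factor
            lora_dropout=0.05,
            target_modules=["c_attn"],  # attention projection in GPT-2
        )
        model = get_peft_model(model, lora_cfg)
        model.print_trainable_parameters()  # only adapter weights are trainable

    With the adapters attached, the wrapped model can be trained on the assertion-labeled examples with any standard fine-tuning loop while the base weights stay frozen.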

    3D meshless FEM-BEM model for prediction of sound fields in cabins due to external sound disturbances

    The Finite Element Method (FEM) and the Boundary Element Method (BEM) are widely applied to predict the sound pressure level (SPL) in enclosed spaces for low-frequency problems. However, a single method usually cannot predict the internal SPL in enclosures that contain interior objects and are excited by external disturbances. Moreover, these methods have disadvantages such as complex pre-processing, long computation times and inevitable pollution effects. To overcome these drawbacks, this paper combines the Meshless Method (MM), acoustical FEM and BEM into a hybrid method that can be applied to predict the SPL in an enclosed environment with external sound sources. Firstly, the hybrid theory for the acoustic problem and its implementation are illustrated. Next, numerical simulations and experiments are conducted to validate the peak value, SPL and computing efficiency of this method. Comparative results obtained from the proposed method, FEM and BEM using SYSNOISE are shown to be in agreement, and the proposed method is more efficient. Experimental results show that the average relative error of the SPL at each location is less than 5.26%. It is corroborated that the proposed method is applicable to predicting the internal SPL in cases where exterior sound sources exist.
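    For context, these are the standard governing equations that such FEM/BEM couplings discretize (notation chosen here for illustration and not taken from the paper): the interior Helmholtz equation for the cabin field and the Kirchhoff-Helmholtz boundary integral equation for the field driven by exterior sources.

        \nabla^2 p(\mathbf{x}) + k^2 p(\mathbf{x}) = 0, \qquad k = \omega / c

        c(\mathbf{x})\, p(\mathbf{x}) = p_{\mathrm{inc}}(\mathbf{x})
          + \int_{\Gamma} \left[ G(\mathbf{x},\mathbf{y})\,
              \frac{\partial p(\mathbf{y})}{\partial n_{\mathbf{y}}}
              - p(\mathbf{y})\, \frac{\partial G(\mathbf{x},\mathbf{y})}{\partial n_{\mathbf{y}}} \right] \mathrm{d}\Gamma(\mathbf{y}),
          \qquad G(\mathbf{x},\mathbf{y}) = \frac{e^{\mathrm{i} k |\mathbf{x}-\mathbf{y}|}}{4\pi |\mathbf{x}-\mathbf{y}|}

    Here p is the acoustic pressure, k the wavenumber, p_inc the incident (external) field, Gamma the boundary surface, G the 3D free-space Green's function, and c(x) the usual free-term coefficient (1/2 on a smooth boundary).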

    Few-Shot Learning for Clinical Natural Language Processing Using Siamese Neural Networks

    Clinical Natural Language Processing (NLP) has become an emerging technology in healthcare that leverages the large amount of free-text data in electronic health records (EHRs) to improve patient care, support clinical decisions, and facilitate clinical and translational science research. Recently, deep learning has achieved state-of-the-art performance in many clinical NLP tasks. However, training deep learning models usually requires large annotated datasets, which are normally not publicly available and can be time-consuming to build in clinical domains. Working with smaller annotated datasets is typical in clinical NLP; therefore, ensuring that deep learning models perform well is crucial for the models to be used in real-world applications. A widely adopted approach is fine-tuning existing Pre-trained Language Models (PLMs), but these attempts fall short when the training dataset contains only a few annotated samples. Few-Shot Learning (FSL) has recently been investigated to tackle this problem. The Siamese Neural Network (SNN) has been widely utilized as an FSL approach in computer vision, but has not been studied well in NLP, and the literature on its applications in clinical domains is scarce. In this paper, we propose two SNN-based FSL approaches for clinical NLP: the Pre-Trained SNN (PT-SNN) and the SNN with Second-Order Embeddings (SOE-SNN). We evaluated the proposed approaches on two clinical tasks, namely clinical text classification and clinical named entity recognition, under three few-shot settings: 4-shot, 8-shot, and 16-shot learning. Both clinical NLP tasks were benchmarked using three PLMs: BERT, BioBERT, and BioClinicalBERT. The experimental results verify the effectiveness of the proposed SNN-based FSL approaches on both NLP tasks.
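    The abstract does not detail the PT-SNN architecture. Below is a minimal, generic Siamese sketch (the checkpoint and the cosine similarity are assumptions, not the paper's exact PT-SNN or SOE-SNN design) in which one shared PLM encoder embeds both texts and a similarity score supports few-shot matching against labeled support examples.

        # Generic Siamese encoder sketch with a shared pre-trained LM.
        import torch
        import torch.nn.functional as F
        from transformers import AutoModel, AutoTokenizer

        name = "bert-base-uncased"  # placeholder; the paper also benchmarks BioBERT and BioClinicalBERT
        tokenizer = AutoTokenizer.from_pretrained(name)
        encoder = AutoModel.from_pretrained(name)

        def embed(text):
            """Mean-pool the last hidden states of the shared encoder."""
            batch = tokenizer(text, return_tensors="pt", truncation=True)
            with torch.no_grad():
                hidden = encoder(**batch).last_hidden_state  # (1, seq_len, dim)
            return hidden.mean(dim=1)                        # (1, dim)

        # Few-shot matching: score a query against one labeled support example.
        score = F.cosine_similarity(embed("chest pain denied"),
                                    embed("patient denies shortness of breath"))
        print(float(score))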

    Induction of release and up-regulated gene expression of interleukin (IL)-8 in A549 cells by serine proteinases

    BACKGROUND: Hypersecretion of cytokines and serine proteinases has been observed in asthma. Since protease-activated receptors (PARs) are receptors for several serine proteinases and airway epithelial cells are a major source of cytokines, the influence of serine proteinases and PARs on interleukin (IL)-8 secretion and gene expression in cultured A549 cells was examined. RESULTS: A549 cells express all four PARs at both the protein and mRNA levels, as assessed by flow cytometry, immunofluorescence microscopy and reverse transcription polymerase chain reaction (PCR). Thrombin, tryptase, elastase and trypsin induce up to 8-, 4.3-, 4.4- and 5.1-fold increases in IL-8 release from A549 cells, respectively, following a 16 h incubation period. The thrombin-, elastase- and trypsin-induced secretion of IL-8 can be abolished by their specific inhibitors. Agonist peptides of PAR-1, PAR-2 and PAR-4 stimulate up to 15.6-, 6.6- and 3.5-fold increases in IL-8 secretion, respectively. Real-time PCR shows that IL-8 mRNA is up-regulated by the serine proteinases tested and by agonist peptides of PAR-1 and PAR-2. CONCLUSION: These proteinases, possibly through activation of PARs, can stimulate IL-8 release from A549 cells, suggesting that they are likely to contribute to IL-8-related airway inflammatory disorders in man.