
    Challenges in Diagnosing Asthma in Children

    What you need to know:
    - Asthma in children is a clinical diagnosis based on history and examination, and in many cases on a response to a trial of inhaled corticosteroid treatment
    - Asthma can be diagnosed in children under 5, but is unlikely to explain recurrent respiratory symptoms in children under 2
    - Tests can be done to help support (or exclude) a clinical diagnosis, but should not be used solely to make (or exclude) a diagnosis of asthma

    Ethnic, racial and migrant inequalities in respiratory health

    Disparities in the incidence, prevalence, morbidity, and mortality of many respiratory diseases are evident between ethnic groups. Biological, cultural, and environmental factors related to ethnicity can all contribute to the differences in respiratory health observed between ethnic minority groups, but the inequalities observed are most commonly due to lower socioeconomic status. People who migrate within a country or across an international border may experience an improvement in respiratory health associated with improvements in socioeconomic status. However, migrants may also experience worse health outcomes in destination countries, as they face barriers of language and culture, discrimination, exclusion, and limited access to health services. Whilst some high-quality studies investigating ethnicity and respiratory health are available, further research into ethnic differences is needed. Improving the recording of ethnicity in health records, addressing barriers to accessing respiratory health care, and improving cultural literacy more generally are some of the ways in which these inequalities can be tackled.

    Parameter-Efficient Fine-Tuning of LLaMA for the Clinical Domain

    Adapting pretrained language models to novel domains, such as clinical applications, traditionally involves retraining their entire set of parameters. However, this approach has proven increasingly impractical owing to the substantial computational cost of training such large language models. To address this issue, Parameter-Efficient Fine-Tuning (PEFT) techniques offer a viable alternative by fine-tuning only a small subset of additional parameters, significantly reducing the computational requirements of domain adaptation. In this study, we propose Clinical LLaMA-LoRA, a PEFT adapter layer built upon the open-sourced LLaMA model. Clinical LLaMA-LoRA is trained on clinical notes from the MIMIC-IV database, creating a specialised adapter for the clinical domain. Additionally, we propose a two-step PEFT framework that fuses Clinical LLaMA-LoRA with Downstream LLaMA-LoRA, another PEFT adapter specialised for downstream tasks. We evaluate this framework on multiple clinical outcome prediction datasets, comparing it to clinically trained language models. Our proposed framework achieves a state-of-the-art AUROC score averaged across all clinical downstream tasks, with substantial improvements of 6-9% in AUROC on large-scale multilabel classification tasks such as diagnosis and procedure classification.
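    A rough sketch of the two-step PEFT idea, using the Hugging Face peft library: a previously trained domain adapter is merged into the base model, then a fresh downstream adapter is added on top. The base checkpoint name, adapter paths, and LoRA hyperparameters below are illustrative assumptions, not the authors' released artefacts.

    # Hedged sketch of the two-step LoRA setup described in the abstract.
    # Assumes the `transformers` and `peft` libraries; checkpoint names and
    # adapter paths are hypothetical placeholders.
    from transformers import AutoModelForSequenceClassification
    from peft import LoraConfig, PeftModel, get_peft_model

    # Step 1 (assumed already done): a domain adapter akin to Clinical
    # LLaMA-LoRA was trained with a language-modelling objective on
    # MIMIC-IV notes and saved to ./clinical-llama-lora.
    base = AutoModelForSequenceClassification.from_pretrained(
        "huggyllama/llama-7b",  # placeholder LLaMA checkpoint
        num_labels=2,
    )
    model = PeftModel.from_pretrained(base, "./clinical-llama-lora")
    model = model.merge_and_unload()  # fold the domain adapter into the base weights

    # Step 2: add a small, trainable downstream adapter for the clinical
    # outcome prediction task; only these LoRA weights are updated.
    downstream_cfg = LoraConfig(
        r=16,                                 # rank of the low-rank updates
        lora_alpha=32,                        # scaling applied to the updates
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        lora_dropout=0.05,
        task_type="SEQ_CLS",
    )
    model = get_peft_model(model, downstream_cfg)
    model.print_trainable_parameters()  # a small fraction of total parameters

    Whether the adapters are merged as above or kept stacked at inference time is a design detail the abstract leaves open; either way, only the downstream adapter's parameters are trained in the second step.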

    Edinburgh Clinical NLP at SemEval-2024 Task 2: Fine-tune your model unless you have access to GPT-4

    The NLI4CT task assesses Natural Language Inference systems on predicting whether hypotheses entail or contradict evidence from Clinical Trial Reports. In this study, we evaluate various Large Language Models (LLMs) with multiple strategies, including Chain-of-Thought, In-Context Learning, and Parameter-Efficient Fine-Tuning (PEFT). We propose a PEFT method to improve the consistency of LLMs by merging adapters that were fine-tuned separately using triplet and language-modelling objectives. We found that merging the two PEFT adapters improves the F1 score (+0.0346) and consistency (+0.152) of the LLMs. However, our novel methods did not produce more accurate results than GPT-4 in terms of faithfulness and consistency. Averaging the three metrics, GPT-4 ranks joint-first in the competition with a score of 0.8328. Finally, our contamination analysis with GPT-4 indicates that there was no test data leakage.
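    The adapter-merging step can be illustrated briefly. In the sketch below, two LoRA adapters, one assumed fine-tuned with a triplet objective and one with a language-modelling objective, are linearly combined using the Hugging Face peft library; the checkpoint name, adapter paths, and equal weighting are assumptions for illustration rather than the paper's exact configuration.

    # Hedged sketch: merge two separately fine-tuned LoRA adapters.
    # Checkpoint and adapter paths are hypothetical placeholders.
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")

    # Load the two adapters under distinct names.
    model = PeftModel.from_pretrained(
        base, "./nli4ct-triplet-lora", adapter_name="triplet"
    )
    model.load_adapter("./nli4ct-clm-lora", adapter_name="clm")

    # Linearly average the adapters' low-rank weights into a new adapter.
    model.add_weighted_adapter(
        adapters=["triplet", "clm"],
        weights=[0.5, 0.5],          # equal weighting; a tunable choice
        adapter_name="merged",
        combination_type="linear",
    )
    model.set_adapter("merged")      # route inference through the merged adapter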