
    Impaired perceptual learning in a mouse model of Fragile X syndrome is mediated by parvalbumin neuron dysfunction and is reversible.

    To uncover the circuit-level alterations that underlie the atypical sensory processing associated with autism, we adopted a symptom-to-circuit approach in the Fmr1-knockout (Fmr1-/-) mouse model of Fragile X syndrome. Using a go/no-go task and in vivo two-photon calcium imaging, we find that impaired visual discrimination in Fmr1-/- mice correlates with marked deficits in the orientation tuning of principal neurons and with decreased activity of parvalbumin interneurons in primary visual cortex. Restoring visually evoked activity in parvalbumin cells of Fmr1-/- mice with a chemogenetic strategy using designer receptors exclusively activated by designer drugs (DREADDs) was sufficient to rescue their behavioral performance. Strikingly, human subjects with Fragile X syndrome exhibit impairments in visual discrimination similar to those in Fmr1-/- mice. These results suggest that manipulating inhibition may help sensory processing in Fragile X syndrome.
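    The orientation-tuning deficit reported above is typically quantified with an orientation selectivity index (OSI). Below is a minimal sketch, assuming trial-averaged responses per stimulus orientation, that computes a global OSI via the standard circular-variance definition; the function name and example values are illustrative and not taken from the study.

```python
import numpy as np

def orientation_selectivity_index(responses, orientations_deg):
    """Global OSI from trial-averaged responses (one value per orientation).

    Circular-variance definition: |sum(R * exp(2i*theta))| / sum(R), with
    theta in radians. 1 = perfectly tuned, 0 = untuned.
    """
    theta = np.deg2rad(np.asarray(orientations_deg))
    r = np.clip(np.asarray(responses, dtype=float), 0, None)  # rectify negative responses
    return np.abs(np.sum(r * np.exp(2j * theta))) / np.sum(r)

# Hypothetical neuron responding best to 90-degree gratings
print(orientation_selectivity_index([0.2, 0.5, 1.0, 0.4], [0, 45, 90, 135]))
```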

    Domain-specific word embeddings for natural language processing in radiology.

    Background: There has been increasing interest in machine learning-based natural language processing (NLP) methods in radiology; however, models have often used word embeddings trained on general web corpora because of the lack of a radiology-specific corpus. Purpose: We examined the potential of Radiopaedia to serve as a general radiology corpus for producing radiology-specific word embeddings that could enhance performance on an NLP task on radiological text. Materials and methods: Embeddings of dimension 50, 100, 200, and 300 were trained on articles collected from Radiopaedia using the GloVe algorithm and evaluated on analogy completion. A shallow neural network using input from either our trained embeddings or pre-trained Wikipedia 2014 + Gigaword 5 (WG) embeddings was used to label the Radiopaedia articles. Labeling performance was evaluated on exact-match accuracy and Hamming loss. McNemar's test with continuity correction, with the Benjamini-Hochberg correction, and a 5×2 cross-validation paired two-tailed t-test were used to assess statistical significance. Results: For accuracy on the analogy task, 50-dimensional (50-D) Radiopaedia embeddings outperformed WG embeddings on tumor-origin analogies (p < 0.05) and organ adjectives (p < 0.01), whereas WG embeddings tended to outperform on inflammation-location and bone-vs.-muscle analogies (p < 0.01). The two embeddings had comparable performance on the other subcategories. In the labeling task, the Radiopaedia-based model outperformed the WG-based model at 50, 100, 200, and 300-D for both exact-match accuracy (p < 0.001, p < 0.001, p < 0.01, and p < 0.05, respectively) and Hamming loss (p < 0.001, p < 0.001, p < 0.01, and p < 0.05, respectively). Conclusion: We have developed a set of word embeddings from Radiopaedia and shown that they preserve relevant medical semantics and augment performance on a radiology NLP task. Our results suggest that cultivating a radiology-specific corpus can benefit radiology NLP models in the future.
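    As a minimal sketch of the analogy-completion evaluation described above, the snippet below loads word vectors with gensim and ranks candidates for "a is to b as c is to ?". This is one common way to run such an evaluation, not the authors' exact code; the vector file name and the example analogy are hypothetical placeholders.

```python
from gensim.models import KeyedVectors

# Hypothetical file: GloVe vectors converted to word2vec text format
# (e.g., with gensim's glove2word2vec script).
vectors = KeyedVectors.load_word2vec_format("radiopaedia_50d.txt")

def complete_analogy(a, b, c, topn=3):
    """Rank answers to 'a is to b as c is to ?' via vec(b) - vec(a) + vec(c)."""
    return vectors.most_similar(positive=[b, c], negative=[a], topn=topn)

# Hypothetical tumor-origin analogy: glioma : brain :: hepatoma : ?
print(complete_analogy("glioma", "brain", "hepatoma"))
```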

    Clinical language search algorithm from free-text: facilitating appropriate imaging.

    Background: The comprehensiveness and maintenance of the American College of Radiology (ACR) Appropriateness Criteria (AC) make it a unique resource for evidence-based clinical imaging decision support, yet it is underutilized by clinicians. To facilitate the use of imaging recommendations, we developed a natural language processing (NLP) search algorithm that automatically matches the clinical indications physicians write into imaging orders to the appropriate AC imaging recommendations. Methods: We apply a hybrid model combining semantic similarity from a sent2vec model trained on 223 million scientific sentences with term frequency-inverse document frequency (TF-IDF) features. AC documents are ranked by the cosine distance between their embeddings and the query. For model testing, we compiled a dataset of simulated simple and complex indications for each AC document (n = 410) and another of clinical indications from randomly sampled radiology reports (n = 100). We compared our algorithm against a custom Google search engine. Results: On the simulated indications, our algorithm ranked the ground-truth document in the top 3 for 98% of simple queries and 85% of complex queries. Similarly, on the randomly sampled radiology report dataset, it ranked 86% of indications with a single match in the top 3. Vague and distracting phrases in the free-text indications were the main sources of error. Our algorithm returns more relevant results than a custom Google search engine, especially for complex queries. Conclusions: We have developed and evaluated an NLP algorithm that matches clinical indications to the appropriate AC guidelines. This approach can be integrated into imaging ordering systems for automated access to guidelines.
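    The ranking approach lends itself to a short sketch: blend the cosine similarity of dense sentence embeddings with TF-IDF cosine similarity and sort documents by the combined score. Here the embed callable stands in for the authors' sent2vec model, and the equal weighting is an assumption for illustration, not their exact formulation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_documents(query, docs, embed, alpha=0.5):
    """Return document indices sorted best-first by blended similarity.

    embed: callable mapping a list of strings to an (n, d) embedding array.
    alpha: weight on dense-embedding similarity vs. TF-IDF similarity.
    """
    # Sparse lexical similarity
    tfidf = TfidfVectorizer().fit(docs + [query])
    sparse_sim = cosine_similarity(tfidf.transform([query]), tfidf.transform(docs))[0]

    # Dense semantic similarity (first row of emb is the query)
    emb = embed([query] + docs)
    dense_sim = cosine_similarity(emb[:1], emb[1:])[0]

    blended = alpha * dense_sim + (1 - alpha) * sparse_sim
    return np.argsort(-blended)
```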

    Development and web deployment of an automated neuroradiology MRI protocoling tool with natural language processing.

    Background: A systematic approach to MRI protocol assignment is essential for the efficient delivery of safe patient care. Advances in natural language processing (NLP) allow for the development of accurate automated protocol assignment. We aimed to develop, evaluate, and deploy an NLP model that automates protocol assignment given the clinician's indication text. Methods: We collected 7139 spine MRI protocols (routine or contrast) and 990 head MRI protocols (routine brain, contrast brain, or other) from a single institution. Protocols were split into training (n = 4997 for spine MRI; n = 839 for head MRI), validation (n = 1071 for spine MRI; fivefold cross-validation was used for head MRI), and test (n = 1071 for spine MRI; n = 151 for head MRI) sets. fastText and XGBoost were used to develop two NLP models to classify spine and head MRI protocols, respectively. A Flask-based web app was developed for deployment via Heroku. Results: The spine MRI model had an accuracy of 83.38% and a receiver operating characteristic area under the curve (ROC-AUC) of 0.8873. The head MRI model had an accuracy of 85.43%, with a ROC-AUC of 0.9463 for the routine brain protocol and 0.9284 for the contrast brain protocol. Cancer-, infection-, and inflammation-related keywords were associated with contrast administration, whereas structural anatomic abnormalities and stroke/altered mental status were indicative of routine spine and routine brain MRI, respectively. Error analysis suggested that increasing the sample size may improve performance for head MRI protocols. A web version of the model is provided for demonstration and deployment. Conclusion: We developed and web-deployed two NLP models that accurately predict spine and head MRI protocol assignment, which could improve radiology workflow efficiency.
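    For the spine model, the fastText side is straightforward to sketch. The training-file format, label names, and example indication below are hypothetical, and the separate XGBoost head-MRI model and Flask/Heroku deployment are not shown.

```python
import fasttext

# train.txt holds one labeled indication per line, e.g.:
#   __label__contrast rule out metastatic disease to the spine
#   __label__routine chronic low back pain without red flags
model = fasttext.train_supervised(input="train.txt", epoch=25, wordNgrams=2)

labels, probs = model.predict("rule out epidural abscess")
print(labels[0], probs[0])  # predicted protocol label and its probability
```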

    Application of a Domain-specific BERT for Detection of Speech Recognition Errors in Radiology Reports.

    Purpose: To develop radiology domain-specific bidirectional encoder representations from transformers (BERT) models that can identify speech recognition (SR) errors and suggest corrections in radiology reports. Materials and methods: A pretrained BERT model, Clinical BioBERT, was further pretrained on a corpus of 114,008 radiology reports retrospectively collected from two hospitals between April 2016 and August 2019. The model was then fine-tuned on a training dataset of generated insertion, deletion, and substitution errors, creating Radiology BERT. This model was retrospectively evaluated on an independent dataset of radiology reports with generated errors (n = 18,885) and on unaltered report sentences (n = 2000), and it was prospectively evaluated on true clinical SR errors (n = 92). Correction Radiology BERT was trained separately to suggest corrections for detected deletion and substitution errors. Area under the receiver operating characteristic curve (AUC) and bootstrapped 95% CIs were calculated for each evaluation dataset. Results: Radiology-specific BERT had AUC values of >0.99 (95% CI: >0.99, >0.99), 0.94 (95% CI: 0.93, 0.94), 0.98 (95% CI: 0.98, 0.98), and 0.97 (95% CI: 0.97, 0.97) for detecting insertion, deletion, substitution, and all errors, respectively, on the independently generated test set. Testing on unaltered report impressions revealed a sensitivity of 82% (28 of 34; 95% CI: 70%, 93%) and a specificity of 88% (1521 of 1728; 95% CI: 87%, 90%). Testing on prospective SR errors showed an accuracy of 75% (69 of 92; 95% CI: 65%, 83%). Finally, the correct word was the top suggestion for 45.6% (475 of 1041; 95% CI: 42.5%, 49.3%) of errors. Conclusion: Radiology-specific BERT models fine-tuned on generated errors were able to identify SR errors in radiology reports and suggest corrections. Keywords: Computer Applications, Technology Assessment. Supplemental material is available for this article. © RSNA, 2022. See also the commentary by Abajian and Cheung in this issue.
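    One way to generate the synthetic insertion, deletion, and substitution errors used for fine-tuning is sketched below; the sampling scheme and substitute vocabulary are illustrative assumptions, not the authors' exact procedure.

```python
import random

def corrupt_sentence(tokens, vocab, error_type):
    """Return a corrupted copy of tokens plus per-token labels (1 = error site)."""
    tokens = list(tokens)
    i = random.randrange(len(tokens))
    if error_type == "substitution":
        tokens[i] = random.choice(vocab)  # swap in a plausible wrong word
    elif error_type == "insertion":
        tokens.insert(i, random.choice(vocab))  # inject a spurious word
    else:  # deletion: remove a word; mark the token left at the gap
        del tokens[i]
        i = min(i, len(tokens) - 1)
    labels = [int(j == i) for j in range(len(tokens))]
    return tokens, labels

sent = "no evidence of acute intracranial hemorrhage".split()
print(corrupt_sentence(sent, ["chronic", "effusion", "right"], "substitution"))
```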