Semi-Supervised Self-Taught Deep Learning for Finger Bones Segmentation
Segmentation stands at the forefront of many high-level vision tasks. In this
study, we focus on segmenting finger bones within a newly introduced
semi-supervised self-taught deep learning framework which consists of a student
network and a stand-alone teacher module. The whole system is boosted in a
life-long learning manner wherein, at each step, the teacher module provides a
refinement for the student network to learn from new unlabeled data.
Experimental results demonstrate the superiority of the proposed method over
conventional supervised deep learning methods. Comment: Accepted at IEEE BHI 2019.
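The abstract gives no implementation details; as a rough illustration only, the Python sketch below shows a generic student-teacher pseudo-labelling round of the kind described, in which a stand-alone teacher labels unlabeled scans and the student segmentation network trains on both labeled and pseudo-labeled data. The network, loss, and refinement rule are placeholder assumptions, not the authors' code.

```python
# Minimal sketch of one self-taught student-teacher round for segmentation.
# All modules and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy stand-in for the student segmentation network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 1),  # per-pixel bone/background logit
        )

    def forward(self, x):
        return self.net(x)

def self_taught_round(student, teacher, labeled, unlabeled, epochs=1):
    """Teacher pseudo-labels unlabeled scans; student trains on labeled + pseudo-labeled data."""
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()
    with torch.no_grad():
        pseudo = [(x, (torch.sigmoid(teacher(x)) > 0.5).float()) for x in unlabeled]
    for _ in range(epochs):
        for x, y in list(labeled) + pseudo:
            opt.zero_grad()
            bce(student(x), y).backward()
            opt.step()
    # Refresh the teacher from the improved student before the next round.
    teacher.load_state_dict(student.state_dict())

# Usage with random tensors standing in for hand radiographs and masks.
student, teacher = TinySegNet(), TinySegNet()
labeled = [(torch.randn(1, 1, 32, 32), torch.randint(0, 2, (1, 1, 32, 32)).float())]
unlabeled = [torch.randn(1, 1, 32, 32) for _ in range(4)]
self_taught_round(student, teacher, labeled, unlabeled)
```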
Towards Generalist Foundation Model for Radiology
In this study, we aim to initiate the development of a Radiology Foundation
Model, termed RadFM. We consider the construction of foundation models
thoroughly, from the perspectives of data, model design, and evaluation. Our
contributions can be summarized as follows: (i) we construct a large-scale
Medical Multi-modal Dataset, MedMD, consisting of 16M 2D and 3D medical scans.
To the best of our knowledge, this is the first multi-modal dataset containing
3D medical scans. (ii) We propose an architecture that enables visually
conditioned generative pre-training, allowing for the integration of text input
interleaved with 2D or 3D medical scans to generate responses for diverse
radiologic tasks. The model was initially pre-trained on MedMD and subsequently
fine-tuned on RadMD, a cleaned, radiology-specific version of MedMD
containing 3M radiologic visual-language pairs. (iii) We propose a new
evaluation benchmark that comprises five tasks, aiming to comprehensively
assess the capability of foundation models in handling practical clinical
problems. Our experimental results confirm that RadFM significantly outperforms
existing multi-modal foundation models. The code, data, and model checkpoints
will all be made publicly available to promote further research and development
in the field.
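As a hedged illustration of the "visually conditioned generative pre-training" idea summarized above, the sketch below interleaves scan embeddings with text-token embeddings before a Transformer stand-in for a causal language model. The vision encoder, dimensions, and the image-slot convention are placeholder assumptions, not RadFM's released architecture.

```python
# Illustrative interleaving of 2D/3D scan embeddings with text-token embeddings.
# All components are toy placeholders under stated assumptions.
import torch
import torch.nn as nn

d_model = 64

class ScanEncoder(nn.Module):
    """Pools a 2D or 3D scan into a single d_model embedding (placeholder)."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(16, d_model)

    def forward(self, scan):
        pooled = scan.flatten(1).float()                    # (batch, voxels)
        pooled = nn.functional.adaptive_avg_pool1d(pooled.unsqueeze(1), 16).squeeze(1)
        return self.proj(pooled)                            # (batch, d_model)

embed = nn.Embedding(1000, d_model)                         # toy text vocabulary
decoder = nn.TransformerEncoder(                            # stand-in for a causal LM
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
scan_enc = ScanEncoder()

def build_interleaved_sequence(text_ids, scans):
    """Replace each image-slot position (token id 0, by assumption) with a scan embedding."""
    tokens = embed(text_ids).clone()                        # (1, seq, d_model)
    scan_iter = iter(scans)
    for pos in (text_ids[0] == 0).nonzero().flatten():
        tokens[0, pos] = scan_enc(next(scan_iter))[0]
    return decoder(tokens)                                  # contextualized sequence

# Usage: a prompt with one interleaved 3D scan (token id 0 marks the slot).
text_ids = torch.tensor([[5, 0, 7, 8]])
scans = [torch.randn(1, 4, 8, 8)]                           # a toy 3D volume
print(build_interleaved_sequence(text_ids, scans).shape)    # torch.Size([1, 4, 64])
```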
Knowledge-enhanced Visual-Language Pre-training on Chest Radiology Images
While multi-modal foundation models pre-trained on large-scale data have been
successful in natural language understanding and vision recognition, their use
in medical domains is still limited due to the fine-grained nature of medical
tasks and the high demand for domain knowledge. To address this challenge, we
propose a novel approach called Knowledge-enhanced Auto Diagnosis (KAD) which
leverages existing medical domain knowledge to guide vision-language
pre-training using paired chest X-rays and radiology reports. We evaluate KAD
on four external X-ray datasets and demonstrate that its zero-shot
performance is not only comparable to that of fully-supervised models, but also
superior to the average of three expert radiologists for three (out of five)
pathologies with statistical significance. Moreover, when few-shot annotation
is available, KAD outperforms all existing approaches in fine-tuning settings,
demonstrating its potential for application in different clinical scenarios.
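For readers unfamiliar with how such a vision-language model is queried zero-shot, the generic CLIP-style sketch below (not KAD's actual implementation) compares one image embedding against text embeddings of candidate pathologies; all encoders, dimensions, and features are toy placeholders.

```python
# Generic zero-shot scoring: image embedding vs. text embeddings of pathology queries.
# Encoders and dimensions are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    """Stands in for the image encoder or the knowledge-enhanced text encoder."""
    def __init__(self, in_dim, out_dim=32):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return F.normalize(self.fc(x), dim=-1)    # unit-norm embeddings

image_encoder = ToyEncoder(in_dim=128)
text_encoder = ToyEncoder(in_dim=16)

def zero_shot_scores(image_feat, pathology_feats):
    """Cosine similarity between one chest X-ray embedding and each pathology query."""
    img = image_encoder(image_feat)               # (1, 32)
    txt = text_encoder(pathology_feats)           # (num_pathologies, 32)
    return (img @ txt.T).squeeze(0)               # higher score = more likely present

# Usage with random features standing in for a CXR and 5 pathology descriptions.
scores = zero_shot_scores(torch.randn(1, 128), torch.randn(5, 16))
print(scores.shape)                               # torch.Size([5])
```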
MedKLIP: Medical Knowledge Enhanced Language-Image Pre-Training in Radiology
In this paper, we consider enhancing medical visual-language pre-training
(VLP) with domain-specific knowledge, by exploiting the paired image-text
reports from the radiological daily practice. In particular, we make the
following contributions: First, unlike existing works that directly process the
raw reports, we adopt a novel triplet extraction module to extract the
medical-related information, avoiding unnecessary complexity from language
grammar and enhancing the supervision signals; Second, we propose a novel
triplet encoding module with entity translation by querying a knowledge base,
to exploit the rich domain knowledge in the medical field, and implicitly build
relationships between medical entities in the language embedding space; Third,
we propose to use a Transformer-based fusion model for spatially aligning the
entity description with visual signals at the image patch level, enabling
medical diagnosis; Fourth, we conduct thorough experiments to
validate the effectiveness of our architecture, benchmarking on numerous
public datasets, e.g., ChestX-ray14, RSNA Pneumonia, SIIM-ACR Pneumothorax,
COVIDx CXR-2, COVID Rural, and EdemaSeverity. In both zero-shot and fine-tuning
settings, our model demonstrates strong performance compared with
previous methods on disease classification and grounding.
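A rough sketch of the kind of Transformer-based fusion described above: entity (triplet) queries attend over image-patch features, yielding spatially grounded, per-entity diagnosis logits. Shapes, modules, and the entity/patch counts are illustrative assumptions rather than MedKLIP's released code.

```python
# Entity-to-patch cross-attention fusion with per-entity presence logits.
# All dimensions and modules are placeholder assumptions.
import torch
import torch.nn as nn

d = 32
fusion = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)
classifier = nn.Linear(d, 1)                      # per-entity presence logit

def fuse_entities_with_patches(entity_emb, patch_emb):
    """entity_emb: (B, num_entities, d) queries; patch_emb: (B, num_patches, d) keys/values."""
    fused, attn = fusion(entity_emb, patch_emb, patch_emb)
    logits = classifier(fused).squeeze(-1)        # (B, num_entities) diagnosis logits
    return logits, attn                           # attn weights give patch-level grounding

# Usage: 14 entity queries (e.g. ChestX-ray14 findings) against 49 patch tokens.
logits, attn = fuse_entities_with_patches(torch.randn(2, 14, d), torch.randn(2, 49, d))
print(logits.shape, attn.shape)                   # (2, 14) and (2, 14, 49)
```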
Osthole induces G2/M arrest and apoptosis in lung cancer A549 cells by modulating PI3K/Akt pathway
Background: To explore the effects of Osthole on the proliferation, cell cycle, and apoptosis of human lung cancer A549 cells.
Methods: Human lung cancer A549 cells were treated with Osthole at different concentrations. Cell proliferation was measured using the MTT assay. Cell cycle was evaluated using DNA flow cytometry analysis. Induction of apoptosis was determined by flow cytometry and fluorescent microscopy. The expressions of Cyclin B1, p-Cdc2, Bcl-2, Bax, t-Akt and p-Akt were evaluated by Western blotting.
Results: Osthole inhibited the growth of human lung cancer A549 cells by inducing G2/M arrest and apoptosis. Western blotting demonstrated that Osthole down-regulated the expressions of Cyclin B1, p-Cdc2 and Bcl-2 and up-regulated the expression of Bax in A549 cells. Inhibition of the PI3K/Akt signaling pathway was also observed after treating A549 cells with Osthole.
Conclusions: Our findings suggest that Osthole may have a therapeutic application in the treatment of human lung cancer.