Knowledge Matters: Radiology Report Generation with General and Specific Knowledge
Automatic radiology report generation is critical in clinical practice: it can
relieve experienced radiologists of a heavy workload and alert inexperienced
radiologists to misdiagnoses or missed diagnoses. Existing approaches mainly
formulate radiology report generation as an image captioning task and adopt the
encoder-decoder framework. However, in the medical domain, such purely
data-driven approaches suffer from the following problems: 1) visual and
textual bias; 2) lack of expert knowledge. In this paper, we propose a
knowledge-enhanced radiology report generation approach that introduces two
types of medical knowledge: 1) general knowledge, which is input-independent
and provides broad knowledge for report generation; 2) specific knowledge,
which is input-dependent and provides fine-grained knowledge for report
generation. To fully utilize both the general and specific knowledge, we also
propose a knowledge-enhanced multi-head attention mechanism. By merging the
visual features of the radiology image with general and specific knowledge, the
proposed model improves the quality of the generated reports. Experimental
results on two publicly available datasets, IU-Xray and MIMIC-CXR, show that
the proposed knowledge-enhanced approach outperforms state-of-the-art
image-captioning-based methods. Ablation studies further demonstrate that both
general and specific knowledge help improve the performance of radiology
report generation.
Comment: Medical Image Analysis
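The abstract above does not give the exact formulation of the knowledge-enhanced multi-head attention, but the idea it describes — letting visual queries attend jointly over visual features and general/specific knowledge embeddings — can be illustrated with a minimal single-head sketch. All names, shapes, and the random projection weights below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def knowledge_enhanced_attention(visual, general_k, specific_k, d=64, seed=0):
    """Single-head sketch: visual features (queries) attend over the
    concatenation of visual features and knowledge embeddings (keys/values).

    visual:     (n_regions, dim) image region features
    general_k:  (n_general, dim) input-independent knowledge embeddings
    specific_k: (n_specific, dim) input-dependent knowledge embeddings

    Weights are drawn randomly here purely for illustration; a real model
    would learn Wq, Wk, Wv (and use multiple heads).
    """
    rng = np.random.default_rng(seed)
    dim = visual.shape[-1]
    Wq, Wk, Wv = (rng.standard_normal((dim, d)) / np.sqrt(dim) for _ in range(3))
    # Merge visual and knowledge sources into one attention context.
    context = np.concatenate([visual, general_k, specific_k], axis=0)
    Q, K, V = visual @ Wq, context @ Wk, context @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))  # (n_regions, n_regions + n_knowledge)
    return attn @ V                        # knowledge-enhanced visual features
```

In a full model this output would feed the report decoder; the point of the sketch is only that the attention context is enlarged with knowledge embeddings rather than computed over visual features alone.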
DeltaNet: Conditional Medical Report Generation for COVID-19 Diagnosis
Fast screening and diagnosis are critical in COVID-19 patient treatment. In
addition to the gold-standard RT-PCR test, radiological imaging such as X-ray
and CT also serves as an important means of patient screening and follow-up.
However, due to the excessive number of patients, writing reports becomes a
heavy burden for radiologists. To reduce this workload, we propose DeltaNet to
generate medical reports automatically. Different from typical image
captioning approaches that generate reports with an encoder and a decoder,
DeltaNet applies a conditional generation process. In particular, given a
medical image, DeltaNet employs three steps to generate a report: 1) first
retrieving related medical reports, i.e., historical reports from the same or
similar patients; 2) then comparing the retrieved images with the current
image to find the differences; 3) finally generating a new report that
accommodates the identified differences, conditioned on the retrieved report.
We evaluate DeltaNet on a COVID-19 dataset, where it outperforms
state-of-the-art approaches. Beyond COVID-19, DeltaNet can be applied to other
diseases as well: we validate its generalization capability on the public
IU-Xray and MIMIC-CXR datasets for chest-related diseases. Code is available
at \url{https://github.com/LX-doctorAI1/DeltaNet}
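The three-step retrieve–compare–generate pipeline described above can be sketched as follows. This is a toy illustration under stated assumptions, not DeltaNet's actual implementation (which is in the linked repository): images are reduced to feature vectors, retrieval uses cosine similarity, and the decoder of step 3 is deliberately left as a placeholder that just returns its conditioning inputs. All function names here are hypothetical.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def retrieve(query_feat, corpus_feats, corpus_reports):
    """Step 1: retrieve the most similar historical exam and its report."""
    scores = [cosine_sim(query_feat, f) for f in corpus_feats]
    best = int(np.argmax(scores))
    return corpus_feats[best], corpus_reports[best]

def delta_generate(query_feat, corpus_feats, corpus_reports):
    """Sketch of the three-step conditional generation process."""
    # Step 1: retrieval of a related historical report.
    ref_feat, ref_report = retrieve(query_feat, corpus_feats, corpus_reports)
    # Step 2: compare retrieved and current image features.
    delta = query_feat - ref_feat
    # Step 3: a real decoder would generate a new report conditioned on
    # (ref_report, delta); here we return the conditioning inputs instead.
    return ref_report, delta
```

The design choice the abstract highlights is that generation is conditioned on an existing report plus the image-level difference, rather than produced from the image alone, which lets the model reuse the stable parts of similar patients' reports.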