Generation of Virtual Dual Energy Images from Standard Single-Shot Radiographs using Multi-scale and Conditional Adversarial Network
Dual-energy (DE) chest radiographs provide greater diagnostic information
than standard radiographs by separating the image into bone and soft tissue,
revealing suspicious lesions which may otherwise be obstructed from view.
However, acquisition of DE images requires two physical scans, necessitating
specialized hardware and processing, and the images are prone to motion artifacts.
Generation of virtual DE images from standard, single-shot chest radiographs
would expand the diagnostic value of standard radiographs without changing the
acquisition procedure. We present a Multi-scale Conditional Adversarial Network
(MCA-Net) which produces high-resolution virtual DE bone images from standard,
single-shot chest radiographs. The proposed MCA-Net is trained
adversarially so that it learns the sharp details needed to produce
high-quality bone images. Then, the virtual DE soft tissue image is generated
by processing the standard radiograph with the virtual bone image using a cross
projection transformation. Experimental results from 210 patient DE chest
radiographs demonstrated that the algorithm can produce high-quality virtual DE
chest radiographs. Important structures were preserved, such as coronary
calcium in bone images and lung lesions in soft tissue images. The average
structural similarity index and peak signal-to-noise ratio of the produced
bone images on the test data were 96.4% and 41.5 dB, significantly better
than results from previous methods. Furthermore, a clinical evaluation
performed on a publicly available dataset indicates the clinical
value of our algorithm. Thus, our algorithm can produce high-quality DE
images that are potentially useful for radiologists, computer-aided
diagnostics, and other diagnostic tasks.
Comment: 16 pages, 7 figures, accepted by the Asian Conference on Computer Vision (ACCV 2018)
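The abstract states that the soft tissue image is obtained by combining the standard radiograph with the predicted bone image via a cross projection transformation, whose details are not given here. A minimal sketch of the underlying idea, assuming a simple log-domain bone suppression (the function name, the `alpha` weight, and the log-subtraction form are illustrative assumptions, not the paper's actual transformation):

```python
import numpy as np

def soft_tissue_from_bone(radiograph, bone_pred, alpha=0.8):
    """Derive a virtual soft-tissue image by suppressing the predicted
    bone component from the standard radiograph.

    `alpha` is a hypothetical suppression weight; the paper's actual
    cross projection transformation is more involved than this sketch.
    """
    eps = 1e-6
    # Work in log-attenuation space: X-ray intensities combine
    # multiplicatively along the beam path (Beer-Lambert law),
    # so bone removal becomes a subtraction of log images.
    log_std = np.log(radiograph + eps)
    log_bone = np.log(bone_pred + eps)
    soft = np.exp(log_std - alpha * log_bone)
    # Rescale to [0, 1] for display.
    soft -= soft.min()
    return soft / (soft.max() + eps)
```

The log-domain subtraction is the standard justification for weighted dual-energy subtraction imaging; the learned bone image simply replaces the second physical exposure.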
Encoding CT Anatomy Knowledge for Unpaired Chest X-ray Image Decomposition
Although a chest X-ray (CXR) offers only a 2D projection with overlapping
anatomies, it is widely used for clinical diagnosis. There is clinical
evidence that decomposing an X-ray image into components (e.g., bone, lung,
and soft tissue) improves its diagnostic value. We hereby propose a
decomposition
generative adversarial network (DecGAN) to anatomically decompose a CXR image
but with unpaired data. We leverage the anatomy knowledge embedded in CT, which
features a 3D volume with clearly visible anatomies. Our key idea is to embed
CT prior decomposition knowledge into the latent space of an unpaired CXR
autoencoder. Specifically, we train DecGAN with a decomposition loss,
adversarial losses, cycle-consistency losses and a mask loss to guarantee that
the decomposed results of the latent space preserve realistic body structures.
Extensive experiments demonstrate that DecGAN provides superior unsupervised
CXR bone suppression and shows the feasibility of modulating CXR components
via latent-space disentanglement. Furthermore, we illustrate the diagnostic
value of DecGAN and demonstrate that it outperforms the state-of-the-art
approaches in predicting 11 out of 14 common lung diseases.
Comment: 9 pages with 4 figures
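The abstract lists four loss terms used to train DecGAN (decomposition, adversarial, cycle-consistency, and mask losses). A minimal sketch of how such terms are typically combined into one training objective; the weighting coefficients below are illustrative placeholders, since the abstract does not give the actual values:

```python
def decgan_total_loss(l_dec, l_adv, l_cyc, l_mask,
                      w_dec=1.0, w_adv=1.0, w_cyc=10.0, w_mask=1.0):
    """Combine DecGAN's four loss terms into a single scalar objective.

    The weights are hypothetical: cycle-consistency losses are often
    weighted more heavily (as in CycleGAN-style training), but the
    paper's actual coefficients are not stated in the abstract.
    """
    return (w_dec * l_dec
            + w_adv * l_adv
            + w_cyc * l_cyc
            + w_mask * l_mask)
```

In practice each `l_*` would be a differentiable tensor produced by the corresponding network heads, and the weighted sum is what the optimizer minimizes.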