    Micro-CT Synthesis and Inner Ear Super Resolution via Generative Adversarial Networks and Bayesian Inference

    Existing medical image super-resolution methods rely on pairs of low- and high-resolution images to learn a mapping in a fully supervised manner. However, such image pairs are often not available in clinical practice. In this paper, we address the super-resolution problem in a real-world scenario using unpaired data and synthesize micro-CT images of the temporal bone structure, which is embedded in the inner ear, at eight times higher linear resolution. We explore cycle-consistent generative adversarial networks for the super-resolution task and equip the translation approach with Bayesian inference. We further introduce the Hu moment distance as an evaluation metric to quantify the shape of the temporal bone. We evaluate our method on a public inner ear CT dataset and observe both visual and quantitative improvements over state-of-the-art supervised deep-learning-based methods. In addition, we perform a multi-rater visual evaluation experiment and find that trained experts consistently assign the proposed method the highest quality scores among all methods. Furthermore, we are able to quantify uncertainty in the unpaired translation task, and the resulting uncertainty map provides structural information about the temporal bone.
    Comment: final version in ISBI 202
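    The abstract describes CycleGAN-style training on unpaired data. As a rough illustration, below is a minimal sketch of the cycle-consistency term in PyTorch; the generator names (g_lr2hr, g_hr2lr), the L1 formulation, and the weight lam are assumptions for illustration, not details from the paper.

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(g_lr2hr, g_hr2lr, lr_batch, hr_batch, lam=10.0):
    # Hypothetical generators: g_lr2hr maps low- to high-resolution volumes,
    # g_hr2lr maps back. Translating to the other domain and returning should
    # reproduce the input; this term penalizes the reconstruction error.
    lr_cycle = g_hr2lr(g_lr2hr(lr_batch))  # LR -> HR -> LR
    hr_cycle = g_lr2hr(g_hr2lr(hr_batch))  # HR -> LR -> HR
    return lam * (l1(lr_cycle, lr_batch) + l1(hr_cycle, hr_batch))
```

    The abstract introduces a Hu moment distance to quantify the temporal bone's shape but does not define it. A common formulation compares the seven log-scaled Hu invariants of two binary shape masks (in the spirit of OpenCV's matchShapes distances); a sketch under that assumption:

```python
import cv2
import numpy as np

def hu_moment_distance(mask_a, mask_b, eps=1e-30):
    # Seven Hu moment invariants of each binary shape mask.
    hu_a = cv2.HuMoments(cv2.moments(mask_a.astype(np.uint8))).flatten()
    hu_b = cv2.HuMoments(cv2.moments(mask_b.astype(np.uint8))).flatten()
    # Signed log scaling compresses the invariants' large dynamic range.
    la = -np.sign(hu_a) * np.log10(np.abs(hu_a) + eps)
    lb = -np.sign(hu_b) * np.log10(np.abs(hu_b) + eps)
    return float(np.abs(la - lb).sum())
```

    Finally, the abstract does not state which Bayesian approximation yields the uncertainty map; Monte Carlo dropout is one common choice, sketched here purely under that assumption:

```python
import torch

def mc_uncertainty(generator, lr_batch, n_samples=20):
    # MC dropout assumption: keep dropout layers stochastic at test time,
    # then summarize repeated forward passes per voxel.
    generator.train()
    with torch.no_grad():
        samples = torch.stack([generator(lr_batch) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # prediction, uncertainty map
```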
