Simultaneous super-resolution and cross-modality synthesis of 3D medical images using weakly-supervised joint convolutional sparse coding
Magnetic Resonance Imaging (MRI) offers high-resolution in vivo imaging and rich functional and anatomical multimodality tissue contrast. In practice, however, considerations of scanning cost, patient comfort, and scanning time constrain how much data can be acquired in clinical or research studies. In this paper, we explore the possibility of generating high-resolution, multimodal images from low-resolution single-modality imagery. We propose a weakly-supervised joint convolutional sparse coding method that simultaneously solves the problems of super-resolution (SR) and cross-modality image synthesis. The learning process requires only a few registered multimodal image pairs as the training set, and the quality of the joint dictionary learning can be improved using a larger set of unpaired images. To combine unpaired data from different image resolutions/modalities, a hetero-domain image alignment term is proposed. Local image neighborhoods are naturally preserved by operating on the whole image domain (as opposed to image patches) and using joint convolutional sparse coding. The paired images are enhanced in the joint learning process with unpaired data and an additional maximum mean discrepancy term, which minimizes the dissimilarity between their feature distributions. Experiments show that the proposed method outperforms state-of-the-art techniques on both SR reconstruction and simultaneous SR and cross-modality synthesis.
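The maximum mean discrepancy (MMD) term mentioned above measures the distance between two feature distributions via kernel mean embeddings. A minimal sketch of a biased MMD estimator follows; the RBF kernel and its bandwidth are illustrative assumptions here, not necessarily the paper's exact choice:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF kernel matrix k(a, b) = exp(-gamma * ||a - b||^2) for rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Biased estimator of squared MMD between samples X and Y.

    MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)];
    it is zero when the two feature distributions coincide.
    """
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())
```

Minimizing such a term during joint dictionary learning pulls the feature distributions of paired and unpaired data toward each other.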
Representation Learning via Cauchy Convolutional Sparse Coding
In representation learning, Convolutional Sparse Coding (CSC) enables
unsupervised learning of features by jointly optimising both an ℓ2-norm
fidelity term and a sparsity-enforcing penalty. This work investigates using a
regularisation term derived from an assumed Cauchy prior for the coefficients
of the feature maps of a CSC generative model. The sparsity penalty term
resulting from this prior is solved via its proximal operator, which is then
applied iteratively, element-wise, on the coefficients of the feature maps to
optimise the CSC cost function. The performance of the proposed Iterative
Cauchy Thresholding (ICT) algorithm in reconstructing natural images is
compared against the common choices of the ℓ1- and ℓ0-norms, optimised via
soft and hard thresholding respectively. ICT outperforms ISTA and IHT in most
of these reconstruction experiments across various datasets, with an average
PSNR of up to 11.30 dB and 7.04 dB above ISTA and IHT respectively.
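The soft and hard thresholding baselines named above are the element-wise proximal operators of the ℓ1 and ℓ0 penalties. A minimal sketch of both operators, plus a generic ISTA loop as used by such baselines (the dictionary `D`, step size, and iteration count here are illustrative, not the paper's CSC solver):

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of lam * ||x||_1 (used by ISTA)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def hard_threshold(z, lam):
    """Proximal operator of lam * ||x||_0 (used by IHT): keep |z| > sqrt(2*lam)."""
    return np.where(np.abs(z) > np.sqrt(2.0 * lam), z, 0.0)

def ista(D, y, lam, n_iter=100):
    """ISTA for min_x 0.5 * ||y - D x||^2 + lam * ||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        # gradient step on the fidelity term, then the ell-1 proximal step
        x = soft_threshold(x + D.T @ (y - D @ x) / L, lam / L)
    return x
```

ICT replaces the thresholding function in such a loop with the proximal operator of the Cauchy-prior penalty, applied element-wise in the same way.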