761,487 research outputs found
CT Image Reconstruction by Spatial-Radon Domain Data-Driven Tight Frame Regularization
This paper proposes a spatial-Radon domain CT image reconstruction model
based on data-driven tight frames (SRD-DDTF). The proposed SRD-DDTF model
combines the idea of the joint image and Radon domain inpainting model of
\cite{Dong2013X} with that of the data-driven tight frames for image denoising
of \cite{cai2014data}. It differs from existing models in that both the CT image
and its corresponding high-quality projection image are reconstructed
simultaneously, using sparsity priors induced by tight frames that are
adaptively learned from the data to provide optimal sparse approximations. An
alternating minimization algorithm is designed to solve the proposed model,
which is nonsmooth and nonconvex, and a convergence analysis of the algorithm
is provided. Numerical experiments show that the SRD-DDTF model is superior to
the model of \cite{Dong2013X}, especially in recovering subtle structures in
the images.
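For illustration only, a joint spatial-Radon domain model with data-driven
tight-frame regularization can be sketched along the following lines. The
notation is assumed here, not taken from the paper: u is the CT image, f the
projection (Radon domain) image, P the projection operator, \mathcal{R}_\Gamma
the restriction to the measured data Y, W_1 and W_2 the learned tight frames,
and the \lambda's are weights.

% Sketch of a joint spatial-Radon domain model with data-driven tight
% frames (assumed notation, not necessarily the paper's exact formulation).
\begin{equation*}
  \min_{u,\,f,\,W_1,\,W_2}\;
  \frac{1}{2}\bigl\|\mathcal{R}_{\Gamma} f - Y\bigr\|_2^2
  + \frac{\lambda_1}{2}\bigl\|P u - f\bigr\|_2^2
  + \bigl\|\boldsymbol{\lambda}_2 \odot W_1 u\bigr\|_0
  + \bigl\|\boldsymbol{\lambda}_3 \odot W_2 f\bigr\|_0
  \quad \text{s.t. } W_i^{\top} W_i = I,\ i = 1, 2,
\end{equation*}

where the two tight frames are learned from u and f themselves; the blocks
(u, f), W_1 and W_2 would be updated in turn by an alternating minimization
scheme of the kind the abstract refers to.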
Groupwise Multimodal Image Registration using Joint Total Variation
In medical imaging it is common practice to acquire a wide range of
modalities (MRI, CT, PET, etc.), to highlight different structures or
pathologies. As patient movement between scans or scanning sessions is
unavoidable, registration is often an essential step before any subsequent
image analysis. In this paper, we introduce a cost function based on joint
total variation for such multimodal image registration. This cost function has
the advantage of enabling principled, groupwise alignment of multiple images,
whilst being insensitive to strong intensity non-uniformities. We evaluate our
algorithm on rigidly aligning both simulated and real 3D brain scans. This
validation shows robustness to strong intensity non-uniformities and low
registration errors for CT/PET to MRI alignment. Our implementation is publicly
available at https://github.com/brudfors/coregistration-njtv
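As a rough illustration of the idea behind such a cost (not taken from the
authors' implementation; the function name, the use of numpy.gradient, and the
eps smoothing term are assumptions made for this sketch), a joint total
variation pools the gradients of all modalities under a single square root, so
shared edges reinforce each other while smoothly varying intensity
non-uniformities contribute little:

import numpy as np

def joint_total_variation(images, eps=1e-8):
    """Joint TV of a list of 2-D images defined on the same grid (sketch)."""
    sq_grad = 0.0
    for img in images:
        gy, gx = np.gradient(img.astype(float))   # per-modality gradients
        sq_grad = sq_grad + gy**2 + gx**2          # pooled squared gradient
    return np.sqrt(sq_grad + eps).sum()            # one square root over all

# The cost drops when the modalities are well aligned, which is what a
# registration optimizer would exploit.
a = np.zeros((64, 64)); a[20:40, 20:40] = 1.0      # synthetic "MRI"
b = np.roll(a, 3, axis=1) * 100.0 + 50.0           # shifted, rescaled "CT"
print(joint_total_variation([a, b]))

The linked repository presumably implements a more elaborate (normalized, 3-D,
rigid-transform) variant of this cost; the snippet above only shows the pooling
principle.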
Effects of deleting cannabinoid receptor-2 on mechanical and material properties of cortical and trabecular bone
Acknowledgements: We thank Dr J.S. Gregory for assistance with ImageJ and Mr K. Mackenzie for assistance with micro-CT analysis. Funding: ABK was funded by a University of Aberdeen Institute of Medical Sciences studentship and the Overseas Research Students Awards Scheme.
Cats or CAT scans: transfer learning from natural or medical image source datasets?
Transfer learning is a widely used strategy in medical image analysis.
Instead of only training a network with a limited amount of data from the
target task of interest, we can first train the network with other, potentially
larger source datasets, creating a more robust model. The source datasets do
not have to be related to the target task. For a classification task in lung CT
images, we could use either head CT images or images of cats as the source.
While head CT images appear more similar to lung CT images, the number and
diversity of cat images might lead to a better model overall. In this survey we
review a number of papers that have performed similar comparisons. Although the
answer to which strategy is best seems to be "it depends", we discuss a number
of research directions we need to take as a community, to gain more
understanding of this topic. Comment: Accepted to Current Opinion in Biomedical Engineering.
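As a purely illustrative sketch of the transfer-learning setup discussed above
(none of this code comes from the surveyed papers; the ResNet-18 backbone,
two-class head, frozen early layers, and learning rate are assumptions chosen
for the example), one might fine-tune an ImageNet-pretrained network, whose
source data includes plenty of cats, on a lung CT classification task roughly
like this:

import torch
import torch.nn as nn
from torchvision import models

# Illustrative sketch only: fine-tune an ImageNet-pretrained ResNet-18
# (natural-image source, cats included) on a two-class lung CT target task.
model = models.resnet18(weights="IMAGENET1K_V1")   # downloads source weights
model.fc = nn.Linear(model.fc.in_features, 2)      # new head for target classes

# Freeze early layers so only the later, more task-specific layers adapt.
for name, param in model.named_parameters():
    if not (name.startswith("layer4") or name.startswith("fc")):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch; CT slices are grayscale, so they are
# replicated to three channels to match the pretrained stem.
ct_batch = torch.randn(4, 1, 224, 224).repeat(1, 3, 1, 1)
labels = torch.randint(0, 2, (4,))
loss = criterion(model(ct_batch), labels)
loss.backward()
optimizer.step()

Whether this natural-image source or a medical source such as head CT transfers
better is exactly the "it depends" question the survey examines.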
