A Review on Deep Learning in Medical Image Reconstruction
Medical imaging is crucial in modern clinics to guide the diagnosis and
treatment of diseases. Medical image reconstruction is one of the most
fundamental and important components of medical imaging, whose major
objective is to acquire high-quality medical images for clinical use at
minimal cost and risk to the patient. Mathematical models have long played a
prominent role in medical image reconstruction and, more generally, in image
restoration in computer vision. Earlier mathematical models were mostly
designed from human knowledge or hypotheses about the image to be
reconstructed; we shall call these handcrafted models. Later,
handcrafted-plus-data-driven modeling emerged, which still relies mostly on
human design while part of the model is learned from observed data. More
recently, as more data and computational resources have become available,
deep learning based models (or deep models) have pushed data-driven modeling
to the extreme, where the models are built mostly by learning, with minimal
human design. Both
handcrafted and data-driven modeling have their own advantages and
disadvantages. One of the major research trends in medical imaging is to
combine handcrafted modeling with deep modeling so that we can enjoy benefits
from both approaches. The major part of this article is to provide a conceptual
review of some recent works on deep modeling from the unrolling dynamics
viewpoint. This viewpoint stimulates new designs of neural network
architectures with inspirations from optimization algorithms and numerical
differential equations. Despite the popularity of deep modeling, vast
challenges remain in the field, as well as opportunities, which we discuss at
the end of this article.
Comment: 31 pages, 6 figures. Survey paper
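The unrolling-dynamics viewpoint can be made concrete with ISTA (iterative shrinkage-thresholding) for a generic sparse reconstruction problem: each iteration of the optimization algorithm becomes one "layer", and in a learned (LISTA-style) network the step size, threshold, and operators would become trainable per layer. A minimal NumPy sketch on a toy linear model y = Ax (the dimensions and parameter values below are illustrative assumptions, not from the article):

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(A, y, n_layers=10, step=None, theta=0.1):
    """Run n_layers ISTA iterations for min_x 0.5*||Ax - y||^2 + theta*||x||_1.
    Each iteration x <- soft(x - step * A^T (A x - y), step * theta) is one
    'layer'; unrolling fixes the depth, and a learned network would make
    step, theta (and possibly A^T) trainable parameters of each layer."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * theta)
    return x
```

With enough layers and a small threshold this recovers a sparse signal from overdetermined measurements; a trained unrolled network aims for comparable accuracy in far fewer layers.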
Weighted structure tensor total variation for image denoising
Within the variational framework for image denoising, this paper introduces
a novel regularizer that combines the anisotropic total variation (ATV) and
structure tensor total variation (STV) models. By applying the matrix
weighting operator proposed in the ATV model to the patch-based Jacobian
matrix of the STV model, the proposed model effectively captures the
first-order information of the image and preserves local features during
denoising. Denoising experiments on grayscale and RGB color images
demonstrate that the proposed model produces better restoration quality than
other well-known total-variation-based methods and the STV model.
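As background, plain (unweighted) anisotropic TV denoising can be sketched as gradient descent on a smoothed variational objective. This is only a generic ATV baseline; the paper's matrix weighting operator and patch-based Jacobian are not reproduced here, and the parameter values are illustrative assumptions:

```python
import numpy as np

def _tv_grad(u, axis, eps):
    # Gradient of sum(sqrt((D u)^2 + eps)) along one axis: forward
    # differences with a replicated boundary, then the adjoint D^T p
    # (the last p along the axis is zero, so np.roll is exact here).
    d = np.diff(u, axis=axis, append=np.take(u, [-1], axis=axis))
    p = d / np.sqrt(d ** 2 + eps)
    return np.roll(p, 1, axis=axis) - p

def atv_denoise(f, lam=0.15, tau=0.1, n_iter=200, eps=1e-2):
    """Gradient descent on 0.5*||u - f||^2 + lam*(TV_x(u) + TV_y(u)),
    with the absolute values smoothed by eps so the objective is
    differentiable; tau must stay below 2/(1 + 8*lam/sqrt(eps))."""
    u = f.copy()
    for _ in range(n_iter):
        grad = (u - f) + lam * (_tv_grad(u, 0, eps) + _tv_grad(u, 1, eps))
        u = u - tau * grad
    return u
```

The anisotropic form penalizes horizontal and vertical differences separately, which favors axis-aligned edges; the weighted STV model of the paper instead acts on eigen-structure of a patch-based Jacobian.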
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Robust Low-Dose CT Perfusion Deconvolution via Tensor Total-Variation Regularization
Acute brain diseases such as acute strokes and transient ischemic attacks are leading causes of mortality and morbidity worldwide, responsible for 9% of all deaths every year. "Time is brain" is a widely accepted concept in acute cerebrovascular disease treatment, and an efficient and accurate computational framework for estimating hemodynamic parameters can save critical time for thrombolytic therapy. Meanwhile, the high accumulated radiation dose due to continuous image acquisition in CT perfusion (CTP) has raised concerns about patient safety and public health. However, low radiation leads to increased noise and artifacts, which require more sophisticated and time-consuming algorithms for robust estimation. In this paper, we focus on developing a robust and efficient framework to accurately estimate perfusion parameters at low radiation dose. Specifically, we present a tensor total-variation (TTV) technique that fuses the spatial correlation of the vascular structure with the temporal continuity of the blood flow signal. An efficient algorithm is proposed to find the solution with fast convergence and reduced computational complexity. Extensive evaluations are carried out in terms of sensitivity to noise level, estimation accuracy, and contrast preservation, on a digital perfusion phantom as well as in vivo clinical subjects. Our framework reduces the necessary radiation dose to only 8% of the original level and outperforms state-of-the-art algorithms, improving peak signal-to-noise ratio by 32%. It reduces oscillation in the residue functions, corrects over-estimation of cerebral blood flow (CBF) and under-estimation of mean transit time (MTT), and maintains the distinction between deficit and normal regions.
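To illustrate the general idea of TV-regularized deconvolution (not the authors' algorithm, which uses a more efficient solver), here is a hypothetical gradient-descent sketch: residue functions K(t, x, y) are recovered from tissue curves C = A K, where A is a lower-triangular Toeplitz convolution matrix built from the arterial input function, with separate total-variation weights on the temporal and spatial axes. All names and parameter values are illustrative assumptions:

```python
import numpy as np

def _tv_grad(k, axis, eps):
    # Gradient of sum(sqrt((D k)^2 + eps)) along one axis (forward
    # differences, replicated boundary; the last p is zero, so roll is exact).
    d = np.diff(k, axis=axis, append=np.take(k, [-1], axis=axis))
    p = d / np.sqrt(d ** 2 + eps)
    return np.roll(p, 1, axis=axis) - p

def ttv_deconvolve(A, c, lam_t=0.05, lam_s=0.01, n_iter=500, eps=1e-2):
    """Gradient descent on 0.5*||A k - c||^2 + lam_t*TV_t(k) + lam_s*TV_xy(k)
    for k of shape (T, X, Y): lam_t penalizes temporal variation of each
    voxel's residue function, lam_s couples neighboring voxels spatially.
    A smoothed-TV sketch only; the paper's solver is faster and exact-TV."""
    L = np.linalg.norm(A, 2) ** 2 + 8.0 * (lam_t + lam_s) / np.sqrt(eps)
    tau = 1.0 / L  # safe step for the smoothed objective
    k = np.zeros_like(c)
    for _ in range(n_iter):
        r = np.tensordot(A, k, axes=(1, 0)) - c          # data residual
        grad = np.tensordot(A.T, r, axes=(1, 0))         # data-term gradient
        grad += lam_t * _tv_grad(k, 0, eps)              # temporal TV
        grad += lam_s * (_tv_grad(k, 1, eps) + _tv_grad(k, 2, eps))  # spatial TV
        k -= tau * grad
    return k
```

Choosing lam_t larger than lam_s reflects the abstract's emphasis: residue functions should be smooth in time (suppressing oscillation), with milder spatial coupling across the vascular structure.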