Weighted Mean Curvature
In image processing tasks, spatial priors are essential for robust
computations, regularization, algorithmic design and Bayesian inference. In
this paper, we introduce weighted mean curvature (WMC) as a novel image prior
and present an efficient computation scheme for its discretization in practical
image processing applications. We first demonstrate the favorable properties of
WMC, such as sampling invariance, scale invariance, and contrast invariance
under a Gaussian noise model, and we show the relation of WMC to area
regularization. We further propose an efficient computation scheme for
discretized WMC, which is demonstrated herein to process over 33.2
giga-pixels/second on GPU. This scheme lends itself to a convolutional neural
network representation. Finally, WMC is evaluated on synthetic and real images,
showing its quantitative superiority over total-variation and mean curvature.
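The paper's optimized GPU discretization is not reproduced in this abstract. As a minimal illustration only, the gradient-magnitude-weighted level-set curvature |∇u|·div(∇u/|∇u|), one common form of weighted mean curvature, can be sketched with simple finite differences (the function name and scheme below are assumptions, not the paper's method):

```python
import numpy as np

def weighted_mean_curvature(u, eps=1e-8):
    """Finite-difference sketch of weighted mean curvature
    H_w = |grad u| * div(grad u / |grad u|)
        = Laplacian(u) - (u_x^2 u_xx + 2 u_x u_y u_xy + u_y^2 u_yy) / |grad u|^2.
    Illustrative only; eps avoids division by zero in flat regions."""
    u = u.astype(float)
    ux = np.gradient(u, axis=1)
    uy = np.gradient(u, axis=0)
    uxx = np.gradient(ux, axis=1)
    uyy = np.gradient(uy, axis=0)
    uxy = np.gradient(ux, axis=0)
    g2 = ux**2 + uy**2
    lap = uxx + uyy
    return lap - (ux**2 * uxx + 2 * ux * uy * uxy + uy**2 * uyy) / (g2 + eps)

# Flat and linear-ramp images have zero curvature everywhere.
flat = np.ones((8, 8))
print(np.allclose(weighted_mean_curvature(flat), 0.0))  # True
```

Because the formula is built entirely from derivative stencils, it maps directly onto fixed convolution kernels, consistent with the convolutional-network representation mentioned above.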
A Novel Adaptive Probabilistic Nonlinear Denoising Approach for Enhancing PET Data Sinogram
We propose filtering PET sinograms with a constrained curvature motion diffusion. The edge-stopping function is computed in terms of edge probability under the assumption of contamination by Poisson noise. We show that the Chi-square distribution is the appropriate prior for finding the edge probability in the noise-free sinogram gradient. Since the sinogram noise is uncorrelated and follows a Poisson distribution, we propose an adaptive probabilistic diffusivity function in which the edge probability is computed at each pixel. The filter is applied to the 2D sinogram prior to reconstruction. The PET images are then reconstructed using Ordered Subset Expectation Maximization (OSEM). We demonstrate through simulations with images contaminated by Poisson noise that the proposed method substantially surpasses recently published methods, both visually and in terms of statistical measures.
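A hedged sketch of the diffusion idea: the loop below is a standard Perona-Malik-type edge-stopping diffusion, with a generic exponential diffusivity standing in for the paper's adaptive Chi-square-based edge probability (function name, parameters, and the periodic boundary handling are illustrative assumptions):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=1.0, dt=0.2):
    """Perona-Malik-style edge-stopping diffusion, as a minimal sketch.
    The paper's diffusivity is a per-pixel edge *probability* derived from a
    Chi-square prior under Poisson noise; here a generic exponential
    diffusivity g stands in for that probabilistic term. Boundaries are
    handled periodically (np.roll) for brevity."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping: small on edges
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u        # differences to the 4 neighbours
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Toy Poisson-noisy "sinogram": diffusion reduces the noise variance.
noisy = np.random.default_rng(0).poisson(5.0, size=(32, 32)).astype(float)
smoothed = anisotropic_diffusion(noisy, kappa=10.0)
print(smoothed.std() < noisy.std())  # True
```

With dt ≤ 0.25 and g ≤ 1 the explicit scheme is stable; the divergence form conserves the mean, which matters for count-preserving sinogram data.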
Large Scale Inverse Problems
This book is the second volume of a three-volume series recording the "Radon Special Semester 2011 on Multiscale Simulation & Analysis in Energy and the Environment" that took place in Linz, Austria, October 3-7, 2011. This volume addresses the common ground in the mathematical and computational procedures required for large-scale inverse problems and data assimilation in forefront applications. The solution of inverse problems is fundamental to a wide variety of applications such as weather forecasting, medical tomography, and oil exploration. Regularisation techniques are needed to ensure solutions of sufficient quality to be useful, and soundly based in theory. This book addresses the common techniques required for all these applications and is thus truly interdisciplinary. This collection of survey articles focuses on the large inverse problems commonly arising in simulation and forecasting in the earth sciences.
Spatial priors for tomographic reconstructions from limited data
Tomography is the reconstruction of the interior of an object from external measurements, e.g. images obtained with X-rays or microwaves. This thesis examines the specific aspects of microwave tomography and magnetic resonance imaging (MRI); both techniques are harmless to humans. While MRI is in widespread use for many clinical applications, microwave tomography is not yet in clinical use despite its potential advantages. Owing to the low cost and portability of the equipment, it is a valuable complement to the existing range of imaging modalities.
Mathematics and Algorithms in Tomography
This is the eighth Oberwolfach conference on the mathematics of tomography. Modalities represented at the workshop included X-ray tomography, sonar, radar, seismic imaging, ultrasound, electron microscopy, impedance imaging, photoacoustic tomography, elastography, vector tomography, and texture analysis
Super Resolution of HARDI Images Using Compressed Sensing Techniques
Effective techniques for inferring the condition of neural tracts in the brain are invaluable for clinicians and researchers investigating neurological disorders. It was not until the advent of diffusion Magnetic Resonance Imaging (dMRI), a noninvasive imaging method used to detect the diffusion of water molecules, that scientists were able to assess the characteristics of cerebral diffusion in vivo. Among dMRI methods, High Angular Resolution Diffusion Imaging (HARDI) is well known for striking a balance between the ability to distinguish crossing neural fibre tracts and the number of diffusion measurements required (which is directly related to acquisition time).
HARDI data provides insight into the directional properties of water diffusion in cerebral matter as a function of spatial coordinates. Ideally, one would like this information at fine spatial resolution while minimizing the probing along different spatial orientations (so as to minimize the acquisition time). Unfortunately, the availability of such datasets within reasonable acquisition times is hindered by limitations in current hardware and scanner protocols. On the other hand, post-processing techniques show promise for increasing the effective spatial resolution, allowing more detailed depictions of cerebral matter while keeping the number of diffusion measurements within a feasible range.
In light of the preceding developments, the main purpose of this research is to investigate super resolution of HARDI data using the modern theory of compressed sensing. The method proposed in this thesis allows an accurate approximation of HARDI signals at a higher spatial resolution than data obtained with a typical scanner. At the same time, ideas for reducing the number of diffusion measurements in the angular domain, to improve acquisition time, are explored. Accordingly, the main contribution of this thesis is the novel method of applying two distinct compressed sensing approaches, in the spatial and angular domains, and combining them into a single framework for performing super resolution.
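The thesis's coupled spatial/angular framework is not spelled out in this abstract. As background only, generic compressed-sensing recovery of a sparse signal via ℓ₁ minimization can be sketched with ISTA, a standard shrinkage-thresholding algorithm assumed here for illustration (not the thesis's solver):

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative Shrinkage-Thresholding (ISTA) sketch for the generic
    compressed-sensing problem  min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    Illustrative only."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)               # gradient of the data term
        z = x - grad / L                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Undersampled measurement of a 3-sparse signal in 100 dimensions:
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))             # underdetermined sensing matrix
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 3.0]         # sparse signal
x_hat = ista(A, A @ x_true, lam=0.05, n_iter=1000)  # x_hat approximates x_true
```

In the thesis's setting, one such solver would act on the spatial domain and another on the angular domain, with the two combined into a single super-resolution reconstruction.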
Using state-of-the-art inverse problem techniques to develop reconstruction methods for fluorescence diffuse optical tomography
An inverse problem is a mathematical framework used to obtain information about a
physical object or system from observed measurements. It typically arises when we wish to
infer internal properties from external measurements, and it has many applications in
science and technology, such as medical imaging, geophysical imaging, image deblurring,
image inpainting, electromagnetic scattering, acoustics, machine learning, mathematical
finance, and physics.
The main goal of this PhD thesis was to use state-of-the-art inverse problem
techniques to develop modern reconstruction methods for solving the fluorescence diffuse
optical tomography (fDOT) problem. fDOT is a molecular imaging technique that enables
the quantification of tomographic (3D) bio-distributions of fluorescent tracers in small
animals.
One of the main difficulties in fDOT is that the high absorption and scattering
properties of biological tissues lead to an ill-posed inverse problem, yielding
non-unique and unstable solutions to the reconstruction problem. Thus, the problem
requires regularization to achieve a stable solution.
The so-called "non-contact fDOT scanners" are based on using CCDs as virtual
detectors instead of optical fibers in contact with the sample. These non-contact
systems generate huge datasets that lead to a computationally demanding inverse
problem. Therefore, techniques that minimize the size of the acquired datasets
without losing image performance are highly desirable.
The first part of this thesis addresses the optimization of experimental setups to
reduce the dataset size, using l₂-based regularization techniques. The second part,
motivated by the success of l₁ regularization for denoising and image reconstruction,
is devoted to advanced regularization using l₁-based techniques, and the last part
introduces compressed sensing (CS) theory, which enables further reduction of the
acquired dataset size.
The main contributions of this thesis are:
1) A feasibility study (the first for fDOT, to our knowledge) of the automatic
U-curve method for selecting the regularization parameter (l₂-norm). The U-curve
method has proven to be an excellent automatic method for dealing with large datasets
because it reduces the regularization parameter search to a suitable interval.
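As a rough illustration (not the thesis implementation), the U-curve criterion of Krawczyk-Stańdo and Rudnicki, U(λ) = 1/‖Ax_λ − b‖² + 1/‖x_λ‖² with x_λ the Tikhonov solution, might be evaluated over a grid of λ values as follows; all names and the dense solve are assumptions for a small example:

```python
import numpy as np

def tikhonov(A, b, lam):
    """l2-regularized solution x = argmin ||Ax - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def u_curve(A, b, lams):
    """U-curve criterion U(lam) = 1/||Ax-b||^2 + 1/||x||^2; its minimiser
    over lam selects the regularization parameter. Minimal sketch only."""
    vals = []
    for lam in lams:
        x = tikhonov(A, b, lam)
        r = A @ x - b
        vals.append(1.0 / (r @ r) + 1.0 / (x @ x))
    return np.array(vals)

# Toy problem: the criterion blows up at both extremes of lam, forming a "U".
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
b = A @ rng.standard_normal(30) + 0.1 * rng.standard_normal(50)
lams = np.logspace(-4, 2, 60)
best = lams[np.argmin(u_curve(A, b, lams))]
```

Both terms of U(λ) are large where either the residual or the solution norm is small, so the minimum is confined to a bounded interval of λ, which is the search-interval reduction mentioned above.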
2) Once an automatic method to choose the l₂ regularization parameter for fDOT was
found, singular value analysis (SVA) of the fDOT forward matrix was used to maximize
the information content of the acquired measurements and minimize the computational
cost. It was shown for the first time that large meshes can be reduced in the z
direction without any loss in imaging performance, while reducing computational times
and memory requirements.
3) Regarding l₁-based regularization techniques, we presented a novel iterative
algorithm, ART-SB, that combines the advantage of the Algebraic Reconstruction
Technique (ART) in handling large datasets with Split Bregman (SB) denoising, an
approach which has been shown to be optimal for Total Variation (TV) denoising. SB
has been implemented in a cost-efficient way to handle large datasets. This makes
ART-SB more computationally efficient than previous TV-based reconstruction
algorithms and most splitting approaches.
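The ART-SB combination can be caricatured as follows. The data-consistency half is a standard Kaczmarz/ART sweep; for the denoising half, a simple gradient step on a smoothed 1-D total-variation term stands in for the thesis's cost-efficient Split Bregman solver (all names, parameters, and the 1-D simplification are illustrative assumptions):

```python
import numpy as np

def art_sweep(A, b, x, relax=1.0):
    """One Kaczmarz/ART sweep: project x onto each row's hyperplane in turn."""
    for i in range(A.shape[0]):
        a = A[i]
        x = x + relax * (b[i] - a @ x) / (a @ a) * a
    return x

def art_tv(A, b, n_outer=10, tv_weight=0.05, eps=1e-3):
    """Sketch of the ART-SB idea: alternate algebraic reconstruction sweeps
    with a denoising step. The thesis uses a Split Bregman TV solver for the
    denoising half; here one gradient-descent step on the smoothed TV term
    sum_i sqrt((x_{i+1}-x_i)^2 + eps) stands in for it."""
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        x = art_sweep(A, b, x)            # data-consistency (ART) phase
        d = np.diff(x)                    # forward differences
        g = d / np.sqrt(d * d + eps)      # gradient of the smoothed TV term
        x[:-1] += tv_weight * g           # divergence-like descent update:
        x[1:] -= tv_weight * g            #   x_i += w * (g_i - g_{i-1})
    return x
```

With A the system matrix and b the measurements, `art_tv(A, b)` alternates the two phases; on consistent full-rank data the ART sweeps alone already converge to the exact solution, and the TV step supplies the noise suppression.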
4) Finally, we proposed a novel approach to CS for fDOT, named the SB-SVA
iterative method. This approach is based on the analysis-based co-sparse
representation model, in which an analysis operator multiplies the image,
transforming it into a sparse one. Taking advantage of the CS-SB algorithm, we
restrict the solution reached at each CS-SB iteration to a space where the singular
values of the forward matrix and the sparsity structure combine in a beneficial
manner. In this way, SB-SVA indirectly enforces the well-conditioning of the forward
matrix while designing (learning) the analysis operator and finding the solution.
Furthermore, SB-SVA outperforms the CS-SB algorithm in terms of image quality and
needs fewer acquisition parameters.
The approaches presented here have been validated with experimental data.
Joint methods in imaging based on diffuse image representations
This thesis deals with the application and the analysis of different variants of the Mumford-Shah model in the context of image processing. In this kind of model, a given function is approximated in a piecewise smooth or piecewise constant manner. The numerical treatment of the discontinuities in particular requires additional models, which are also outlined in this work. The main part of this thesis is concerned with four different topics.

Simultaneous edge detection and registration of two images: The image edges are detected with the Ambrosio-Tortorelli model, an approximation of the Mumford-Shah model that approximates the discontinuity set with a phase field, and the registration is based on these edges. The registration obtained by this model is fully symmetric in the sense that the same matching is obtained if the roles of the two input images are swapped.

Detection of grain boundaries from atomic-scale images of metals or metal alloys: This is an image processing problem from materials science, where atomic-scale images are obtained either experimentally, for instance by transmission electron microscopy, or by numerical simulation tools. Grains are homogeneous material regions whose atomic lattice orientation differs from their surroundings. Based on a Mumford-Shah type functional, the grain boundaries are modeled as the discontinuity set of the lattice orientation. In addition to the grain boundaries, the model incorporates the extraction of a global elastic deformation of the atomic lattice. Numerically, the discontinuity set is modeled by a level set function following the approach of Chan and Vese.

Joint motion estimation and restoration of motion-blurred video: A variational model for joint object detection, motion estimation and deblurring of consecutive video frames is proposed. For this purpose, a new motion blur model is developed that accurately describes the blur also close to the boundary of a moving object. Here, the video is assumed to consist of an object moving in front of a static background. The segmentation into object and background is handled by a Mumford-Shah type aspect of the proposed model.

Convexification of the binary Mumford-Shah segmentation model: After considering the application of Mumford-Shah type models to specific image processing problems in the previous topics, the Mumford-Shah model itself is studied more closely. Inspired by the work of Nikolova, Esedoglu and Chan, a method is developed that allows global minimization of the binary Mumford-Shah segmentation model by solving a convex, unconstrained optimization problem. In an outlook, segmentation of flow fields into piecewise affine regions using this convexification method is briefly discussed.
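For reference, the Ambrosio-Tortorelli approximation used in the first topic is commonly written in the following form (a standard version; the notation is assumed here rather than taken from the thesis):

```latex
% u: piecewise-smooth approximation of the input image g on the domain Omega
% v: phase field, ~0 near the discontinuity set, ~1 elsewhere
% eps: width of the diffuse edge region; alpha, beta: weights
AT_\varepsilon[u, v] = \int_\Omega (u - g)^2 \, dx
  + \alpha \int_\Omega v^2 \, |\nabla u|^2 \, dx
  + \beta \int_\Omega \Big( \varepsilon \, |\nabla v|^2
      + \frac{(1 - v)^2}{4\varepsilon} \Big) \, dx
```

As ε → 0, this functional Γ-converges to the Mumford-Shah functional, with the phase field v concentrating around the edge set; this is what makes the discontinuity set numerically tractable in the registration model above.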