Compressed Sensing Based Reconstruction Algorithm for X-ray Dose Reduction in Synchrotron Source Micro Computed Tomography
Synchrotron computed tomography requires a large number of angular projections to reconstruct tomographic images with high resolution for detailed and accurate diagnosis. However, this exposes the specimen to a large amount of x-ray radiation. Furthermore, it increases scan time and, consequently, the likelihood of involuntary specimen movements. One approach for decreasing the total scan time and radiation dose is to reduce the number of projection views needed to reconstruct the images. However, the aliasing artifacts that appear in the image due to the reduced number of projections visibly degrade the image quality. According to compressed sensing theory, a signal can be accurately reconstructed from highly undersampled data by solving an optimization problem, provided that the signal can be sparsely represented in a predefined transform domain. Therefore, this thesis is mainly concerned with designing compressed sensing-based reconstruction algorithms to suppress aliasing artifacts while preserving spatial resolution in the resulting reconstructed image. First, the reduced-view synchrotron computed tomography reconstruction is formulated as a total variation regularized compressed sensing problem. The Douglas-Rachford Splitting and the randomized Kaczmarz methods are utilized to solve the optimization problem of the compressed sensing formulation.
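The randomized Kaczmarz building block can be sketched on a toy consistent system; the matrix below is only a stand-in for a (sub-sampled) projection operator, and the total variation regularization handled by Douglas-Rachford in the thesis is omitted:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Solve a consistent system Ax = b by randomized row projections.

    Rows are sampled with probability proportional to their squared norm
    (the Strohmer-Vershynin scheme); each step projects the iterate onto
    the hyperplane defined by the chosen row.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Toy consistent system standing in for reduced-view projection data.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
x_hat = randomized_kaczmarz(A, b)
```

For consistent data the iterate converges to the solution in expectation at a linear rate; in the reduced-view setting the same row-action step is applied to the data-fidelity term inside the regularized formulation.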
In contrast with the first part, where consistent simulated projection data are generated for image reconstruction, the reduced-view inconsistent real ex-vivo synchrotron absorption contrast micro computed tomography bone data are used in the second part. A gradient regularized compressed sensing problem is formulated, and the Douglas-Rachford Splitting and the preconditioned conjugate gradient methods are utilized to solve the optimization problem of the compressed sensing formulation. The wavelet image denoising algorithm is used as the post-processing algorithm to attenuate the unwanted staircase artifact generated by the reconstruction algorithm.
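The wavelet-denoising post-processing step can be illustrated with a minimal one-level Haar soft-threshold sketch; the transform choice, threshold value, and test signal here are illustrative assumptions, not the algorithm used in the thesis:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet soft-threshold denoiser for a 1-D signal
    of even length: transform, shrink the detail band, invert."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x, dtype=float)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

# A piecewise-constant signal with additive Gaussian noise.
rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 32)
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = haar_denoise(noisy, thresh=0.2)
```

Shrinking the detail coefficients suppresses high-frequency noise while the piecewise-constant structure, which lives mostly in the approximation band, is preserved.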
Finally, a noisy and highly reduced-view inconsistent real in-vivo synchrotron phase-contrast computed tomography bone data are used for image reconstruction. A combination of prior image constrained compressed sensing framework, and the wavelet regularization is formulated, and the Douglas-Rachford Splitting and the preconditioned conjugate gradient methods are utilized to solve the optimization problem of the compressed sensing formulation. The prior image constrained compressed sensing framework takes advantage of the prior image to promote the sparsity of the target image. It may lead to an unwanted staircase artifact when applied to noisy and texture images, so the wavelet regularization is used to attenuate the unwanted staircase artifact generated by the prior image constrained compressed sensing reconstruction algorithm.
The visual and quantitative performance assessments with the reduced-view simulated and real computed tomography data from canine prostate tissue, rat forelimb, and femoral cortical bone samples show that the proposed algorithms have fewer artifacts and reconstruction errors than other conventional reconstruction algorithms at the same x-ray dose.
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th till Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building within walking distance of both hotels and town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1
Image-Domain Material Decomposition for Dual-energy CT using Unsupervised Learning with Data-fidelity Loss
Background: Dual-energy CT (DECT) and material decomposition play vital roles
in quantitative medical imaging. However, the decomposition process may suffer
from significant noise amplification, leading to severely degraded image
signal-to-noise ratios (SNRs). While existing iterative algorithms perform
noise suppression using different image priors, these heuristic image priors
cannot accurately represent the features of the target image manifold. Although
deep learning-based decomposition methods have been reported, these methods are
in the supervised-learning framework requiring paired data for training, which
is not readily available in clinical settings.
Purpose: This work aims to develop an unsupervised-learning framework with
data-measurement consistency for image-domain material decomposition in DECT.
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle the ill-posedness which is a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. In spite of achieving a
certain level of development, image deblurring, especially the blind case, is
limited in its success by complex application conditions which make the blur
kernel hard to obtain and be spatially variant. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, as well as a discussion of promising future directions are also
presented.
Comment: 53 pages, 17 figures
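As a minimal concrete instance of the non-blind, spatially invariant case such a review covers, Fourier-domain Wiener deconvolution can be sketched as follows; the kernel, noise-to-signal ratio, and test image are illustrative assumptions:

```python
import numpy as np

def wiener_deblur(blurred, kernel, nsr=1e-3):
    """Non-blind Wiener deconvolution in the Fourier domain.

    Divides by the blur transfer function, regularized by an assumed
    noise-to-signal ratio `nsr` that keeps the inverse filter stable
    where the transfer function is small.
    """
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H)**2 + nsr) * G
    return np.real(np.fft.ifft2(F))

# Blur a toy image with a small box kernel (circular convolution), then invert.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
kernel = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
restored = wiener_deblur(blurred, kernel)
```

The `nsr` term is what tames the ill-posedness the review discusses: a naive inverse filter would divide by near-zero values of the transfer function and amplify noise without bound.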
System Characterizations and Optimized Reconstruction Methods for Novel X-ray Imaging
In the past decade there have been many new emerging X-ray based imaging technologies developed for different diagnostic purposes or imaging tasks. However, there exist one or more specific problems that prevent them from being effectively or efficiently employed. In this dissertation, four different novel X-ray based imaging technologies are discussed, including propagation-based phase-contrast (PB-XPC) tomosynthesis, differential X-ray phase-contrast tomography (D-XPCT), projection-based dual-energy computed radiography (DECR), and tetrahedron beam computed tomography (TBCT). System characteristics are analyzed or optimized reconstruction methods are proposed for these imaging modalities. In the first part, we investigated the unique properties of the propagation-based phase-contrast imaging technique when combined with X-ray tomosynthesis. The Fourier slice theorem implies that the high frequency components collected in the tomosynthesis data can be more reliably reconstructed. It is observed that the fringes or boundary enhancement introduced by the phase-contrast effects can serve as an accurate indicator of the true depth position in the tomosynthesis in-plane image. In the second part, we derived a sub-space framework to reconstruct images from a few-view D-XPCT data set. By introducing a proper mask, the high frequency contents of the image can be theoretically preserved in a certain region of interest. A two-step reconstruction strategy is developed to mitigate the risk of subtle structures being oversmoothed when the commonly used total-variation regularization is employed in the conventional iterative framework. In the third part, we proposed a practical method to improve the quantitative accuracy of the projection-based dual-energy material decomposition.
It is demonstrated that applying a total-projection-length constraint along with the dual-energy measurements can achieve a stabilized numerical solution of the decomposition problem, thus overcoming the disadvantages of the conventional approach, which was extremely sensitive to noise corruption. In the final part, we described the modified filtered backprojection and iterative image reconstruction algorithms specifically developed for TBCT. Special parallelization strategies are designed to facilitate the use of GPU computing, demonstrating the capability of producing high-quality reconstructed volumetric images at very fast computational speed. For all the investigations mentioned above, both simulation and experimental studies have been conducted to demonstrate the feasibility and effectiveness of the proposed methodologies.
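The projection-based two-material decomposition can be sketched as a per-ray 2x2 inversion. The attenuation matrix below is purely illustrative, and the total-projection-length constraint the dissertation uses to stabilize the inversion under noise is omitted for brevity:

```python
import numpy as np

# Hypothetical mass-attenuation matrix: rows = (low, high) energy,
# columns = basis materials (soft tissue, bone). Values are illustrative only.
M = np.array([[0.25, 0.60],
              [0.20, 0.30]])

def decompose(p_low, p_high):
    """Invert the 2x2 spectral model to get basis-material path lengths
    from the low/high-energy log projections of a single ray."""
    return np.linalg.solve(M, np.array([p_low, p_high]))

# Forward-project known path lengths, then recover them.
lengths_true = np.array([8.0, 2.0])          # cm of tissue and bone
p_low, p_high = M @ lengths_true
lengths_hat = decompose(p_low, p_high)
```

Because the two rows of the spectral matrix are nearly collinear in practice, this bare inversion amplifies measurement noise; constraining the sum of the path lengths to the measured total projection length is what restores stability in the approach described above.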
Sampling the Multiple Facets of Light
The theme of this thesis revolves around three important manifestations of light, namely its corpuscular, wave and electromagnetic nature. Our goal is to exploit these principles to analyze, design and build imaging modalities by developing new signal processing and algorithmic tools, based in particular on sampling and sparsity concepts.
First, we introduce a new sampling scheme called variable pulse width, which is based on the finite rate of innovation (FRI) sampling paradigm. This new framework makes it possible to sample and perfectly reconstruct weighted sums of Lorentzians; perfect reconstruction from the sampled signals is guaranteed by a set of theorems.
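The annihilating-filter machinery underlying FRI recovery can be illustrated with a small Prony-style sketch that recovers the modes of a sum of exponentials; this is a generic illustration of the principle, not the thesis's variable-pulse-width scheme:

```python
import numpy as np

def prony_modes(x, K):
    """Recover the K modes u_k of x[n] = sum_k c_k * u_k**n with an
    annihilating (Prony) filter: solve for coefficients a such that
    x[n] + a[0] x[n-1] + ... + a[K-1] x[n-K] = 0, then root the filter."""
    N = len(x)
    # Each column j holds the lagged samples x[n - 1 - j] for n = K..N-1.
    A = np.column_stack([x[K - 1 - j:N - 1 - j] for j in range(K)])
    b = -x[K:N]
    a = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.roots(np.concatenate(([1.0], a)))

# Two decaying modes; a handful of samples suffices for K = 2.
u_true = np.array([0.9, 0.5])
n = np.arange(16)
x = 2.0 * u_true[0]**n + 1.0 * u_true[1]**n
u_hat = np.sort(prony_modes(x, K=2))
```

The key FRI idea is visible here: a signal with few innovations is annihilated by a short filter, so its parameters can be recovered exactly from far fewer samples than classical bandlimited sampling would require.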
Second, we turn to the context of light and study its reflection, which is based on the corpuscular model of light. More precisely, we propose to use our FRI-based model to represent bidirectional reflectance distribution functions. We develop dedicated light domes to acquire reflectance functions and use the measurements obtained to demonstrate the usefulness and versatility of our model. In particular, we concentrate on the representation of specularities, which are sharp and bright components generated by the direct reflection of light on surfaces.
Third, we explore the wave nature of light through Lippmann photography, a century-old photography technique that acquires the entire spectrum of visible light. This fascinating process captures the interference patterns created by the exposed scene inside the depth of a photosensitive plate. When the developed plate is illuminated with a neutral light source, the reflected spectrum corresponds to that of the exposed scene. We propose a mathematical model which precisely explains the technique and demonstrate that the spectrum reproduction suffers from a number of distortions due to the finite depth of the plate and the choice of reflector. In addition to describing these artifacts, we describe an algorithm to invert them, essentially recovering the original spectrum of the exposed scene.
Next, the wave nature of light is further generalized to the electromagnetic theory, which we invoke to leverage the concept of polarization of light. We also return to the topic of the representation of reflectance functions and focus this time on the separation of the specular component from the other reflections. We exploit the fact that the polarization of light is preserved in specular reflections and investigate camera designs with polarizing micro-filters with different orientations placed just in front of the camera sensor; the different polarizations of the filters create a mosaic image, from which we propose to extract the specular component. We apply our demosaicing method to several scenes and additionally demonstrate that our approach improves photometric stereo.
Finally, we delve into the problem of retrieving the phase information of a sparse signal from the magnitude of its Fourier transform. We propose an algorithm that resolves the phase retrieval problem for sparse signals in three stages. Unlike traditional approaches that recover a discrete approximation of the underlying signal, our algorithm estimates the signal on a continuous domain, which makes it the first of its kind.
The concluding chapter outlines several avenues for future research, like new optical devices such as displays and digital cameras, inspired by the topic of Lippmann photography.
Development of a Python Library for Processing Seismic Time Series
Earthquakes occur around the world every day. This natural phenomenon can result in enormous destruction and loss of life. However, at the same time, it is the primary source for studying Earth, the active planet. The seismic waves generated by earthquakes propagate deep into the Earth, carrying considerable information about the Earth's structure, from the shallow depths in the crust to the core. The information transferred by seismic waves needs advanced signal processing and inversion tools to be converted into useful information about the Earth's inner structures, from local to global scales. The ever-evolving interest in investigating the terrestrial system more accurately led to the development of advanced signal processing algorithms to extract optimal information from the recorded seismic waveforms. These algorithms use advanced numerical modeling to extract optimal information from the different seismic phases generated by earthquakes. The development of algorithms from a mathematical-physical point of view is of great interest; on the other hand, developing a platform for their implementation is also significant.
This research aims to build a bridge between the development of purely theoretical ideas in seismology and their functional implementation. In this dissertation, SeisPolPy, a high-quality Python-based library for processing seismic waveforms, is developed. It consists of the latest polarization analysis and filtering algorithms for extracting the different seismic phases in recorded seismograms. The algorithms range from the most common ones in the literature to a newly developed method, sparsity-promoting time-frequency filtering. In addition, the work focuses on the generation of high-quality synthetic seismic data for testing and evaluating the algorithms. The SeisPolPy library aims to provide the seismology community with a tool for separating seismic phases using high-resolution polarization analysis and filtering techniques. The research work is carried out within the framework of the Seismicity and HAzards of the subsaharian Atlantic Margin (SHAZAM) project, which requires high-quality algorithms able to process the limited seismic data available in the Gulf of Guinea, the study area of the SHAZAM project.
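A classic covariance-based polarization measure of the kind such a library builds on can be sketched as follows; the synthetic pulse and window are illustrative, and the rectilinearity formula is the standard eigen-analysis definition, not necessarily SeisPolPy's exact implementation:

```python
import numpy as np

def rectilinearity(z, n, e):
    """Covariance-based polarization measure for a three-component
    seismogram window: eigen-decompose the 3x3 covariance matrix and
    report 1 - (lam2 + lam3) / (2 * lam1), which approaches 1 for a
    linearly polarized arrival such as a P-wave."""
    C = np.cov(np.vstack([z, n, e]))
    lam = np.sort(np.linalg.eigvalsh(C))[::-1]   # descending eigenvalues
    return 1.0 - (lam[1] + lam[2]) / (2.0 * lam[0])

# A nearly linearly polarized synthetic arrival with a little noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
pulse = np.sin(2 * np.pi * 5 * t) * np.exp(-4 * t)
z, n, e = 1.0 * pulse, 0.6 * pulse, 0.3 * pulse
noise = 0.02 * rng.standard_normal((3, 200))
r = rectilinearity(z + noise[0], n + noise[1], e + noise[2])
```

Sliding such a window along the record and thresholding the measure is one simple way to separate linearly polarized body-wave phases from elliptically polarized or diffuse energy.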