15 research outputs found
Feature-preserving image restoration and its application in biological fluorescence microscopy
This thesis presents a new investigation of image restoration and its application to
fluorescence cell microscopy. The first part of the work develops advanced image
denoising algorithms that restore images from noisy observations using a novel
feature-preserving diffusion approach. I have applied these algorithms to different types of
images, including biometric, biological and natural images, and demonstrated their
superior performance for noise removal and feature preservation compared to several
state-of-the-art methods. In the second part of my work, I explore a novel, simple and
inexpensive super-resolution restoration method for quantitative microscopy in cell
biology. In this method, a super-resolution image is restored, through an inverse process,
by using multiple diffraction-limited (low) resolution observations, which are acquired
from conventional microscopes whilst translating the sample parallel to the image plane,
hence referred to as translation microscopy (TRAM). A key to this new development is the
integration of a robust feature detector, developed in the first part, into the inverse process
to restore high-resolution images well above the diffraction limit in the presence of strong
noise. TRAM is a post-acquisition computational method and can be implemented
with any microscope. Experiments show a nearly 7-fold increase in lateral spatial
resolution in noisy biological environments, delivering multi-colour image resolution of
~30 nm.
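The translate-and-acquire idea behind TRAM can be illustrated with a toy shift-and-add reconstruction, in which translated low-resolution frames are interleaved onto a finer grid. This is only a sketch of the general principle under my own assumptions; the actual TRAM method solves a regularised inverse problem with an integrated feature detector, and all names below are illustrative.

```python
import numpy as np

def shift_and_add(observations, shifts, factor):
    """Interleave translated low-resolution frames onto a grid that is
    `factor` times finer. Each observation samples the scene at a known
    sub-pixel offset (dy, dx) on the fine grid."""
    h, w = observations[0].shape
    hires = np.zeros((h * factor, w * factor))
    count = np.zeros_like(hires)
    for obs, (dy, dx) in zip(observations, shifts):
        hires[dy::factor, dx::factor] += obs
        count[dy::factor, dx::factor] += 1
    return hires / np.maximum(count, 1)

# Simulate a 2x acquisition: four translations of the sample yield four
# low-resolution views of one high-resolution scene.
scene = np.arange(16.0).reshape(4, 4)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
views = [scene[dy::2, dx::2] for dy, dx in shifts]
recovered = shift_and_add(views, shifts, factor=2)
```

In this noise-free toy case the fine grid is recovered exactly; with noise and diffraction blur, the inversion additionally needs deconvolution and the robust feature regularisation described in the abstract.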
Adaptive Representations for Image Restoration
In the field of image processing, building good representation models for
natural images is crucial for various applications, such as image restoration,
sampling and segmentation. Adaptive image representation models
are designed to describe the intrinsic structures of natural images. In
classical Bayesian inference, this representation is often known as the
prior of the intensity distribution of the input image. Early image priors
took forms such as the total variation norm, Markov Random Fields (MRF)
and wavelets. More recently, image priors obtained from machine learning
techniques tend to be more adaptive, aiming to capture natural image
models by learning from larger databases. In this thesis, we study adaptive
representations of natural images for image restoration.
The purpose of image restoration is to remove the artifacts that degrade
an image. The degradation comes in many forms, such as image blur,
noise and codec artifacts. Take image denoising as an example.
There are several classic representation methods that can generate state-
of-the-art results. The first is the assumption of image self-similarity.
However, this representation has the issue that the self-similarity
assumption can fail at high noise levels or for unique image content.
The second is the wavelet-based nonlocal representation, whose
fixed basis functions are not adaptive enough for arbitrary types of
input images. The third is sparse coding using over-complete dictionaries,
which lacks a hierarchical structure similar to that of the human visual
system and is therefore prone to denoising artifacts.
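The self-similarity idea behind the first representation can be made concrete with a toy non-local means filter, in which each pixel is replaced by a similarity-weighted average over a search window, with patch distances supplying the weights. This is a minimal sketch for illustration only (parameters and function names are my own, not the thesis's algorithms); it also hints at why the assumption degrades at high noise levels, since the patch distances that drive the weights become unreliable.

```python
import numpy as np

def nlm_denoise(img, patch=1, search=3, h=0.5):
    """Toy non-local means: weight each candidate pixel by the similarity
    of the patch around it to the patch around the pixel being denoised."""
    H, W = img.shape
    out = np.zeros_like(img)
    padded = np.pad(img, patch, mode='reflect')
    for i in range(H):
        for j in range(W):
            ref = padded[i:i + 2 * patch + 1, j:j + 2 * patch + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < H and 0 <= jj < W:
                        cand = padded[ii:ii + 2 * patch + 1,
                                      jj:jj + 2 * patch + 1]
                        w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
                        wsum += w
                        acc += w * img[ii, jj]
            out[i, j] = acc / wsum
    return out

# Denoise a noisy constant image: self-similar patches average the noise out.
rng = np.random.default_rng(0)
noisy = 1.0 + 0.1 * rng.standard_normal((8, 8))
denoised = nlm_denoise(noisy)
```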
My research started from image denoising. Through a thorough review
and evaluation of state-of-the-art denoising methods, it was found that the representation of images is substantially important for the denoising tech-
nique. At the same time, an improvement on one of the nonlocal denoising
methods was proposed, which improves the representation of images by
integrating Gaussian blur, clustering and Rotationally Invariant Block
Matching. Inspired by the successful application of sparse coding in
compressive sensing, we exploited image self-similarity using a sparse
representation based on wavelet coefficients in a nonlocal and hierarchical
way, which generates competitive results compared to state-of-the-art
denoising algorithms. Meanwhile, another adaptive local filter, learned by
Genetic Programming (GP), was proposed for efficient image denoising. In
this work, we employed GP to find the optimal representations for local im-
age patches through training on massive datasets, which yields competitive
results compared to state-of-the-art local denoising filters. After success-
fully dealing with denoising, we moved to parameter estimation
for image degradation models: for instance, image blur identification using
deep learning, which has recently become a popular image repre-
sentation approach. This work has also been extended to blur estimation
by replacing the second step of the framework with a general regression
neural network. In summary, this thesis explores spatial correlations,
sparse coding, genetic programming and deep learning
as adaptive image representation models for both image restoration and
parameter estimation.
We conclude this thesis by considering methods based on machine learning
to be the best adaptive representations for natural images. We have shown
that they can generate better results than conventional representation mod-
els for the tasks of image denoising and deblurring.
Deep learning for inverse problems in remote sensing: super-resolution and SAR despeckling
The abstract is in the attachment.
Smoothing of ultrasound images using a new selective average filter
Ultrasound images are strongly affected by speckle noise making visual and computational analysis of the
structures more difficult. Usually, the interference caused by this kind of noise reduces the efficiency of
extraction and interpretation of the structural features of interest. In order to overcome this problem, a
new method of selective smoothing based on average filtering and the radiation intensity of the image
pixels is proposed. The main idea of this new method is to identify the pixels belonging to the borders
of the structures of interest in the image, and then apply a reduced smoothing to these pixels, whilst
applying more intense smoothing to the remaining pixels. Experimental tests were conducted using synthetic
ultrasound images with speckle noise added and real ultrasound images from the female pelvic
cavity. The new smoothing method is able to perform selective smoothing on the input images, enhancing
the transitions between the different structures present. The results achieved are promising, as the
evaluation analysis performed shows that the developed method is more efficient in removing speckle
noise from ultrasound images than other current methods. This improvement arises because the method
is able to adapt the filtering process according to the image content, thus avoiding the loss of relevant
structural features in the input images.
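The selective scheme described above can be sketched as follows: classify pixels as border or interior, then apply a small mean filter on borders and a larger one elsewhere. The sketch is my own illustration under stated assumptions; in particular, the published method derives its selection from the radiation intensity of the pixels, whereas the gradient test below is a stand-in.

```python
import numpy as np

def box_filter(img, r):
    """Mean filter with radius r, using reflect padding at the borders."""
    p = np.pad(img, r, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = p[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

def selective_smooth(img, grad_thresh=0.2):
    """Light smoothing on likely border pixels, stronger smoothing
    elsewhere (gradient-based selection is an illustrative assumption)."""
    gy, gx = np.gradient(img.astype(float))
    edges = np.hypot(gx, gy) > grad_thresh
    light, strong = box_filter(img, 1), box_filter(img, 2)
    return np.where(edges, light, strong)

# A vertical step edge: border pixels keep most of their contrast while
# flat regions receive the stronger smoothing.
img = np.zeros((10, 10))
img[:, 5:] = 1.0
smoothed = selective_smooth(img)
```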
Real-time video noise reduction based on the fusion of the Kalman filter and the bilateral filter
Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação, Florianópolis, 2016. In this work, a real-time video noise-reduction filter is proposed, based on the fusion of a modified Kalman filter and a bilateral filter, taking advantage of the spatial and temporal characteristics of the images while preserving contours and features essential to human and computer vision. The proposed algorithm, called STMKF, maintains the original Kalman filter behaviour in motionless regions and applies the bilateral filter in regions with motion, which makes the Kalman filter converge faster to the newly acquired values. Experimental results show that the proposed filter is competitive with others, particularly in videos with mostly static backgrounds. Performance evaluation on CPUs and GPUs shows that STMKF is viable in real time, filtering approximately 30 FullHD frames per second on an Intel i7 and over 1000 FPS for a 480p video on a GPU.
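The fusion described above can be sketched as a per-pixel temporal Kalman update with a motion fallback. The code below is a simplification under my own assumptions: where the real STMKF blends in a spatial bilateral filter on moving regions, this toy version simply trusts the new observation there, which likewise makes the filter re-converge quickly after motion.

```python
import numpy as np

def stmkf_step(frame, x, p, q=1e-4, r=0.05 ** 2, motion_thresh=0.3):
    """One temporal step of a simplified STMKF-style filter.
    x, p : per-pixel state estimate and error variance.
    q, r : process and measurement noise variances (assumed values).
    Static pixels receive a scalar Kalman update; pixels whose innovation
    exceeds `motion_thresh` are treated as moving and reset toward the
    observation (the real STMKF applies a bilateral filter there)."""
    p = p + q                               # predict
    k = p / (p + r)                         # Kalman gain
    x_new = x + k * (frame - x)             # correct with the new frame
    p_new = (1.0 - k) * p
    moving = np.abs(frame - x) > motion_thresh
    x_new = np.where(moving, frame, x_new)  # trust observation under motion
    p_new = np.where(moving, r, p_new)
    return x_new, p_new

# Filter a static noisy sequence: the estimate error shrinks over time.
rng = np.random.default_rng(1)
frames = [0.5 + 0.05 * rng.standard_normal((16, 16)) for _ in range(20)]
x, p = frames[0], np.ones_like(frames[0])
for f in frames[1:]:
    x, p = stmkf_step(f, x, p)
```

Thresholding the innovation is only a stand-in motion detector; replacing the moving-pixel branch with a bilateral filter is what gives STMKF its edge-preserving behaviour under motion.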
Cross-Modality Feature Learning for Three-Dimensional Brain Image Synthesis
Multi-modality medical imaging is increasingly used for comprehensive assessment of complex diseases, either in diagnostic examinations or as part of medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always possible due to adverse factors such as patient discomfort, increased cost, prolonged scanning time and scanner unavailability. In addition, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromises the potential of multi-modal acquisitions. Moreover, regardless of how good an imaging system is, the performance of the imaging equipment is ultimately limited by its physical components. Additional interferences arise (particularly for medical imaging systems), for example, limited acquisition times, sophisticated and costly equipment, and patients with severe medical conditions, which also cause image degradation. The acquisitions can therefore be considered degraded versions of the original high-quality images.
In this dissertation, we explore the problems of image super-resolution and cross-modality synthesis for one Magnetic Resonance Imaging (MRI) modality from an image of another MRI modality of the same subject using an image synthesis framework for reconstructing the missing/complex modality data. We develop models and techniques that allow us to connect the domain of source modality data and the domain of target modality data, enabling transformation between elements of
the two domains. In particular, we first introduce models that project both source modality data and target modality data into a common multi-modality feature space in a supervised setting. This common space then allows us to connect cross-modality features that depict a relationship between each other, and we can impose the learned association function to synthesize any target modality image. Moreover, we develop a weakly-supervised method that takes a few registered multi-modality image pairs as training data and generates the desired modality data without being constrained by a large collection of well-processed (e.g., skull-stripped and strictly registered) multi-modality brain data. Finally, we propose an approach that provides a generic way of learning a dual mapping between source and target domains while considering both visually high-fidelity synthesis and task practicability. We demonstrate that this model can take an arbitrary modality and efficiently synthesize the desired modality data in an unsupervised manner.
We show that these proposed models advance the state of the art in image super-resolution and cross-modality synthesis tasks that require joint processing of multi-modality images, and that we can design the algorithms to generate data that is practically beneficial to medical image analysis.
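The supervised setting described above (paired data, a learned association function between modality feature spaces) can be illustrated with a deliberately minimal stand-in: a linear least-squares map learned from paired feature vectors and applied to unseen source data. The dissertation's actual models learn far richer, non-linear mappings; every name and the linearity assumption below are mine.

```python
import numpy as np

# Toy supervised cross-modality mapping: learn a linear map W from paired
# source/target feature vectors, then synthesize the target modality
# features for unseen source data.
rng = np.random.default_rng(0)
true_map = rng.standard_normal((4, 4))      # ground-truth association
src_train = rng.standard_normal((100, 4))   # source-modality features
tgt_train = src_train @ true_map            # paired target-modality features

# Least-squares fit of the association function on the paired training set.
W, *_ = np.linalg.lstsq(src_train, tgt_train, rcond=None)

# Synthesize the target modality for new source-modality inputs.
src_new = rng.standard_normal((5, 4))
tgt_synth = src_new @ W
```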