Deep Learning frameworks for Image Quality Assessment
Deep learning has advanced rapidly and finds wide application in image processing, where it often outperforms purely statistical methods. In this research work, I implemented image quality assessment techniques using deep learning, proposing two full-reference and two no-reference image quality assessment algorithms. Of the two algorithms in each category, one operates in a supervised manner and the other in an unsupervised manner.
The first proposed method is full-reference image quality assessment using an autoencoder. Existing literature shows that the statistical features of pristine images become disturbed in the presence of distortion. It is advantageous if the algorithm itself learns the distortion-discriminating features, and handcrafted features become unwieldy as the feature length grows. An autoencoder is therefore trained on a large number of pristine images, yielding a compact lower-dimensional representation of its input. It is shown that the resulting encoded distance features have good distortion-discrimination properties, and the proposed algorithm delivers competitive performance on standard databases.
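The encoded-distance idea can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the linear `encode` map is a hypothetical stand-in for a trained autoencoder's encoder, and the patch size and latent dimension are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for a trained autoencoder's encoder: a fixed linear
# map followed by a tanh nonlinearity (32x32 patch -> 64-d latent code).
W = rng.standard_normal((64, 32 * 32)) / 32.0

def encode(patch):
    """Map a 32x32 patch to a 64-d latent code."""
    return np.tanh(W @ patch.ravel())

def encoded_distance_features(ref, dist, patch=32):
    """Per-patch absolute distances between the latent codes of the reference
    and distorted images; these act as distortion-discriminating features."""
    feats = []
    for i in range(0, ref.shape[0] - patch + 1, patch):
        for j in range(0, ref.shape[1] - patch + 1, patch):
            zr = encode(ref[i:i + patch, j:j + patch])
            zd = encode(dist[i:i + patch, j:j + patch])
            feats.append(np.abs(zr - zd))
    return np.stack(feats)

ref = rng.random((64, 64))
dist = np.clip(ref + 0.1 * rng.standard_normal(ref.shape), 0.0, 1.0)
feats = encoded_distance_features(ref, dist)  # 4 patches x 64 features each
```

In the full method these features would then feed a regressor (supervised) or a distance-based measure (unsupervised) to produce the final quality score.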
If both the reference and distorted images are given to a model that learns by itself and outputs the scores, the burden of feature extraction and post-processing is removed; the model, however, must be capable of discriminating the features on its own. The second proposed method is full-reference and no-reference image quality assessment using deep convolutional neural networks. A network is trained in a supervised manner with subjective scores as targets. The algorithm performs efficiently for the distortions that were learned while training the model.
The last proposed method is a classification-based no-reference image quality assessment. The distortion level in an image may vary from one region to another: distortion may be invisible in some parts yet present in others. A classification model can tell whether a given input patch is of low or high quality. It is shown that the aggregate of the patch quality scores has a high correlation with the subjective scores.
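A minimal sketch of the patch-level aggregation, with a local-variance threshold standing in for the trained patch classifier (the heuristic, its threshold, and the patch size are illustrative assumptions only):

```python
import numpy as np

rng = np.random.default_rng(1)

def patch_quality(patch, noise_thresh=0.05):
    """Stand-in patch classifier: call a patch high quality (1.0) when its
    high-frequency energy is low. A trained CNN classifier replaces this."""
    highpass = patch - patch.mean()
    return 1.0 if highpass.std() < noise_thresh else 0.0

def image_score(img, patch=16):
    """Aggregate the per-patch quality decisions into one image-level score,
    so regions with different distortion levels each contribute."""
    scores = [patch_quality(img[i:i + patch, j:j + patch])
              for i in range(0, img.shape[0] - patch + 1, patch)
              for j in range(0, img.shape[1] - patch + 1, patch)]
    return float(np.mean(scores))

clean = np.full((64, 64), 0.5)
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
score_clean = image_score(clean)   # every patch passes
score_noisy = image_score(noisy)   # noisy patches fail, lowering the score
```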
Image enhancement methods and applications in computational photography
Computational photography is currently a rapidly developing and cutting-edge topic in the applied optics, image sensor and image processing fields, going beyond the limitations of traditional photography. The innovations of computational photography allow the photographer not merely to take an image, but, more importantly, to perform computations on the captured image data. Good examples of these innovations include high dynamic range imaging, focus stacking, super-resolution, motion deblurring and so on. Although extensive work has been done to explore image enhancement techniques in each subfield of computational photography, attention has seldom been given to the study of image enhancement techniques that simultaneously extend the depth of field and the dynamic range of a scene. In my dissertation, I present an algorithm which combines focus stacking and high dynamic range (HDR) imaging in order to produce an image with both a wider depth of field (DOF) and a wider dynamic range than any of the input images. In this dissertation, I also investigate super-resolution image restoration from multiple images, which are possibly degraded by large motion blur. The proposed algorithm combines the super-resolution problem and the blind image deblurring problem in a unified framework. The blur kernel for each input image is separately estimated. I also do not make any restrictions on the motion fields among images; that is, I estimate a dense motion field without simplifications such as parametric motion. While the proposed super-resolution method uses multiple regular images to enhance spatial resolution, single-image super-resolution is related to techniques of denoising or removing blur from one single captured image. In my dissertation, space-varying point spread function (PSF) estimation and image deblurring for a single image are also investigated. Regarding the PSF estimation, I do not make any restrictions on the type of blur or how the blur varies spatially.
Once the space-varying PSF is estimated, space-varying image deblurring is performed, which produces good results even for regions where it is initially unclear what the correct PSF is. I also bring image enhancement applications to both the personal computer (PC) and Android platforms as computational photography applications.
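The combination of focus stacking and HDR described above amounts to fusing an image stack with per-pixel weights that reward both good exposure and local sharpness. A rough sketch under that assumption follows; the constants and the Laplacian sharpness measure are illustrative choices, not the dissertation's actual algorithm.

```python
import numpy as np

def fusion_weights(img):
    """Per-pixel weight rewarding well-exposedness (Gaussian around mid-gray)
    and local sharpness (absolute discrete Laplacian). Constants are
    illustrative assumptions."""
    exposedness = np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2))
    lap = np.abs(4 * img
                 - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)
                 - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    return exposedness * (lap + 1e-6)

def fuse(stack):
    """Normalize the weights across the stack and blend the frames."""
    imgs = np.stack(stack)
    w = np.stack([fusion_weights(im) for im in stack])
    w /= w.sum(axis=0, keepdims=True)
    return (w * imgs).sum(axis=0)

rng = np.random.default_rng(3)
# Toy stack: a darker and a brighter rendering of the same random scene.
stack = [np.clip(rng.random((16, 16)) * s, 0.0, 1.0) for s in (0.5, 1.0)]
fused = fuse(stack)
```

Because each output pixel is a convex combination of the input pixels, the fused image stays within the input range while favoring the best-exposed, sharpest frame at every location.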
Image Restoration
This book represents a sample of recent contributions of researchers all around the world in the field of image restoration. The book consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). Topics cover different aspects of the theory of image restoration, but this book is also an occasion to highlight some new topics of research related to the emergence of original imaging devices. From these arise some real challenging problems related to image reconstruction/restoration that open the way to new fundamental scientific questions closely related to the world we interact with.
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle the ill-posedness which is a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. In spite of considerable
progress, image deblurring, especially the blind case, remains limited by
complex application conditions that make the blur kernel spatially variant
and hard to obtain. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, as well as a discussion of promising future directions are also
presented.
Comment: 53 pages, 17 figures
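As a concrete instance of taming the ill-posedness mentioned above, the classical Wiener filter (a textbook non-blind baseline, not a method from this survey) adds a noise-to-signal constant to the denominator so the inversion does not explode where the blur kernel's spectrum is small:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, k=1e-3):
    """Non-blind frequency-domain deconvolution. The constant k (a
    noise-to-signal ratio) regularizes the inversion: naive division by H
    blows up wherever |H| is small, which is the ill-posedness in action."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

# Synthetic check: blur a smooth image with a separable kernel, then restore.
img = np.outer(np.hanning(32), np.hanning(32))
kernel = np.outer([0.25, 0.5, 0.25], [0.25, 0.5, 0.25])
H = np.fft.fft2(kernel, s=img.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
restored = wiener_deconvolve(blurred, kernel)
```

A larger k trades residual blur for noise robustness; the Bayesian, variational, and sparsity-based frameworks surveyed here generalize this scalar trade-off into learned or structured image priors.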
Filter-Based Probabilistic Markov Random Field Image Priors: Learning, Evaluation, and Image Analysis
Markov random fields (MRF) based on linear filter responses are one of the most popular forms for modeling image priors due to their rigorous probabilistic interpretations and versatility in various applications. In this dissertation, we propose an application-independent method to quantitatively evaluate MRF image priors using model samples. To this end, we developed an efficient auxiliary-variable Gibbs sampler for a general class of MRFs with flexible potentials. We found that the popular pairwise and high-order MRF priors capture image statistics quite roughly and exhibit poor generative properties. We further developed new learning strategies and obtained high-order MRFs that well capture the statistics of the inbuilt features, thus being real maximum-entropy models, and other important statistical properties of natural images, outlining the capabilities of MRFs. We suggest a multi-modal extension of MRF potentials which not only allows training more expressive priors, but also helps to reveal more insights into MRF variants, based on which we are able to train compact, fully-convolutional restricted Boltzmann machines (RBM) that can model visual repetitive textures even better than more complex and deep models.
The learned high-order MRFs allow us to develop new methods for various real-world image analysis problems. For denoising of natural images and deconvolution of microscopy images, the MRF priors are employed in a pure generative setting. We propose efficient sampling-based methods to infer Bayesian minimum mean squared error (MMSE) estimates, which substantially outperform maximum a-posteriori (MAP) estimates and can compete with state-of-the-art discriminative methods. For non-rigid registration of live cell nuclei in time-lapse microscopy images, we propose a global optical flow-based method. The statistics of noise in fluorescence microscopy images are studied to derive an adaptive weighting scheme for increasing model robustness. High-order MRFs are also employed to train image filters for extracting important features of cell nuclei, and the deformations of nuclei are then estimated in the learned feature spaces. The developed method outperforms previous approaches in terms of both registration accuracy and computational efficiency.
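The gap between MMSE and MAP estimates can be seen on a toy one-dimensional posterior: the sampling-based MMSE estimate is the mean of posterior samples, whereas MAP picks the highest mode. This is a didactic stand-in for the image posterior, not the dissertation's sampler.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy bimodal "posterior": 70% of the mass near 0.0, 30% near 1.0.
near_zero = rng.random(100_000) < 0.7
samples = np.where(near_zero,
                   rng.normal(0.0, 0.05, 100_000),
                   rng.normal(1.0, 0.05, 100_000))
mmse_estimate = samples.mean()  # posterior mean: about 0.3
map_estimate = 0.0              # highest-density mode of this mixture
```

Under squared-error loss the posterior mean is optimal, which is why sampling-based MMSE estimates can beat MAP even though MAP returns the single "most probable" configuration.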
A multi-frame super-resolution algorithm using POCS and wavelets
Super-Resolution (SR) is a generic term referring to a series of digital image processing techniques in which a high resolution (HR) image is reconstructed from a set of low resolution (LR) video frames or images. In other words, an HR image is obtained by integrating several LR frames captured from the same scene within a very short period of time. Constructing an SR image is a process that may require a lot of computational resources. To manage this, the SR reconstruction process involves three steps, namely image registration, degradation function estimation and image restoration. In this thesis, the fundamental process steps in SR image reconstruction algorithms are first introduced. Several known SR image reconstruction approaches are then discussed in detail. These SR reconstruction methods include: (1) traditional interpolation, (2) the frequency domain approach, (3) iterative back-projection (IBP), (4) conventional projections onto convex sets (POCS) and (5) regularized inverse optimization. Based on the analysis of some of the existing methods, a Wavelet-based POCS SR image reconstruction method is proposed. The new method is an extension of the conventional POCS method that performs some of the convex projection operations in the Wavelet domain. The stochastic Wavelet coefficient refinement technique is used to adjust the Wavelet sub-image coefficients of the estimated HR image according to the stochastic F-distribution in order to eliminate noisy or wrongly estimated pixels. The proposed SR method enhances the resulting quality of the reconstructed HR image, while retaining the simplicity of the conventional POCS method as well as increasing the convergence speed of the POCS iterations. Simulation results show that the proposed Wavelet-based POCS iterative algorithm has led to some distinct features and performance improvements as compared to some of the SR approaches reviewed in this thesis.
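The conventional POCS step can be sketched as alternating projections onto each frame's data-consistency set. This toy version assumes pre-registered frames and plain block-average downsampling, and omits the wavelet-domain refinement the thesis proposes.

```python
import numpy as np

def project_data(hr, lr, scale=2):
    """Orthogonally project the HR estimate onto the convex set of images
    whose block-averaged downsampling reproduces the observed LR frame."""
    hr = hr.copy()
    for i in range(lr.shape[0]):
        for j in range(lr.shape[1]):
            block = hr[i * scale:(i + 1) * scale, j * scale:(j + 1) * scale]
            block += lr[i, j] - block.mean()  # spread the residual evenly
    return hr

def pocs_sr(lr_frames, scale=2, iters=10):
    """Alternate projections onto each frame's data-consistency set."""
    hr = np.kron(lr_frames[0], np.ones((scale, scale)))  # initial guess
    for _ in range(iters):
        for lr in lr_frames:
            hr = project_data(hr, lr, scale)
    return hr

rng = np.random.default_rng(5)
lr_frames = [rng.random((4, 4)) for _ in range(3)]
hr = pocs_sr(lr_frames)  # 8x8 estimate consistent with the observations
```

Each projection is itself a simple closed-form update, which is the simplicity the thesis aims to retain while moving some projections into the wavelet domain.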
Single atom imaging with time-resolved electron microscopy
Developments in scanning transmission electron microscopy (STEM) have opened
up new possibilities for time-resolved imaging at the atomic scale. However, rapid
imaging of single atom dynamics brings with it a new set of challenges, particularly
regarding noise and the interaction between the electron beam and the specimen. This
thesis develops a set of analytical tools for capturing atomic motion and analyzing the
dynamic behaviour of materials at the atomic scale.
Machine learning is increasingly playing an important role in the analysis of electron
microscopy data. In this light, new unsupervised learning tools are developed here for
noise removal under low-dose imaging conditions and for identifying the motion of
surface atoms. The scope for real-time processing and analysis is also explored, which is
of rising importance as electron microscopy datasets grow in size and complexity.
These advances in image processing and analysis are combined with computational
modelling to uncover new chemical and physical insights into the motion of atoms
adsorbed onto surfaces. Of particular interest are systems for heterogeneous catalysis,
where the catalytic activity can depend intimately on the atomic environment. The
study of Cu atoms on a graphene oxide support reveals that the atoms undergo
anomalous diffusion as a result of spatial and energetic disorder present in the substrate.
The investigation is extended to examine the structure and stability of small Cu clusters
on graphene oxide, with atomistic modelling used to understand the significant role
played by the substrate. Finally, the analytical methods are used to study the surface
reconstruction of silicon alongside the electron beam-induced motion of adatoms on
the surface.
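Anomalous diffusion of the kind reported for Cu atoms is typically diagnosed by fitting the mean squared displacement (MSD) of a tracked trajectory to a power law in the time lag. A minimal sketch follows; the trajectory format and lag range are assumptions, not details from the thesis.

```python
import numpy as np

def msd(traj, max_lag=50):
    """Mean squared displacement of a (T, 2) trajectory versus time lag."""
    lags = np.arange(1, min(len(traj) // 4, max_lag))
    return lags, np.array([np.mean(np.sum((traj[l:] - traj[:-l]) ** 2, axis=1))
                           for l in lags])

def diffusion_exponent(traj):
    """Fit MSD ~ lag**alpha in log-log space: alpha == 1 indicates normal
    diffusion, alpha < 1 subdiffusion (as found for atoms on a disordered
    substrate), alpha > 1 superdiffusion."""
    lags, m = msd(traj)
    alpha, _ = np.polyfit(np.log(lags), np.log(m), 1)
    return float(alpha)

rng = np.random.default_rng(6)
walk = np.cumsum(rng.standard_normal((10_000, 2)), axis=0)  # normal diffusion
alpha = diffusion_exponent(walk)  # close to 1 for an unbiased random walk
```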
Taken together, these studies demonstrate the materials insights that can be obtained
with time-resolved STEM imaging, and highlight the importance of combining
state-of-the-art imaging with computational analysis and atomistic modelling to
quantitatively characterize the behaviour of materials with atomic resolution.
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007–2013)/ERC grant agreement 291522–3DIMAGE, as well as from the European Union Seventh Framework Programme under Grant Agreement 312483-ESTEEM2 (Integrated Infrastructure Initiative - I3)
Towards Joint Super-Resolution and High Dynamic Range Image Reconstruction
The main objective for digital image- and video camera systems is to reproduce a real-world scene in such a way that a high visual quality is obtained. A crucial aspect in this regard is, naturally, the quality of the hardware components of the camera device. There are, however, always some undesired limitations imposed by the sensor of the camera. To begin with, the dynamic range of light intensities that the sensor can capture in its nonsaturated region is much smaller than the dynamic range of most common daylight scenes. Secondly, the achievable spatial resolution of the camera is limited, especially for video capture with a high frame rate. Signal processing software algorithms can be used that fuse the information from a sequence of images into one enhanced image. Thus, the dynamic range limitation can be overcome, and the spatial resolution can be improved.
This thesis discusses different methods that utilize data from a set of multiple images that exhibit photometric diversity, spatial diversity, or both. For the case where the images are differently exposed, photometric alignment is performed prior to reconstructing an image of a higher dynamic range. For the case where there is spatial diversity, a Super-Resolution reconstruction method is applied, in which an inverse problem is formulated and solved to obtain a high resolution reconstruction result. For either case, as well as for the optimistic and promising combination of the two methods, the problem formulation should consider how the scene information is perceived by humans. Incorporating the properties of the human vision system in novel mathematical formulations for joint high dynamic range and high resolution image reconstruction is the main contribution of the thesis, in particular of the published papers that are included. The potential usefulness of high dynamic range image reconstruction on the one hand, and Super-Resolution image reconstruction on the other, are demonstrated. Finally, the combination of the two is discussed and results from simulations are given.
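The photometric alignment step can be sketched as mapping each differently exposed frame onto a common radiance scale before a weighted merge. This sketch assumes a linear camera response and ignores saturation handling and response-curve estimation, which real HDR pipelines must address.

```python
import numpy as np

def photometric_align(images, exposure_times):
    """Map differently exposed frames onto a common radiance scale by
    dividing out the exposure time (assumes a linear camera response)."""
    return [img / t for img, t in zip(images, exposure_times)]

def hdr_merge(images, exposure_times):
    """Blend the aligned radiance maps, weighting mid-range pixels highest
    since they are least likely to be clipped or dominated by noise."""
    aligned = photometric_align(images, exposure_times)
    weights = [np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2)) for img in images]
    num = sum(w * a for w, a in zip(weights, aligned))
    den = sum(weights)
    return num / np.maximum(den, 1e-8)

rng = np.random.default_rng(7)
radiance = 0.1 + 0.4 * rng.random((8, 8))           # true scene radiance
times = [1.0, 2.0]
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]
merged = hdr_merge(shots, times)                    # recovers the radiance
```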