A New Image Quality Database for Multiple Industrial Processes
Recent years have witnessed a broader range of applications of image
processing technologies in multiple industrial processes, such as smoke
detection, security monitoring, and workpiece inspection. Various types and
levels of distortion may be introduced into an image during acquisition,
compression, transmission, storage, and display, and these distortions can
heavily degrade image quality and thus the clarity of the final display. To
verify the reliability of existing image
quality assessment methods, we establish a new industrial process image
database (IPID), which contains 3000 distorted images generated by applying
different types and levels of distortion to each of the 50 source images. We
then conduct a subjective test on these 3000 images to collect quality
ratings in a controlled laboratory environment. Finally, we perform
comparison experiments on the IPID database to investigate the performance
of some objective image quality assessment algorithms. The experimental results
show that the state-of-the-art image quality assessment methods have difficulty
in predicting the quality of images that contain multiple distortion types.
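The abstract does not spell out the evaluation protocol, but the performance of objective IQA algorithms against subjective ratings is conventionally measured by correlating predictions with mean opinion scores (MOS). A minimal sketch of that comparison using scipy, with hypothetical data in place of the IPID scores:

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

def evaluate_iqa(predicted, mos):
    """Correlate an IQA algorithm's predictions with subjective MOS.
    Returns (SROCC, PLCC), the two correlations most commonly
    reported in IQA benchmarking."""
    srocc, _ = spearmanr(predicted, mos)  # monotonic (rank) agreement
    plcc, _ = pearsonr(predicted, mos)    # linear agreement
    return srocc, plcc

# Hypothetical data: one predicted score and one MOS per distorted image.
rng = np.random.default_rng(0)
mos = rng.uniform(1.0, 5.0, size=3000)
predicted = mos + rng.normal(0.0, 0.5, size=3000)  # an imperfect metric
print(evaluate_iqa(predicted, mos))
```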
How is Gaze Influenced by Image Transformations? Dataset and Model
Data size is the bottleneck for developing deep saliency models, because
collecting eye-movement data is very time-consuming and expensive. Most
current studies on human attention and saliency modeling have used
high-quality stereotype stimuli. In the real world, however, captured images undergo various
types of transformations. Can we use these transformations to augment existing
saliency datasets? Here, we first create a novel saliency dataset including
fixations of 10 observers over 1900 images degraded by 19 types of
transformations. Second, by analyzing eye movements, we find that observers
look at different locations over transformed versus original images. Third, we
utilize the new data over transformed images, called data augmentation
transformation (DAT), to train deep saliency models. We find that
label-preserving DATs with negligible impact on human gaze boost saliency
prediction, whereas other DATs that severely impact human gaze degrade
performance. These valid, label-preserving augmentation transformations provide
a solution to enlarge existing saliency datasets. Finally, we introduce a novel
saliency model based on generative adversarial network (dubbed GazeGAN). A
modified UNet is proposed as the generator of GazeGAN, combining classic
skip connections with a novel center-surround connection (CSC) to leverage
multi-level features. We also propose a histogram loss based on the
Alternative Chi-Square Distance (ACS HistLoss) to refine the saliency map in
terms of luminance distribution. Extensive experiments and comparisons over 3
datasets indicate that GazeGAN achieves the best performance in terms of
popular saliency evaluation metrics, and is more robust to various
perturbations. Our code and data are available at:
https://github.com/CZHQuality/Sal-CFS-GAN
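The exact formulation of ACS HistLoss is not given in the abstract; the sketch below assumes the common "alternative" Chi-Square distance, 2·Σ(a−b)²/(a+b), computed over normalized luminance histograms. Note that a hard histogram is not differentiable, so a soft-histogram approximation would be needed to use this as an actual training loss:

```python
import numpy as np

def acs_histogram_distance(map_a, map_b, bins=256, eps=1e-8):
    """Alternative Chi-Square distance between the luminance histograms
    of two saliency maps in [0, 1] (assumed formulation, not the
    paper's exact ACS HistLoss)."""
    h_a, _ = np.histogram(map_a, bins=bins, range=(0.0, 1.0))
    h_b, _ = np.histogram(map_b, bins=bins, range=(0.0, 1.0))
    h_a = h_a / (h_a.sum() + eps)  # normalize to probability mass
    h_b = h_b / (h_b.sum() + eps)
    return 2.0 * np.sum((h_a - h_b) ** 2 / (h_a + h_b + eps))
```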
Multimodal enhancement-fusion technique for natural images
Masters Degree. University of KwaZulu-Natal, Durban. This dissertation presents a multimodal enhancement-fusion (MEF) technique for natural images. The MEF is expected to contribute value to machine vision applications and personal image collections for the human user. Image enhancement techniques, and the metrics used to assess their performance, are prolific, and each is usually optimised for a specific objective. The MEF proposes a framework that adaptively fuses multiple enhancement objectives into a seamless pipeline. Given a segmented input image and a set of enhancement methods, the MEF applies all the enhancers to the image in parallel. The most appropriate enhancement in each image segment is identified, and finally, the differentially enhanced segments are seamlessly fused. To begin with, this dissertation studies targeted contrast enhancement methods and performance metrics that can be utilised in the proposed MEF. It addresses a selection of objective assessment metrics for contrast-enhanced images and determines their relationship with the subjective assessment of human visual systems. This is to identify which objective metrics best approximate human assessment and may therefore be used as an effective replacement for tedious human assessment surveys. A subsequent human visual assessment survey is conducted on the same dataset to ascertain image quality as perceived by a human observer. The interrelated concepts of naturalness and detail were found to be key motivators of human visual assessment. Findings show that when assessing the quality or accuracy of these methods, no single quantitative metric correlates well with human perception of naturalness and detail; however, a combination of two or more metrics may be used to approximate the complex human visual response.
Thereafter, this dissertation proposes the multimodal enhancer that adaptively selects the optimal enhancer for each image segment. MEF focusses on improving chromatic irregularities such as poor contrast distribution. It deploys a concurrent enhancement pathway that subjects an image to multiple image enhancers in parallel, followed by a fusion algorithm that creates a composite image combining the strengths of each enhancement path. The study develops a framework for parallel image enhancement, followed by parallel image assessment and selection, leading to a final merging of selected regions from the enhanced set. The output combines desirable attributes from each enhancement pathway to produce a result that is superior to each path taken alone. The study showed that the proposed MEF technique performs well for most image types. MEF is subjectively favourable to a human panel and achieves better performance on objective image quality assessment compared to other enhancement methods.
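The dissertation's own code is not reproduced here; the following is a minimal sketch of the parallel enhance-assess-select idea it describes, with a hypothetical API and hard per-segment selection standing in for the seamless fusion step:

```python
import numpy as np

def multimodal_enhance(image, segments, enhancers, score):
    """Apply every enhancer to the whole image, score each result on
    each segment, and assemble the best-scoring segments (hypothetical
    sketch; the dissertation blends segment boundaries seamlessly).

    image:     H x W float array in [0, 1] (grayscale for simplicity)
    segments:  H x W int array of segment labels
    enhancers: callables mapping an image to an enhanced image
    score:     callable mapping a pixel sample to a quality score
    """
    candidates = [enhance(image) for enhance in enhancers]  # parallel paths
    fused = np.empty_like(image)
    for label in np.unique(segments):
        mask = segments == label
        best = max(candidates, key=lambda c: score(c[mask]))
        fused[mask] = best[mask]
    return fused

# Hypothetical usage with variance as a stand-in contrast score.
rng = np.random.default_rng(1)
img = rng.uniform(size=(64, 64))
segs = np.repeat(np.array([0, 1]), 32)[:, None] * np.ones(64, dtype=int)
enhancers = [lambda x: x, lambda x: np.clip(1.3 * x - 0.15, 0.0, 1.0)]
print(multimodal_enhance(img, segs, enhancers, score=np.var).shape)
```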
Extended object reconstruction in adaptive-optics imaging: the multiresolution approach
We propose the application of multiresolution transforms, such as wavelets
(WT) and curvelets (CT), to the reconstruction of images of extended objects
that have been acquired with adaptive optics (AO) systems. Such multichannel
approaches normally make use of probabilistic tools in order to distinguish
significant structures from noise and reconstruction residuals. Furthermore, we
aim to check the historical assumption that image-reconstruction algorithms
using static PSFs are not suitable for AO imaging. We convolve an image of
Saturn taken with the Hubble Space Telescope (HST) with AO PSFs from the 5-m
Hale telescope at the Palomar Observatory and add both shot and readout noise.
Subsequently, we apply different approaches to the blurred and noisy data in
order to recover the original object. The approaches include multi-frame blind
deconvolution (with the algorithm IDAC), myopic deconvolution with
regularization (with MISTRAL) and wavelets- or curvelets-based static PSF
deconvolution (AWMLE and ACMLE algorithms). We use the mean squared error
(MSE) and the structural similarity index (SSIM) to compare the results, and
we discuss the strengths and weaknesses of the two metrics. We find that CT
produces better results than WT, as measured by both MSE and SSIM.
Multichannel deconvolution with a static PSF produces results which are
generally better than the results obtained with the myopic/blind approaches
(for the images we tested), thus showing that the ability of a method to
suppress the noise and to track the underlying iterative process is just as
critical as the capability of the myopic/blind approaches to update the PSF.
Comment: In revision in Astronomy & Astrophysics. 19 pages, 13 figures.
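For readers who want to reproduce this kind of metric comparison, both MSE and SSIM are available off the shelf in scikit-image; a minimal sketch with synthetic data (illustrative only, not the paper's code):

```python
import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

def compare_reconstruction(truth, recon):
    """Score a reconstructed image against the reference object with
    the two metrics discussed above."""
    mse = mean_squared_error(truth, recon)  # pixel-wise error
    ssim = structural_similarity(
        truth, recon, data_range=float(truth.max() - truth.min()))
    return mse, ssim

# Hypothetical usage: a noisy copy standing in for a reconstruction.
rng = np.random.default_rng(2)
truth = rng.uniform(size=(128, 128))
recon = truth + rng.normal(0.0, 0.05, size=truth.shape)
print(compare_reconstruction(truth, recon))
```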