
    Image interpolation using Shearlet based iterative refinement

    This paper proposes an image interpolation algorithm exploiting sparse representations of natural images. It involves three main steps: (a) obtaining an initial estimate of the high-resolution image using linear methods such as FIR filtering, (b) promoting sparsity in a selected dictionary through iterative thresholding, and (c) extracting high-frequency information from the approximation to refine the initial estimate. For the sparse modeling, a shearlet dictionary is chosen to yield a multiscale directional representation. The proposed algorithm is compared to several state-of-the-art methods to assess its objective as well as subjective performance. Compared to cubic spline interpolation, an average PSNR gain of around 0.8 dB is observed over a dataset of 200 images.
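    The three-step pipeline (a)-(c) above can be sketched in a few lines. This is a minimal illustration, not the paper's method: the 2-D FFT stands in for the shearlet dictionary, and the function names, FIR kernel, and all parameters are assumptions made for the sketch.

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftfreq

def low_pass_mask(shape, frac=0.25):
    """Binary mask selecting the low spatial frequencies of a 2-D FFT."""
    fy = np.abs(fftfreq(shape[0]))[:, None]
    fx = np.abs(fftfreq(shape[1]))[None, :]
    return ((fy < frac) & (fx < frac)).astype(float)

def initial_estimate(lr, factor=2):
    """Step (a): zero-insertion upsampling followed by a separable FIR
    low-pass filter -- a simple stand-in for the paper's linear stage."""
    hr = np.zeros((lr.shape[0] * factor, lr.shape[1] * factor))
    hr[::factor, ::factor] = lr
    k = np.array([0.5, 1.0, 0.5])  # crude interpolating kernel
    hr = np.apply_along_axis(np.convolve, 0, hr, k, mode="same")
    hr = np.apply_along_axis(np.convolve, 1, hr, k, mode="same")
    return hr

def refine(hr0, n_iters=10, keep=0.1):
    """Steps (b)+(c): hard-threshold transform coefficients to promote
    sparsity, then keep the low frequencies of the initial estimate and
    take only the high-frequency content from the sparse approximation."""
    lp = low_pass_mask(hr0.shape)
    x = hr0.copy()
    for _ in range(n_iters):
        c = fft2(x)
        c[np.abs(c) < keep * np.abs(c).max()] = 0   # (b) hard thresholding
        x = np.real(ifft2(c))
        # (c) refinement: coarse content from hr0, details from x
        x = np.real(ifft2(lp * fft2(hr0) + (1 - lp) * fft2(x)))
    return x
```

    Swapping the FFT for an actual shearlet transform (e.g. via a dedicated shearlet library) would recover the directional selectivity the paper relies on.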

    Directional edge and texture representations for image processing

    An efficient representation for natural images is of fundamental importance in image processing and analysis. Commonly used separable transforms such as wavelets are not best suited to images because they cannot exploit directional regularities such as edges and oriented textural patterns, while most recently proposed directional schemes cannot represent these two types of features in a unified transform. This thesis focuses on the development of directional representations for images that capture both edges and textures in a multiresolution manner. The thesis first considers the problem of extracting linear features with the multiresolution Fourier transform (MFT). Building on a previous MFT-based linear feature model, the work extends the extraction method to images corrupted by noise. The problem is tackled by combining a "signal + noise" frequency model, a refinement stage, and a robust classification scheme. As a result, the MFT is able to perform linear feature analysis on noisy images on which previous methods failed. A new set of transforms, the multiscale polar cosine transforms (MPCT), is also proposed in order to represent textures. The MPCT can be regarded as a real-valued MFT with similar basis functions of oriented sinusoids, and it is shown to represent textural patches more efficiently than the conventional Fourier basis. With a directional best cosine basis, the MPCT packet (MPCPT) is shown to be an efficient representation for edges and textures, despite its high computational burden. The problem of representing edges and textures in a fixed transform with lower complexity is then considered. This is achieved by applying a Gaussian frequency filter, matched to the dispersion of the magnitude spectrum, to the local MFT coefficients. This is particularly effective for denoising natural images, owing to its ability to preserve both types of feature. Further improvements are obtained by using the information from the linear feature extraction process to configure the filter. The denoising results compare favourably against other state-of-the-art directional representations.
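    The idea of matching a Gaussian frequency filter to the dispersion of a local magnitude spectrum can be illustrated on a single block. This is a crude stand-in for the thesis's local-MFT filtering, not its implementation: the per-block FFT, the function name, and the moment-based fit are assumptions of the sketch.

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftfreq

def gaussian_spectral_filter(block):
    """Filter one image block by a Gaussian whose widths match the
    dispersion (second moments) of the block's own magnitude spectrum."""
    C = fft2(block)
    mag = np.abs(C)
    fy = fftfreq(block.shape[0])[:, None]
    fx = fftfreq(block.shape[1])[None, :]
    w = mag / mag.sum()                     # spectrum as a weight map
    vy = (w * fy ** 2).sum()                # spectral dispersion, vertical
    vx = (w * fx ** 2).sum()                # spectral dispersion, horizontal
    g = np.exp(-0.5 * (fy ** 2 / (vy + 1e-12) + fx ** 2 / (vx + 1e-12)))
    return np.real(ifft2(C * g))
```

    A block dominated by an oriented sinusoid yields an anisotropic filter, which is why such filtering can preserve edges and textures while suppressing broadband noise.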

    Research on Image Quality Improvement for Underwater Imaging Systems (水中イメージングシステムのための画質改善に関する研究)

    Underwater survey systems have numerous scientific and industrial applications in geology, biology, mining, and archeology, supporting tasks such as ecological studies, environmental damage assessment, and prospection of ancient sites. For two decades, underwater imaging systems have mainly been carried by Underwater Vehicles (UVs) for surveys in water. Obtaining good visibility of objects remains difficult because of the physical properties of the medium. Sonar has commonly been used for the detection and recognition of targets in the ocean, but because sonar images are of low quality, optical vision sensors are used instead for short-range identification. Optical imaging provides short-range, high-resolution visual information of the ocean floor. However, owing to the physical properties of light transmission in water, underwater optical images usually exhibit poor visibility: light is strongly attenuated as it travels through the ocean, so imaged scenes appear poorly contrasted and hazy. Underwater image processing techniques are therefore important for improving image quality. In contrast to common photographs, underwater optical images suffer from poor visibility because the medium causes scattering, color distortion, and absorption. Large suspended particles cause scattering similar to that of light in fog or turbid water. Color distortion occurs because different wavelengths are attenuated to different degrees in water; consequently, underwater images are dominated by a bluish tone, as longer wavelengths are attenuated more quickly. Absorption of light in water substantially reduces its intensity. The random attenuation of light causes a hazy appearance, while light backscattered by water along the line of sight considerably degrades image contrast. In particular, objects more than about 10 meters from the observation point become almost indistinguishable, because colors fade as characteristic wavelengths are filtered out according to the distance traveled by light in water. Traditional image processing methods are therefore not well suited to such images.
    This thesis proposes strategies and solutions to the above problems of underwater survey systems, contributing image pre-processing, denoising, dehazing, inhomogeneity correction, color correction, and fusion technologies for underwater image quality improvement. The main content is as follows. Chapter 1 provides a comprehensive review of the current and most prominent underwater imaging systems, with a classification criterion based on their main features and performance. After analyzing the challenges of underwater imaging systems, hardware-based and non-hardware-based approaches are introduced; this thesis is concerned with image-processing technologies, one of the non-hardware approaches, and applies recent methods to low-quality underwater images. Different sonar imaging systems, such as side-scan sonar and multi-beam sonar, are used in much equipment, and each acquires images with different characteristics: side-scan sonar acquires high-quality imagery of the seafloor with very high spatial resolution but poor locational accuracy, whereas multi-beam sonar obtains high-precision position and depth at seafloor points. To fully utilize the information from both types of sonar, Chapter 2 fuses the two kinds of sonar data. Considering the sonar image formation principle, for the low-frequency curvelet coefficients we use the maximum-local-energy method to compare the energy of the two sonar images, and for the high-frequency curvelet coefficients we use the absolute-maximum rule. The main attributes are: first, the multi-resolution analysis is well adapted to curved singularities and point singularities, which is useful for sonar intensity image enhancement; second, maximum local energy performs well on intensity sonar images and achieves a good fusion result [42].
    In Chapter 3, after analyzing the underwater laser imaging system, a Bayesian Contourlet Estimator of Bessel K Form (BCE-BKF) denoising algorithm is proposed. The BCE-BKF probability density function (PDF) models neighborhoods of contourlet coefficients; based on this model, we design a maximum a posteriori (MAP) estimator that relies on a Bayesian statistical representation of the contourlet coefficients of noisy images. The denoised laser images have better contrast than those of competing methods. The proposed method has three virtues: first, the contourlet transform decomposes the image more effectively than the curvelet and wavelet transforms by using an elliptical sampling grid; second, the BCE-BKF model represents the noisy image's contourlet coefficients more effectively; third, the BCE-BKF model takes full account of the correlation between coefficients [107]. Chapter 4 describes a novel method to enhance underwater images by dehazing. In underwater optical imaging, absorption, scattering, and color distortion are the three major issues; light rays traveling through water are scattered and absorbed according to their wavelength. Scattering is caused by large suspended particles that degrade optical images captured underwater, and color distortion leaves ambient underwater images dominated by a bluish tone. Our key contribution is a fast image and video dehazing algorithm that compensates for the attenuation discrepancy along the propagation path and takes into account the possible presence of an artificial lighting source [108].
    Chapter 5 describes a novel method for enhancing underwater optical images or videos using a guided multilayer filter and wavelength compensation. In certain circumstances, the underwater environment must be monitored immediately, for example by disaster-recovery support robots or other underwater survey systems, yet the inherent optical properties and the complex underwater environment seriously distort the captured images or videos. Our key contributions are a novel depth- and wavelength-based underwater imaging model that compensates for the attenuation discrepancy along the propagation path, and a fast guided multilayer filtering enhancement algorithm. The enhanced images exhibit a reduced noise level, better exposure of dark regions, and improved global contrast, with the finest details and edges enhanced significantly [109]. The performance and benefits of the proposed approaches are concluded in Chapter 6. Comprehensive experiments and extensive comparison with existing techniques demonstrate the accuracy and effectiveness of the proposed methods.
    Kyushu Institute of Technology doctoral dissertation. Degree number: 工博甲第367号. Date of conferral: March 25, 2014 (Heisei 26). Contents: Chapter 1 Introduction | Chapter 2 Multi-Source Image Fusion | Chapter 3 Laser Image Denoising | Chapter 4 Optical Image Dehazing | Chapter 5 Shallow-Water De-Scattering | Chapter 6 Conclusions. Kyushu Institute of Technology, 2013 (Heisei 25).
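    The two fusion rules described for Chapter 2 (maximum local energy for low-frequency coefficients, absolute maximum for high-frequency coefficients) can be shown on generic coefficient arrays. This is a sketch of the selection rules only: the curvelet transform itself, the window size, and the function names are assumptions, not the thesis's code.

```python
import numpy as np

def local_energy(c, win=3):
    """Sum of squared coefficients over a win x win neighbourhood."""
    pad = win // 2
    sq = np.pad(c.astype(float) ** 2, pad, mode="edge")
    out = np.zeros(c.shape)
    for dy in range(win):
        for dx in range(win):
            out += sq[dy:dy + c.shape[0], dx:dx + c.shape[1]]
    return out

def fuse_low(c1, c2, win=3):
    """Low-frequency rule: per coefficient, keep the source whose
    local energy is larger (the maximum-local-energy measure)."""
    return np.where(local_energy(c1, win) >= local_energy(c2, win), c1, c2)

def fuse_high(c1, c2):
    """High-frequency rule: absolute-maximum selection."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)
```

    In a full pipeline these rules would be applied per curvelet subband of the side-scan and multi-beam images before the inverse transform.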

    A Tutorial on Speckle Reduction in Synthetic Aperture Radar Images

    Speckle is a granular disturbance, usually modeled as multiplicative noise, that affects synthetic aperture radar (SAR) images, as well as all coherent images. Over the last three decades, several methods have been proposed for the reduction of speckle, or despeckling, in SAR images. The goal of this paper is to provide a comprehensive review of despeckling methods since their birth, over thirty years ago, highlighting trends and changing approaches over the years. The concept of fully developed speckle is explained. Drawbacks of homomorphic filtering are pointed out. The assets of multiresolution despeckling, as opposed to spatial-domain despeckling, are highlighted, and the advantages of undecimated, or stationary, wavelet transforms over decimated ones are discussed. Bayesian estimators and probability density function (pdf) models in both the spatial and multiresolution domains are reviewed. Scale-space-varying pdf models, as opposed to scale-varying models, are promoted. Promising methods following non-Bayesian approaches, such as nonlocal (NL) filtering and total variation (TV) regularization, are reviewed and compared to spatial- and wavelet-domain Bayesian filters. Both established and new trends for the assessment of despeckling are presented. A few experiments on simulated data and real COSMO-SkyMed SAR images highlight, on one side, the cost-performance tradeoff of the different methods and, on the other, the effectiveness of solutions purposely designed for SAR heterogeneity and not-fully-developed speckle. Finally, upcoming methods based on new signal processing concepts, such as compressive sensing, are foreseen as a new generation of despeckling, after spatial-domain and multiresolution-domain methods.
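    The multiplicative speckle model and the homomorphic route the paper discusses can be stated in a few lines. This is a textbook-style sketch, not the paper's code: the gamma parameterisation of fully developed L-look speckle is standard, while the function names and the small log-domain offset are assumptions.

```python
import numpy as np

def speckle(intensity, looks=4, rng=None):
    """Fully developed speckle: multiplicative gamma noise with unit mean
    and variance 1/L applied to an L-look SAR intensity image."""
    rng = np.random.default_rng(rng)
    n = rng.gamma(shape=looks, scale=1.0 / looks, size=intensity.shape)
    return intensity * n

def homomorphic_despeckle(img, smooth):
    """Homomorphic route: the log makes the noise additive, any additive
    denoiser `smooth` is applied, and exp maps back. The bias the log
    introduces in the mean is one of the drawbacks the paper points out."""
    return np.exp(smooth(np.log(img + 1e-12)))
```

    Any additive-noise filter (a wavelet shrinkage, a local mean) can be passed as `smooth`; the identity is used in the test below only to check the round trip.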

    Multiresolution image models and estimation techniques


    A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity

    The richness of natural images makes the quest for optimal representations in image processing and computer vision challenging. This observation has not prevented the design of image representations that trade off between efficiency and complexity, while achieving accurate rendering of smooth regions as well as faithfully reproducing contours and textures. The most recent ones, proposed in the past decade, share a hybrid heritage highlighting the multiscale and oriented nature of edges and patterns in images. This paper presents a panorama of the aforementioned literature on decompositions in multiscale, multi-orientation bases or dictionaries. These typically exhibit redundancy to improve sparsity in the transformed domain and, sometimes, invariance with respect to simple geometric deformations (translation, rotation). Oriented multiscale dictionaries extend traditional wavelet processing and may offer rotation invariance. Highly redundant dictionaries require specific algorithms to simplify the search for an efficient (sparse) representation. We also discuss the extension of multiscale geometric decompositions to non-Euclidean domains such as the sphere or arbitrary meshed surfaces. The etymology of panorama suggests an overview based on a choice of partially overlapping "pictures". We hope that this paper will contribute to the appreciation and apprehension of a stream of current research directions in image understanding. Comment: 65 pages, 33 figures, 303 references

    Feature-preserving image restoration and its application in biological fluorescence microscopy

    This thesis presents a new investigation of image restoration and its application to fluorescence cell microscopy. The first part of the work develops advanced image denoising algorithms that restore images from noisy observations using a novel feature-preserving diffusion approach. These algorithms are applied to different types of images, including biometric, biological, and natural images, and demonstrate superior performance for noise removal and feature preservation compared to several state-of-the-art methods. The second part explores a novel, simple, and inexpensive super-resolution restoration method for quantitative microscopy in cell biology. In this method, a super-resolution image is restored, through an inverse process, from multiple diffraction-limited (low-resolution) observations acquired on a conventional microscope while translating the sample parallel to the image plane, hence referred to as translation microscopy (TRAM). A key element of this development is the integration of a robust feature detector, developed in the first part, into the inverse process to restore high-resolution images well above the diffraction limit in the presence of strong noise. TRAM is a post-acquisition computational method and can be implemented with any microscope. Experiments show a nearly 7-fold increase in lateral spatial resolution in noisy biological environments, delivering multi-colour image resolution of ~30 nm.
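    Why translated low-resolution observations carry super-resolution information can be seen in a noise-free toy model. This is not TRAM's regularised inverse with a feature detector; it is only a shift-and-interleave sketch, and the function names, the dictionary of shifts, and the exact-decimation forward model are assumptions.

```python
import numpy as np

def downsample(hr, factor, shift):
    """Simulate one diffraction-limited observation: translate the sample
    by `shift` pixels on the fine grid, then decimate by `factor`."""
    dy, dx = shift
    return np.roll(hr, (-dy, -dx), axis=(0, 1))[::factor, ::factor]

def shift_and_add(observations, factor):
    """Toy inverse: interleave each translated observation back onto the
    fine grid. With all factor*factor sub-pixel shifts present (and no
    blur or noise), the fine grid is recovered exactly."""
    h, w = next(iter(observations.values())).shape
    hr = np.zeros((h * factor, w * factor))
    for (dy, dx), lr in observations.items():
        hr[dy::factor, dx::factor] = lr
    return hr
```

    In the realistic setting, the optical blur and noise make this interleaving ill-posed, which is where TRAM's inverse formulation and robust feature detector come in.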

    SONAR Images Denoising

    International audience.