37 research outputs found

    Graph Spectral Image Processing

    The recent advent of graph signal processing (GSP) has spurred intensive study of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels that reside on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or image patch) as a signal on a graph and apply GSP tools to process and analyze the signal in the graph spectral domain. In this article, we overview recent graph spectral techniques in GSP specifically for image/video processing. The topics covered include image compression, image restoration, image filtering, and image segmentation.
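
    As a minimal, self-contained illustration of the idea (not code from the article): build a 4-connected pixel graph whose edge weights reflect intensity similarity, form the combinatorial Laplacian, and low-pass the patch in the Laplacian's eigenbasis. The Gaussian weight kernel, sigma, and the retained-frequency fraction below are illustrative assumptions.

```python
import numpy as np

def graph_spectral_lowpass(patch, sigma=0.1, keep=0.25):
    """Treat a small grayscale patch (values in [0, 1]) as a graph signal:
    4-connected pixel graph, intensity-based Gaussian weights, then a
    low-pass filter in the graph spectral (Laplacian eigenvector) domain."""
    h, w = patch.shape
    n = h * w
    x = patch.astype(float).ravel()
    W = np.zeros((n, n))
    for r in range(h):
        for c in range(w):
            i = r * w + c
            for dr, dc in ((0, 1), (1, 0)):       # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    j = rr * w + cc
                    wgt = np.exp(-(x[i] - x[j]) ** 2 / (2 * sigma ** 2))
                    W[i, j] = W[j, i] = wgt
    L = np.diag(W.sum(axis=1)) - W                # combinatorial Laplacian
    _, U = np.linalg.eigh(L)                      # graph Fourier basis
    xh = U.T @ x                                  # graph Fourier transform
    xh[int(keep * n):] = 0.0                      # drop high graph frequencies
    return (U @ xh).reshape(h, w)                 # inverse transform

smoothed = graph_spectral_lowpass(np.random.rand(8, 8))  # toy patch
```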

    An Efficient Image Denoising Approach for the Recovery of Impulse Noise

    Image noise is one of the key issues in image processing applications today. Noise affects the quality of an image and thus degrades its actual information content. Visual quality is a prerequisite for many imagery applications such as remote sensing, and in recent years the significance of noise assessment and the recovery of noisy images has been increasing. Impulse noise is characterized by the replacement of a portion of an image's pixel values with random values; such noise can be introduced by transmission errors. Accordingly, this paper focuses on the effect of impulse noise on the visual quality of images during transmission. A hybrid statistical noise suppression technique is developed for improving the quality of impulse-noisy color images, and the performance of the proposed image enhancement scheme is further validated using advanced performance metrics.
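
    The abstract does not detail the hybrid statistical technique; as a hedged baseline, the sketch below implements the impulse-noise model it describes (random replacement of a fraction of pixels) together with a classical median-filter recovery for comparison.

```python
import numpy as np
from scipy.ndimage import median_filter

def add_impulse_noise(img, p=0.1, rng=None):
    """Replace a fraction p of pixels with random salt/pepper values."""
    rng = np.random.default_rng(rng)
    noisy = img.copy()
    mask = rng.random(img.shape) < p
    noisy[mask] = rng.choice([0, 255], size=mask.sum())
    return noisy

# Baseline recovery: a 3x3 median filter, the classical impulse-noise remedy
# (the paper's hybrid statistical method is not specified in the abstract).
clean = np.tile(np.arange(256, dtype=np.uint8), (64, 1))  # toy gradient image
noisy = add_impulse_noise(clean, p=0.1, rng=0)
restored = median_filter(noisy, size=3)
```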

    Underwater image restoration: super-resolution and deblurring via sparse representation and denoising by means of marine snow removal

    Underwater imaging has been widely used as a tool in many fields; however, a major issue is the quality of the resulting images/videos. Due to light's interaction with water and its constituents, acquired underwater images/videos often suffer from a significant amount of scatter (blur, haze) and noise. In light of these issues, this thesis considers the problems of low-resolution, blurred, and noisy underwater images and proposes several approaches to improve the quality of such images/video frames. Quantitative and qualitative experiments validate the success of the proposed algorithms.

    Denoising single images by feature ensemble revisited

    Image denoising is still a challenging issue in many computer vision sub-domains. Recent studies show that significant improvements are possible in a supervised setting. However, a few challenges, such as spatial fidelity and cartoon-like smoothing, remain unresolved or decisively overlooked. Our study proposes a simple yet efficient architecture for the denoising problem that addresses these issues. The proposed architecture revisits the concept of modular concatenation, instead of long and deeper cascaded connections, to recover a cleaner approximation of the given image. We find that different modules can capture versatile representations, and the concatenated representation creates a richer subspace for low-level image restoration. The proposed architecture's parameter count remains smaller than that of most previous networks, yet it still achieves significant improvements over current state-of-the-art networks.
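
    A hedged PyTorch sketch of the modular-concatenation idea (the paper's actual modules are not specified in the abstract; the three parallel modules and their kernel/dilation choices below are hypothetical):

```python
import torch
import torch.nn as nn

class ModularConcatDenoiser(nn.Module):
    """Parallel feature modules whose outputs are concatenated and fused,
    rather than a single deep cascade. Module choices are illustrative."""
    def __init__(self, ch=3, feat=32):
        super().__init__()
        # Three modules with different receptive fields / dilations.
        self.m1 = nn.Sequential(nn.Conv2d(ch, feat, 3, padding=1), nn.ReLU())
        self.m2 = nn.Sequential(nn.Conv2d(ch, feat, 5, padding=2), nn.ReLU())
        self.m3 = nn.Sequential(
            nn.Conv2d(ch, feat, 3, padding=2, dilation=2), nn.ReLU())
        self.fuse = nn.Conv2d(3 * feat, ch, 1)  # 1x1 fusion of the subspace

    def forward(self, x):
        f = torch.cat([self.m1(x), self.m2(x), self.m3(x)], dim=1)
        return x - self.fuse(f)  # predict and subtract the noise residual

y = ModularConcatDenoiser()(torch.randn(1, 3, 64, 64))  # smoke test
```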

    Fast Fuzzy C-Means Algorithm Incorporating Convex Combination of Bilateral Filter with Contrast Limited Adaptive Histogram Equalization

    The fast generalized fuzzy c-means clustering algorithm (FGFCM) and its variants are effective methods for image clustering. Even though incorporating local spatial information into the objective function reduces their sensitivity to noise to some extent, they still lag behind in suppressing the effect of noise and outliers on the edges and tiny areas of the input image. This article proposes an algorithm that mitigates this disadvantage of FGFCM and its variants and enhances clustering performance.
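
    A sketch of the preprocessing named in the title, assuming OpenCV; the weight alpha and the filter parameters are illustrative choices, and "input.png" is a hypothetical path:

```python
import cv2

def bilateral_clahe_combination(gray, alpha=0.5):
    """Convex combination of an edge-preserving bilateral filter and
    contrast limited adaptive histogram equalization (CLAHE), as a
    preprocessing step before fuzzy c-means clustering."""
    smoothed = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(gray)
    # Convex combination: alpha * smoothed + (1 - alpha) * equalized.
    return cv2.addWeighted(smoothed, alpha, equalized, 1.0 - alpha, 0)

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
pre = bilateral_clahe_combination(gray, alpha=0.5)
```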

    Discrimination Ability Analysis on Texture Features for Automatic Noise Reduction in Brain MR Images

    Noise is one of the main sources of quality deterioration not only for visual inspection but also for computerized processing in magnetic resonance (MR) image analysis, such as tissue classification, segmentation, and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. Most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures, so automating these parameters through artificial intelligence techniques would be highly beneficial. This paper systematically investigates significant attributes from popular image features and textures to facilitate such automation. In our approach, a total of 39 image attributes are considered, drawn from three categories: 1) basic image statistics, 2) the gray-level co-occurrence matrix (GLCM), and 3) Tamura texture features. To rank the discrimination ability of these texture features, a T-test is applied to each individual feature computed on every image, across noise levels, intensity distributions, and anatomical geometries. Preliminary results indicate that the order of significance of the texture features varies with noise level, slice position, and normality. For distinguishing between noise levels, the contrast, standard deviation, angular second moment, and entropy features from the GLCM class performed best; for distinguishing between slice positions, the mean and variance features from the basic statistics class and the coarseness feature from the Tamura class outperformed the other features.
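
    A small sketch of the feature-ranking idea, assuming scikit-image >= 0.19 (graycomatrix / graycoprops) and SciPy; the feature subset and the two hypothetical image groups are illustrative:

```python
from scipy.stats import ttest_ind
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img):
    """Contrast, ASM, and related GLCM features for one 2D uint8 image."""
    glcm = graycomatrix(img, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return {name: graycoprops(glcm, name)[0, 0]
            for name in ("contrast", "ASM", "energy", "homogeneity")}

def discrimination_pvalue(low_noise, high_noise, feature="contrast"):
    """Rank a feature's ability to separate two noise-level groups with a
    T-test; low_noise / high_noise are hypothetical lists of MR slices."""
    a = [glcm_features(im)[feature] for im in low_noise]
    b = [glcm_features(im)[feature] for im in high_noise]
    return ttest_ind(a, b).pvalue  # smaller p => stronger discrimination
```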

    Research on Image Quality Improvement for Underwater Imaging Systems

    Underwater survey systems have numerous scientific and industrial applications in the fields of geology, biology, mining, and archeology, involving tasks such as ecological studies, environmental damage assessment, and ancient-site prospection. For the past two decades, underwater imaging systems have mainly been mounted on underwater vehicles (UVs) for surveying in water. Challenges associated with obtaining visibility of objects have been difficult to overcome due to the physical properties of the medium. Sonar is usually used for the detection and recognition of targets in the ocean or other underwater environments; however, because of the low quality of sonar imagery, optical vision sensors are used instead for short-range identification. Optical imaging provides short-range, high-resolution visual information of the ocean floor, but due to the physical properties of light transmission in water, captured underwater images usually exhibit poor visibility: light is highly attenuated as it travels through the ocean, so imaged scenes appear poorly contrasted and hazy. Underwater image processing techniques are therefore important for improving the quality of underwater images.

    In contrast to common photographs, underwater optical images suffer from poor visibility owing to the medium, which causes scattering, color distortion, and absorption. Large suspended particles cause scattering similar to the scattering of light in fog or turbid water. Color distortion occurs because different wavelengths are attenuated to different degrees in water; consequently, images of ambient underwater environments are dominated by a bluish tone, because longer wavelengths are attenuated more quickly. Absorption of light in water substantially reduces its intensity, and the random attenuation of light causes a hazy appearance, as light backscattered by water along the line of sight considerably degrades image contrast. In particular, objects at a distance of more than 10 meters from the observation point are almost indiscernible, because colors fade as their characteristic wavelengths are filtered out according to the distance traveled by light in water. Traditional image processing methods are therefore not well suited to such images.

    This thesis proposes strategies and solutions to tackle the above-mentioned problems of underwater survey systems, contributing image pre-processing, denoising, dehazing, inhomogeneity correction, color correction, and fusion technologies for underwater image quality improvement. The main content is as follows. First, Chapter 1 provides a comprehensive review of the current and most prominent underwater imaging systems, presenting a classification criterion based on main features and performance. After analyzing the challenges of underwater imaging systems, hardware-based and non-hardware-based approaches are introduced. This thesis is concerned with image-processing-based technologies, one of the non-hardware approaches, and adopts recent methods to process low-quality underwater images.

    Chapter 2 addresses sonar data fusion. Different sonar imaging systems, such as side-scan sonar and multi-beam sonar, acquire images with different characteristics: side-scan sonar acquires high-quality imagery of the seafloor with very high spatial resolution but poor locational accuracy, whereas multi-beam sonar obtains high-precision position and depth at seafloor points. To fully utilize the information from both types of sonar, the two kinds of sonar data are fused. Considering the sonar image formation principle, the maximum local energy method is used to compare the energy of the two sonar images for the low-frequency curvelet coefficients, while the absolute maximum is taken as the measurement for the high-frequency curvelet coefficients. The main attributes are: first, this multi-resolution analysis method is well adapted to curved singularities and point singularities, which is useful for sonar intensity image enhancement; second, maximum local energy performs well on intensity sonar images and achieves a good fusion result [42].
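
    A minimal NumPy sketch of the two fusion rules described above, operating on matching curvelet subbands that are assumed to be already computed (the curvelet transform itself is omitted, and the local-energy window size is an illustrative choice):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowfreq(a, b, win=3):
    """Maximum-local-energy rule for low-frequency subbands: at each
    position keep the coefficient whose windowed energy is larger."""
    ea = uniform_filter(a * a, size=win)
    eb = uniform_filter(b * b, size=win)
    return np.where(ea >= eb, a, b)

def fuse_highfreq(a, b):
    """Absolute-maximum rule for high-frequency subbands."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

# a_sub, b_sub stand in for matching curvelet subbands of the two sonars.
a_sub, b_sub = np.random.randn(64, 64), np.random.randn(64, 64)
fused = fuse_lowfreq(a_sub, b_sub)
```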
    In Chapter 3, after an analysis of the underwater laser imaging system, a Bayesian Contourlet Estimator of Bessel K Form (BCE-BKF) based denoising algorithm is proposed. The BCE-BKF probability density function (PDF) is used to model neighborhoods of contourlet coefficients, and according to this PDF model a maximum a posteriori (MAP) estimator is designed, which relies on a Bayesian statistical representation of the contourlet coefficients of noisy images. The denoised laser images have better contrast than those of competing methods. The proposed method has three clear virtues: first, the contourlet transform decomposition is preferable to the curvelet and wavelet transforms owing to its ellipse sampling grid; second, the BCE-BKF model is more effective in representing the contourlet coefficients of noisy images; third, the BCE-BKF model takes full account of the correlation between coefficients [107].

    In Chapter 4, a novel method to enhance underwater images by dehazing is described. Absorption, scattering, and color distortion are three major issues in underwater optical imaging. Light rays traveling through water are scattered and absorbed according to their wavelength: scattering is caused by large suspended particles that degrade optical images captured underwater, and color distortion occurs because different wavelengths are attenuated to different degrees in water, so images of ambient underwater environments are dominated by a bluish tone. The key contribution is a fast image and video dehazing algorithm that compensates for the attenuation discrepancy along the propagation path and takes into consideration the influence of a possible artificial lighting source [108].
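
    The abstract names a fast dehazing algorithm but not its internals; as a stand-in, here is a minimal dark-channel-prior dehazer in the standard He et al. formulation (not the thesis's exact method), with window size, omega, and t0 as illustrative parameters:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dark_channel(img, win=15, omega=0.95, t0=0.1):
    """Invert the haze model I = J*t + A*(1 - t) using the dark channel
    prior; img is a float RGB image in [0, 1] with shape (H, W, 3)."""
    dark = minimum_filter(img.min(axis=2), size=win)      # dark channel
    # Estimate the veiling light A from the brightest dark-channel pixel.
    idx = np.unravel_index(np.argmax(dark), dark.shape)
    A = img[idx]                                          # shape (3,)
    t = 1.0 - omega * minimum_filter((img / A).min(axis=2), size=win)
    t = np.clip(t, t0, 1.0)[..., None]                    # avoid blow-up
    return np.clip((img - A) / t + A, 0.0, 1.0)

img = np.random.rand(120, 160, 3)  # toy stand-in for an underwater frame
restored = dehaze_dark_channel(img)
```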
    In Chapter 5, a novel method of enhancing underwater optical images or videos using a guided multilayer filter and wavelength compensation is described. In certain circumstances, the underwater environment must be monitored immediately, for example by disaster-recovery support robots or other underwater survey systems; however, due to the inherent optical properties of water and the complexity of the underwater environment, the captured images or videos are seriously distorted. The key contributions include a novel depth- and wavelength-based underwater imaging model that compensates for the attenuation discrepancy along the propagation path, and a fast guided multilayer filtering enhancement algorithm. The enhanced images are characterized by a reduced noise level, better exposure of dark regions, and improved global contrast, with the finest details and edges enhanced significantly [109]. The performance and benefits of the proposed approaches are summarized in Chapter 6; comprehensive experiments and extensive comparisons with existing related techniques demonstrate the accuracy and effectiveness of the proposed methods.

    Doctoral dissertation, Kyushu Institute of Technology, 2013; degree number 工博甲第367号, conferred March 25, 2014. Contents: Chapter 1, Introduction | Chapter 2, Multi-Source Images Fusion | Chapter 3, Laser Images Denoising | Chapter 4, Optical Image Dehazing | Chapter 5, Shallow Water De-scattering | Chapter 6, Conclusions.
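
    To make the guided filtering step of Chapter 5 concrete, here is a minimal single-layer guided filter (the standard He et al. formulation, not the thesis's multilayer variant; r and eps are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    """Edge-preserving guided filter: output q ~= a*I + b, with a, b
    solved per window by ridge regression of p on the guide I."""
    box = lambda x: uniform_filter(x, size=2 * r + 1)
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)          # eps controls edge preservation
    b = mean_p - a * mean_I
    return box(a) * I + box(b)          # average the local linear models

gray = np.random.rand(128, 128)         # toy grayscale frame in [0, 1]
smooth = guided_filter(gray, gray)      # self-guided smoothing
```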

    Q-switched fiber laser based on CdS quantum dots as a saturable absorber

    In this work, a Q-switched fiber laser is demonstrated using cadmium sulfide (CdS) quantum dots (QDs) as a saturable absorber (SA) in an erbium-doped fiber laser (EDFL) system. The CdS QDs are synthesized via a microwave-assisted hydrothermal method and embedded into polyvinyl alcohol (PVA). The CdS QD/PVA film is sandwiched between two fiber ferrules by a fiber adapter. The generated Q-switched fiber laser has a repetition rate, pulse width, and peak-to-peak pulse duration of 75.19 kHz, 1.27 μs, and 13.32 μs, respectively. A maximum output power of 3.82 mW and a maximum pulse energy of 50.8 nJ are obtained at the maximum pump power of 145.9 mW. The proposed design may add to the alternative materials for Q-switched fiber laser generation, offering high-stability output performance through the use of a quantum-dot material as a saturable absorber.
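
    As a consistency check on the reported figures (assuming the pulse energy is the average output power divided by the repetition rate):

$$E_{\text{pulse}} = \frac{P_{\text{avg}}}{f_{\text{rep}}} = \frac{3.82\ \text{mW}}{75.19\ \text{kHz}} \approx 50.8\ \text{nJ},$$

    which matches the stated maximum pulse energy.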