    Comparative Analyses of Multilevel and Geometric Image Fusion Techniques

    Image fusion is a technique that combines two images into a single image carrying the complete, complementary information of both. It is widely used in medical and multifocus imaging. In this paper we propose a combination of multilevel image fusion and geometric fusion. Fusion is first carried out with a multilevel technique that uses either the wavelet transform or the curvelet transform, and at the second level with a spatial or Laplacian pyramid transform. A geometric fusion step based on the affine transform is then applied. Performance is evaluated with several quality metrics, which show that the curvelet transform outperforms the wavelet transform in multilevel fusion and that the affine-transform stage yields better results than either the wavelet or the curvelet transform alone. The proposed system is particularly useful for medical and satellite imaging. DOI: 10.17762/ijritcc2321-8169.160415
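
    As a rough illustration of transform-domain multilevel fusion, the sketch below fuses two registered, same-size grayscale images in the wavelet domain using PyWavelets; averaging the approximation band and taking the maximum-absolute detail coefficient are common stand-in rules, not the exact rules of the proposed method.

        import numpy as np
        import pywt

        def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
            """Fuse two registered, same-size grayscale images in the wavelet domain."""
            ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
            cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
            fused = [(ca[0] + cb[0]) / 2.0]                  # average the approximation band
            for da, db in zip(ca[1:], cb[1:]):               # detail sub-bands, coarse to fine
                fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                   for a, b in zip(da, db)))
            return pywt.waverec2(fused, wavelet)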

    Image Fusion: A Review

    At present, image fusion is regarded as a form of integrated information technology and plays a significant role in several domains and in the production of high-quality images. The goal of image fusion is to blend information from several images while retaining all of the significant visual information present in the original images. As a branch of image processing, image fusion merges information from a set of images into a single image that is more informative and better suited to human and machine perception, and it enhances image quality for visual interpretation in different applications. This paper outlines image fusion methods, recent trends in image fusion, and its applications. Image fusion can be performed in the spatial or the frequency domain: spatial-domain fusion operates directly on the original images, merging the pixel values of two or more images to form the fused image, whereas frequency-domain fusion decomposes the original images into multilevel coefficients and synthesizes the fused image with an inverse transform. The paper also presents various fusion techniques in both domains, such as averaging, minimum/maximum selection, IHS, PCA, and transform-based techniques, and explains the quality measures used to compare these methods.
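
    A minimal sketch of the spatial-domain rules mentioned above (averaging, minimum/maximum selection, and a PCA-weighted combination), assuming two registered, same-size grayscale images; it illustrates the general idea rather than any specific method surveyed in the paper.

        import numpy as np

        def fuse_spatial(img_a, img_b, rule="average"):
            """Pixel-level fusion of two registered, same-size grayscale images."""
            a, b = img_a.astype(float), img_b.astype(float)
            if rule == "average":
                return (a + b) / 2.0
            if rule == "maximum":
                return np.maximum(a, b)
            if rule == "minimum":
                return np.minimum(a, b)
            if rule == "pca":
                # Weights from the leading eigenvector of the 2x2 covariance of pixel pairs.
                cov = np.cov(np.stack([a.ravel(), b.ravel()]))
                w = np.abs(np.linalg.eigh(cov)[1][:, -1])
                w = w / w.sum()
                return w[0] * a + w[1] * b
            raise ValueError(f"unknown rule: {rule}")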

    Advances in Multi-Sensor Data Fusion: Algorithms and Applications

    With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images so that more inferences can be drawn than from a single sensor, and it has emerged as a promising research area in image-based application fields since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. First, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection, and maneuvering-target tracking, are then described, and the advantages and limitations of those applications are discussed. Finally, recommendations are given, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; and (3) establishment of an automatic quality assessment scheme.
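
    As an example of the automatic quality assessment recommended above, the sketch below scores a fused image by the mutual information it shares with each source image, a commonly used no-reference fusion metric; the histogram bin count is an arbitrary assumption.

        import numpy as np

        def mutual_information(x, y, bins=64):
            """Mutual information between two images, from their joint histogram."""
            hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
            pxy = hist / hist.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))

        def fusion_mi_score(src_a, src_b, fused):
            """Higher is better: information the fused image shares with each source."""
            return mutual_information(src_a, fused) + mutual_information(src_b, fused)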

    Image Fusion Methods: A Survey


    Face detection in profile views using fast discrete curvelet transform (FDCT) and support vector machine (SVM)

    Human face detection is an indispensable component in face processing applications, including automatic face recognition, security surveillance, facial expression recognition, and the like. This paper presents a profile face detection algorithm based on curvelet features, as the curvelet transform offers good directional representation and can capture edge information in human faces from different angles. First, a simple skin color segmentation scheme based on the HSV (hue-saturation-value) and YCgCr (luminance, green chrominance, red chrominance) color models is used to extract skin blocks. The segmentation scheme uses only the S and CgCr components and is therefore luminance independent. Features extracted from three frequency bands of the curvelet decomposition are then used to detect faces in each block, with a support vector machine (SVM) classifier trained for the classification task. In the performance test, the proposed algorithm detected profile faces in color images with a good detection rate and a low misdetection rate.
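
    A rough sketch of the luminance-independent skin segmentation step described above, using only the S component of HSV together with Cg/Cr chrominance; the chrominance coefficients and the threshold ranges are illustrative assumptions rather than the paper's tuned values, and the curvelet feature extraction and SVM stages are omitted.

        import numpy as np

        def skin_mask(rgb, s_range=(0.10, 0.68), cr_range=(135, 170), cg_range=(85, 135)):
            """Boolean skin mask from an H x W x 3 RGB image (assumed threshold ranges)."""
            rgb = rgb.astype(float)
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

            # Saturation of the HSV model: 1 - min/max (0 for black pixels).
            mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
            s = np.where(mx > 0, 1.0 - mn / np.maximum(mx, 1e-6), 0.0)

            # BT.601-style chrominance planes, offset into the 0..255 range (assumed coefficients).
            cr = 0.5000 * r - 0.4187 * g - 0.0813 * b + 128.0
            cg = -0.3180 * r + 0.4392 * g - 0.1212 * b + 128.0

            return ((s >= s_range[0]) & (s <= s_range[1]) &
                    (cr >= cr_range[0]) & (cr <= cr_range[1]) &
                    (cg >= cg_range[0]) & (cg <= cg_range[1]))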

    Medical Image Registration Using Artificial Neural Network

    Image registration is the transformation of different sets of images into one coordinate system in order to align and overlay multiple images. It is used in many fields, such as medical imaging, remote sensing, and computer vision, and is particularly important in medical research, where multiple images are acquired from different sensors at various points in time; this allows doctors to monitor the effects of treatment on a region of interest over time. In this thesis, artificial neural networks with curvelet keypoints are used to estimate the registration parameters. Simulations show that curvelet keypoints yield more accurate rotation and scale parameter estimates than Discrete Cosine Transform (DCT) coefficients or Scale Invariant Feature Transform (SIFT) keypoints.
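
    A minimal sketch of the parameter-estimation step, assuming feature vectors for image pairs (e.g. built from curvelet keypoints) have already been extracted and stored; the file names, network size, and two-parameter target (rotation, scale) are illustrative assumptions, not the thesis's configuration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Hypothetical training data: one feature vector per image pair and the
        # (rotation_degrees, scale) pair that registers the moving image to the fixed one.
        X_train = np.load("pair_features.npy")      # shape (n_pairs, n_features), assumed file
        y_train = np.load("pair_parameters.npy")    # shape (n_pairs, 2), assumed file

        model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
        model.fit(X_train, y_train)

        rotation_deg, scale = model.predict(X_train[:1])[0]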

    Multiresolution Methods in Face Recognition


    A Study on Image Quality Improvement for Underwater Imaging Systems (水中イメージングシステムのための画質改善に関する研究)

    Underwater survey systems have numerous scientific and industrial applications in geology, biology, mining, and archeology, supporting tasks such as ecological studies, environmental damage assessment, and prospection of ancient sites. Over the past two decades, underwater imaging systems have mainly been carried by underwater vehicles (UVs) for surveying in water or ocean, but obtaining good visibility of objects remains difficult because of the physical properties of the medium. Sonar has usually been used for the detection and recognition of targets in the ocean; because sonar images are of low quality, however, optical vision sensors are used instead for short-range identification. Optical imaging provides short-range, high-resolution visual information of the ocean floor, yet because of the physical properties of light transmission in water, underwater optical images usually suffer from poor visibility.

    Light is strongly attenuated as it travels through the ocean, so imaged scenes appear poorly contrasted and hazy, and underwater image processing techniques are therefore important for improving image quality. In contrast to ordinary photographs, underwater optical images suffer from scattering, color distortion, and absorption caused by the medium. Large suspended particles cause scattering similar to that of light in fog or turbid water. Color distortion occurs because different wavelengths are attenuated to different degrees in water; as a result, underwater scenes are dominated by a bluish tone, since longer wavelengths are attenuated more quickly. Absorption substantially reduces light intensity, and light backscattered along the line of sight gives images a hazy appearance and considerably degrades contrast. In particular, objects more than about 10 meters from the observation point become almost unrecognizable, because colors fade as characteristic wavelengths are filtered out along the light path. Traditional image processing methods are therefore not well suited to these images.

    This thesis proposes strategies and solutions to the above problems of underwater survey systems, contributing image pre-processing, denoising, dehazing, inhomogeneity correction, color correction, and fusion technologies for underwater image quality improvement. The main content is as follows. Chapter 1 provides a comprehensive review of the most prominent current underwater imaging systems and presents a classification criterion based on their main features and performance. After analyzing the challenges of underwater imaging systems, hardware-based and non-hardware-based approaches are introduced; this thesis focuses on image-processing-based technologies, one of the non-hardware approaches, and applies recent methods to process low-quality underwater images.
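
    As a rough illustration of the wavelength-dependent attenuation just described, the sketch below inverts a simple Beer-Lambert per-channel attenuation model; the attenuation coefficients and the path length are illustrative assumptions, not values used in the thesis.

        import numpy as np

        # Assumed per-channel attenuation coefficients (1/m): red is absorbed far more
        # strongly than blue in ocean water, which is why underwater scenes look bluish.
        BETA = {"r": 0.60, "g": 0.12, "b": 0.05}

        def compensate_attenuation(rgb, path_length_m):
            """Invert I = J * exp(-beta * d) per color channel for an assumed path length d."""
            rgb = rgb.astype(float) / 255.0
            gains = np.array([np.exp(BETA[c] * path_length_m) for c in ("r", "g", "b")])
            return (np.clip(rgb * gains, 0.0, 1.0) * 255.0).astype(np.uint8)
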
    Different sonar imaging systems are used on different equipment, such as side-scan sonar and multi-beam sonar, and each acquires images with different characteristics: side-scan sonar produces seafloor imagery with very high spatial resolution but poor locational accuracy, whereas multi-beam sonar provides high-precision positions and depths of seafloor points. To fully utilize the information from both types of sonar, Chapter 2 fuses the two kinds of sonar data. Considering the sonar image-forming principle, the maximum local energy of the two sonar images is used to select the low-frequency curvelet coefficients, and the absolute-maximum rule is used for the high-frequency curvelet coefficients. The main attributes are: first, the multi-resolution analysis method adapts well to curve and point singularities, which is useful for sonar intensity image enhancement; second, the maximum-local-energy rule performs well on intensity sonar images and achieves good fusion results [42]. A minimal sketch of these two selection rules is given below.
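
    The following sketch applies the two selection rules described above to generic multiscale coefficients (for instance, the approximation and detail bands of a curvelet or wavelet decomposition); the window size and the decomposition itself are assumptions of the illustration, not the thesis's implementation.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_energy(coeff, window=3):
            """Sum of squared coefficients over a sliding window."""
            return uniform_filter(coeff ** 2, size=window) * window ** 2

        def fuse_lowpass(low_a, low_b, window=3):
            """Maximum-local-energy rule for the low-frequency (approximation) band."""
            ea, eb = local_energy(low_a, window), local_energy(low_b, window)
            return np.where(ea >= eb, low_a, low_b)

        def fuse_highpass(high_a, high_b):
            """Absolute-maximum rule for a high-frequency (detail) band."""
            return np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
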
    Chapter 3 analyzes an underwater laser imaging system and proposes a denoising algorithm based on a Bayesian Contourlet Estimator of Bessel K Form (BCE-BKF). The BCE-BKF probability density function (PDF) is used to model neighborhoods of contourlet coefficients, and a maximum a posteriori (MAP) estimator is designed on top of this model, relying on a Bayesian representation of the contourlet coefficients of noisy images. The denoised laser images have better contrast than those of competing methods. The proposed method has three clear advantages: first, thanks to its elliptical sampling grid, the contourlet decomposition is superior to the curvelet and wavelet transforms for this task; second, the BCE-BKF model represents the contourlet coefficients of noisy images more effectively; third, the model takes full account of the correlation between coefficients [107].

    Chapter 4 describes a novel method for enhancing underwater images by dehazing. Absorption, scattering, and color distortion are the three major issues in underwater optical imaging: light rays traveling through water are scattered and absorbed according to their wavelength, scattering is caused by large suspended particles, and color distortion arises because different wavelengths are attenuated to different degrees, so that underwater scenes are dominated by a bluish tone. The key contribution is a fast image and video dehazing algorithm that compensates for the attenuation discrepancy along the propagation path and takes the possible presence of an artificial lighting source into consideration [108].

    Chapter 5 describes a novel method for enhancing underwater optical images and videos using a guided multilayer filter and wavelength compensation. In some circumstances, such as disaster recovery, the underwater environment must be monitored immediately by support robots or other survey systems, but owing to the inherent optical properties of water and the complexity of the underwater environment, the captured images and videos are seriously distorted. The key contributions are a novel depth- and wavelength-based underwater imaging model that compensates for the attenuation discrepancy along the propagation path, and a fast guided multilayer filtering enhancement algorithm. The enhanced images are characterized by a reduced noise level, better exposure of dark regions, and improved global contrast, with the finest details and edges enhanced significantly [109].

    Chapter 6 summarizes the performance and benefits of the proposed approaches; comprehensive experiments and extensive comparisons with existing techniques demonstrate their accuracy and effectiveness.

    Kyushu Institute of Technology doctoral dissertation (degree number 工博甲第367号; degree conferred March 25, 2014). Contents: Chapter 1, Introduction; Chapter 2, Multi-Source Image Fusion; Chapter 3, Laser Image Denoising; Chapter 4, Optical Image Dehazing; Chapter 5, Shallow Water De-scattering; Chapter 6, Conclusions. Kyushu Institute of Technology, 2013 (Heisei 25).