
    A Novel Technique for Fundus Image Contrast Enhancement

    ABSTRACT Digital fundus image analysis plays a vital role in computer-aided diagnosis of several disorders. Images acquired with a fundus camera often have low grey-level contrast and dynamic range. We present a new method for fundus image contrast enhancement using the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD). The performance of this technique is better than conventional and state-of-the-art techniques. With the proposed method, the given fundus image is decomposed into four frequency subband images, and Singular Value Decomposition is applied to the Low-Low subband image, which carries the intensity information. Finally, the image is reconstructed using the modified Low-Low subband coefficients and the three high-frequency subband coefficients. The qualitative and quantitative performance of the proposed technique is evaluated.
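    The pipeline described above (single-level DWT, SVD on the LL band, rescaling of the singular values, inverse DWT) can be sketched in NumPy. This is a minimal illustration, not the authors' code: it uses a Haar wavelet, and the choice of a histogram-equalized copy as the reference image with the scaling factor xi = max(s_eq)/max(s) follows the common DWT-SVD enhancement formulation and is an assumption here.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT -> (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0,  # LL: low-low
            (a[:, 0::2] - a[:, 1::2]) / 2.0,  # LH
            (d[:, 0::2] + d[:, 1::2]) / 2.0,  # HL
            (d[:, 0::2] - d[:, 1::2]) / 2.0)  # HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((2 * h, 2 * w))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def hist_equalize(img):
    """Global histogram equalization onto [0, 255]."""
    hist, bins = np.histogram(img.ravel(), 256, (0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = 255.0 * cdf / cdf[-1]
    return np.interp(img.ravel(), bins[:-1], cdf).reshape(img.shape)

def dwt_svd_enhance(img):
    """Scale the LL band's singular values toward those of an
    equalized reference, then reconstruct (assumed formulation)."""
    img = img.astype(float)
    LL, LH, HL, HH = haar_dwt2(img)
    LL_eq = haar_dwt2(hist_equalize(img))[0]
    U, s, Vt = np.linalg.svd(LL, full_matrices=False)
    s_eq = np.linalg.svd(LL_eq, compute_uv=False)
    xi = s_eq.max() / s.max()                  # correction coefficient
    LL_new = U @ np.diag(xi * s) @ Vt
    return np.clip(haar_idwt2(LL_new, LH, HL, HH), 0, 255)
```

    Only the LL band is modified; the three detail subbands pass through untouched, which is why edges survive the contrast adjustment.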

    Bounded PCA based Multi Sensor Image Fusion Employing Curvelet Transform Coefficients

    The fusion of thermal and visible images acts as an important device for target detection. The quality of the spectral content of the fused image improves with wavelet-based image fusion. However, compared to PCA-based fusion, most wavelet-based methods provide results with a lower spatial resolution. The outcome improves when the two approaches are combined, but the results can still be refined. Compared to wavelets, the curvelet transform more accurately depicts the edges in an image. Enhancing the edges is an effective way to improve spatial resolution, and the edges are crucial for interpreting the images. A fusion technique that utilizes curvelets therefore provides additional information in both the spectral and spatial domains concurrently. In this paper, we employ an amalgamation of the Curvelet Transform and a Bounded PCA (CTBPCA) method to fuse thermal and visible images. To evidence the enhanced efficiency of our proposed technique, multiple evaluation metrics and comparisons with existing image merging methods are employed. Our approach outperforms others in both qualitative and quantitative analysis, except for runtime performance. As a future enhancement, the study will use the fused image for target recognition; future work should also focus on this method's continued improvement and optimization for real-time video processing.
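    A curvelet transform needs a specialized library, so only the PCA weighting stage of a scheme like the one above can be sketched compactly: the fusion weights are taken from the dominant eigenvector of the 2x2 covariance matrix of the two source images. This is the classic PCA fusion rule, shown here as a generic illustration rather than the paper's bounded variant.

```python
import numpy as np

def pca_fusion_weights(img_a, img_b):
    """Fusion weights from the principal eigenvector of the 2x2
    covariance of the two source images (classic PCA fusion rule)."""
    data = np.stack([img_a.ravel().astype(float),
                     img_b.ravel().astype(float)])
    vals, vecs = np.linalg.eigh(np.cov(data))
    v = np.abs(vecs[:, np.argmax(vals)])   # dominant component
    return v / v.sum()                     # normalize so w1 + w2 = 1

def pca_fuse(img_a, img_b):
    """Weighted sum of the sources using the PCA-derived weights."""
    w = pca_fusion_weights(img_a, img_b)
    return w[0] * img_a + w[1] * img_b
```

    In a curvelet-domain scheme, the same weighting would be applied per subband coefficient set rather than to the raw pixels.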

    Multi-focus image fusion using maximum symmetric surround saliency detection

    In digital photography, two or more objects of a scene cannot be focused at the same time. If we focus on one object, we may lose information about the other objects, and vice versa. Multi-focus image fusion is a process of generating an all-in-focus image from several out-of-focus images. In this paper, we propose a new multi-focus image fusion method based on two-scale image decomposition and saliency detection using maximum symmetric surround. This method is very beneficial because the saliency map used can highlight the saliency information present in the source images with well-defined boundaries. A weight map construction method based on saliency information is developed in this paper. This weight map can identify the focused and defocused regions present in the image very well, so we implemented a new fusion algorithm based on the weight map, which integrates only focused-region information into the fused image. Unlike multi-scale image fusion methods, two-scale image decomposition is sufficient in this method, so it is computationally efficient. The proposed method is tested on several multi-focus image datasets and compared with traditional and recently proposed fusion methods using various fusion metrics. The results justify that our proposed method gives stable and promising performance when compared to that of the existing methods.
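    The saliency detector named above measures, for each pixel, how much it differs from the mean of the largest surround region that is symmetric about it. A minimal grayscale sketch using an integral image follows; the published detector operates on a blurred image in a perceptual color space, which is omitted here for brevity.

```python
import numpy as np

def mss_saliency(img):
    """Maximum-symmetric-surround saliency, grayscale sketch.
    Each pixel is compared to the mean of the largest window that is
    symmetric about it and still fits inside the image (degenerates
    to zero at the borders, where the window is a single pixel)."""
    h, w = img.shape
    # integral image: ii[i, j] = sum of img[:i, :j]
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    sal = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            xo = min(x, w - 1 - x)          # symmetric horizontal offset
            yo = min(y, h - 1 - y)          # symmetric vertical offset
            y1, y2 = y - yo, y + yo + 1
            x1, x2 = x - xo, x + xo + 1
            area = (y2 - y1) * (x2 - x1)
            mean = (ii[y2, x2] - ii[y1, x2]
                    - ii[y2, x1] + ii[y1, x1]) / area
            sal[y, x] = abs(img[y, x] - mean)
    return sal
```

    In a fusion pipeline, the per-source saliency maps would then be compared pixel-wise to build the weight map that selects focused regions.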

    Signal processing algorithms for enhanced image fusion performance and assessment

    The dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method which performs well for image sets heavily corrupted by noise. As opposed to current image fusion schemes, the method requires no a priori knowledge of the noise component. The image is decomposed with Chebyshev polynomials (CP) used as basis functions to perform fusion at the feature level. The properties of CP, namely fast convergence and smooth approximation, render it well suited to heuristic and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared to previous efforts on image fusion, notably on heavily corrupted images. The approach is further improved by combining the advantages of CP with a state-of-the-art fusion technique named independent component analysis (ICA), for joint fusion processing based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to eliminating high-frequency information of the images involved, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution. Again, according to the quantitative metrics, the results of our proposed approach are very encouraging as far as joint fusion and denoising are concerned. Another focus of this dissertation is a novel metric for image fusion evaluation that is based on texture. The conservation of background textural details is considered important in many fusion applications, as they help define the image depth and structure, which may prove crucial in many surveillance and remote sensing applications. Our work aims to evaluate the performance of image fusion algorithms based on their ability to retain textural details through the fusion process. This is done by utilising the gray-level co-occurrence matrix (GLCM) model to extract second-order statistical features for the derivation of an image textural measure, which is then used to replace the edge-based calculations in an objective fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially for multimodal scenarios.
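    The GLCM features mentioned above are second-order statistics of pixel pairs at a fixed offset. A minimal sketch of a normalized GLCM with two common features (contrast and homogeneity) follows; the input is assumed to be already quantized to a small number of gray levels, and only non-negative offsets are handled.

```python
import numpy as np

def glcm(q, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy).
    `q` must contain integers in [0, levels); dx, dy >= 0."""
    h, w = q.shape
    a = q[:h - dy, :w - dx]          # reference pixels
    b = q[dy:, dx:]                  # neighbors at the given offset
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m / m.sum()

def glcm_contrast(p):
    """Sum of (i - j)^2 * p(i, j): local intensity variation."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

def glcm_homogeneity(p):
    """Sum of p(i, j) / (1 + |i - j|): closeness to the diagonal."""
    i, j = np.indices(p.shape)
    return float((p / (1.0 + np.abs(i - j))).sum())
```

    A texture-preservation fusion metric would compare such features between each source image and the fused result, rewarding fused images whose texture statistics stay close to the inputs'.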

    Super Resolution of Wavelet-Encoded Images and Videos

    In this dissertation, we address the multiframe super resolution reconstruction problem for wavelet-encoded images and videos. The goal of multiframe super resolution is to obtain one or more high resolution images by fusing a sequence of degraded or aliased low resolution images of the same scene. Since the low resolution images may be unaligned, a registration step is required before super resolution reconstruction. Therefore, we first explore in-band (i.e., in the wavelet domain) image registration; then, we investigate super resolution. Our motivation for analyzing the image registration and super resolution problems in the wavelet domain is the growing trend of wavelet-encoded imaging and wavelet encoding for image/video compression. Due to drawbacks of the widely used discrete cosine transform in image and video compression, a considerable amount of literature is devoted to wavelet-based methods. However, since wavelets are shift-variant, existing methods cannot utilize wavelet subbands efficiently. To overcome this drawback, we establish and explore the direct relationship between the subbands under a translational shift, for image registration and super resolution. We then employ our devised in-band methodology in a motion compensated video compression framework, to demonstrate the effective usage of wavelet subbands. Super resolution can also be used as a post-processing step in video compression in order to decrease the size of the video files to be compressed, with downsampling added as a pre-processing step. Therefore, we present a video compression scheme that utilizes super resolution to reconstruct the high frequency information lost during downsampling. In addition, super resolution is a crucial post-processing step for satellite imagery, since it is hard to update imaging devices after a satellite is launched. Thus, we also demonstrate the usage of our devised methods in enhancing the resolution of pansharpened multispectral images.
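    The core multiframe idea, independently of the wavelet-domain machinery, is that several aliased low-resolution frames with known subpixel shifts together sample a denser grid. A classic shift-and-add sketch illustrates this; it is not the dissertation's in-band method, and it assumes the shifts are already registered.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, scale):
    """Classic shift-and-add multiframe super resolution.
    Each low-res frame is placed onto the high-res grid according to
    its known subpixel shift (dy, dx); overlapping samples are averaged."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for f, (dy, dx) in zip(frames, shifts):
        yy = (np.arange(h) * scale + int(round(dy * scale))) % (h * scale)
        xx = (np.arange(w) * scale + int(round(dx * scale))) % (w * scale)
        acc[np.ix_(yy, xx)] += f
        cnt[np.ix_(yy, xx)] += 1
    cnt[cnt == 0] = 1                 # uncovered sites stay at zero
    return acc / cnt
```

    With four frames shifted by half-pixel offsets, the 2x high-resolution grid is fully covered and, in the noiseless case, recovered exactly.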

    Image Compression Techniques: A Survey in Lossless and Lossy algorithms

    The bandwidth of communication networks has increased continuously as a result of technological advances. However, the introduction of new services and the expansion of existing ones have resulted in even higher demand for bandwidth. This explains the many efforts currently being invested in the area of data compression. The primary goal of these works is to develop techniques for coding information sources such as speech, image, and video to reduce the number of bits required to represent a source without significantly degrading its quality. With the large increase in the generation of digital image data, there has been a correspondingly large increase in research activity in the field of image compression. The goal is to represent an image in the fewest bits without losing the essential information content within. Images carry three main types of information: redundant, irrelevant, and useful. Redundant information is the deterministic part of the information, which can be reproduced without loss from other information contained in the image. Irrelevant information is the part of the information that has enormous detail beyond the limit of perceptual significance (i.e., psychovisual redundancy). Useful information, on the other hand, is the part of the information that is neither redundant nor irrelevant. Humans usually observe decompressed images; therefore, image fidelity is subject to the capabilities and limitations of the Human Visual System. This paper provides a survey of various image compression techniques, their limitations, and compression rates, and highlights current research in medical image compression.
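    The "redundant" portion of an image can be quantified: the first-order entropy of the pixel values gives a lower bound, in bits per pixel, on any lossless code that treats pixels as independent samples. A short sketch:

```python
import numpy as np

def entropy_bits_per_pixel(img):
    """First-order entropy of 8-bit pixel values: a lower bound
    (bits/pixel) for lossless coding under an i.i.d. pixel model."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                        # skip unused gray levels
    return float(-(p * np.log2(p)).sum())
```

    The gap between 8 bits/pixel and this entropy is exactly the coding redundancy a lossless compressor can remove; spatial correlation between neighboring pixels lowers the achievable rate further.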

    Modulation Domain Image Processing

    Get PDF
    The classical Fourier transform is the cornerstone of traditional linear signal and image processing. The discrete Fourier transform (DFT) and the fast Fourier transform (FFT) in particular led to profound changes during the later decades of the last century in how we analyze and process 1D and multi-dimensional signals. The Fourier transform represents a signal as an infinite superposition of stationary sinusoids, each of which has constant amplitude and constant frequency. However, many important practical signals such as radar returns and seismic waves are inherently nonstationary. Hence, more complex techniques such as the windowed Fourier transform and the wavelet transform were invented to better capture the nonstationary properties of these signals. In this dissertation, I studied an alternative nonstationary representation for images, the 2D AM-FM model. In contrast to the stationary nature of the classical Fourier representation, the AM-FM model represents an image as a finite sum of smoothly varying amplitudes and smoothly varying frequencies. The model has been applied successfully in image processing applications such as image segmentation, texture analysis, and target tracking. However, these applications are limited to analysis, meaning that the computed AM and FM functions are used as features for signal processing tasks such as classification and recognition. For synthesis applications, few attempts have been made to synthesize the original image from the AM and FM components; these attempts were unstable, and the synthesized results contained artifacts. The main reason is that a perfect reconstruction AM-FM image model was either unavailable or unstable. Here, I constructed the first functional perfect reconstruction AM-FM image transform, which paves the way for AM-FM image synthesis applications. The transform enables intuitive nonlinear image filter designs in the modulation domain. I showed that these filters provide important advantages relative to traditional linear translation-invariant filters.
    This dissertation addresses image processing operations in the nonlinear, nonstationary modulation domain. In the modulation domain, an image is modeled as a sum of nonstationary amplitude modulation (AM) functions and nonstationary frequency modulation (FM) functions. I developed a theoretical framework for high-fidelity signal and image modeling in the modulation domain, constructed an invertible multi-dimensional AM-FM transform (xAMFM), and investigated practical signal processing applications of the transform. After developing the xAMFM, I investigated new image processing operations that apply directly to the transformed AM and FM functions in the modulation domain. In addition, I introduced two classes of modulation domain image filters. These filters produce perceptually motivated signal processing results that are difficult or impossible to obtain with traditional linear processing or spatial domain nonlinear approaches. Finally, I proposed three extensions of the AM-FM transform and applied them in image analysis applications. The main original contributions of this dissertation include the following.
    - I proposed a perfect reconstruction FM algorithm. I used a least-squares approach to recover the phase signal from its gradient. To allow perfect reconstruction of the phase function, I enforced an initial condition on the reconstructed phase. The perfect reconstruction FM algorithm plays a critical role in the overall AM-FM transform.
    - I constructed a perfect reconstruction multi-dimensional filterbank by modifying the classical steerable pyramid. This modified filterbank ensures a true multi-scale, multi-orientation signal decomposition. Such a decomposition is required for a perceptually meaningful AM-FM image representation.
    - I rotated the partial Hilbert transform to alleviate rippling artifacts in the computed AM and FM functions. This adjustment results in artifact-free filtering results in the modulation domain.
    - I proposed the modulation domain image filtering framework and constructed two classes of modulation domain filters. I showed that the modulation domain filters outperform traditional linear shift-invariant (LSI) filters qualitatively and quantitatively in applications such as selective orientation filtering, selective frequency filtering, and fundamental geometric image transformations.
    - I provided extensions of the AM-FM transform for image decomposition problems. I illustrated that the AM-FM approach can successfully decompose an image into coherent components such as texture and structural components.
    - I investigated the relationship between the two prominent AM-FM computational models, namely the partial Hilbert transform approach (pHT) and the monogenic signal. The established relationship helps unify these two AM-FM algorithms.
    This dissertation lays a theoretical foundation for future nonlinear modulation domain image processing applications. For the first time, one can apply modulation domain filters to images and obtain predictable results. The design of modulation domain filters is intuitive and simple, yet these filters produce superior results compared to those of pixel domain LSI filters. Moreover, this dissertation opens up other research problems. For instance, classical image applications such as image segmentation and edge detection can be re-formulated in the modulation domain setting. Modulation domain based perceptual image and video quality assessment and image compression are important future application areas for the fundamental representation results developed in this dissertation.
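    The AM-FM decomposition is easiest to see in one dimension: the analytic signal (computed with an FFT-based Hilbert transform) yields the amplitude envelope (AM) and the derivative of the unwrapped phase gives the instantaneous frequency (FM). The dissertation's xAMFM is a multi-dimensional, multi-band generalization of this idea; the 1D sketch below is only an illustration of the principle.

```python
import numpy as np

def am_fm_demodulate(x):
    """1-D AM-FM demodulation via the analytic signal.
    Returns (am, fm): amplitude envelope and instantaneous
    frequency in cycles/sample (fm has length n - 1)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                  # analytic-signal frequency mask
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    z = np.fft.ifft(X * h)           # analytic signal
    am = np.abs(z)                   # AM: envelope
    phase = np.unwrap(np.angle(z))
    fm = np.diff(phase) / (2.0 * np.pi)  # FM: phase derivative
    return am, fm
```

    For a pure sinusoid the envelope is constant and the instantaneous frequency equals the sinusoid's frequency; for a chirp, fm tracks the sweep sample by sample.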

    A Study on Denoising and Unmixing of Hyperspectral Images Considering the Linearity of Spectra

    This study aims to generalize the color line to an M-dimensional spectral line feature (M>3) and introduces methods for denoising and unmixing of hyperspectral images based on this spectral linearity. For denoising, we propose a local spectral component decomposition method based on the spectral line. We first calculate the spectral line of an M-channel image; then, using the line, we decompose the image into three components: a single M-channel image and two gray-scale images. By virtue of the decomposition, the noise is concentrated in the two gray-scale images, so the algorithm needs to denoise only two gray-scale images, regardless of the number of channels. For unmixing, we propose an algorithm that exploits the low-rank local abundance by applying the nuclear norm to the abundance matrix for local regions of the spatial and abundance domains. In the optimization problem, the local abundance regularizer is combined with the L2,1 norm and the total variation.
    The University of Kitakyushu
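    Nuclear norm penalties such as the one above are usually handled through singular value thresholding, the proximal operator of tau * ||X||_*, applied to each local abundance matrix inside an iterative solver. The sketch below shows only this generic building block, not the paper's full algorithm.

```python
import numpy as np

def nuclear_norm(M):
    """Sum of singular values, the convex surrogate for rank."""
    return float(np.linalg.svd(M, compute_uv=False).sum())

def svt(M, tau):
    """Singular value thresholding: the proximal operator of
    tau * ||X||_*. Shrinks every singular value by tau, clipping
    at zero, which drives the matrix toward low rank."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

    Larger tau values zero out more singular values, trading data fidelity for a lower-rank (and typically smoother) abundance estimate.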

    Enhanced Augmented Reality Framework for Sports Entertainment Applications

    Augmented Reality (AR) superimposes virtual information on real-world data, such as displaying useful information on videos/images of a scene. This dissertation presents an Enhanced AR (EAR) framework for displaying useful information on images of a sports game. The challenge in such applications is robust object detection and recognition, which becomes even harder in strong sunlight. We address the phenomenon where a captured image is degraded by strong sunlight. The developed framework consists of an image enhancement technique to improve the accuracy of subsequent player and face detection. The image enhancement is followed by player detection, face detection, recognition of players, and display of the players' personal information. First, an algorithm based on Multi-Scale Retinex (MSR) is proposed for image enhancement. For the tasks of player and face detection, we use the adaptive boosting algorithm with Haar-like features for both feature selection and classification. The player face recognition algorithm uses adaptive boosting with LDA for feature selection and a nearest neighbor classifier for classification. The framework can be deployed in any sport where a viewer captures images. Display of player-specific information enhances the end-user experience. Detailed experiments are performed on 2096 diverse images captured using a digital camera and a smartphone. The images contain players in different poses, expressions, and illuminations. The player face recognition module requires players' faces to be frontal or within ±35° of pose variation. The work demonstrates the great potential of computer vision based approaches for future development of AR applications.
    COMSATS Institute of Information Technology
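    Multi-scale retinex, the enhancement step named above, subtracts a log-domain illumination estimate (a Gaussian blur of the image) from the log image at several scales and averages the results. A grayscale NumPy sketch follows; the sigma values are typical defaults and an assumption here, as is the separable truncated Gaussian.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with a kernel truncated at 3*sigma,
    using edge padding so flat regions stay flat."""
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, r, mode='edge')
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 0, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 1, tmp)

def msr(img, sigmas=(15.0, 80.0, 250.0)):
    """Multi-scale retinex: average over scales of
    log(image) - log(Gaussian illumination estimate)."""
    img = img.astype(float) + 1.0        # avoid log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_blur(img, s))
    return out / len(sigmas)
```

    The output is a relative-reflectance map; in practice it is rescaled to the display range before the detection stages run on it.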