186 research outputs found

    Adaptive smoothness constraint image multilevel fuzzy enhancement algorithm

    To address the poor enhancement quality and long running time of traditional algorithms, an adaptive smoothness-constraint image multilevel fuzzy enhancement algorithm based on secondary color-to-grayscale conversion is proposed. Drawing on fuzzy set theory and generalized fuzzy set theory, a new linear generalized fuzzy operator is derived by transformation. Using the linear generalized membership transformation and its inverse, a secondary color-to-grayscale conversion of the adaptive smoothness-constraint image is performed. Combined with the generalized fuzzy operator, regional contrast fuzzy enhancement of the image is realized, yielding multilevel fuzzy enhancement. Experimental results show that the improved algorithm reduces the fuzziness of the image and effectively improves its clarity, while keeping the running time short.
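The abstract does not give the exact operator, but the two building blocks it names, a linear membership transformation with its inverse and a secondary grayscale conversion, can be sketched as follows. The function names, the luminance weights, and the two-stage stretch are illustrative assumptions, not the paper's formulation:

```python
def to_membership(x, a, b):
    # Linear generalized membership: map the gray-level range [a, b] onto [0, 1].
    return (x - a) / (b - a)

def from_membership(u, a, b):
    # Inverse membership transformation: map [0, 1] back to gray levels [a, b].
    return a + u * (b - a)

def secondary_grayscale(rgb_pixels):
    # First conversion: standard luminance grayscale.
    # Second conversion: a linear membership stretch over the observed range,
    # a simple stand-in for the paper's "secondary color-to-grayscale conversion".
    gray = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in rgb_pixels]
    lo, hi = min(gray), max(gray)
    if hi == lo:
        return gray
    return [from_membership(to_membership(g, lo, hi), 0.0, 255.0) for g in gray]
```

With this sketch, a low dynamic-range input is stretched to the full [0, 255] range before any fuzzy operator is applied.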

    On Box-Cox Transformation for Image Normality and Pattern Classification

    A unique member of the power transformation family is the Box-Cox transformation: a mathematical operation that finds the optimum lambda (λ) value maximizing the log-likelihood function, so as to transform data towards a normal distribution and reduce heteroscedasticity. In data analytics, a normality assumption underlies a variety of statistical test models. The technique, however, is best known in statistical analysis for handling one-dimensional data. This paper revolves around the utility of such a tool as a pre-processing step for two-dimensional data, namely digital images, and studies its effect. Moreover, to reduce time complexity, it suffices to estimate the parameter lambda in real time for large two-dimensional matrices by merely considering their probability density function as a statistical inference of the underlying data distribution. We compare the effect of this lightweight Box-Cox transformation with well-established state-of-the-art low-light image enhancement techniques. We also demonstrate the effectiveness of our approach through several test-bed data sets, both for generic improvement of the visual appearance of images and for improving the performance of a colour pattern classification algorithm as an example application. Results with and without the proposed approach are compared using the AlexNet (transfer deep learning) pretrained model. To the best of our knowledge, this is the first time that the Box-Cox transformation has been extended to digital images by exploiting histogram transformation. Comment: the paper has 4 tables and 6 figures.

    Image Fuzzy Enhancement Based on Self-Adaptive Bee Colony Algorithm

    During image acquisition or transmission, an image may be damaged and distorted for various reasons; to satisfy people’s visual expectations, such degraded images must be processed to meet practical needs. Integrating the artificial bee colony algorithm with fuzzy sets, this paper introduces fuzzy entropy into the self-adaptive fuzzy enhancement of images so as to realize self-adaptive parameter selection. Meanwhile, based on the exponential properties of information increase, it proposes a new definition of fuzzy entropy and uses the artificial bee colony algorithm to realize self-adaptive contrast enhancement under the maximum-entropy criterion. Experimental results show that the proposed method can increase the dynamic-range compression of the image, enhance its visual effect and details, preserve colour fidelity to some extent, and effectively overcome the deficiencies of traditional image enhancement methods.
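The fuzzy-enhancement machinery such methods build on can be illustrated with the classical Pal–King scheme: map gray levels to fuzzy memberships, apply the INT intensification operator, then defuzzify. The parameter values below are illustrative, and the bee-colony search for optimal parameters is omitted:

```python
def membership(x, x_max=255, fd=128.0, fe=2.0):
    # Pal-King membership: map a gray level into [0, 1]; x = x_max gives u = 1.
    return (1.0 + (x_max - x) / fd) ** (-fe)

def intensify(u):
    # Classical INT operator: push memberships away from the 0.5 crossover.
    return 2.0 * u * u if u <= 0.5 else 1.0 - 2.0 * (1.0 - u) ** 2

def defuzzify(u, x_max=255, fd=128.0, fe=2.0):
    # Invert the membership function; levels driven below 0 are clipped,
    # as in the original Pal-King formulation.
    return max(0.0, x_max - fd * (u ** (-1.0 / fe) - 1.0))

def enhance(pixels, rounds=1):
    # Apply `rounds` passes of intensification to every pixel.
    out = []
    for x in pixels:
        u = membership(x)
        for _ in range(rounds):
            u = intensify(u)
        out.append(defuzzify(u))
    return out
```

Bright pixels (membership above 0.5) are pushed brighter and dark pixels darker, which is the contrast-stretching effect the entropy criterion then tunes.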

    Visibility recovery on images acquired in attenuating media. Application to underwater, fog, and mammographic imaging

    136 p. When acquired in attenuating media, digital images often suffer from a particularly complex degradation that reduces their visual quality, hindering their suitability for further computational applications, or simply decreasing the visual pleasantness for the user. In these cases, mathematical image processing reveals itself as an ideal tool to recover some of the information lost during the degradation process. In this dissertation, we deal with three such practical scenarios in which this problem is especially relevant, namely, underwater image enhancement, fog removal and mammographic image processing. In the case of digital mammograms, X-ray beams traverse human tissue, and electronic detectors capture them as they reach the other side. However, the superposition on a bidimensional image of three-dimensional structures produces low-contrast images in which structures of interest suffer from a diminished visibility, obstructing diagnosis tasks. Regarding fog removal, the loss of contrast is produced by the atmospheric conditions, and white colour takes over the scene uniformly as distance increases, also reducing visibility. For underwater images, there is an added difficulty, since colour is not lost uniformly; instead, red colours decay the fastest, and green and blue colours typically dominate the acquired images. To address all these challenges, in this dissertation we develop new methodologies that rely on: a) physical models of the observed degradation, and b) the calculus of variations. Equipped with this powerful machinery, we design novel theoretical and computational tools, including image-dependent functional energies that capture the particularities of each degradation model. These energies are composed of different integral terms that are simultaneously minimized by means of efficient numerical schemes, producing a clean, visually-pleasant and useful output image, with better contrast and increased visibility.
In every considered application, we provide comprehensive qualitative (visual) and quantitative experimental results to validate our methods, confirming that the developed techniques outperform other existing approaches in the literature.

    A novel image enhancement method for mammogram images

    Breast cancer has been reported by the American Cancer Society as the second leading cause of cancer death among women. It is also reported that early detection of breast cancer can improve the survival rate by allowing a wider range of treatment options. Mammography is believed to be an effective tool for helping radiologists detect malignant breast cancer at an early stage. Image enhancement techniques can improve the quality of mammogram images by enhancing the details of key features, such as the shape of microcalcifications. This thesis proposes a novel method to enhance mammogram images. The proposed method uses a three-level Laplacian Pyramid (LP) scheme that applies the Squeeze Box Filter (SBF) instead of conventional low-pass filtering. A previously proposed nonlinear local enhancement technique is applied to the difference images produced in the Laplacian Pyramid to contrast-enhance the structural details of mammogram images. The enhanced mammogram image is reconstructed by adding all the enhanced difference images to the original SBF-filtered image. Experimentation and quantitative results reported in this thesis provide empirical evidence of the robustness of the proposed image enhancement method on mammographic images.
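The decompose-enhance-reconstruct pipeline can be sketched on a 1-D signal. A simple 3-tap moving average stands in here for the Squeeze Box Filter (the real SBF is a nonlinear filter), and a scalar `gain` stands in for the thesis's nonlinear local enhancement of the difference images:

```python
def blur(signal):
    # Edge-replicating 3-tap low-pass filter (a stand-in for the SBF).
    n = len(signal)
    return [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def enhance_pyramid(signal, levels=3, gain=2.0):
    # Build `levels` difference (detail) bands, amplify them, and reconstruct.
    base, details = signal, []
    for _ in range(levels):
        low = blur(base)
        details.append([b - l for b, l in zip(base, low)])  # detail = base - low-pass
        base = low
    out = base
    for d in reversed(details):
        out = [o + gain * di for o, di in zip(out, d)]
    return out
```

With `gain=1.0` the reconstruction is exact (the detail bands telescope back to the original signal); a gain above 1 amplifies the structural detail, which is the effect sought on microcalcification edges.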

    A generalized gamma correction algorithm based on the SLIP model


    Image Enhancement for Scanned Historical Documents in the Presence of Multiple Degradations

    Historical documents are treasured sources of information but typically suffer from problems with quality and degradation. Scanned images of historical documents suffer from difficulties due to paper quality and poor image capture, producing images with low contrast, smeared ink, bleed-through and uneven illumination. This PhD thesis proposes a novel adaptive histogram matching method to remove these artefacts from scanned images of historical documents. The adaptive histogram matching creates an ideal histogram by dividing the histogram at its Otsu level and applying Gaussian distributions to each segment, with iterative output refinement applied to individual images. The pre-processing techniques of contrast stretching, Wiener filtering and bilateral filtering are used before the proposed adaptive histogram matching approach to maximise the dynamic range and reduce noise. The goal is to better represent document images, improving readability and the source images for Optical Character Recognition (OCR). Unlike other enhancement methods designed for single artefacts, the proposed method handles multiple artefacts (low contrast, smeared ink, bleed-through and uneven illumination). In addition to developing an algorithm for historical document enhancement, the research also contributes a new dataset of scanned historical newspapers (an annotated subset of the Europeana Newspapers (ENP) dataset) on which the enhancement technique is tested, and which can also be used for further research. Experimental results show that the proposed method significantly reduces background noise and improves image quality across multiple artefacts compared to other enhancement methods. Several performance criteria are utilised to evaluate the proposed method’s efficiency, including Signal-to-Noise Ratio (SNR), Mean Opinion Score (MOS) and the Visual Document Image Quality Assessment Metric (VDQAM).
    Additional assessment criteria to measure post-processing binarization quality are also discussed, with enhanced results based on the Peak Signal-to-Noise Ratio (PSNR), Negative Rate Metric (NRM) and F-measure. Keywords: Image Enhancement, Historical Documents, OCR, Digitisation, Adaptive Histogram Matching
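The core of the histogram-splitting step described above can be sketched as follows. The Otsu split is standard; the Gaussian target built from the two segment means is a simplification of the thesis's iteratively refined ideal histogram, and `sigma` is an illustrative assumption:

```python
import math

def otsu_threshold(hist):
    # Otsu's method: pick the gray level maximizing between-class variance.
    total = sum(hist)
    grand_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (grand_sum - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def ideal_histogram(hist, sigma=12.0):
    # Split at the Otsu level and model each side (background ink / paper)
    # with a Gaussian centred on that segment's mean gray level.
    t = otsu_threshold(hist)
    lo = sum(i * h for i, h in enumerate(hist[:t + 1])) / max(sum(hist[:t + 1]), 1)
    hi = sum(i * h for i, h in enumerate(hist[t + 1:], start=t + 1)) / max(sum(hist[t + 1:]), 1)
    return [math.exp(-(i - lo) ** 2 / (2 * sigma ** 2)) +
            math.exp(-(i - hi) ** 2 / (2 * sigma ** 2))
            for i in range(len(hist))]
```

Matching the document's histogram to this bimodal target concentrates pixels around clean "ink" and "paper" modes, which is what suppresses bleed-through and uneven illumination.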

    Visibility Recovery on Images Acquired in Attenuating Media. Application to Underwater, Fog, and Mammographic Imaging

    When acquired in attenuating media, digital images often suffer from a particularly complex degradation that reduces their visual quality, hindering their suitability for further computational applications, or simply decreasing the visual pleasantness for the user. In these cases, mathematical image processing reveals itself as an ideal tool to recover some of the information lost during the degradation process. In this dissertation, we deal with three such practical scenarios in which this problem is especially relevant, namely, underwater image enhancement, fog removal and mammographic image processing. In the case of digital mammograms, X-ray beams traverse human tissue, and electronic detectors capture them as they reach the other side. However, the superposition on a bidimensional image of three-dimensional structures produces low-contrast images in which structures of interest suffer from a diminished visibility, obstructing diagnosis tasks. Regarding fog removal, the loss of contrast is produced by the atmospheric conditions, and white colour takes over the scene uniformly as distance increases, also reducing visibility. For underwater images, there is an added difficulty, since colour is not lost uniformly; instead, red colours decay the fastest, and green and blue colours typically dominate the acquired images. To address all these challenges, in this dissertation we develop new methodologies that rely on: a) physical models of the observed degradation, and b) the calculus of variations. Equipped with this powerful machinery, we design novel theoretical and computational tools, including image-dependent functional energies that capture the particularities of each degradation model. These energies are composed of different integral terms that are simultaneously minimized by means of efficient numerical schemes, producing a clean, visually-pleasant and useful output image, with better contrast and increased visibility. 
    In every considered application, we provide comprehensive qualitative (visual) and quantitative experimental results to validate our methods, confirming that the developed techniques outperform other existing approaches in the literature.
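For the fog scenario, the physical degradation model referenced above is commonly the Koschmieder (atmospheric scattering) model, I = J·t + A·(1 − t) with transmission t = exp(−β·d). This per-pixel sketch assumes scalar intensities, a known airlight A and attenuation coefficient β, which the dissertation's variational machinery would instead estimate from the image:

```python
import math

def fog_degrade(radiance, depth, airlight=1.0, beta=1.0):
    # Forward Koschmieder model: observed = J * t + A * (1 - t), t = exp(-beta * d).
    t = math.exp(-beta * depth)
    return radiance * t + airlight * (1.0 - t)

def fog_recover(observed, depth, airlight=1.0, beta=1.0, t_min=0.05):
    # Direct inversion: J = (I - A) / t + A, with t floored to avoid
    # amplifying noise at large depths (a standard dehazing safeguard).
    t = max(math.exp(-beta * depth), t_min)
    return (observed - airlight) / t + airlight
```

At depth 0 the scene is untouched, while at large depth the observation converges to the airlight, matching the description of white colour taking over the scene with distance.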

    Textural Difference Enhancement based on Image Component Analysis

    In this thesis, we propose a novel image enhancement method to magnify the textural differences in images with respect to human visual characteristics. The method is intended as a preprocessing step to improve the performance of texture-based image segmentation algorithms. We propose novel measurements of the six Tamura texture features (coarseness, contrast, directionality, line-likeness, regularity and roughness). Each feature follows the original understanding of its texture characteristic, but is measured by local low-level features, e.g., the direction of local edges, the dynamic range of local pixel intensities, and the kurtosis and skewness of the local image histogram. A discriminant texture feature selection method based on principal component analysis (PCA) is then proposed to find the most representative characteristics for describing textural differences in the image. We decompose the image into pairwise components representing each texture characteristic strongly and weakly, respectively. A set of wavelet-based soft-thresholding methods is proposed as the dictionaries of morphological component analysis (MCA) to sparsely highlight each characteristic strongly and weakly in the image. The wavelet-based thresholding methods are proposed in pairs, so that each of the resulting pairwise components exhibits one characteristic either strongly or weakly. We propose various wavelet-based manipulation methods to enhance the components separately. For each component representing a certain texture characteristic, a non-linear function is proposed to manipulate the wavelet coefficients of the component, so that the component is enhanced with the corresponding characteristic accentuated independently while having little effect on other characteristics. Furthermore, the above three methods are combined into a uniform framework of image enhancement. 
    Firstly, the texture characteristics differentiating the textures in the image are found. Secondly, the image is decomposed into components exhibiting these texture characteristics respectively. Thirdly, each component is manipulated to accentuate the corresponding texture characteristic. After re-combining these manipulated components, the image is enhanced with the textural differences magnified with respect to the selected texture characteristics. The proposed textural difference enhancement method is applied prior to both grayscale and colour image segmentation algorithms. The convincing results in improving the performance of different segmentation algorithms demonstrate the potential of the proposed textural difference enhancement method.
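As one concrete instance of the local low-level features mentioned above, Tamura's contrast combines the standard deviation with the kurtosis of the (local) histogram. This sketch follows the standard definition F_con = σ / α₄^(1/4), with α₄ = μ₄ / σ⁴; the thesis's own measurement may differ in its local windowing:

```python
def tamura_contrast(pixels):
    # Tamura contrast of a pixel population: sigma / kurtosis^(1/4),
    # where kurtosis alpha4 = mu4 / sigma^4 (population moments).
    n = len(pixels)
    mean = sum(pixels) / n
    mu2 = sum((p - mean) ** 2 for p in pixels) / n   # variance
    mu4 = sum((p - mean) ** 4 for p in pixels) / n   # fourth central moment
    if mu2 == 0:
        return 0.0                                   # flat patch: no contrast
    alpha4 = mu4 / (mu2 * mu2)
    return (mu2 ** 0.5) / (alpha4 ** 0.25)
```

A bimodal black-and-white patch scores far higher than a near-uniform gray patch, which is exactly the discrimination the PCA-based feature selection operates on.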