    Efficiency of texture image enhancement by DCT-based filtering

    Textures and other highly detailed structures, as well as image object shapes, carry information that is widely exploited in pattern recognition and image classification. Noise can deteriorate these features and has to be removed. In this paper, we consider the influence of textural properties on the efficiency of image enhancement by noise suppression prior to subsequent processing. Among possible denoising variants, filters based on the discrete cosine transform (DCT), known to be effective in removing additive white Gaussian noise, are considered. It is shown that noise removal in texture images using the considered techniques can distort fine texture details. To detect such situations and to avoid texture degradation due to filtering, filtering efficiency predictors, including a neural network based predictor, applicable to a wide class of images are proposed. These predictors use simple statistical parameters to estimate the performance of the considered filters. Image enhancement is analysed in terms of both standard criteria and visual quality metrics for various scenarios of texture roughness and noise characteristics. The DCT-based filters are compared to several counterparts, and problems of noise removal in texture images are demonstrated for all of them. A special case of spatially correlated noise is considered as well. The potential efficiency of filtering is analysed for both studied noise models, and it is shown that the studied filters operate close to the potential limits.
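
    As a rough illustration of the kind of DCT-domain denoising discussed above (not the authors' exact implementation), the sketch below hard-thresholds the DCT coefficients of overlapping blocks and averages the overlapping reconstructions; the block size, step, and 2.6*sigma threshold are illustrative assumptions.

```python
# Minimal sketch of block-wise DCT hard-threshold denoising for AWGN.
# Block size, step, and the 2.6*sigma threshold are illustrative choices,
# not the parameters used in the paper.
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(noisy, sigma, block=8, step=4, beta=2.6):
    h, w = noisy.shape
    acc = np.zeros((h, w), dtype=np.float64)
    weight = np.zeros_like(acc)
    thr = beta * sigma
    for i in range(0, h - block + 1, step):
        for j in range(0, w - block + 1, step):
            patch = noisy[i:i + block, j:j + block].astype(np.float64)
            coeffs = dctn(patch, norm="ortho")
            dc = coeffs[0, 0]                     # keep the DC term
            coeffs[np.abs(coeffs) < thr] = 0.0    # hard-threshold AC terms
            coeffs[0, 0] = dc
            acc[i:i + block, j:j + block] += idctn(coeffs, norm="ortho")
            weight[i:i + block, j:j + block] += 1.0
    return np.where(weight > 0, acc / np.maximum(weight, 1.0), noisy)
```

    A filtering-efficiency predictor in the spirit of the paper would then regress the expected quality gain of such a filter from simple statistical parameters computed on the noisy image itself.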

    Enhancement Of Medical Image Compression Algorithm In Noisy WLANS Transmission

    Advances in telemedicine technology enable rapid medical diagnoses with visualization and quantitative assessment by medical practitioners. In healthcare and hospital networks, medical data exchange over wireless local area network (WLAN) transceivers remains challenging because of growing data sizes, real-time interaction with compressed images, and the range of bandwidths that transmission must support. Prior to transmission, medical data are compressed to minimize transmission bandwidth and save transmitting power. Researchers address many challenges in improving the performance of compression approaches, including poor energy compaction, high entropy values that drive the compression ratio (CR) down, and high computational complexity in real-time implementation. Thus, a new approach called Enhanced Independent Component Analysis (EICA) has been developed to improve medical image compression by transforming the image data with block-based Independent Component Analysis (ICA). The proposed method uses the Fast Independent Component Analysis (FastICA) algorithm followed by a quantization architecture based on a zero quantized coefficients percentage (ZQCP) prediction model built with an artificial neural network. For image reconstruction, decoding steps based on the developed quantization architecture are examined. EICA is particularly useful where the size of the transmitted data needs to be reduced to minimize the image transmission time. A comparative analysis is performed against existing data compression techniques: the discrete cosine transform (DCT), set partitioning in hierarchical trees (SPIHT), and JPEG 2000. Three main modules, namely the compression segment (CS), transceiver segment (TRS), and outcome segment (OTS), are developed to realize a fully computerized simulation tool for medical data compression with suitable and effective performance. The CS module compresses medical data using four approaches: DCT, SPIHT, JPEG 2000, and EICA. The TRS module models transmission over low-cost, low-bandwidth WLANs. Finally, the OTS module performs data decompression and visualization of the results. In terms of the compression module, results show the benefits of applying EICA to medical data compression and transmission, and the developed system displays favorable outcomes in compressing and transmitting medical data. In conclusion, all three modules (CS, TRS, and OTS) are integrated into a computerized prototype, the Medical Data Simulation System (Medata-SIM), which combines medical data compression and a transceiver with visualization to aid medical practitioners in carrying out rapid diagnoses.
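
    To make the block-based ICA coding idea concrete, here is a minimal sketch (not the EICA implementation itself) in which FastICA is learned on image blocks, the resulting coefficients are uniformly quantized, and the blocks are rebuilt from the quantized codes; the block size, number of components, and quantization step are assumptions, and the ZQCP prediction network and entropy coding stages are omitted.

```python
# Minimal sketch of block-based ICA coding in the spirit of EICA.
# The quantized codes `q` are what a real codec would entropy-code.
import numpy as np
from sklearn.decomposition import FastICA

def to_blocks(img, b=8):
    # Crop to a multiple of the block size and flatten each b*b block to a row.
    h, w = (img.shape[0] // b) * b, (img.shape[1] // b) * b
    return (img[:h, :w]
            .reshape(h // b, b, w // b, b)
            .swapaxes(1, 2)
            .reshape(-1, b * b))

def ica_codec(img, n_components=32, q_step=8.0, b=8):
    blocks = to_blocks(img.astype(np.float64), b)
    ica = FastICA(n_components=n_components, random_state=0)
    codes = ica.fit_transform(blocks)               # analysis (encoder side)
    q = np.round(codes / q_step)                    # uniform quantization
    rec_blocks = ica.inverse_transform(q * q_step)  # synthesis (decoder side)
    return q, rec_blocks
```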

    Pigment Melanin: Pattern for Iris Recognition

    Iris recognition based on Visible Light (VL) imaging is a difficult problem because of light reflections from the cornea. Nonetheless, pigment melanin provides a rich feature source in VL that is unavailable in Near-Infrared (NIR) imaging, owing to the biological spectroscopy of eumelanin, a chemical not stimulated in NIR. In this case, a plausible way to observe such patterns is an adaptive procedure that applies a variational technique to the image histogram. To describe the patterns, a shape analysis method is used to derive a feature code for each subject. An important question is how independent the melanin patterns extracted from VL are of the iris texture observed in NIR. With this question in mind, the present investigation proposes fusing features extracted from NIR and VL images to boost recognition performance. We have collected our own database (UTIRIS), consisting of both NIR and VL images of 158 eyes of 79 individuals. This investigation demonstrates that the proposed algorithm is highly sensitive to the chromophore patterns and improves the iris recognition rate. Comment: to be published in the Special Issue on Biometrics, IEEE Transactions on Instrumentation and Measurement, Volume 59, Issue 4, April 2010.
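
    As a simple illustration of the NIR/VL fusion idea (not the paper's exact feature code or fusion rule), the sketch below compares binary iris codes from each modality with the Hamming distance and combines the two scores with a weighted sum; the equal weights and the binary-code representation are assumptions.

```python
# Minimal sketch of score-level fusion of NIR and VL iris codes.
import numpy as np

def hamming(code_a, code_b):
    # Fraction of disagreeing bits between two binary codes of equal shape.
    return np.count_nonzero(code_a != code_b) / code_a.size

def fused_score(nir_probe, nir_gallery, vl_probe, vl_gallery, w_nir=0.5, w_vl=0.5):
    d_nir = hamming(nir_probe, nir_gallery)
    d_vl = hamming(vl_probe, vl_gallery)
    return w_nir * d_nir + w_vl * d_vl   # lower score = more likely the same eye
```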

    Sensor Pattern Noise Estimation Based on Improved Locally Adaptive DCT Filtering and Weighted Averaging for Source Camera Identification and Verification

    Photo Response Non-Uniformity (PRNU) noise is a sensor pattern noise characterizing the imaging device. It has been broadly used in the literature for source camera identification and image authentication. The abundant information that the sensor pattern noise carries in terms of the frequency content makes it unique, and hence suitable for identifying the source camera and detecting image forgeries. However, the PRNU extraction process is inevitably faced with the presence of image-dependent information as well as other non-unique noise components. To reduce such undesirable effects, researchers have developed a number of techniques in different stages of the process, i.e., the filtering stage, the estimation stage, and the post-estimation stage. In this paper, we present a new PRNU-based source camera identification and verification system and propose enhancements in different stages. First, an improved version of the Locally Adaptive Discrete Cosine Transform (LADCT) filter is proposed in the filtering stage. In the estimation stage, a new Weighted Averaging (WA) technique is presented. The post-estimation stage consists of concatenating the PRNUs estimated from color planes in order to exploit the presence of physical PRNU components in different channels. Experimental results on two image datasets acquired by various camera devices have shown a significant gain obtained with the proposed enhancements in each stage, as well as the superiority of the overall system over related state-of-the-art systems.
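
    For context, a minimal sketch of PRNU estimation from several images of the same camera is given below; a wavelet denoiser stands in for the improved LADCT filter, and the classical maximum-likelihood weighting is used instead of the authors' specific Weighted Averaging scheme.

```python
# Minimal sketch of PRNU estimation from N grayscale images of one camera.
import numpy as np
from skimage.restoration import denoise_wavelet

def estimate_prnu(images):
    # images: list of float arrays in [0, 1], all with the same shape.
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img in images:
        residual = img - denoise_wavelet(img)   # noise residual W = I - F(I)
        num += residual * img                   # classical ML-style weighting
        den += img * img
    return num / np.maximum(den, 1e-8)          # estimated PRNU factor K
```

    Camera identification then correlates the noise residual of a query image against K multiplied by that query image, typically via normalized cross-correlation or peak-to-correlation energy.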

    Techniques for enhancing digital images

    Images obtained from research studies or optical instruments are often corrupted with noise. Image denoising involves the manipulation of image data to produce a visually high quality image. This thesis reviews the existing denoising algorithms and the filtering approaches available for enhancing images and/or data transmission. Spatial-domain and transform-domain digital image filtering algorithms have been used in the past to suppress different noise models, which can be either additive or multiplicative. Selection of the denoising algorithm is application dependent: it is necessary to know which kind of noise is present in the image in order to select the appropriate denoising algorithm. Noise models include Gaussian noise, salt-and-pepper noise, speckle noise, and Brownian noise. The Wavelet Transform is similar to the Fourier Transform but with a completely different merit function: Wavelets are localized in both time and frequency, whereas the basis functions of the standard Fourier Transform are localized only in frequency. Wavelet analysis consists of breaking up the signal into shifted and scaled versions of the original (or mother) Wavelet. The Wiener filter, which minimizes the mean squared estimation error, can be implemented as an LMS filter (least mean squares), an RLS filter (recursive least squares), or a Kalman filter. Quantitative comparison of the denoising algorithms is provided by calculating the Peak Signal to Noise Ratio (PSNR), the Mean Square Error (MSE), and the Mean Absolute Error (MAE) evaluation factors; a combination of these metrics is often required to clearly assess model performance.
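
    The three evaluation factors mentioned above are straightforward to compute; the following sketch assumes 8-bit images with a peak value of 255.

```python
# MSE, MAE and PSNR between a reference image and a denoised test image.
import numpy as np

def mse(ref, test):
    return np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)

def mae(ref, test):
    return np.mean(np.abs(ref.astype(np.float64) - test.astype(np.float64)))

def psnr(ref, test, peak=255.0):
    err = mse(ref, test)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)
```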

    A total variation regularization based super-resolution reconstruction algorithm for digital video

    The super-resolution (SR) reconstruction technique is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, the total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.
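
    A heavily simplified, single-image sketch of TV-regularized restoration is shown below: it minimizes 0.5*||x - y||^2 + lambda*TV(x) by gradient descent with a smoothed TV term. The actual method additionally models motion, blur, and decimation across frames and solves the Euler-Lagrange equations by fixed-point iteration with preconditioning; the parameter values here are assumptions.

```python
# Gradient descent on a smoothed TV-regularized energy for a single image.
import numpy as np

def tv_restore(y, lam=0.1, eps=1e-3, step=0.2, n_iter=200):
    x = y.astype(np.float64).copy()
    for _ in range(n_iter):
        gx = np.roll(x, -1, axis=1) - x              # forward differences
        gy = np.roll(x, -1, axis=0) - x
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)  # smoothed gradient magnitude
        px, py = gx / mag, gy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x -= step * ((x - y) - lam * div)            # gradient of the energy
    return x
```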

    MDLatLRR: A novel decomposition method for infrared and visible image fusion

    Image decomposition is crucial for many image processing tasks, as it allows salient features to be extracted from source images. A good image decomposition method can lead to better performance, especially in image fusion tasks. We propose a multi-level image decomposition method based on latent low-rank representation (LatLRR), which is called MDLatLRR. This decomposition method is applicable to many image processing fields; in this paper, we focus on the image fusion task. We develop a novel image fusion framework based on MDLatLRR, which is used to decompose source images into detail parts (salient features) and base parts. A nuclear-norm based fusion strategy is used to fuse the detail parts, and the base parts are fused by an averaging strategy. Compared with other state-of-the-art fusion methods, the proposed algorithm exhibits better fusion performance in both subjective and objective evaluation. Comment: IEEE Trans. Image Processing 2020, 14 pages, 17 figures, 3 tables.
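
    The two fusion rules described above can be sketched as follows: base parts are averaged, and detail parts are blended patch-by-patch with weights derived from nuclear norms. The patch size and the simple ratio weighting are illustrative assumptions, and the LatLRR decomposition itself is not shown.

```python
# Minimal sketch of averaging the base parts and nuclear-norm weighted
# blending of the detail parts of two source images.
import numpy as np

def nuclear_norm(patch):
    return np.linalg.svd(patch, compute_uv=False).sum()

def fuse(base_a, base_b, detail_a, detail_b, patch=16):
    fused_base = 0.5 * (base_a + base_b)             # averaging strategy
    fused_detail = np.zeros_like(detail_a, dtype=np.float64)
    h, w = detail_a.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            pa = detail_a[i:i + patch, j:j + patch]
            pb = detail_b[i:i + patch, j:j + patch]
            na, nb = nuclear_norm(pa), nuclear_norm(pb)
            wa = na / (na + nb + 1e-12)              # nuclear-norm based weight
            fused_detail[i:i + patch, j:j + patch] = wa * pa + (1 - wa) * pb
    return fused_base + fused_detail
```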