
    Image watermarking based on the space/spatial-frequency analysis and Hermite functions expansion

    An image watermarking scheme that combines Hermite functions expansion and space/spatial-frequency analysis is proposed. In the first step, the Hermite functions expansion is employed to select busy regions for watermark embedding. In the second step, the space/spatial-frequency representation and the Hermite functions expansion are combined to design an imperceptible watermark using the host's local frequency content. The Hermite expansion is carried out using the fast Hermite projection method; a recursive realization of the Hermite functions significantly speeds up the algorithms for region selection and watermark design. Watermark detection is performed within the space/spatial-frequency domain, where the high information redundancy, compared with the space or frequency domain alone, increases detection performance. The performance of the proposed procedure has been tested experimentally for different watermark strengths, i.e., for different values of the peak signal-to-noise ratio (PSNR). The proposed approach provides high detection performance even for high PSNR values and offers a good compromise between detection performance (including robustness to a wide variety of common attacks) and imperceptibility.
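The recursive realization of the Hermite functions that the abstract credits for the speed-up can be sketched as follows; the function names are illustrative, and the plain Riemann-sum projection stands in for the paper's fast Hermite projection method (which uses quadrature). The three-term recurrence avoids factorials entirely:

```python
import math

def hermite_functions(order, x):
    """Evaluate the orthonormal Hermite functions h_0..h_order at x via
    the three-term recurrence h_{n+1} = x*sqrt(2/(n+1))*h_n - sqrt(n/(n+1))*h_{n-1}."""
    h = [0.0] * (order + 1)
    h[0] = math.pi ** -0.25 * math.exp(-x * x / 2.0)
    if order >= 1:
        h[1] = math.sqrt(2.0) * x * h[0]
    for n in range(1, order):
        h[n + 1] = (x * math.sqrt(2.0 / (n + 1)) * h[n]
                    - math.sqrt(n / (n + 1)) * h[n - 1])
    return h

def hermite_coefficients(signal, xs, order):
    """Project a sampled signal onto the Hermite basis by a simple
    Riemann sum (an illustrative stand-in for fast Hermite projection)."""
    dx = xs[1] - xs[0]
    coeffs = [0.0] * (order + 1)
    for s, x in zip(signal, xs):
        h = hermite_functions(order, x)
        for n in range(order + 1):
            coeffs[n] += s * h[n] * dx
    return coeffs
```

Because the recurrence costs O(1) per order per sample, an expansion to order N over M samples is O(N·M), which is what makes region selection over many image blocks practical.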

    Wavelet-based Watermarking Approach in the Compressive Sensing Scenario

    Due to the wide distribution and usage of digital media, protection of digital content is an important issue, and a number of algorithms and techniques have been developed for digital watermarking. In this paper, an invisible image watermarking procedure is considered. The watermark is created as a pseudo-random sequence and embedded in a certain region of the image, obtained using the Haar wavelet decomposition. Generally, a watermarking procedure should be robust to various attacks (filtering, noise, etc.). Here we consider the Compressive Sensing scenario, a new signal acquisition technique that may influence the robustness. The focus of this paper is the possibility of watermark detection under a Compressive Sensing attack with different numbers of available image coefficients. The quality of the reconstructed images has been evaluated using the Peak Signal-to-Noise Ratio (PSNR). The theory is supported with experimental results.
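The embed-and-detect pipeline described above can be sketched as follows, assuming a one-level Haar decomposition, additive embedding of a keyed pseudo-random ±1 sequence in the diagonal detail subband, and a correlation detector; the names and the strength parameter `alpha` are illustrative, not the paper's exact procedure:

```python
import numpy as np

def haar2_forward(img):
    """One-level 2D Haar decomposition into (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0,
            (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0)

def haar2_inverse(ll, lh, hl, hh):
    """Exact inverse of haar2_forward."""
    rows, cols = ll.shape
    a = np.empty((rows, 2 * cols)); d = np.empty((rows, 2 * cols))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    img = np.empty((2 * rows, 2 * cols))
    img[0::2, :] = a + d; img[1::2, :] = a - d
    return img

def embed(img, key, alpha=5.0):
    """Add a keyed pseudo-random +/-1 sequence to the HH subband."""
    ll, lh, hl, hh = haar2_forward(img.astype(float))
    w = np.random.default_rng(key).choice([-1.0, 1.0], size=hh.shape)
    return haar2_inverse(ll, lh, hl, hh + alpha * w)

def detect(img_w, key):
    """Correlation detector: responds strongly only to the correct key."""
    hh = haar2_forward(img_w)[3]
    w = np.random.default_rng(key).choice([-1.0, 1.0], size=hh.shape)
    return float(np.mean(hh * w))

def psnr(orig, marked):
    mse = np.mean((orig.astype(float) - marked) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

Raising `alpha` increases the detector response but lowers the PSNR, which is exactly the imperceptibility/robustness trade-off the abstract evaluates.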

    The Applications of Discrete Wavelet Transform in Image Processing: A Review

    This paper reviews recently published work on applying wavelets to image processing based on multiresolution analysis. The wavelet transform is reviewed in detail, including the wavelet function, the integral wavelet transform, the discrete wavelet transform, the fast wavelet transform, and the properties and advantages of the DWT. After reviewing the basics of wavelet transform theory, various applications of wavelets and multiresolution analysis are surveyed, including image compression, image denoising, image enhancement, and image watermarking. In addition, we present the concept and theory of quaternion wavelets for the future progress of wavelet transform and multiresolution applications. The aim of this paper is to provide a wide-ranging review of the surveys available on wavelet-based image processing approaches, which should help scholars implement effective image processing applications.
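The fast (Mallat) wavelet transform that such reviews cover can be illustrated with a short sketch: filter the signal with an analysis filter pair, downsample by two, and recurse on the approximation band. Haar filters are used for brevity, and the function names are ours:

```python
# Orthonormal Haar analysis filters; any orthogonal wavelet's
# low-pass/high-pass pair slots into the same pyramid scheme.
_LO = (0.7071067811865476, 0.7071067811865476)
_HI = (0.7071067811865476, -0.7071067811865476)

def dwt_level(signal):
    """One level of the fast wavelet transform: filter, then keep
    every second output (downsampling by 2)."""
    approx = [_LO[0] * signal[i] + _LO[1] * signal[i + 1]
              for i in range(0, len(signal) - 1, 2)]
    detail = [_HI[0] * signal[i] + _HI[1] * signal[i + 1]
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def dwt(signal, levels):
    """Multi-level DWT: recursively decompose the approximation band,
    collecting detail bands from finest to coarsest."""
    coeffs = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = dwt_level(approx)
        coeffs.append(detail)
    coeffs.append(approx)
    return coeffs
```

Because the transform is orthonormal, the total energy of the coefficients equals that of the input, one of the DWT properties such reviews enumerate.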

    Probabilistic modeling of wavelet coefficients for processing of image and video signals

    Statistical estimation and detection techniques are widely used in signal processing, including wavelet-based image and video processing. The probability density function (PDF) of the wavelet coefficients of image and video signals plays a key role in the development of techniques for such processing. Due to their fixed number of parameters, the conventional PDFs used for estimators and detectors usually ignore higher-order moments. Consequently, estimators and detectors designed using such PDFs do not provide satisfactory performance. This thesis is concerned with first developing a probabilistic model that is capable of incorporating an appropriate number of parameters that depend on higher-order moments of the wavelet coefficients. This model is then used as the prior in estimation and detection techniques for denoising and watermarking of image and video signals. Towards developing the probabilistic model, the Gauss-Hermite series expansion is chosen, since the wavelet coefficients have non-compact support and their empirical density function resembles the standard Gaussian function. A modification is introduced in the series expansion so that only a finite number of terms can be used for modeling the wavelet coefficients without rendering the resulting PDF negative. The parameters of the resulting PDF, called the modified Gauss-Hermite (MGH) PDF, are evaluated in terms of the higher-order sample moments. It is shown that the MGH PDF fits the empirical density function better than the existing PDFs that use a limited number of parameters. The proposed MGH PDF is used as the prior of image and video signals in designing maximum a posteriori and minimum mean squared error-based estimators for denoising of image and video signals, and a log-likelihood ratio-based detector for watermarking of image signals. The performance of the estimation and detection techniques is then evaluated in terms of commonly used metrics.
It is shown through extensive experiments that the estimation and detection techniques developed using the proposed MGH PDF perform substantially better than those that use the conventional PDFs. These results confirm that the superior fit of the MGH PDF to the empirical density function, which results from its flexibility in choosing the number of parameters (functions of higher-order moments of the data), leads to the better performance. Thus, the proposed MGH PDF should play a significant role in wavelet-based image and video signal processing.
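The kind of moment-parameterized Gauss-Hermite series the thesis starts from can be sketched with the classical truncated (Gram-Charlier A) form; note this plain series can go negative, which is precisely what the thesis's MGH modification prevents, and all names here are illustrative:

```python
import math

def gram_charlier_pdf(x, skew, exkurt):
    """Truncated Gauss-Hermite (Gram-Charlier A) series around the
    standard Gaussian, with coefficients tied to 3rd/4th-order moments."""
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    he3 = x ** 3 - 3.0 * x                 # probabilists' Hermite He_3
    he4 = x ** 4 - 6.0 * x ** 2 + 3.0      # probabilists' Hermite He_4
    return phi * (1.0 + skew / 6.0 * he3 + exkurt / 24.0 * he4)

def sample_moments(data):
    """Standardized sample skewness and excess kurtosis, the
    higher-order moments that drive the series coefficients."""
    n = len(data)
    mean = sum(data) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in data) / n)
    skew = sum(((v - mean) / std) ** 3 for v in data) / n
    exkurt = sum(((v - mean) / std) ** 4 for v in data) / n - 3.0
    return skew, exkurt
```

Because each Hermite term integrates to zero against the Gaussian weight, adding higher-order terms reshapes the density without disturbing its unit mass.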

    On Some Common Compressive Sensing Recovery Algorithms and Applications

    Compressive Sensing, an emerging technique in signal processing, is reviewed in this paper together with its common applications. As an alternative to traditional signal sampling, Compressive Sensing allows a new acquisition strategy with a significantly reduced number of samples needed for accurate signal reconstruction. The basic ideas and motivation behind this approach are provided in the theoretical part of the paper, and the commonly used algorithms for missing-data reconstruction are presented. Compressive Sensing applications have gained significant attention, leading to an intensive growth of signal processing possibilities. Hence, some of the existing practical applications, covering different types of signals in real-world scenarios, are described and analyzed as well.
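One of the commonly used greedy recovery algorithms such surveys cover, Orthogonal Matching Pursuit, can be sketched in a few lines; this is a textbook version with illustrative names, not code from the paper:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: repeatedly pick the dictionary column
    most correlated with the current residual, then re-fit all selected
    columns by least squares and update the residual."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef           # orthogonal update
    x[support] = coef
    return x
```

The least-squares re-fit at each step is what distinguishes OMP from plain matching pursuit: the residual stays orthogonal to every column already selected, so no atom's contribution is double-counted.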

    Clifford wavelets for fetal ECG extraction

    Analysis of the fetal heart rate during pregnancy is essential for monitoring the proper development of the fetus. Current fetal heart monitoring techniques lack accuracy in fetal heart rate monitoring and feature acquisition, resulting in diagnostic issues. The challenge lies in extracting the fetal ECG from the mother's ECG during pregnancy, an approach that has the advantage of being reliable and non-invasive. To this end, we propose in this paper a wavelet/multiwavelet method that accurately extracts the fetal ECG parameters from the abdominal mother ECG. The method rests on the exploitation of Clifford wavelets, a recent variant in the field, and we show that these wavelets are more efficient and better performing than classical ones. The experimental results cover two basic classes of wavelets and multiwavelets: a first, classical class based on the Haar and Schauder wavelets, and a second based on Clifford-valued wavelets and multiwavelets. These results show that wavelets/multiwavelets are already good bases for FECG processing, with the Clifford ones performing best.
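Clifford-valued wavelets do not fit a short sketch, but the extraction template they plug into (decompose, keep or shrink selected coefficients, reconstruct) can be illustrated with the real Haar wavelet the paper uses as its classical baseline; the names and thresholding rule here are illustrative:

```python
def haar_forward(s):
    """One-level orthonormal Haar analysis of an even-length signal."""
    r = 2.0 ** -0.5
    approx = [r * (s[i] + s[i + 1]) for i in range(0, len(s), 2)]
    detail = [r * (s[i] - s[i + 1]) for i in range(0, len(s), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_forward."""
    r = 2.0 ** -0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend((r * (a + d), r * (a - d)))
    return out

def soft_threshold(coeffs, t):
    """Shrink small coefficients toward zero; in an extraction pipeline
    the retained coefficients form one component and the removed part
    is attributed to the other source."""
    return [max(abs(c) - t, 0.0) * (1 if c > 0 else -1) for c in coeffs]
```

A multiwavelet or Clifford-valued variant changes the analysis/synthesis operators, not this overall decompose-select-reconstruct structure.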

    Signal processing algorithms for enhanced image fusion performance and assessment

    The dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method which performs well for image sets heavily corrupted by noise. As opposed to current image fusion schemes, the method requires no a priori knowledge of the noise component. The image is decomposed with Chebyshev polynomials (CP) used as basis functions to perform fusion at the feature level. The properties of CP, namely fast convergence and smooth approximation, render it ideal for heuristic and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared to previous efforts on image fusion, notably for heavily corrupted images. The approach is further improved by combining the advantages of CP with a state-of-the-art fusion technique, independent component analysis (ICA), for joint fusion processing based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to eliminating high-frequency information of the images involved, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution. Again, according to the quantitative metrics, the results of our proposed approach are very encouraging as far as joint fusion and denoising are concerned. Another focus of this dissertation is a novel metric for image fusion evaluation that is based on texture. The conservation of background textural details is considered important in many fusion applications, as they help define the image depth and structure, which may prove crucial in many surveillance and remote sensing applications.
Our work aims to evaluate the performance of image fusion algorithms based on their ability to retain textural details through the fusion process. This is done by utilising the gray-level co-occurrence matrix (GLCM) model to extract second-order statistical features for the derivation of an image textural measure, which is then used to replace the edge-based calculations in an objective fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially for multimodal scenarios.
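The GLCM-based second-order features underlying such a texture measure can be sketched as follows, assuming a small quantized gray-level image and a single displacement vector; the feature names follow the common Haralick definitions and the code is illustrative:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one displacement (dx, dy),
    normalized to a joint probability table over gray-level pairs."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Second-order (Haralick-style) statistics from a normalized GLCM."""
    i, j = np.indices(p.shape)
    return {
        "contrast": float(np.sum(p * (i - j) ** 2)),       # local variation
        "homogeneity": float(np.sum(p / (1.0 + (i - j) ** 2))),
        "energy": float(np.sum(p ** 2)),                   # uniformity
    }
```

A texture-based fusion metric then compares such features between each input image and the fused output, rewarding methods that carry the co-occurrence structure through to the result.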