
    Robust watermarking for magnetic resonance images with automatic region of interest detection

    Medical image watermarking requires special considerations compared to ordinary watermarking methods. The first issue is the detection of an important area of the image, called the Region of Interest (ROI), prior to starting the watermarking process. Most existing ROI detection procedures are manual, while in automated methods robustness against intentional or unintentional attacks has not been considered extensively. The second issue is the robustness of the embedded watermark against different attacks. A common drawback of existing watermarking methods is their weakness against salt and pepper noise. The research carried out in this thesis addresses these issues by developing an automatic ROI detection scheme for magnetic resonance images that is robust against attacks, particularly salt and pepper noise, and by designing a new watermarking method that can withstand high-density salt and pepper noise. In the ROI detection part, a combination of several algorithms, such as morphological reconstruction, adaptive thresholding and labelling, is utilized. A noise-filtering algorithm and a window size correction block are then introduced for further enhancement. The performance of the proposed ROI detection is evaluated by computing the Comparative Accuracy (CA). In the watermarking part, a combination of a spatial-domain method, channel coding and noise-filtering schemes is used to increase the robustness against salt and pepper noise. The quality of the watermarked image is evaluated using the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM), and the accuracy of the extracted watermark is assessed in terms of the Bit Error Rate (BER). Based on experiments, the CA under eight different attacks (speckle noise, average filter, median filter, Wiener filter, Gaussian filter, sharpening filter, motion blur, and salt and pepper noise) is between 97.8% and 100%. The CA under different densities of salt and pepper noise (10%-90%) is in the range of 75.13% to 98.99%. In the watermarking part, the performance of the proposed method under different densities of salt and pepper noise, measured by total PSNR, ROI PSNR, total SSIM and ROI SSIM, has improved from 3.48-23.03 dB, 3.5-23.05 dB, 0-0.4620 and 0-0.5335 to 21.75-42.08 dB, 20.55-40.83 dB, 0.5775-0.8874 and 0.4104-0.9742, respectively. In addition, the BER is reduced to the range of 0.02% to 41.7%. To conclude, the proposed method has managed to significantly improve the performance of existing medical image watermarking methods.
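    A minimal sketch of the quality metrics the abstract reports, PSNR and BER, under the assumption of 8-bit grayscale images held as numpy arrays; the function names are illustrative and not taken from the thesis:

```python
import numpy as np

def psnr(original: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two same-sized images."""
    mse = np.mean((original.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def ber(embedded_bits: np.ndarray, extracted_bits: np.ndarray) -> float:
    """Bit Error Rate: fraction of watermark bits recovered incorrectly."""
    return float(np.mean(embedded_bits != extracted_bits))
```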

    BEMDEC: An Adaptive and Robust Methodology for Digital Image Feature Extraction

    The intriguing study of feature extraction, and edge detection in particular, has, as a result of the increased use of imagery, drawn attention not just from computer science but from a variety of scientific fields. However, challenges persist in formulating a feature extraction operator, particularly for edges, that satisfies the necessary properties of a low probability of error (i.e., failing to mark true edges), accuracy, and a consistent response to a single edge. Moreover, most of the work in feature extraction has focused on improving existing approaches rather than devising or adopting new ones. In the image processing subfield, where needs constantly change, we must equally change the way we think. In a digital world where the use of images, for a variety of purposes, continues to increase, researchers who are serious about addressing the aforementioned limitations must think outside the box and step away from the usual in order to overcome these challenges. In this dissertation, we propose an adaptive and robust, yet simple, methodology for digital image feature detection using bidimensional empirical mode decomposition (BEMD), a sifting process that decomposes a two-dimensional (2D) signal into its bidimensional intrinsic mode functions (BIMFs). The method is further extended to detect corners and curves, and is therefore dubbed BEMDEC, indicating its ability to detect edges, corners and curves. In addition to the application of BEMD, a unique combination of a flexible envelope estimation algorithm, stopping criteria and boundary adjustment made the realization of this multi-feature detector possible. Further application of two morphological operators, binarization and thinning, adds to the quality of the detector.
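    A rough sketch of the BEMD sifting idea under strong simplifications: true BEMD interpolates envelopes through 2D extrema (e.g., with radial basis or thin-plate splines) and uses data-driven stopping criteria; here order-statistics filters plus smoothing stand in as a cheap envelope estimate, and a fixed sift count stands in for the stopping rule. Window sizes and counts are illustrative:

```python
import numpy as np
from scipy import ndimage

def sift_once(img: np.ndarray, win: int = 9) -> np.ndarray:
    """One sifting step: subtract the local mean-envelope from the input."""
    upper = ndimage.uniform_filter(ndimage.maximum_filter(img, size=win), size=win)
    lower = ndimage.uniform_filter(ndimage.minimum_filter(img, size=win), size=win)
    return img - 0.5 * (upper + lower)

def bemd(img: np.ndarray, n_bimfs: int = 3, n_sift: int = 5) -> list:
    """Decompose an image into BIMFs plus a residue (returned last)."""
    residue, bimfs = img.astype(np.float64), []
    for _ in range(n_bimfs):
        h = residue
        for _ in range(n_sift):      # fixed-count stand-in for a stopping criterion
            h = sift_once(h)
        bimfs.append(h)
        residue = residue - h        # peel off the extracted mode
    return bimfs + [residue]
```

    The first BIMFs carry the highest-frequency oscillations, which is why thresholding them can expose edge-like structure.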

    Pattern identification of biomedical images with time series: contrasting THz pulse imaging with DCE-MRIs

    Objective: We survey recent advances in biomedical image analysis and classification from emergent imaging modalities such as terahertz (THz) pulse imaging (TPI) and dynamic contrast-enhanced magnetic resonance images (DCE-MRIs), and identify their underlying commonalities. Methods: Both time- and frequency-domain signal pre-processing techniques are considered: noise removal, spectral analysis, principal component analysis (PCA) and wavelet transforms. Feature extraction and classification methods based on feature vectors built with the above processing techniques are reviewed. A tensorial signal processing de-noising framework suitable for capturing spatiotemporal associations between features in MRI is also discussed. Validation: Examples where the proposed methodologies have been successful in classifying TPIs and DCE-MRIs are discussed. Results: Identifying commonalities in the structure of such heterogeneous datasets potentially leads to a unified multi-channel signal processing framework for biomedical image analysis. Conclusion: The proposed complex-valued classification methodology enables fusion of entire datasets from a sequence of spatial images taken at different time stamps; this is of interest from the viewpoint of inferring disease proliferation. The approach is also of interest for other emergent multi-channel biomedical imaging modalities and of relevance across the biomedical signal processing community.
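    A hedged sketch of one pre-processing step the survey covers, PCA applied to a time series of co-registered images: each pixel's temporal profile becomes a sample vector, and the leading component scores can feed a classifier. The array layout and component count are assumptions for illustration:

```python
import numpy as np

def pca_time_series(stack: np.ndarray, n_components: int = 3) -> np.ndarray:
    """stack: (T, H, W) array of T frames; returns per-pixel scores (H, W, k)."""
    T, H, W = stack.shape
    X = stack.reshape(T, -1).T                 # one row per pixel, one column per time stamp
    X = X - X.mean(axis=0, keepdims=True)      # centre each time variable
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T           # project profiles onto leading components
    return scores.reshape(H, W, n_components)
```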

    Novel two dimensional singular spectrum analysis for effective feature extraction and data classification in hyperspectral imaging

    Feature extraction is of high importance for effective data classification in hyperspectral imaging (HSI). Given the high correlation among band images, spectral-domain feature extraction is widely employed. For effective spatial information extraction, a 2-D extension to singular spectrum analysis (SSA), a recent technique for generic data mining and temporal signal analysis, is proposed. With 2D-SSA applied to HSI, each band image is decomposed into a varying trend, oscillations and noise. Using the trend and selected oscillations as features, the reconstructed signal, with noise highly suppressed, becomes more robust and effective for data classification. Three publicly available data sets for HSI remote sensing data classification are used in our experiments. Comprehensive results using a support vector machine (SVM) classifier quantitatively evaluate the efficacy of the proposed approach. Benchmarked against several state-of-the-art methods, including 2-D empirical mode decomposition (2D-EMD), the proposed 2D-SSA approach generates the best results in most cases. Unlike 2D-EMD, which requires sequential transforms to obtain a detailed decomposition, 2D-SSA extracts all components simultaneously, so the execution time of feature extraction is also dramatically reduced. The superior discrimination ability of 2D-SSA is further validated when a relatively weak classifier, k-nearest neighbour (k-NN), is used for data classification. In addition, the combination of 2D-SSA with 1D-PCA (2D-SSA-PCA) generates the best results among several other approaches, demonstrating the great potential of combining 2D-SSA with other methods for effective spatial-spectral feature extraction and dimension reduction in HSI.
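    A simplified sketch of the 2D-SSA pipeline on a single band image, under stated assumptions: an LxL window slides over the image to build the trajectory matrix, an SVD separates components, rank truncation stands in for the trend-plus-oscillations grouping, and patch-averaging approximates the formal Hankelisation step. Window size and component count are illustrative:

```python
import numpy as np

def ssa2d(band: np.ndarray, L: int = 5, keep: int = 2) -> np.ndarray:
    """Reconstruct one HSI band from its `keep` leading SSA components."""
    H, W = band.shape
    rows = [band[i:i + L, j:j + L].ravel()          # vectorised LxL patches
            for i in range(H - L + 1) for j in range(W - L + 1)]
    X = np.asarray(rows, dtype=np.float64).T        # trajectory matrix (L*L, K)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :keep] * S[:keep]) @ Vt[:keep]       # low-rank grouping
    out = np.zeros((H, W))
    cnt = np.zeros((H, W))
    k = 0
    for i in range(H - L + 1):                      # average patches back into place
        for j in range(W - L + 1):
            out[i:i + L, j:j + L] += Xr[:, k].reshape(L, L)
            cnt[i:i + L, j:j + L] += 1.0
            k += 1
    return out / cnt
```

    All components come from one SVD, which is the source of the speed advantage over the sequential sifting of 2D-EMD noted above.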

    Sparse representations of signals for information recovery from incomplete data

    Mathematical modeling of inverse problems in imaging, such as inpainting, deblurring and denoising, results in ill-posed, i.e. underdetermined, linear systems. A sparseness constraint is often used to regularize these problems. That is because many classes of discrete signals (e.g. natural images), when expressed as vectors in a high-dimensional space, are sparse in some predefined basis or frame (fixed or learned). An efficient approach to basis/frame learning is formulated using independent component analysis (ICA) and a biologically inspired linear model of sparse coding. In the learned basis, the inverse problem of data recovery and removal of impulsive noise is reduced to solving a sparseness-constrained underdetermined linear system of equations. The same situation occurs in bioinformatics data analysis when a novel type of linear mixture model with a reference sample is employed for feature extraction. The extracted features can be used for disease prediction and biomarker identification.
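    One standard solver for the sparseness-constrained underdetermined systems described above is iterative soft-thresholding (ISTA); this is a generic sketch, not necessarily the thesis's algorithm. Here A would be the learned basis/frame restricted to the observed samples, y the incomplete data, and lam an assumed regularization weight:

```python
import numpy as np

def ista(A: np.ndarray, y: np.ndarray, lam: float = 0.1, n_iter: int = 200) -> np.ndarray:
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)            # gradient step on the data-fit term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x
```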

    Contourlet Domain Image Modeling and its Applications in Watermarking and Denoising

    Statistical image modeling in the sparse domain has recently attracted a great deal of research interest. The contourlet transform, a two-dimensional transform with multiscale and multidirectional properties, is known to effectively capture the smooth contours and geometrical structures in images. The objective of this thesis is to study the statistical properties of the contourlet coefficients of images and develop statistically-based image denoising and watermarking schemes. Through an experimental investigation, it is first established that the distributions of the contourlet subband coefficients of natural images are significantly non-Gaussian with heavy tails, and that they are best described by heavy-tailed statistical distributions such as the alpha-stable family. It is shown that the univariate members of this family are capable of accurately fitting the marginal distributions of the empirical data and that the bivariate members can accurately characterize the inter-scale dependencies of the contourlet coefficients of an image. Based on the modeling results, a new method for image denoising in the contourlet domain is proposed. Bayesian maximum a posteriori and minimum mean absolute error estimators are developed to determine the noise-free contourlet coefficients of grayscale and color images. Extensive experiments are conducted using a wide variety of images from a number of databases to evaluate the performance of the proposed image denoising scheme and to compare it with that of other existing schemes. It is shown that the proposed denoising scheme based on the alpha-stable distributions outperforms these other methods in terms of the peak signal-to-noise ratio and mean structural similarity index, as well as the visual quality of the denoised images. The alpha-stable model is also used in developing new multiplicative watermarking schemes for grayscale and color images. Closed-form expressions are derived for the log-likelihood-based multiplicative watermark detection algorithm for grayscale images using the univariate and bivariate Cauchy members of the alpha-stable family. A multiplicative multichannel watermark detector is also designed for color images using the multivariate Cauchy distribution. Simulation results demonstrate not only the effectiveness of the proposed image watermarking schemes in terms of the invisibility of the watermark, but also the superiority of the watermark detectors in providing detection rates higher than those of state-of-the-art schemes, even for watermarked images that have undergone various kinds of attacks.
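    A sketch of the general shape of a log-likelihood-ratio detector for a multiplicative watermark y_i = x_i * (1 + alpha * w_i), with subband coefficients modelled as zero-location Cauchy (the alpha-stable member with characteristic exponent 1). This illustrates the detection principle only; the thesis's closed-form expressions and parameter estimates are not reproduced here:

```python
import numpy as np

def cauchy_logpdf(x: np.ndarray, gamma: float) -> np.ndarray:
    """Log-density of the zero-location Cauchy with dispersion gamma."""
    return np.log(gamma / np.pi) - np.log(gamma ** 2 + x ** 2)

def llr_detect(y: np.ndarray, w: np.ndarray, alpha: float, gamma: float) -> float:
    """Log-likelihood ratio; positive values favour 'watermark w present'."""
    scale = 1.0 + alpha * w                          # per-coefficient gain under H1
    h1 = cauchy_logpdf(y / scale, gamma) - np.log(np.abs(scale))  # change of variables
    h0 = cauchy_logpdf(y, gamma)
    return float(np.sum(h1 - h0))
```

    Comparing the statistic against a threshold chosen for a target false-alarm rate yields the detection decision.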

    Entropy in Image Analysis III

    Image analysis can be applied to rich and varied scenarios; the aim of this young research field is therefore not only to mimic the human vision system. Image analysis is among the main methods computers use today, and there is a growing body of knowledge that they will be able to manage in a totally unsupervised manner in the future, thanks to artificial intelligence. The articles published in this book clearly point towards such a future.

    Face Liveness Detection under Processed Image Attacks

    Face recognition is a mature and reliable technology for identifying people. Thanks to high-definition cameras and supporting devices, it is considered the fastest and the least intrusive biometric recognition modality. Nevertheless, effective spoofing attempts on face recognition systems have been shown to be possible. As a result, various anti-spoofing algorithms have been developed to counteract these attacks; they are commonly referred to in the literature as liveness detection tests. In this research, we highlight the effectiveness of some simple, direct spoofing attacks, and test one of the current robust liveness detection algorithms, the logistic-regression-based face liveness detection from a single image proposed by Tan et al. in 2010, against malicious attacks using processed imposter images. In particular, we study experimentally the effect of common image processing operations such as sharpening and smoothing, as well as corruption with salt and pepper noise, on the face liveness detection algorithm, and we find that it is especially vulnerable to spoofing attempts using processed imposter images. We design and present a new facial database, the Durham Face Database, which is the first, to the best of our knowledge, to contain client, imposter and processed imposter images. Finally, we evaluate our claim on the effectiveness of the proposed imposter image attacks using transfer learning on Convolutional Neural Networks. We verify that such attacks are more difficult to detect even when using high-end, expensive machine learning techniques.
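    A minimal sketch of the three processing operations named in the abstract, applied to an imposter photo to produce processed imposter images; parameter values are illustrative, not those used in the experiments:

```python
import numpy as np
from scipy import ndimage

def smooth(img: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Gaussian smoothing."""
    return ndimage.gaussian_filter(img.astype(np.float64), sigma)

def sharpen(img: np.ndarray, sigma: float = 1.5, amount: float = 1.0) -> np.ndarray:
    """Unsharp-mask sharpening for 8-bit images."""
    img = img.astype(np.float64)
    blurred = ndimage.gaussian_filter(img, sigma)
    return np.clip(img + amount * (img - blurred), 0, 255)

def salt_and_pepper(img: np.ndarray, density: float = 0.05, seed: int = 0) -> np.ndarray:
    """Corrupt a fraction `density` of pixels with extreme values."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < density / 2] = 0          # pepper
    out[mask > 1 - density / 2] = 255    # salt
    return out
```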

    Signal processing algorithms for enhanced image fusion performance and assessment

    The dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method which performs well for image sets heavily corrupted by noise. As opposed to current image fusion schemes, the method requires no a priori knowledge of the noise component. The image is decomposed with Chebyshev polynomials (CP) used as basis functions to perform fusion at the feature level. The properties of CP, namely fast convergence and smooth approximation, render it ideal for heuristic and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared to previous efforts on image fusion, notably for heavily corrupted images. The approach is further improved by combining the advantages of CP with a state-of-the-art fusion technique, independent component analysis (ICA), for joint fusion processing based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to eliminating high-frequency information from the images involved, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution. Again, according to the quantitative metrics, the results of our proposed approach are very encouraging as far as joint fusion and denoising are concerned. Another focus of this dissertation is a novel metric for image fusion evaluation that is based on texture. The conservation of background textural details is considered important in many fusion applications, as these details help define the image depth and structure, which may prove crucial in many surveillance and remote sensing applications. Our work aims to evaluate the performance of image fusion algorithms based on their ability to retain textural details through the fusion process. This is done by utilising the gray-level co-occurrence matrix (GLCM) model to extract second-order statistical features for the derivation of an image textural measure, which is then used to replace the edge-based calculations in an objective-based fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially for multimodal scenarios.
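    A sketch of the kind of GLCM-based second-order statistics the proposed metric builds on: a co-occurrence matrix for one pixel offset, and two classic features (contrast and energy) derived from it. The quantisation level, offset and feature choice are illustrative assumptions, not the dissertation's exact derivation:

```python
import numpy as np

def glcm(img: np.ndarray, dx: int = 1, dy: int = 0, levels: int = 8) -> np.ndarray:
    """Normalised grey-level co-occurrence matrix for offset (dy, dx), dx,dy >= 0."""
    q = np.clip((img.astype(int) * levels) // 256, 0, levels - 1)  # quantise 8-bit input
    a = q[: q.shape[0] - dy, : q.shape[1] - dx]     # reference pixels
    b = q[dy:, dx:]                                 # neighbours at the offset
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1.0)       # count co-occurrences
    return m / m.sum()

def contrast(p: np.ndarray) -> float:
    """Co-occurrence contrast: weighted by squared grey-level difference."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

def energy(p: np.ndarray) -> float:
    """Co-occurrence energy (angular second moment)."""
    return float(np.sum(p ** 2))
```

    Comparing such textural features between each input and the fused output gives a texture-retention score analogous to the edge-based term it replaces.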

    Symmetry-Adapted Machine Learning for Information Security

    Symmetry-adapted machine learning has shown encouraging ability to mitigate security risks in information and communication technology (ICT) systems. It is a subset of artificial intelligence (AI) that relies on the principle of anticipating future events by learning from past events or historical data. The autonomous nature of symmetry-adapted machine learning supports effective data processing and analysis for security detection in ICT systems without human intervention. Many industries are developing machine-learning-adapted solutions to support security for smart hardware, distributed computing, and the cloud. In this Special Issue book, we focus on the deployment of symmetry-adapted machine learning for information security in various application areas. This security approach can handle the dynamic nature of security attacks by extracting and analysing data to identify hidden patterns. The main topics of this Issue include malware classification, intrusion detection systems, image watermarking, color image watermarking, a battlefield target aggregation behavior recognition model, IP cameras, Internet of Things (IoT) security, service function chains, indoor positioning systems, and cryptanalysis.