
    Normalized Weighting Schemes for Image Interpolation Algorithms

    This paper presents and evaluates four weighting schemes for image interpolation algorithms. The first scheme is based on the normalized area of the circle whose diameter equals the minimum side of a tetragon. The second is based on the normalized area of the circle whose radius equals the hypotenuse. The third is based on the normalized area of the triangle whose base and height equal the hypotenuse and the virtual pixel length, respectively. The fourth is based on the normalized area of the circle whose radius equals the virtual-pixel-length-based hypotenuse. Experiments demonstrated mixed, debatable algorithm performance and the need for further research. (Comment: 8 pages, 14 figures)
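
    As a concrete illustration, the sketch below (Python) computes area-normalized weights for the four neighbours of a virtual pixel, using circle areas whose radii are the point-to-neighbour hypotenuses -- one plausible reading of the second scheme. The helper name and the exact geometry are assumptions; the paper's tetragon construction may differ.

    import numpy as np

    def area_weighted_interpolate(img, x, y):
        """Toy area-based weighting over the 4 neighbours of a virtual pixel.

        Weights come from normalized circle areas whose radii are the
        distances (hypotenuses) from (x, y) to each neighbour; assumes
        0 <= x < W-1 and 0 <= y < H-1 (no border handling).
        """
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        corners = [(x0, y0), (x0 + 1, y0), (x0, y0 + 1), (x0 + 1, y0 + 1)]
        d = np.array([np.hypot(x - cx, y - cy) for cx, cy in corners])
        areas = np.pi * d ** 2                  # circle area per corner
        w = (areas.max() - areas) + 1e-12       # nearer corner -> larger weight
        w /= w.sum()                            # normalize so weights sum to 1
        vals = np.array([img[cy, cx] for cx, cy in corners], dtype=float)
        return float(w @ vals)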

    Reversible Image Watermarking Using Modified Quadratic Difference Expansion and Hybrid Optimization Technique

    With increasing copyright violation cases, watermarking of digital images is a popular solution for securing online media content. Since some sensitive applications require image recovery after watermark extraction, reversible watermarking is widely preferred. This article introduces a Modified Quadratic Difference Expansion (MQDE) and fractal encryption-based reversible watermarking scheme for securing the copyrights of images. First, fractal encryption is applied to watermarks using Tromino's L-shaped theorem to improve security. In addition, Cuckoo Search-Grey Wolf Optimization (CSGWO) is applied to the cover image to optimize block allocation for inserting the encrypted watermark, greatly increasing its invisibility. While the developed MQDE technique helps to improve coverage and visual quality, a novel data-driven distortion control unit ensures optimal performance. The suggested approach provides the highest level of protection when retrieving the secret image and the original cover image without losing essential information, while also improving transparency and capacity with little tradeoff. Simulation results are superior to existing methods in terms of embedding capacity, and with an average PSNR of 67 dB the method shows good imperceptibility in comparison with other schemes.
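
    The abstract does not spell out MQDE itself, so the sketch below shows only the textbook difference expansion (Tian-style) that MQDE builds on, to make the reversibility property concrete; over/underflow control (the role of the paper's distortion control unit) is deliberately omitted.

    def de_embed(p, q, bit):
        """Classic difference expansion on one pixel pair (baseline, not MQDE)."""
        l = (p + q) // 2           # integer average, preserved by embedding
        h = p - q                  # difference
        h2 = 2 * h + bit           # expand the difference, hide one bit in its LSB
        return l + (h2 + 1) // 2, l - h2 // 2

    def de_extract(p2, q2):
        """Recover the hidden bit and the original pair exactly (reversible)."""
        l = (p2 + q2) // 2
        h2 = p2 - q2
        bit, h = h2 & 1, h2 // 2
        return bit, l + (h + 1) // 2, l - h // 2

    For example, de_embed(100, 98, 1) gives (102, 97), and de_extract(102, 97) returns (1, 100, 98), restoring the pair exactly.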

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
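
    For concreteness, here is a minimal sketch of the quantity the volume's analyses build on -- the Shannon entropy of an 8-bit grayscale image:

    import numpy as np

    def shannon_entropy(gray_img):
        """Shannon entropy in bits/pixel of an 8-bit grayscale image."""
        hist = np.bincount(gray_img.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()                   # histogram -> probabilities
        p = p[p > 0]                            # drop empty bins (0*log 0 := 0)
        return float(-(p * np.log2(p)).sum())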

    Data Hiding with Deep Learning: A Survey Unifying Digital Watermarking and Steganography

    Data hiding is the process of embedding information into a noise-tolerant signal such as a piece of audio, video, or image. Digital watermarking is a form of data hiding where identifying data is robustly embedded so that it can resist tampering and be used to identify the original owners of the media. Steganography, another form of data hiding, embeds data for the purpose of secure and secret communication. This survey summarises recent developments in deep learning techniques for data hiding for the purposes of watermarking and steganography, categorising them based on model architectures and noise injection methods. The objective functions, evaluation metrics, and datasets used for training these data hiding models are comprehensively summarised. Finally, we propose and discuss possible future directions for research into deep data hiding techniques.
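
    As a minimal sketch of the encoder, noise layer, decoder pipeline that many of the surveyed watermarking models share (HiDDeN-style), the toy PyTorch module below embeds a bit string as an image residual and decodes it from a noised stego image. Layer sizes, the 30-bit message, and the Gaussian noise layer are illustrative assumptions, not any specific surveyed model.

    import torch
    import torch.nn as nn

    class TinyDataHider(nn.Module):
        def __init__(self, msg_bits=30):
            super().__init__()
            self.enc = nn.Sequential(
                nn.Conv2d(3 + msg_bits, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1))
            self.dec = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, msg_bits))

        def forward(self, img, msg):
            b, _, h, w = img.shape
            m = msg.view(b, -1, 1, 1).expand(-1, -1, h, w)      # tile message over image
            stego = img + self.enc(torch.cat([img, m], dim=1))  # residual embedding
            noisy = stego + 0.02 * torch.randn_like(stego)      # noise injection layer
            return stego, self.dec(noisy)                       # stego image, message logits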

    Recent Advances in Signal Processing

    Signal processing is a critical component of the majority of new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want exposure to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand, ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; the interested reader can choose any chapter and skip to another without losing continuity.

    Real-time image dehazing by superpixels segmentation and guidance filter

    Haze and fog strongly degrade image quality; dehazing and defogging are applied to eliminate this. For this purpose, an effective and automatic dehazing method is proposed. To dehaze a hazy image, two key parameters must be estimated: the atmospheric light and the transmission map. For atmospheric light estimation, the superpixels segmentation method is used to segment the input image. The intensities within each superpixel are then summed and compared across superpixels to extract the most intense superpixel. Extracting the most intense superpixel from the outdoor hazy image automatically selects the hazy region (atmospheric light). Thus, the individual channel intensities of the extracted superpixel are taken as the atmospheric light in the proposed algorithm. Second, based on the estimated atmospheric light, an initial transmission map is computed. The transmission map is further refined through a rolling guidance filter that preserves much of the image information, such as textures, structures, and edges, in the final dehazed output. Finally, the haze-free image is produced by integrating the atmospheric light and the refined transmission with the haze imaging model. Through detailed experimentation on several publicly available datasets, we show that the proposed model achieves higher accuracy and restores higher-quality dehazed images than state-of-the-art models. The proposed model could be deployed as a real-time application for real-time image processing, remote sensing imagery, underwater image enhancement, video-guided transportation, outdoor surveillance, and auto-driver-backed systems.
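
    Once the atmospheric light A and the refined transmission t are available, the final step is a direct inversion of the haze imaging model I = J*t + A*(1 - t). A minimal sketch, assuming the superpixel-based estimation of A and the rolling-guidance refinement of t have already been done upstream:

    import numpy as np

    def recover_scene(I, A, t, t_min=0.1):
        """Invert I = J*t + A*(1-t) for the haze-free scene J.

        I: HxWx3 uint8 hazy image, A: length-3 atmospheric light,
        t: HxW refined transmission map with values in [0, 1].
        """
        t = np.clip(t, t_min, 1.0)[..., None]      # floor t to avoid amplifying noise
        J = (I.astype(float) - A) / t + A          # per-channel model inversion
        return np.clip(J, 0, 255).astype(np.uint8)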

    Diff-Privacy: Diffusion-based Face Privacy Protection

    Privacy protection has become a top priority as the proliferation of AI techniques has led to widespread collection and misuse of personal data. Anonymization and visual identity information hiding are two important facial privacy protection tasks that aim to remove identification characteristics from facial images at the human perception level. However, they differ significantly in that the former aims to prevent the machine from recognizing correctly, while the latter needs to ensure the accuracy of machine recognition. It is therefore difficult to train one model to complete both tasks simultaneously. In this paper, we unify the tasks of anonymization and visual identity information hiding and propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy. Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM-format conditional embeddings of the original image. Based on these conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding. Extensive experiments validate the effectiveness of the proposed framework in protecting facial privacy. (Comment: 17 pages)
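
    The abstract leaves the energy functions and embedding schedules unspecified, so the sketch below shows only a generic classifier-guidance-style denoising update into which such energies could be plugged; it is not the paper's MSI module or scheduling strategy.

    import math
    import torch

    def energy_guided_step(x_t, eps_pred, energy_fn, alpha_bar_t, scale=1.0):
        """One energy-guided denoising update (generic, DDIM-like skeleton).

        eps_pred is the diffusion model's noise prediction at step t;
        energy_fn is a differentiable scalar energy (e.g., an identity term).
        """
        x_t = x_t.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy_fn(x_t).sum(), x_t)[0]
        eps = eps_pred + scale * math.sqrt(1 - alpha_bar_t) * grad   # steer the noise
        return (x_t - math.sqrt(1 - alpha_bar_t) * eps) / math.sqrt(alpha_bar_t)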

    Triple scheme based on image steganography to improve imperceptibility and security

    A foremost priority in the information technology and communication era is achieving an effective and secure steganography scheme for information hiding. Commonly, digital images are used as the cover for steganography owing to the redundancy of their representation, which conceals hidden data from intruders. Nevertheless, any steganography system launched over the internet can be attacked once the stego cover is recognized. Presently, the design and development of an effective image steganography system faces several challenging issues, including low capacity, poor security, and weak imperceptibility. To overcome these issues, a new decomposition scheme was proposed for image steganography with a new approach known as the Triple Number Approach (TNA). In this study, three main stages were used to achieve the objectives, beginning with image and text preparation, followed by embedding, and culminating in extraction; a final evaluation stage employed several measures to benchmark the results. This study presented several contributions. The first was a Triple Text Coding Method (TTCM), related to the preparation of secret messages prior to the embedding process. The second was a Triple Embedding Method (TEM), related to the embedding process. The third was related to security criteria based on a new partitioning of the image known as the Image Partitioning Method (IPM). The IPM performs random pixel selection based on partitioning the image into three phases with three iterations of the Hénon map function. An enhanced Huffman coding algorithm was utilised to compress the secret message before the TTCM process. A standard dataset from the Signal and Image Processing Institute (SIPI) containing color and grayscale images of 512 x 512 pixels was utilised in this study. Different parameters were used to test the performance of the proposed scheme in terms of security and imperceptibility (image quality). Image quality was assessed with four measurements: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Mean Square Error (MSE), and histogram analysis, while security was assessed against Human Visual System (HVS) and Chi-square (χ²) attacks. For the Lena grayscale image, the obtained PSNR and SSIM were 78.09 dB and 1, respectively, and the scheme withstood HVS and χ² attacks better than existing schemes in the literature. Based on these findings, the proposed scheme increases capacity, imperceptibility, and security, overcoming the existing issues.
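
    As an illustration of the chaotic pixel selection the IPM stage describes, the sketch below derives a pseudo-random pixel visiting order from the Hénon map; the actual three-phase partitioning and iteration scheme of the IPM is not reproduced, and the map parameters are the standard chaotic ones.

    import numpy as np

    def henon_pixel_order(n_pixels, a=1.4, b=0.3, x=0.1, y=0.3):
        """Turn a Henon-map orbit into a permutation of pixel indices."""
        xs = np.empty(n_pixels)
        for i in range(n_pixels):
            x, y = 1 - a * x * x + y, b * x    # classic Henon iteration
            xs[i] = x
        return np.argsort(xs)                  # chaotic values -> visiting order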

    Visual Content Privacy Protection: A Survey

    Vision is the most important sense for people and one of the main ways of cognition. As a result, people tend to use visual content to capture and share their life experiences, which greatly facilitates the transfer of information. At the same time, it increases the risk of privacy violations; e.g., an image or video can reveal different kinds of privacy-sensitive information. Researchers have been working continuously to develop targeted privacy protection solutions, and several surveys summarize them from certain perspectives. However, these surveys are either problem-driven, scenario-specific, or technology-specific, making it difficult for them to summarize existing solutions in a macroscopic way. In this survey, a framework that encompasses various concerns and solutions for visual privacy is proposed, allowing privacy concerns to be understood macroscopically and at a comprehensive level. It is based on the fact that privacy concerns have corresponding adversaries, and divides privacy protection into three categories: based on a computer vision (CV) adversary, based on a human vision (HV) adversary, and based on a combined CV & HV adversary. For each category, we analyze the characteristics of the main approaches to privacy protection and then systematically review representative solutions. Open challenges and future directions for visual privacy protection are also discussed. (Comment: 24 pages, 13 figures)