
    Digital watermarking: applicability for developing trust in medical imaging workflows - state of the art review

    Medical images can be intentionally or unintentionally manipulated both within the secure medical system environment and outside it, as images are viewed, extracted, and transmitted. Many organisations have invested heavily in Picture Archiving and Communication Systems (PACS), which are intended to facilitate data security. However, it is common for images and records to be extracted from these systems for a range of accepted practices, such as seeking an external second opinion, transmission to another care provider, or responding to a patient data request. Confirming trust within medical imaging workflows has therefore become essential. Digital watermarking has been recognised as a promising approach for ensuring the authenticity and integrity of medical images. Authenticity refers to the ability to identify the origin of the information and prove that the data relates to the right patient. Integrity means the capacity to ensure that the information has not been altered without authorisation. This paper presents a survey of medical image watermarking and offers a clear overview for interested researchers by analysing the robustness and limitations of existing approaches. This includes studying the security levels of medical images within a PACS, clarifying the requirements of medical image watermarking, and defining the purposes of watermarking approaches when applied to medical images.
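
    As a rough illustration of the integrity property defined above, the sketch below embeds a hash of an image into its own least significant bits, so that any unauthorised alteration breaks verification. This is a generic, hypothetical fragile-watermark scheme in Python, not one of the approaches surveyed in the paper; it assumes an 8-bit grayscale image with at least 256 pixels held as a NumPy array.

        # Hypothetical fragile watermark for integrity checking (illustration only).
        import hashlib
        import numpy as np

        def embed_integrity_mark(img: np.ndarray) -> np.ndarray:
            """Clear every LSB, hash the result, and store the hash bits in the LSBs."""
            base = img & 0xFE                        # content with the LSB plane cleared
            digest = hashlib.sha256(base.tobytes()).digest()
            bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
            marked = base.ravel()                    # flat view onto the cleared copy
            marked[:bits.size] |= bits               # write the 256 hash bits into LSBs
            return marked.reshape(img.shape)

        def verify_integrity(img: np.ndarray) -> bool:
            """Recompute the hash over the LSB-cleared image and compare the stored bits."""
            base = img & 0xFE
            digest = hashlib.sha256(base.tobytes()).digest()
            bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
            stored = (img.ravel()[:bits.size] & 1).astype(np.uint8)
            return bool(np.array_equal(stored, bits))

    A fragile mark of this kind addresses integrity only; authenticity additionally requires binding patient or origin identifiers into the embedded payload.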

    Normalized Weighting Schemes for Image Interpolation Algorithms

    This paper presents and evaluates four weighting schemes for image interpolation algorithms. The first scheme is based on the normalized area of the circle whose diameter is equal to the minimum side of a tetragon. The second scheme is based on the normalized area of the circle whose radius is equal to the hypotenuse. The third scheme is based on the normalized area of the triangle whose base and height are equal to the hypotenuse and the virtual pixel length, respectively. The fourth scheme is based on the normalized area of the circle whose radius is equal to the virtual pixel length-based hypotenuse. Experiments demonstrated debatable algorithm performance and the need for further research.
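
    Since the abstract does not fully specify the geometric constructions, the following Python sketch shows only the general shape of such a scheme: the four neighbours of a virtual (non-integer) pixel are weighted by normalized areas of circles whose radii are the hypotenuses from the virtual pixel to each neighbour. Inverting the areas so that nearer neighbours weigh more is an assumption made for illustration, not the paper's exact formulation.

        # Illustrative area-based weighting for 2x2-neighbourhood interpolation.
        import numpy as np

        def area_weights(dx: float, dy: float) -> np.ndarray:
            """Weights for the 4 neighbours of a virtual pixel at offset (dx, dy),
            with 0 <= dx, dy < 1 measured from the top-left neighbour."""
            d = np.array([
                np.hypot(dx,     dy),       # hypotenuse to top-left
                np.hypot(1 - dx, dy),       # hypotenuse to top-right
                np.hypot(dx,     1 - dy),   # hypotenuse to bottom-left
                np.hypot(1 - dx, 1 - dy),   # hypotenuse to bottom-right
            ])
            areas = np.pi * d**2                   # circle area per neighbour
            inv = 1.0 / np.maximum(areas, 1e-12)   # nearer neighbour -> larger weight
            return inv / inv.sum()                 # normalize to sum to 1

        def interpolate(img: np.ndarray, x: float, y: float) -> float:
            """Estimate the intensity at non-integer (x, y); assumes an interior point."""
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            w = area_weights(x - x0, y - y0)
            corners = img[y0:y0 + 2, x0:x0 + 2].astype(float).ravel()
            return float(corners @ w)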

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application in which information must be extracted from images. The analysis requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing are data of vital importance. This is evident from the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, particularly for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
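
    As a minimal reminder of the quantity that gives the Special Issue its name, the Python sketch below computes the Shannon entropy of an 8-bit image's grey-level distribution.

        # Shannon entropy (bits/pixel) of a uint8 image's histogram.
        import numpy as np

        def image_entropy(img: np.ndarray) -> float:
            hist = np.bincount(img.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            p = p[p > 0]                    # empty bins contribute 0 * log 0 = 0
            return float(-(p * np.log2(p)).sum())

        # A uniform-noise image approaches the 8 bits/pixel maximum:
        rng = np.random.default_rng(0)
        noise = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
        print(f"entropy of uniform noise: {image_entropy(noise):.3f} bits/pixel")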

    Multi-Stage Protection Using Pixel Selection Technique for Enhancing Steganography

    Steganography and data security are extremely important for all organizations. This research introduces a novel steganographic method called multi-stage protection using the pixel selection technique (MPPST). MPPST is developed based on pixel features and an analysis technique that extracts the pixel characteristics and distribution of the cover image. A pixel selection technique is proposed for hiding secret messages using the feature selection method. The secret file is distributed and embedded randomly into the stego-image to complicate steganalysis: attackers not only need to determine which pixel values have been selected to carry the secret file, but must also rearrange the correct sequence of pixels. MPPST generates a complex key that indicates where the encrypted elements of the binary sequence of the secret file reside. The analysis comprises four steps, namely calculation of the peak signal-to-noise ratio, mean squared error, histogram analysis, and relative entropy, which together characterize the cover image. To evaluate the proposed method, MPPST is compared to the standard Least Significant Bit (LSB) technique and other algorithms from the literature. The experimental results show that MPPST outperforms the other algorithms on all instances and achieves a significant security enhancement.
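
    The full MPPST pipeline is not given in the abstract, but its central idea of keyed, scattered embedding can be sketched as follows: the positions carrying the secret are chosen by a key-seeded pseudo-random permutation, so an attacker must recover both the selected pixels and their order. The feature-based pixel selection and the four-step analysis of the actual method are not reproduced in this Python illustration.

        # Keyed random-position LSB embedding (generic sketch, not MPPST itself).
        import numpy as np

        def embed(cover: np.ndarray, bits: np.ndarray, key: int) -> np.ndarray:
            """Scatter message bits (0/1, uint8) over keyed pixel positions."""
            rng = np.random.default_rng(key)
            order = rng.permutation(cover.size)[:bits.size]   # keyed positions
            stego = cover.ravel().copy()
            stego[order] = (stego[order] & 0xFE) | bits       # overwrite LSBs
            return stego.reshape(cover.shape)

        def extract(stego: np.ndarray, n_bits: int, key: int) -> np.ndarray:
            """Re-derive the same keyed positions and read back the LSBs."""
            rng = np.random.default_rng(key)
            order = rng.permutation(stego.size)[:n_bits]
            return (stego.ravel()[order] & 1).astype(np.uint8)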

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand, ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; the interested reader can therefore choose any chapter and skip to another without losing continuity.

    Data Hiding with Deep Learning: A Survey Unifying Digital Watermarking and Steganography

    Data hiding is the process of embedding information into a noise-tolerant signal such as an audio, video, or image file. Digital watermarking is a form of data hiding in which identifying data is robustly embedded so that it can resist tampering and be used to identify the original owners of the media. Steganography, another form of data hiding, embeds data for the purpose of secure and secret communication. This survey summarises recent developments in deep learning techniques for data hiding for the purposes of watermarking and steganography, categorising them based on model architectures and noise injection methods. The objective functions, evaluation metrics, and datasets used for training these data hiding models are comprehensively summarised. Finally, we propose and discuss possible future directions for research into deep data hiding techniques.
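
    Most of the surveyed models share an encoder / noise-layer / decoder shape. The PyTorch skeleton below illustrates that pipeline with placeholder layer sizes and dropout standing in for the noise injection; it is a sketch of the general pattern (in the spirit of HiDDeN-style models), not any specific architecture from the survey.

        # Minimal encoder / noise / decoder pipeline for deep data hiding.
        import torch
        import torch.nn as nn

        class Encoder(nn.Module):
            def __init__(self, msg_len: int):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3 + msg_len, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 3, 3, padding=1),
                )
            def forward(self, cover, msg):
                # Broadcast the message over the spatial grid, fuse with the cover.
                b, _, h, w = cover.shape
                m = msg.view(b, -1, 1, 1).expand(b, msg.shape[1], h, w)
                return cover + self.net(torch.cat([cover, m], dim=1))  # residual stego

        class Decoder(nn.Module):
            def __init__(self, msg_len: int):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, msg_len),
                )
            def forward(self, stego):
                return self.net(stego)          # one logit per message bit

        enc, dec, noise = Encoder(30), Decoder(30), nn.Dropout(p=0.1)
        cover = torch.rand(1, 3, 64, 64)
        msg = torch.randint(0, 2, (1, 30)).float()
        stego = enc(cover, msg)
        logits = dec(noise(stego))              # train with BCE-with-logits vs. msg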

    Reversible Watermarking Using Prediction-Error Expansion and Extreme Learning Machine

    Current research on reversible watermarking focuses on decreasing image distortion. To this end, this paper presents an improved method for lowering the embedding distortion based on the prediction-error expansion (PE) technique. First, an extreme learning machine (ELM) with good generalization ability is utilized to enhance the prediction accuracy for image pixel values during watermark embedding; the lower prediction error results in reduced image distortion. Moreover, an optimization step is applied to further strengthen the performance of the ELM and lessen the embedding distortion. With two popular predictors, the median edge detector (MED) predictor and the gradient-adjusted predictor (GAP), experimental results on classical images and the Kodak image set indicate that the proposed scheme achieves lower image distortion than the classical PE scheme proposed by Thodi et al. and outperforms the improved method presented by Coltuc as well as other existing approaches.
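
    For readers unfamiliar with the baseline, here is a minimal Python sketch of prediction-error expansion with the MED predictor. A complete reversible scheme also needs overflow handling and a location map, both omitted here, and the paper's contribution of replacing the predictor with an ELM is not attempted.

        # Prediction-error expansion (PEE) on a single pixel, MED prediction.
        def med_predict(left: int, top: int, top_left: int) -> int:
            """Median edge detector prediction from three causal neighbours."""
            if top_left >= max(left, top):
                return min(left, top)
            if top_left <= min(left, top):
                return max(left, top)
            return left + top - top_left

        def pee_embed(pixel: int, pred: int, bit: int) -> int:
            """Expand the prediction error to carry one bit: e' = 2e + b."""
            return pred + 2 * (pixel - pred) + bit

        def pee_extract(marked: int, pred: int) -> tuple[int, int]:
            """Recover the bit and restore the original pixel exactly."""
            e_marked = marked - pred
            bit, e = e_marked % 2, e_marked // 2   # inverts 2e + b for any sign
            return pred + e, bit

        pred = med_predict(98, 101, 99)
        assert pee_extract(pee_embed(100, pred, 1), pred) == (100, 1)

    The smaller the prediction error, the smaller the expansion shift, which is why a more accurate (here, ELM-based) predictor directly lowers the embedding distortion.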

    Digital rights management techniques for H.264 video

    This work presents a number of low-complexity digital rights management (DRM) methodologies for the H.264 standard. Initially, the requirements for enforcing DRM are analyzed and understood. Based on these requirements, a framework is constructed which puts forth different possibilities that can be explored to satisfy the objective. To implement computationally efficient DRM methods, watermarking and content-based copy detection are chosen as the preferred methodologies. The first approach is based on robust watermarking, which modifies the DC residuals of 4×4 macroblocks within I-frames. Robust watermarks are appropriate for content protection and proving ownership. Experimental results show that the technique exhibits encouraging rate-distortion (R-D) characteristics while being computationally efficient. The problem of content authentication is addressed with the help of two methodologies: irreversible and reversible watermarks. The first utilizes the highest-frequency coefficient within 4×4 blocks of the I-frames after CAVLC entropy encoding to embed a watermark, and was found to be very effective in detecting tampering. The second applies the difference expansion (DE) method on IPCM macroblocks within P-frames to embed a high-capacity reversible watermark. Experiments prove the technique to be not only fragile and reversible but also to exhibit minimal variation in its R-D characteristics. The final methodology adopted to enforce DRM for H.264 video is based on the concept of signature generation and matching. Specific types of macroblocks within each predefined region of an I-, B-, or P-frame are counted at regular intervals in a video clip, and an ordinal matrix is constructed from their counts. This matrix is considered the signature of that video clip and is matched against longer video sequences to detect copies within them. Simulation results show that the matching methodology is capable of not only detecting copies but also locating them within a longer video sequence. Performance analysis shows acceptable false positive and false negative rates and encouraging receiver operating characteristics. Finally, the time taken to match and locate copies is significantly low, which makes the method ideal for use in broadcast and streaming applications.
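
    Of the techniques above, the difference expansion step is the easiest to show in isolation. The Python sketch below applies Tian's classic pairwise DE, the generic form of the method; the H.264-specific placement in IPCM macroblocks and the overflow checks are omitted.

        # Difference expansion (DE) on one pixel pair: embed, then restore exactly.
        def de_embed(x: int, y: int, bit: int) -> tuple[int, int]:
            l = (x + y) // 2                   # integer average (preserved)
            h2 = 2 * (x - y) + bit             # expanded difference carries the bit
            return l + (h2 + 1) // 2, l - h2 // 2

        def de_extract(x2: int, y2: int) -> tuple[int, int, int]:
            l = (x2 + y2) // 2
            h2 = x2 - y2
            bit, h = h2 % 2, h2 // 2           # undo 2h + bit
            return l + (h + 1) // 2, l - h // 2, bit

        assert de_extract(*de_embed(206, 201, 1)) == (206, 201, 1)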

    Additional information delivery to image content via improved unseen–visible watermarking

    In practical watermarking scenarios, watermarks are used to provide auxiliary information; to this end, a digital approach called unseen–visible watermarking has been introduced. In this algorithm, the embedding stage takes advantage of both visible and invisible watermarking to embed an owner logotype or barcode as the watermark; in the exhibition stage, the built-in functions of display devices are used to reveal the watermark to the naked eye, eliminating the need for any watermark exhibition algorithm. In this paper, a watermark complement strategy for unseen–visible watermarking is proposed to improve the embedding stage, reducing the histogram distortion and visual degradation of the watermarked image. The presented algorithm makes the following contributions: first, it can be applied to any class of images with large smooth regions of low or high intensity; second, a watermark complement strategy is introduced to reduce the visual degradation and histogram distortion of the watermarked image; and third, an embedding error measurement is proposed. Evaluation results show that the proposed strategy performs well in comparison with other algorithms, providing high visual quality of the exhibited watermark and preserving its robustness in terms of readability and imperceptibility against geometric and processing attacks.
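
    A minimal Python sketch of the unseen–visible idea, assuming a grayscale NumPy image: a binary logo is written into a smooth region as a small intensity offset that ordinary viewing hides but a display's contrast or brightness controls reveal. The complementary up/down offsets used here to limit histogram shift are one plausible reading of the paper's complement strategy, not its exact algorithm.

        # Low-amplitude logo overlay with complementary offsets (illustration only).
        import numpy as np

        def embed_unseen_visible(img: np.ndarray, logo: np.ndarray,
                                 row: int, col: int, delta: int = 2) -> np.ndarray:
            """Overlay a binary logo at (row, col) with +/-delta offsets."""
            out = img.astype(np.int16)              # headroom for the offset
            h, w = logo.shape
            region = out[row:row + h, col:col + w]
            # Complementary offsets: push dark pixels up and bright pixels down,
            # so the region's mean and histogram move as little as possible.
            sign = np.where(region < 128, delta, -delta)
            region += sign * (logo > 0)
            return np.clip(out, 0, 255).astype(np.uint8)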