198 research outputs found

    Resilient Digital Image Watermarking Using a DCT- Component Perturbation Model

    The applications of the Discrete Cosine Transform (DCT) to computer-generated imagery, image processing and, in particular, image compression are well known, and the DCT also forms the central kernel for a number of digital image watermarking methods. In this paper we consider the application of the DCT for producing a highly robust method of watermarking images using a block partitioning approach subject to a self-alignment strategy and bit error correction. The applications for the algorithms presented include the copyright protection of images and Digital Rights Management for image libraries, for example. However, the principal focus of the research reported in this paper is on the use of print-scan and e-display-scan image authentication for use in e-tickets where QR codes, for example, are embedded in a full colour image of the ticket holder. This requires that a DCT embedding procedure is developed that is highly robust to blur, noise, geometric distortions such as rotation, shift and barrel distortion, and the partial removal of image segments, all of which are considered in regard to the resilience of the method proposed and its practical realisation in a real operating environment.
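
The abstract does not give the embedding details, but the general idea of block-based DCT coefficient perturbation can be sketched as follows. This is a minimal illustration, not the authors' method: the choice of an 8x8 block, the mid-frequency coefficient position (3, 2), and the sign-based encoding are all assumptions made for the example.

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    # Orthonormal DCT-II basis matrix, so that C @ B @ C.T is the 2-D DCT.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] *= 1 / np.sqrt(2)
    return c * np.sqrt(2 / n)

C = dct_matrix(8)

def embed_bit(block: np.ndarray, bit: int, strength: float = 20.0) -> np.ndarray:
    """Embed one bit in an 8x8 block by forcing the sign of one
    mid-frequency DCT coefficient (position (3, 2) is an assumption)."""
    d = C @ block @ C.T          # forward 2-D DCT
    d[3, 2] = strength if bit else -strength
    return C.T @ d @ C           # inverse 2-D DCT

def extract_bit(block: np.ndarray) -> int:
    """Recover the bit from the sign of the perturbed coefficient."""
    d = C @ block @ C.T
    return int(d[3, 2] > 0)
```

Because the watermark is carried by the sign of a coefficient with a fixed magnitude, moderate additive noise in the pixel domain leaves the decoded bit unchanged, which is the kind of robustness the print-scan channel requires.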

    Resilient Digital Image Watermarking for Document Authentication

    Abstract—We consider the applications of the Discrete Cosine Transform (DCT) and then a Chirp coding method for producing a highly robust system for watermarking images using a block partitioning approach subject to a self-alignment strategy and bit error correction. The applications for the algorithms presented and the system developed include the copyright protection of images and Digital Rights Management for image libraries, for example. However, the principal focus of the research reported in this paper is on the use of print-scan and e-display-scan image authentication for use in e-tickets where QR codes, for example, are embedded in a full colour image of the ticket holder. This requires that an embedding procedure is developed that is highly robust to blur, noise, geometric distortions such as rotation, shift and barrel distortion, and the partial removal of image segments, all of which are considered in regard to the resilience of the method proposed and its practical realisation in a real operating environment.

    Multi-algorithmic Cryptography using Deterministic Chaos with Applications to Mobile Communications

    In this extended paper, we present an overview of the principal issues associated with cryptography, providing historically significant examples for illustrative purposes as part of a short tutorial for readers who are not familiar with the subject matter. This is used to introduce the role that nonlinear dynamics and chaos play in the design of encryption engines which utilise different types of Iteration Function Systems (IFS). The design of such encryption engines requires that they conform to the principles associated with diffusion and confusion for generating ciphers that are of a maximum entropy type. For this reason, the role of confusion and diffusion in cryptography is discussed, giving a design guide for the construction of ciphers that are based on the use of IFS. We then present the background and operating framework associated with a new product - Crypstic™ - which is based on the application of multi-algorithmic IFS to design encryption engines mounted on a USB memory stick, using both disinformation and obfuscation to 'hide' a forensically inert application. The protocols and procedures associated with the use of this product are also briefly discussed.
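
The single-map case of the IFS idea can be illustrated with a toy stream cipher driven by the logistic map. This is only a pedagogical sketch, not the Crypstic design and not a secure cipher: the map parameter, burn-in length and quantisation are illustrative choices, and a real multi-algorithmic engine would switch between many iteration function systems.

```python
def logistic_keystream(key: float, r: float, n: int) -> bytes:
    """Generate n keystream bytes by iterating the logistic map
    x -> r*x*(1-x) and quantising each state to 8 bits.
    A toy illustration only -- not a secure cipher."""
    x = key
    for _ in range(200):          # burn-in to decorrelate from the key
        x = r * x * (1 - x)
    out = bytearray()
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def crypt(data: bytes, key: float = 0.314159, r: float = 3.99) -> bytes:
    """XOR the data with the chaotic keystream; applying crypt twice
    with the same key recovers the plaintext."""
    ks = logistic_keystream(key, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

Because encryption and decryption are the same XOR operation, both parties need only share the floating-point seed, which plays the role of the key.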

    Persistent Homology Tools for Image Analysis

    Topological Data Analysis (TDA) is a new field of mathematics that has emerged rapidly since the first decade of the century from various works in algebraic topology and geometry. The goal of TDA and its main tool, persistent homology (PH), is to provide topological insight into complex and high-dimensional datasets. We take this premise on board to gain more topological insight from digital image analysis and to quantify tiny low-level distortions that are undetectable except possibly by highly trained persons. Such image distortions could be caused intentionally (e.g. by morphing or steganography) or naturally, in abnormal human tissue/organ scan images, as a result of the onset of cancer or other diseases. The main objective of this thesis is to design new image analysis tools based on persistent homological invariants representing simplicial complexes on sets of pixel landmarks over a sequence of distance resolutions. We first propose innovative automatic techniques to select image pixel landmarks and build a variety of simplicial topologies from a single image. The effectiveness of each landmark selection scheme is demonstrated by testing on different image tampering problems such as morphed face detection, steganalysis and breast tumour detection. Vietoris-Rips simplicial complexes are constructed from the image landmarks at increasing distance thresholds, and topological (homological) features are computed at each threshold and summarised in a form known as persistent barcodes. We vectorise the space of persistent barcodes using a technique known as persistent binning, whose strength we demonstrate for various image analysis purposes. Different machine learning approaches are adopted to develop automatic detection of tiny texture distortions in many image analysis applications. The homological invariants used in this thesis are the 0- and 1-dimensional Betti numbers.
We developed an innovative approach to designing persistent homology (PH) based algorithms for the automatic detection of the types of image distortion described above. In particular, we developed the first PH detector of morphing attacks on passport face biometric images. We demonstrate the significant accuracy of two such morph detection algorithms with four types of automatically extracted image landmarks: Local Binary Patterns (LBP), 8-neighbour super-pixels (8NSP), Radial-LBP (R-LBP) and centre-symmetric LBP (CS-LBP). Each of these techniques yields several persistent barcodes that summarise persistent topological features and help gain insight into complex hidden structures not accessible to other image analysis methods. We also demonstrate the significant success of a similarly developed PH-based universal steganalysis tool capable of detecting secret messages hidden inside digital images. We further argue, through a pilot study, that building PH records from digital images can differentiate malignant from benign breast tumours using digital mammographic images. The research presented in this thesis creates new opportunities to build real applications based on TDA and highlights many research challenges in a variety of image processing/analysis tasks. For example, we describe a TDA-based exemplar image inpainting technique (TEBI), superior to existing exemplar algorithms, for the reconstruction of missing image regions.
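
The 0-dimensional part of the barcode construction described above can be sketched without any TDA library: for H0, the Vietoris-Rips filtration reduces to single-linkage merging of components as the distance threshold grows, which a union-find over sorted edges computes exactly. This is a generic sketch of the standard construction, not the thesis's implementation; the landmark points would come from one of the selection schemes (LBP, 8NSP, etc.) mentioned above.

```python
import itertools
import math

def h0_barcode(points):
    """0-dimensional persistence barcode of a Vietoris-Rips filtration:
    every point is born at threshold 0; a connected component dies when
    an edge of the growing complex merges it into another (single
    linkage). Returns (birth, death) pairs; one bar is infinite."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(n), 2)
    )
    bars = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            bars.append((0.0, d))    # one component dies at threshold d
    bars.append((0.0, math.inf))     # the final component never dies
    return bars
```

Long finite bars indicate well-separated clusters of landmarks; the vectorisation step (persistent binning) would then turn these intervals into fixed-length feature vectors for a classifier.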

    Digital watermarking methods for data security and authentication

    Philosophiae Doctor - PhD. Cryptology is the study of systems that typically originate from a consideration of the ideal circumstances under which secure information exchange is to take place. It involves the study of cryptographic processes and of other processes that might be introduced for breaking the output of such systems - cryptanalysis. This includes the introduction of formal mathematical methods for the design of a cryptosystem and for estimating its theoretical level of security.

    Deep Learning for Reversible Steganography: Principles and Insights

    Deep-learning-centric reversible steganography has emerged as a promising research paradigm. A direct way of applying deep learning to reversible steganography is to construct a pair of encoder and decoder, whose parameters are trained jointly, thereby learning the steganographic system as a whole. This end-to-end framework, however, falls short of the reversibility requirement because it is difficult for this kind of monolithic system, as a black box, to create or duplicate intricate reversible mechanisms. In response to this issue, a recent approach is to carve up the steganographic system and work on modules independently. In particular, neural networks are deployed in an analytics module to learn the data distribution, while an established mechanism is called upon to handle the remaining tasks. In this paper, we investigate the modular framework and deploy deep neural networks in a reversible steganographic scheme referred to as prediction-error modulation, in which an analytics module serves the purpose of pixel intensity prediction. The primary focus of this study is on deep-learning-based context-aware pixel intensity prediction. We address the unsolved issues reported in related literature, including the impact of pixel initialisation on prediction accuracy and the influence of uncertainty propagation in dual-layer embedding. Furthermore, we establish a connection between context-aware pixel intensity prediction and low-level computer vision and analyse the performance of several advanced neural networks.
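
The reversible mechanism that the analytics module plugs into can be illustrated with classic prediction-error expansion on a 1-D signal. This is a minimal sketch, not the paper's modulation scheme: a left-neighbour predictor stands in for the learned neural predictor, and real schemes add overflow handling and a location map.

```python
def pee_embed(pixels, bits):
    """Prediction-error expansion: predict each sample from its left
    neighbour, then expand the error e -> 2*e + bit. The predictor
    sees the already-modified neighbour, which keeps embedding and
    extraction in lockstep."""
    out = [pixels[0]]                 # first sample carries no payload
    it = iter(bits)
    for x in pixels[1:]:
        e = x - out[-1]               # prediction error
        b = next(it, 0)               # pad with 0 when bits run out
        out.append(out[-1] + 2 * e + b)
    return out

def pee_extract(stego):
    """Invert pee_embed exactly: recover both the original samples
    and the embedded bit stream."""
    pixels, bits = [stego[0]], []
    for k in range(1, len(stego)):
        e2 = stego[k] - stego[k - 1]
        b = e2 % 2                    # the bit sits in the parity
        bits.append(b)
        pixels.append(stego[k - 1] + (e2 - b) // 2)
    return pixels, bits
```

The better the predictor, the smaller the errors and the less distortion the expansion causes, which is exactly why the paper replaces the hand-crafted predictor with a context-aware neural network.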

    The Wiltshire Wills Feasibility Study

    The Wiltshire and Swindon Record Office has nearly ninety thousand wills in its care. These records are neither adequately catalogued nor secured against loss by facsimile microfilm copies. With support from the Heritage Lottery Fund, the Record Office has begun to produce suitable finding aids for the material. Beginning with this feasibility study, the Record Office is developing a strategy to ensure that facsimiles are created to protect the collection against the risk of loss or damage and to improve public access. This feasibility study explores the different methodologies that can be used to assist the preservation and conservation of the collection and improve public access to it. The study aims to produce a strategy that will enable the Record Office to create digital facsimiles of the wills in its care for access purposes and to create preservation-quality microfilms. The strategy seeks the most cost-effective and time-efficient approach to the problem and identifies ways to optimise the processes by drawing on the experience of other similar projects. This report provides a set of guidelines and recommendations to ensure the best use of the resources available, to provide the most robust preservation strategy, and to ensure that future access to the wills as an information resource can be flexible, both local and remote, and sustainable.

    Big Data Security (Volume 3)

    After a short description of the key concepts of big data, the book explores the secrecy and security threats posed especially by cloud-based data storage. It delivers conceptual frameworks and models along with case studies of recent technology.