42 research outputs found

    Fast Mojette Transform for Discrete Tomography

    A new algorithm for reconstructing a two-dimensional object from a set of one-dimensional projected views is presented that is both computationally exact and experimentally practical. The algorithm has a computational complexity of O(n log2 n) with n = N^2 for an NxN image, is robust in the presence of noise, and produces no artefacts in the reconstruction process, unlike conventional tomographic methods. The reconstruction is approximation-free because the object is assumed to be discrete and fully discrete Radon transforms are used. Noise in the projection data can be suppressed further by introducing redundancy in the reconstruction. The number of projections required for exact reconstruction and the response to noise can be controlled without compromising the digital nature of the algorithm. The digital projections are those of the Mojette Transform, a form of discrete linogram. A simple analytical mapping is developed that compacts these projections exactly into symmetric periodic slices within the Discrete Fourier Transform. A new digital angle set is constructed that allows the periodic slices to completely fill the object's Discrete Fourier space. Techniques are proposed to acquire these digital projections experimentally to enable fast and robust two-dimensional reconstructions. Comment: 22 pages, 13 figures, submitted to Elsevier Signal Processing.
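
    As a rough illustration of the kind of digital projection involved, the sketch below computes Dirac-Mojette projections in Python using the common bin convention b = -k*q + l*p on an arbitrary toy image; it is a generic sketch of the projection operator only, not the paper's reconstruction algorithm.

```python
import numpy as np
from math import gcd

def mojette_projection(image, p, q):
    """Dirac-Mojette projection of a 2-D array along a direction (p, q) with
    gcd(|p|, |q|) = 1: every pixel (k, l) is summed into bin b = -k*q + l*p,
    so each bin is an exact sum of image samples (no interpolation)."""
    assert gcd(abs(p), abs(q)) == 1
    rows, cols = image.shape
    k, l = np.meshgrid(np.arange(cols), np.arange(rows))  # k = column, l = row
    b = -k * q + l * p
    b -= b.min()                                  # shift bin indices to start at 0
    proj = np.zeros(b.max() + 1)
    np.add.at(proj, b.ravel(), image.ravel())     # accumulate pixel values per bin
    return proj

# Toy usage: project a small image along a few coprime directions (p, q).
img = np.arange(16, dtype=float).reshape(4, 4)
for p, q in [(1, 0), (1, 1), (2, 1)]:
    print((p, q), mojette_projection(img, p, q))
```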

    Secure and Robust Image Watermarking Scheme Using Homomorphic Transform, SVD and Arnold Transform in RDWT Domain

    The main objectives of a watermarking technique are imperceptibility, robustness and security against the various malicious attacks mounted by illicit users, and meeting all of these requirements in a single scheme is a significant challenge. In this paper, a new image watermarking method is proposed that combines the properties of the homomorphic transform, the Redundant Discrete Wavelet Transform (RDWT), the Arnold Transform (AT) and Singular Value Decomposition (SVD) to attain them. RDWT is applied to the host image to obtain the LL subband, which is then decomposed into illumination and reflectance components by the homomorphic transform. To strengthen the security of the scheme, the watermark is scrambled with the AT. The scrambled watermark is embedded into the Singular Values (SVs) of the reflectance component, obtained by applying SVD to it. Since the reflectance component carries the important features of the image, embedding the watermark in this part provides excellent imperceptibility. The proposed scheme is comprehensively examined for robustness against attacks such as scaling and shearing. A comparative study with other prevailing algorithms clearly reveals the superiority of the proposed scheme in terms of robustness and imperceptibility.
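
    As a minimal sketch of two of the building blocks named above, the code below shows Arnold (cat map) scrambling of a square watermark and additive embedding into singular values; the RDWT and homomorphic steps are omitted, the embedding strength alpha and image sizes are assumptions, and this is not the authors' exact scheme.

```python
import numpy as np

def arnold_scramble(w, iterations=1):
    """Arnold (cat map) scrambling of a square N x N watermark:
    (x, y) -> (x + y, x + 2y) mod N, applied `iterations` times."""
    n = w.shape[0]
    out = w.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def embed_in_singular_values(component, watermark, alpha=0.05):
    """Embed a (scrambled) watermark into the singular values of `component`
    (e.g. a reflectance image) and rebuild the watermarked component."""
    u, s, vt = np.linalg.svd(component, full_matrices=False)
    d = np.diag(s) + alpha * watermark      # perturb the singular-value matrix
    uw, sw, vwt = np.linalg.svd(d)          # SVD of the perturbed matrix
    watermarked = u @ np.diag(sw) @ vt      # keep the host's U, V; use new SVs
    side_info = (uw, vwt, s)                # retained as a key for extraction
    return watermarked, side_info

# Toy usage (in the real scheme `reflectance` would come from the homomorphic
# decomposition of the RDWT LL subband of the host image).
rng = np.random.default_rng(0)
reflectance = rng.random((64, 64))
watermark = arnold_scramble(rng.integers(0, 2, (64, 64)).astype(float), iterations=3)
marked, key = embed_in_singular_values(reflectance, watermark, alpha=0.05)
```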

    A review of compressive sensing in information security field

    The applications of compressive sensing (CS) in the field of information security have captured a great deal of researchers' attention in the past decade. To provide guidance for researchers from a comprehensive perspective, this paper, for the first time, reviews CS in the information security field from two aspects: theoretical security and application security. Moreover, CS applied to image ciphers is one of the most widespread applications, as its characteristics of dimensional reduction and random projection can be utilized and integrated into image cryptosystems to achieve simultaneous compression and encryption of one or multiple images. With respect to this application, the basic framework designs and the corresponding analyses are investigated. Specifically, the investigation proceeds from three aspects, namely, image ciphers based on chaos and CS, image ciphers based on optics and CS, and image ciphers based on chaos, optics, and CS. A total of six frameworks are put forward, together with their analyses in terms of security, advantages, disadvantages, and so on. Finally, we indicate some other possible application research topics for the future.
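
    A minimal sketch of the idea behind the chaos-plus-CS family of schemes follows, assuming a logistic-map key and a toy sparse signal; it is illustrative only and is not one of the six reviewed frameworks. The key-dependent measurement matrix compresses the signal and, at the same time, makes the measurements unintelligible without the key.

```python
import numpy as np

def logistic_sequence(length, x0=0.3141, r=3.99):
    """Chaotic sequence from the logistic map x <- r*x*(1 - x);
    the seed x0 (and parameter r) act as the secret key."""
    xs, x = np.empty(length), x0
    for i in range(length):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def chaotic_measurement_matrix(m, n, x0=0.3141, r=3.99):
    """Build an m x n CS measurement matrix from the chaotic sequence,
    mapped to +/-1 entries (a Bernoulli-like sensing matrix)."""
    seq = logistic_sequence(m * n, x0, r)
    return np.where(seq > 0.5, 1.0, -1.0).reshape(m, n) / np.sqrt(m)

# Simultaneous compression and "encryption": y = Phi @ x.
n, m = 256, 64                                         # 4x compression
x = np.zeros(n); x[[10, 50, 200]] = [1.0, -2.0, 0.5]   # toy sparse signal
phi = chaotic_measurement_matrix(m, n, x0=0.3141)
y = phi @ x                                            # measurements sent to the receiver
```

    Recovering x from y would use a standard sparse solver (e.g. orthogonal matching pursuit) after the receiver regenerates phi from the shared key (x0, r).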

    Digital Image Processing

    Newspapers and the popular scientific press today publish many examples of highly impressive images. These images range, for example, from those showing regions of star birth in the distant Universe to the extent of the stratospheric ozone depletion over Antarctica in springtime, and to those regions of the human brain affected by Alzheimer’s disease. Processed digitally to generate spectacular images, often in false colour, they all make an immediate and deep impact on the viewer’s imagination and understanding. Professor Jonathan Blackledge’s erudite but very useful new treatise Digital Image Processing: Mathematical and Computational Methods explains both the underlying theory and the techniques used to produce such images in considerable detail. It also provides many valuable example problems - and their solutions - so that the reader can test his/her grasp of the physical, mathematical and numerical aspects of the particular topics and methods discussed. As such, this magnum opus complements the author’s earlier work Digital Signal Processing. Both books are a wonderful resource for students who wish to make their careers in this fascinating and rapidly developing field which has an ever increasing number of areas of application.

    The strengths of this large book lie in:
    • an excellent explanatory introduction to the subject;
    • thorough treatment of the theoretical foundations, dealing with both electromagnetic and acoustic wave scattering and allied techniques;
    • comprehensive discussion of all the basic principles, the mathematical transforms (e.g. the Fourier and Radon transforms), their interrelationships and, in particular, Born scattering theory and its application to imaging systems modelling;
    • discussion in detail - including the assumptions and limitations - of optical imaging, seismic imaging, medical imaging (using ultrasound), X-ray computer aided tomography, tomography when the wavelength of the probing radiation is of the same order as the dimensions of the scatterer, Synthetic Aperture Radar (airborne or spaceborne), digital watermarking and holography;
    • detail devoted to the methods of implementation of the analytical schemes in various case studies and also as numerical packages (especially in C/C++);
    • coverage of deconvolution, de-blurring (or sharpening) an image, maximum entropy techniques, Bayesian estimators, techniques for enhancing the dynamic range of an image, methods of filtering images and techniques for noise reduction (a small generic illustration of frequency-domain deconvolution follows this review);
    • discussion of thresholding, techniques for detecting edges in an image and for contrast stretching, stochastic scattering (random walk models) and models for characterizing an image statistically;
    • investigation of fractal images, fractal dimension segmentation, image texture, the coding and storing of large quantities of data, and image compression such as JPEG;
    • a valuable summary of the important results obtained in each chapter, given at its end;
    • suggestions for further reading at the end of each chapter.

    I warmly commend this text to all readers, and trust that they will find it to be invaluable.

    Professor Michael J Rycroft, Visiting Professor at the International Space University, Strasbourg, France, and at Cranfield University, England
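
    The sketch below is a small, self-contained illustration of one of the techniques listed above: a generic Wiener-style frequency-domain deconvolution. It is not taken from the book; the 5x5 box blur and the regularisation constant k are arbitrary assumptions for the toy example.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Frequency-domain (Wiener-style) deconvolution: divide out the blur's
    transfer function, with a constant k that limits noise amplification."""
    H = np.fft.fft2(psf, s=blurred.shape)     # transfer function of the blur
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)     # regularised inverse filter
    return np.real(np.fft.ifft2(W * G))

# Toy usage: blur an image with a box PSF (circular convolution), then restore.
rng = np.random.default_rng(1)
img = rng.random((128, 128))
psf = np.zeros((128, 128)); psf[:5, :5] = 1 / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf, k=1e-3)
```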

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges, across a wide variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand: image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained, so the interested reader can choose any chapter and skip to another without losing continuity.

    Multimedia Forensic Analysis via Intrinsic and Extrinsic Fingerprints

    Digital imaging has experienced tremendous growth in recent decades, and digital images have been used in a growing number of applications. With the increasing popularity of imaging devices and the availability of low-cost image editing software, the integrity of image content can no longer be taken for granted. A number of forensic and provenance questions often arise, including how an image was generated; where an image came from; and what has been done to the image since its creation, by whom, when and how. This thesis presents two different sets of techniques to address these questions via intrinsic and extrinsic fingerprints. The first part of this thesis introduces a new methodology based on intrinsic fingerprints for the forensic analysis of digital images. The proposed method is motivated by the observation that many processing operations, both inside and outside acquisition devices, leave distinct intrinsic traces on the final output data. We present methods to identify these intrinsic fingerprints via component forensic analysis, and demonstrate that these traces can serve as useful features for such forensic applications as building a robust device identifier and identifying potential technology infringement or licensing. Building upon component forensics, we develop a general authentication and provenance framework to reconstruct the processing history of digital images. We model post-device processing as a manipulation filter and estimate its coefficients using a linear time-invariant approximation. Absence of in-device fingerprints, presence of new post-device fingerprints, or any inconsistencies in the estimated fingerprints across different regions of the test image all suggest that the image is not a direct device output and has possibly undergone some kind of processing, such as content tampering or steganographic embedding, after device capture. While component forensics is widely applicable in a number of scenarios, it has performance limitations. To understand the fundamental limits of component forensics, we develop a new theoretical framework based on estimation and pattern classification theories, and define formal notions of forensic identifiability and classifiability of components. We show that the proposed framework provides a solid foundation for studying information forensics and helps design optimal input patterns that improve parameter estimation accuracy via semi non-intrusive forensics. The final part of the thesis investigates a complementary extrinsic approach via image hashing that can be used for content-based image authentication and other media security applications. We show that the proposed hashing algorithm is robust to common signal processing operations and present a systematic evaluation of the security of the image hash against estimation and forgery attacks.
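
    As a rough illustration of the linear time-invariant idea mentioned above, the sketch below estimates a small FIR "manipulation filter" by least squares from an original/processed image pair; the 3x3 kernel size and the toy smoothing filter are assumptions, and this is not the estimator developed in the thesis.

```python
import numpy as np

def estimate_lti_kernel(original, processed, size=3):
    """Least-squares estimate of a size x size FIR kernel h such that
    filtering `original` with h approximates `processed` (an LTI
    approximation of whatever post-device processing was applied)."""
    r = size // 2
    rows, cols = original.shape
    patches, targets = [], []
    for i in range(r, rows - r):            # use interior pixels only
        for j in range(r, cols - r):
            patches.append(original[i - r:i + r + 1, j - r:j + r + 1].ravel())
            targets.append(processed[i, j])
    h, *_ = np.linalg.lstsq(np.asarray(patches), np.asarray(targets), rcond=None)
    return h.reshape(size, size)

# Toy check: "manipulate" an image with a known smoothing kernel and recover it.
rng = np.random.default_rng(2)
img = rng.random((64, 64))
true_h = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 8.0
manipulated = np.zeros_like(img)
for di in range(-1, 2):
    for dj in range(-1, 2):
        manipulated += true_h[di + 1, dj + 1] * np.roll(img, (-di, -dj), axis=(0, 1))
estimated_h = estimate_lti_kernel(img, manipulated)   # close to true_h
```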

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is getting renewed attention to overcome remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges of ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.

    Connected Attribute Filtering Based on Contour Smoothness
