28 research outputs found

    Wavelet techniques for reversible data embedding into images

    The proliferation of digital information in our society has enticed a great deal of research into data embedding techniques that add information to digital content such as images, audio, and video. This additional information can be used for various purposes, and different applications place different requirements on the embedding techniques. In this paper, we investigate high-capacity lossless data embedding methods that allow one to embed large amounts of data into digital images (or video) in such a way that the original image can be reconstructed from the watermarked image. The paper starts by briefly reviewing three existing lossless data embedding techniques, as described by Fridrich and co-authors, by Tian, and by Celik and co-workers. We then present two new techniques: one based on least significant bit prediction and Sweldens' lifting scheme, and another that improves on Tian's technique of difference expansion. The various embedding methods are then compared in terms of capacity-distortion behaviour, embedding speed, and capacity control.
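
    To make the reviewed idea concrete, here is a minimal Python sketch of Tian-style difference expansion for a single pixel pair, relying on floor-division semantics. It omits what a real embedder needs (an overflow/underflow location map and capacity control), so it is illustrative rather than a faithful implementation of any of the compared methods.

    def embed_bit(x: int, y: int, bit: int) -> tuple[int, int]:
        """Embed one payload bit into the pixel pair (x, y) by difference expansion."""
        l = (x + y) // 2          # integer average, preserved by the embedding
        h = x - y                 # pixel difference
        h2 = 2 * h + bit          # expand the difference and append the bit
        return l + (h2 + 1) // 2, l - h2 // 2   # watermarked pair

    def extract_bit(x2: int, y2: int) -> tuple[int, int, int]:
        """Recover the bit and the original pair from the watermarked pair."""
        l = (x2 + y2) // 2
        h2 = x2 - y2
        bit = h2 % 2              # the embedded bit is the parity of the difference
        h = h2 // 2               # undo the expansion (floor division)
        return bit, l + (h + 1) // 2, l - h // 2

    # Round trip: (100, 97) with bit 1 becomes (102, 95) and is exactly restored.
    assert extract_bit(*embed_bit(100, 97, 1)) == (1, 100, 97)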

    AXMEDIS 2008

    The AXMEDIS International Conference series aims to explore all subjects and topics related to cross-media and digital-media content production, processing, management, standards, representation, sharing, protection, and rights management, and to address the latest developments and future trends of the technologies and their applications, impacts, and exploitation. The AXMEDIS events offer venues for exchanging concepts, requirements, prototypes, research ideas, and findings that can contribute to academic research and also benefit business and industrial communities. In the Internet and digital era, cross-media production and distribution represent key developments and innovations, fostered by emergent technologies to ensure better value for money while optimising productivity and market coverage.

    Data Encryption and Hashing Schemes for Multimedia Protection

    There are millions of people using social networking sites such as Facebook, Google+, and YouTube every single day across the entire world to share photos and other digital media. Unfortunately, people sometimes publish content that does not belong to them. As a result, there is an increasing demand for quality software capable of providing maximum protection for copyrighted material. In addition, confidential content such as medical images and patient records requires a high level of security so that it can be protected from unintended disclosure when transferred over the Internet. On the other hand, decreasing the size of an image without significant loss in quality is always highly desirable; hence the need for efficient compression algorithms. This thesis introduces a robust method for image compression in the shearlet domain. Motivated by the superior performance of the Discrete Shearlet Transform (DST) over the Discrete Wavelet Transform (DWT) in encoding the directional information in images, we propose a DST-based compression algorithm that not only provides better quality in terms of image approximation and compression ratio, but also increases the security of images via the Advanced Encryption Standard. Experimental results on a slew of medical images illustrate the improved image quality of the proposed approximation approach in comparison to DWT, and also demonstrate its robustness against a variety of tests, including randomness, entropy, key sensitivity, and input sensitivity. We also present a 3D mesh hashing technique using spectral graph theory. The main idea is to partition a 3D model into sub-meshes, followed by the generation of the Laplace-Beltrami matrix of each sub-mesh and the application of eigen-decomposition. This, in turn, is followed by the hashing of each sub-mesh using Tsallis entropy. Experimental results on benchmark 3D models demonstrate the effectiveness of the proposed hashing scheme.
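
    As a rough illustration of the spectral hashing step, the sketch below builds a graph Laplacian for one sub-mesh, eigen-decomposes it, and condenses the normalized spectrum with Tsallis entropy. The thesis uses the Laplace-Beltrami matrix and a full per-sub-mesh hashing pipeline; the combinatorial Laplacian, the entropy parameter q, and the function names here are simplifying assumptions.

    import numpy as np

    def tsallis_entropy(p: np.ndarray, q: float = 2.0) -> float:
        """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1) of a distribution p."""
        return (1.0 - np.sum(p ** q)) / (q - 1.0)

    def submesh_spectral_value(adjacency: np.ndarray, q: float = 2.0) -> float:
        """Condense one sub-mesh (given by its adjacency matrix) into a scalar."""
        degree = np.diag(adjacency.sum(axis=1))
        laplacian = degree - adjacency            # combinatorial graph Laplacian
        eigvals = np.linalg.eigvalsh(laplacian)   # symmetric matrix -> real spectrum
        eigvals = np.clip(eigvals, 0.0, None)     # guard against round-off negatives
        p = eigvals / eigvals.sum()               # normalize the spectrum to a pmf
        return tsallis_entropy(p, q)

    # Example: a 4-vertex sub-mesh; hashing a whole model would quantize and
    # concatenate one such value per sub-mesh.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]], dtype=float)
    print(submesh_spectral_value(A))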

    Digital Watermarking for Verification of Perception-based Integrity of Audio Data

    In certain application fields digital audio recordings contain sensitive content. Examples are historical archival material in public archives that preserve our cultural heritage, or digital evidence in the context of law enforcement and civil proceedings. Because of the powerful capabilities of modern editing tools for multimedia, such material is vulnerable to doctoring of the content and forgery of its origin with malicious intent. Inadvertent data modification and mistaken origin can also be caused by human error. Hence, the credibility and provenance of such audio content, in terms of an unadulterated and genuine state, and the confidence about its origin are critical factors. To address this issue, this PhD thesis proposes a mechanism for verifying the integrity and authenticity of digital sound recordings. It is designed and implemented to be insensitive to common post-processing operations that influence the subjective acoustic perception of the audio data only marginally, if at all. Examples of such operations include lossy compression that maintains a high sound quality of the audio media, or lossless format conversions. The objective is to avoid the de facto false alarms that would be expected of standard crypto-based authentication protocols in the presence of such legitimate post-processing. To achieve this, a feasible combination of the techniques of digital watermarking and audio-specific hashing is investigated. First, a suitable secret-key-dependent audio hashing algorithm is developed. It incorporates and enhances so-called audio fingerprinting technology from the state of the art in content-based audio identification. The presented algorithm (denoted the "rMAC" message authentication code) allows "perception-based" verification of integrity; that is, integrity breaches are classified as such only once they become audible. As another objective, this rMAC is embedded and stored silently inside the audio media by means of audio watermarking technology. This approach allows the authentication code to survive the above-mentioned admissible post-processing operations and remain available for integrity verification at a later date. For this, an existing secret-key-dependent audio watermarking algorithm is used and enhanced in this thesis work. To some extent, the dependence of the rMAC and of the watermarking processing on a secret key also allows authenticating the origin of a protected audio file. To elaborate on this security aspect, this work also estimates the brute-force effort of an adversary attacking the combined rMAC-watermarking approach. The experimental results show that the proposed method provides good distinction and classification performance for authentic versus doctored audio content. It also allows the temporal localization of audible data modifications within a protected audio file. The experimental evaluation finally provides recommendations for the technical configuration settings of the combined watermarking-hashing approach. Beyond the main topic of perception-based data integrity and authenticity for audio, this PhD work provides new general findings in the fields of audio fingerprinting and digital watermarking. The main contributions of this PhD were published and presented mainly at conferences on multimedia security. These publications have been cited by a number of other authors and hence have had some impact on their work.
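
    For intuition about how a perception-based MAC can tolerate legitimate signal changes, here is a heavily simplified, key-dependent audio hash in the spirit of the fingerprinting approach described above. The frame count, band count, and keyed comparison rule are illustrative assumptions, not the thesis's actual rMAC; verification would compare two bit strings by Hamming distance against a tolerance threshold, so mild post-processing flips only a few bits while audible doctoring flips many.

    import numpy as np

    def rmac_bits(samples: np.ndarray, key: int, n_frames: int = 32,
                  n_bands: int = 16) -> np.ndarray:
        """Derive one bit per frame from keyed spectral band-energy comparisons."""
        frames = np.array_split(samples, n_frames)
        energies = np.empty((n_frames, n_bands))
        for i, frame in enumerate(frames):
            spectrum = np.abs(np.fft.rfft(frame)) ** 2
            bands = np.array_split(spectrum, n_bands)
            energies[i] = [band.sum() for band in bands]
        # The secret key selects which band pairs are compared, so an attacker
        # without the key cannot target the exact features being authenticated.
        # (A real scheme would avoid comparing a band with itself.)
        rng = np.random.default_rng(key)
        pairs = rng.integers(0, n_bands, size=(n_frames, 2))
        left = energies[np.arange(n_frames), pairs[:, 0]]
        right = energies[np.arange(n_frames), pairs[:, 1]]
        return (left > right).astype(np.uint8)

    # One second of placeholder audio at 48 kHz.
    audio = np.random.default_rng(1).standard_normal(48000)
    print(rmac_bits(audio, key=0xC0FFEE))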

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand. These five categories are ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    The Effect of Code Obfuscation on Authorship Attribution of Binary Computer Files

    In many forensic investigations, questions linger regarding the identity of the authors of a software specimen. Research has identified methods for the attribution of binary files that have not been obfuscated, but a significant percentage of malicious software has been obfuscated in an effort to hide both the details of its origin and its true intent. Little research has been done on analyzing obfuscated code for attribution. In part, the reason for this gap is that deobfuscation of an unknown program is a challenging task. Further, the additional transformation of the executable file introduced by the obfuscator modifies or removes features of the original executable that would otherwise have been used in the author attribution process. Existing research has demonstrated good success in attributing the authorship of an executable file of unknown provenance using methods based on static analysis of the specimen file. With the addition of file obfuscation, static analysis becomes difficult and time consuming, and in some cases may lead to inaccurate findings. This paper presents a novel process for authorship attribution using dynamic analysis methods. A software-emulated system was fully instrumented to become a test harness for a specimen of unknown provenance, allowing for supervised control, monitoring, and trace data collection during execution. This trace data was used as input to a supervised machine learning algorithm trained to identify stylometric differences in the specimen under test and predict who wrote it. The specimen files were also analyzed for authorship using static analysis methods, to compare prediction accuracies with those obtained from the new dynamic-analysis-based method. Experiments indicate that the new method can provide better accuracy of author attribution for files of unknown provenance, especially where the specimen file has been obfuscated.
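
    A bare-bones sketch of the learning stage such a pipeline involves might look as follows: each execution trace is reduced to stylometric features (here, frequencies of instruction-mnemonic bigrams) and a supervised classifier is trained on traces of known authorship. The feature set, the choice of a random forest, and the toy data are all illustrative assumptions; the paper's instrumented emulator collects far richer trace data.

    from collections import Counter
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.ensemble import RandomForestClassifier

    def trace_features(mnemonics: list[str]) -> Counter:
        """Frequencies of instruction bigrams observed in one dynamic trace."""
        return Counter(f"{a} {b}" for a, b in zip(mnemonics, mnemonics[1:]))

    # Hypothetical training traces from binaries of known authorship.
    traces = [["push", "mov", "call", "mov", "ret"],
              ["mov", "xor", "xor", "jmp", "ret"]]
    authors = ["author_a", "author_b"]

    vectorizer = DictVectorizer()
    X = vectorizer.fit_transform([trace_features(t) for t in traces])
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, authors)

    # Predict the author of a specimen of unknown provenance from its trace.
    unknown = ["push", "mov", "call", "ret"]
    print(model.predict(vectorizer.transform([trace_features(unknown)])))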

    Variations and Application Conditions Of the Data Type »Image« - The Foundation of Computational Visualistics

    A few years ago, the department of computer science of the University of Magdeburg introduced a completely new diploma programme called 'computational visualistics', a curriculum dealing with all aspects of computational pictures. Only isolated aspects had been studied before in computer science, particularly in the independent domains of computer graphics, image processing, information visualization, and computer vision. So is there indeed a coherent domain of research behind such a curriculum? The answer to that question depends crucially on a data structure that acts as a mediator between general visualistics and computer science: the data structure "image". The present text investigates that data structure, its components, and its application conditions, and thus elaborates the very foundations of computational visualistics as a unique and homogeneous field of research. Before concentrating on that data structure, the theory of pictures in general, and the definition of pictures as perceptoid signs in particular, are closely examined. This includes an act-theoretic consideration of resemblance as the crucial link between image and object, the communicative function of context building as the central concept for comparing pictures and language, and several modes of reflection underlying the relation between image and image user. In the main chapter, the data structure "image" is analyzed in detail from the perspectives of syntax, semantics, and pragmatics. While syntactic aspects mostly concern image processing, semantic questions form the core of computer graphics and computer vision. Pragmatic considerations are particularly involved with interactive pictures, but also extend to the field of information visualization and even to computer art. Four case studies provide practical applications of various aspects of the analysis.

    3D-in-2D Displays for ATC.

    This paper reports on the efforts and accomplishments of the 3D-in-2D Displays for ATC project at the end of Year 1. We describe the invention of 10 novel 3D/2D visualisations, most of which were implemented with the Augmented Reality ARToolkit. These prototype implementations of visualisation and interaction elements can be viewed in the accompanying video. We have identified six candidate design concepts which we will research and develop further. These designs correspond to the early feasibility-study stage of maturity as defined by the NASA Technology Readiness Level framework. We developed the Combination Display Framework from a review of the literature, and used it to analyse display designs in terms of the display techniques used and how they are combined. The insights we gained from this framework then guided our inventions and the human-centred innovation process we use to invent iteratively. Our designs are based on an understanding of user work practices. We also developed a simple ATC simulator that we used for rapid experimentation and evaluation of design ideas. We expect that, if this project continues, the effort in Years 2 and 3 will focus on maturing the concepts and deploying them in operational laboratory settings.

    Content based image retrieval with image signatures

    This thesis develops a system that searches for relevant images when a user submits a particular image as a query. The concept is similar to text search in Google or Yahoo; however, understanding image content is more difficult than understanding text content. The system provides a method to retrieve images similar to the query easily and quickly, and it allows end users to refine the original query iteratively, since they otherwise have no effective way to reformulate an image query. The results from empirical evaluations suggest that our system is fast and retrieves a broad spectrum of relevant images even when the query image has undergone changes.
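
    To make the retrieval loop concrete, here is a minimal sketch of signature-based search with iterative refinement: each image is condensed into a normalized color-histogram signature, results are ranked by cosine similarity, and user feedback pulls the query signature toward results marked as relevant (a standard Rocchio-style update). The thesis's actual signature and refinement method are not specified here, so every detail below is an illustrative assumption.

    import numpy as np

    def signature(image: np.ndarray, bins: int = 16) -> np.ndarray:
        """Concatenated per-channel histograms of an (H, W, 3) uint8 image."""
        hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
                 for c in range(3)]
        sig = np.concatenate(hists).astype(float)
        return sig / np.linalg.norm(sig)

    def rank(query_sig: np.ndarray, database_sigs: np.ndarray) -> np.ndarray:
        """Indices of database images, most similar (cosine) first."""
        scores = database_sigs @ query_sig     # signatures are unit-normalized
        return np.argsort(-scores)

    def refine(query_sig: np.ndarray, liked_sigs: np.ndarray,
               alpha: float = 0.7) -> np.ndarray:
        """Rocchio-style update: pull the query toward relevant results."""
        updated = alpha * query_sig + (1 - alpha) * np.mean(liked_sigs, axis=0)
        return updated / np.linalg.norm(updated)

    # Toy example with random "images"; a real system would index real photos.
    rng = np.random.default_rng(0)
    db = np.stack([signature(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))
                   for _ in range(100)])
    query = signature(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))
    top = rank(query, db)[:5]              # initial top-5 results
    query = refine(query, db[top[:2]])     # user marks two results as relevant
    print(rank(query, db)[:5])             # refined ranking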