31 research outputs found

    High Capacity Analog Channels for Smart Documents

    Widely used, valuable hardcopy documents such as passports, visas, driving licenses, educational certificates, and entrance passes for entertainment events are conventionally protected against counterfeiting and data-tampering attacks by analog security technologies (e.g. KINEGRAMS®, holograms, micro-printing, UV/IR inks). However, easy access to high-quality, low-priced desktop publishing technology has rendered most of these technologies ineffective, giving rise to high-quality forged documents. High price and restricted usage are further drawbacks of analog document protection techniques. Digital watermarking and high-capacity storage media such as IC chips and optical data stripes are the modern technologies used in new machine-readable identity verification documents to ensure content integrity; however, these technologies are either expensive or do not satisfy application needs, motivating the search for more efficient document protection technologies. In this research, three different high-capacity analog channels are investigated for hidden communication, along with their applications in smart documents: the high-density data stripe (HD-DataStripe), data hiding in printed halftone images (watermarking), and the superposed constant background grayscale image (CBGI). To develop high-capacity analog channels, the noise introduced by the print-and-scan (PS) process is investigated with the objective of recovering the digital information encoded at nearly maximum channel utilization. Based on the observed noise behaviour, corresponding countermeasures are taken in the data recovery process. The HD-DataStripe is a printed binary image similar to conventional 2-D barcodes (e.g. PDF417), but it offers much higher data storage capacity and is intended for machine-readable identity verification documents. The capacity offered by the HD-DataStripe is sufficient to store high-quality biometric characteristics rather than extracted templates, in addition to the conventional bearer-related data contained in a smart ID card. It also eliminates the need for a central database system (except for backup records) and the other expensive storage media currently in use. In developing a novel data-reading technique for the HD-DataStripe, the registration-mark pattern is chosen so as to account for unavoidable geometrical distortions and yield accurate sampling points, a necessary condition for reliable data recovery at higher data-encoding rates. For the more sophisticated distortions caused by physical dot-gain effects (intersymbol interference), countermeasures such as application of the sampling theorem, adaptive binarization, and post-data processing are given, each providing only a necessary condition for reliable data recovery. Finally, by combining the filters corresponding to these countermeasures, a novel data-reading technique for the HD-DataStripe is obtained; it yields superior performance to existing techniques for data recovery from printed media. In another scenario, a small-size HD-DataStripe with maximum entropy is used as a copy detection pattern, exploiting the information loss encountered at nearly maximum channel capacity.
    When the HD-DataStripe is applied to hardcopy documents (contracts, official letters, etc.), the technique, unlike existing work [Zha04], allows one-to-one content matching and does not depend on hash functions or OCR technology, constraints imposed mainly by the low data storage capacity of existing analog media. For printed halftone images carrying hidden information, the higher capacity is mainly attributable to the HD-DataStripe data-reading technique, which allows data recovery at higher printing resolutions, a key requirement for a high-quality watermarking technique in the spatial domain. Digital halftoning and data encoding techniques are the other factors contributing to the data hiding technique given in this research. Regarding security, the new technique allows content integrity and authenticity verification in a setting where a certain amount of error is unavoidable, which rules out existing techniques designed for purely digital content. Finally, a superposed constant background grayscale image, obtained by the repeated application of a specially designed small binary pattern, is used as a channel for hidden communication; it allows up to 33 pages of A4-size foreground text to be encoded in one CBGI. The higher capacity is attributable to the data encoding symbols and the data-reading technique.
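    The abstract names adaptive binarization and accurate grid sampling points as necessary conditions for reliable recovery from a scanned stripe. The sketch below illustrates only that general idea; the function name, the module-size parameter, and the fixed grid origin are illustrative assumptions, not the dissertation's actual algorithm (which derives its sampling points from registration marks).

```python
# Hedged sketch: recover a bit matrix from a scanned binary data stripe
# via local adaptive thresholding plus sampling at module centres.
import cv2
import numpy as np

def recover_bits(scan_path, rows, cols, module_px, origin=(0, 0)):
    """Recover a rows x cols bit matrix from a scanned stripe image."""
    gray = cv2.imread(scan_path, cv2.IMREAD_GRAYSCALE)
    # Local thresholding copes with uneven illumination and dot-gain
    # darkening better than a single global threshold.
    binary = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY, blockSize=15, C=5)
    oy, ox = origin
    bits = np.zeros((rows, cols), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            # Sample each printed module at its centre; the real system
            # derives these points from a registration-mark pattern.
            y = oy + int((r + 0.5) * module_px)
            x = ox + int((c + 0.5) * module_px)
            bits[r, c] = 1 if binary[y, x] < 128 else 0  # dark ink = 1
    return bits
```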

    Study and Implementation of Watermarking Algorithms

    Watermarking is the process of embedding data, called a watermark, into a multimedia object such that the watermark can later be detected or extracted to make an assertion about the object. The object may be audio, an image, or video. A copy of a digital image is identical to the original; this has, in many instances, led to the use of digital content with malicious intent. One way to protect multimedia data against illegal recording and retransmission is to embed a signal, called a digital signature, copyright label, or watermark, that authenticates the owner of the data. Data hiding schemes, which embed secondary data in digital media, have made considerable progress in recent years and attracted attention from both academia and industry. Techniques have been proposed for a variety of applications, including ownership protection, authentication, and access control. Imperceptibility, robustness against moderate processing such as compression, and the ability to hide many bits are the basic but rat..
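    As a concrete illustration of embedding and extraction, here is a minimal least-significant-bit (LSB) scheme, one of the simplest spatial-domain approaches. It is a generic textbook sketch, not an algorithm taken from this work, and it assumes an 8-bit grayscale cover image.

```python
# Hedged sketch: LSB watermark embedding/extraction for uint8 images.
import numpy as np

def embed_lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write a flat array of 0/1 bits into the LSB plane of the cover."""
    flat = cover.flatten()                      # flatten() returns a copy
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits.astype(np.uint8)
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the first n_bits back out of the LSB plane."""
    return stego.flatten()[:n_bits] & 1
```

    LSB embedding satisfies imperceptibility and capacity but not robustness: even mild compression destroys the hidden bits, which is why practical schemes embed in more robust domains.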

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand, ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    Machine learning based digital image forensics and steganalysis

    The security and trustworthiness of digital images have become crucial issues due to the ease of malicious processing. Therefore, research on image steganalysis (determining whether a given image has secret information hidden inside) and image forensics (determining the origin and authenticity of a given image and revealing the processing history it has gone through) has become essential to the digital society. In this dissertation, the steganalysis and forensics of digital images are treated as pattern classification problems so that advanced machine learning (ML) methods become applicable. Three topics are covered: (1) architectural design of convolutional neural networks (CNNs) for steganalysis, (2) statistical feature extraction for camera model classification, and (3) real-world tampering detection and localization. For covert communications, steganography is used to embed secret messages into images by altering pixel values slightly. Since advanced steganography alters pixel values in image regions where changes are hard to detect, traditional ML-based steganalytic methods, which rely heavily on sophisticated manually designed features, have been pushed to their limit. To overcome this difficulty, in-depth studies are conducted and reported in this dissertation to carry the success achieved by CNNs in computer vision over to steganalysis. The outcomes reported in this dissertation are: (1) a proposed CNN architecture incorporating the domain knowledge of steganography and steganalysis, and (2) ensemble methods of CNNs for steganalysis. The proposed CNN is currently one of the best classifiers against steganography. Camera model classification aims at assigning a given image to its source capturing camera model based on the statistics of image pixel values. For this, two types of statistical features are designed to capture the traces left by in-camera image processing algorithms. The first is Markov transition probabilities modeling block-DCT coefficients for JPEG images; the second is based on histograms of local binary patterns obtained in both the spatial and wavelet domains. The designed features serve as the input to train support vector machines, which achieved the best classification performance at the time the features were proposed. The last part of this dissertation documents the solutions delivered by the author’s team to The First Image Forensics Challenge, organized by the Information Forensics and Security Technical Committee of the IEEE Signal Processing Society. In the competition, all the fake images involved were doctored by popular image-editing software to simulate the real-world scenario of tampering detection (determining whether a given image has been tampered with) and localization (determining which pixels have been tampered with). In Phase-1 of the Challenge, advanced steganalysis features were successfully migrated to tampering detection. In Phase-2, an efficient copy-move detector equipped with PatchMatch as a fast approximate nearest-neighbor search method was developed to identify duplicated regions within images. With these tools, the author’s team won the runner-up prizes in both phases of the Challenge.
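    To make the first feature type concrete, here is a minimal sketch of Markov transition probabilities over thresholded differences of block-DCT coefficients, following the common construction in the steganalysis and forensics literature. The threshold T, the horizontal-difference direction, and the 8x8 block size are conventional choices, not necessarily the dissertation's exact design.

```python
# Hedged sketch: Markov transition-probability features from block-DCT
# coefficient differences of a 2-D grayscale image array.
import numpy as np
from scipy.fftpack import dct

def block_dct(img, bsize=8):
    """Rounded 2-D DCT coefficients computed block by block."""
    h, w = img.shape
    h, w = h - h % bsize, w - w % bsize          # crop to whole blocks
    coefs = np.zeros((h, w))
    for y in range(0, h, bsize):
        for x in range(0, w, bsize):
            block = img[y:y+bsize, x:x+bsize].astype(float)
            coefs[y:y+bsize, x:x+bsize] = dct(
                dct(block.T, norm='ortho').T, norm='ortho')
    return np.round(coefs)

def markov_features(img, T=4):
    """(2T+1)^2 transition probabilities of horizontal coef differences."""
    c = np.abs(block_dct(img))
    d = np.clip(c[:, :-1] - c[:, 1:], -T, T).astype(int)
    # Estimate P(d[i, j+1] = n | d[i, j] = m) by counting pairs.
    P = np.zeros((2 * T + 1, 2 * T + 1))
    src, dst = d[:, :-1] + T, d[:, 1:] + T       # shift into [0, 2T]
    np.add.at(P, (src.ravel(), dst.ravel()), 1)
    row_sums = P.sum(axis=1, keepdims=True)
    return (P / np.maximum(row_sums, 1)).ravel()
```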

    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, covering both solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance; however, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require high amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work will study rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases, and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we will suggest and implement different optimizations that allow performing the computation at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. One is targeted at rendering screen-space anti-aliased images and reducing the artifacts generated in rasterized lines and edges; we expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly; this process is known to be one of the most important burdens for every rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions in rendering budgets.
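    The starting point named above, the traditional ray marching algorithm, fits in a few lines. The sketch below is a generic absorption-plus-emission marcher over a toy density field; the extinction coefficient, step count, and early-termination threshold are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch: single-ray marching through a participating medium
# (absorption + emission, no scattering), with Beer-Lambert attenuation.
import numpy as np

def density(p):
    """Toy smoke density: a soft sphere centred at the origin."""
    return max(0.0, 1.0 - np.linalg.norm(p))

def march(origin, direction, t_max=2.0, steps=64,
          sigma_t=4.0, emission=1.0):
    """Return (radiance, transmittance) accumulated along one ray."""
    dt = t_max / steps
    transmittance, radiance = 1.0, 0.0
    for i in range(steps):
        p = origin + (i + 0.5) * dt * direction   # midpoint of the step
        sigma = sigma_t * density(p)
        step_T = np.exp(-sigma * dt)              # Beer-Lambert over dt
        # Emission from this segment, attenuated by the medium so far.
        radiance += transmittance * (1.0 - step_T) * emission
        transmittance *= step_T
        if transmittance < 1e-3:                  # early termination
            break
    return radiance, transmittance

# Example: one ray entering the toy volume from the front.
# march(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]))
```

    Early termination of this kind is one of the simplest of the optimizations the thesis alludes to: once transmittance is negligible, further steps cannot change the pixel visibly.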

    Attitudes towards old age and age of retirement across the world: findings from the future of retirement survey

    The 21st century has been described as the first era in human history when the world will no longer be young, and there will be drastic changes in many aspects of our lives, including socio-demographics, finances, and attitudes towards old age and retirement. This talk will briefly introduce the Global Ageing Survey (GLAS) 2004 and 2005, popularly known as “The Future of Retirement”. These surveys provide a unique data source, collected in 21 countries and territories, that allows researchers to better understand individual as well as societal changes as we age with regard to savings, retirement, and healthcare. In 2004, approximately 10,000 people aged 18+ were surveyed in nine countries and one territory (Brazil, Canada, China, France, Hong Kong, India, Japan, Mexico, UK and USA). In 2005, the number was increased to twenty-one by adding Egypt, Germany, Indonesia, Malaysia, Poland, Russia, Saudi Arabia, Singapore, Sweden, Turkey and South Korea. Moreover, an additional 6,320 private-sector employers were surveyed in 2005, some 300 in each country, with a view to elucidating the attitudes of employers to issues relating to older workers. The paper aims to examine attitudes towards old age and retirement across the world and will indicate some policy implications.

    Preserving Trustworthiness and Confidentiality for Online Multimedia

    Technology advancements in the areas of mobile computing, social networks, and cloud computing have rapidly changed the way we communicate and interact. The wide adoption of media-oriented mobile devices such as smartphones and tablets enables people to capture information in various media formats and offers them a rich platform for media consumption. The proliferation of online services and social networks makes it possible to store personal multimedia collections online and share them with family and friends anytime, anywhere. Considering the increasing impact of digital multimedia and the trend of cloud computing, this dissertation explores the problem of how to evaluate the trustworthiness and preserve the confidentiality of online multimedia data. The dissertation consists of two parts. The first part examines the problem of evaluating the trustworthiness of multimedia data distributed online. Given the digital nature of multimedia data, editing and tampering of the multimedia content become very easy. Therefore, it is important to analyze and reveal the processing history of a multimedia document in order to evaluate its trustworthiness. We propose a new forensic technique called “Forensic Hash”, which draws synergy between two related research areas: image hashing and non-reference multimedia forensics. A forensic hash is a compact signature capturing important information from the original multimedia document to assist forensic analysis and reveal the processing history of a multimedia document under question. Our proposed technique is shown to have the advantage of being compact and offering efficient and accurate analysis for forensic questions that cannot easily be answered by conventional forensic techniques. The answers that we obtain from the forensic hash provide valuable information on the trustworthiness of online multimedia data. The second part of this dissertation addresses the confidentiality issue of multimedia data stored with online services. The emerging cloud computing paradigm makes it attractive to store private multimedia data online for easy access and sharing. However, the potential of cloud services cannot be fully reached unless the issue of how to preserve the confidentiality of sensitive data stored in the cloud is addressed. In this dissertation, we explore techniques that enable confidentiality-preserving search of encrypted multimedia, which can play a critical role in secure online multimedia services. Techniques from image processing, information retrieval, and cryptography are jointly and strategically applied to allow efficient rank-ordered search over an encrypted multimedia database while preserving data confidentiality against malicious intruders and service providers. We demonstrate the high efficiency and accuracy of the proposed techniques and provide a quantitative comparative study with conventional techniques based on heavyweight cryptographic primitives.
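    To give a feel for the image-hashing half of this idea, here is a deliberately simple average-hash sketch: a compact signature computed from an image that a verifier can compare against a received copy. It is a generic stand-in, far simpler than the forensic hash proposed in the dissertation, which is designed to answer richer questions about processing history.

```python
# Hedged sketch: a generic 64-bit average hash plus Hamming comparison.
import numpy as np
from PIL import Image

def average_hash(path, hash_size=8):
    """1 where a downsampled pixel exceeds the image mean, else 0."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    px = np.asarray(img, dtype=float)
    return (px > px.mean()).flatten()

def hamming(h1, h2):
    """Small distances suggest the same content; large ones suggest edits."""
    return int(np.count_nonzero(h1 != h2))
```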

    Towards 3D Scanning from Digital Images by Novice Users

    The uptake of hobbyist 3D printers is being held back, in part, by the barriers associated with creating a computer model to be printed. One way of creating such a model is to take a 3D scan of a pre-existing object using multiple digital images showing the object from different points of view. This document details one way of doing this, with particular emphasis on camera calibration: the process of estimating camera parameters for the camera that took an image. In common calibration scenarios, multiple images are used where it is assumed that the internal parameters, such as zoom and focus settings, are fixed between images, and the relative placement of the camera between images needs to be estimated. This is not ideal for a novice doing 3D scanning with a “point and shoot” camera, where these internal parameters may not have been held fixed between images. A common coordinate system between images with a known relationship to real-world measurements is also desirable. Additionally, in some 3D scanning scenarios that use digital images, where it is expected that a trained individual will be doing the photography and internal settings can be held constant throughout the process, the images used for calibration are different from those used for object capture. A technique has been developed to overcome these shortcomings. It uses a known printed sheet of paper, called the calibration sheet, on which the object to be scanned sits, so that object acquisition and camera calibration can be done from the same image. Each image is processed independently with reference to the known size of the calibration sheet, so the output is automatically to scale, and minor camera calibration errors in one image do not propagate to the estimates of camera calibration parameters for other images. The calibration process developed also works where large parts of the calibration sheet are obscured.
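    One way to realise this per-image, known-size idea is a plane-to-image homography against the sheet's printed dimensions. The sketch below assumes an A4-sized sheet, four detectable reference marks, and OpenCV; the mark detection itself is stubbed out, and none of this is claimed to be the document's actual pipeline.

```python
# Hedged sketch: per-image metric mapping from a known calibration sheet.
import cv2
import numpy as np

# Real-world coordinates (mm) of four known marks on the sheet plane;
# an A4 page is assumed here purely for illustration.
sheet_mm = np.array([[0, 0], [210, 0], [210, 297], [0, 297]],
                    dtype=np.float32)

def calibrate_single_image(image_points_px: np.ndarray) -> np.ndarray:
    """Homography mapping sheet-plane mm to pixels for one image.

    image_points_px: the four sheet marks as detected in this image,
    in the same order as sheet_mm (detection is assumed done elsewhere).
    """
    H, _ = cv2.findHomography(sheet_mm, image_points_px)
    return H  # errors in one image stay local to this homography

def pixel_to_sheet_mm(H: np.ndarray, uv: np.ndarray) -> np.ndarray:
    """Back-project image pixels onto the sheet plane, in millimetres."""
    pts = cv2.perspectiveTransform(
        uv.reshape(-1, 1, 2).astype(np.float32), np.linalg.inv(H))
    return pts.reshape(-1, 2)
```

    Because each image is calibrated against the sheet's known physical size independently, the outputs are automatically to scale and a bad estimate in one view does not contaminate the others, matching the property the abstract emphasises.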

    Connected Attribute Filtering Based on Contour Smoothness
