
    High Capacity Analog Channels for Smart Documents

    Widely used valuable hardcopy documents such as passports, visas, driving licenses, educational certificates, and entrance passes for entertainment events are conventionally protected against counterfeiting and data-tampering attacks by analog security technologies (e.g. KINEGRAMS®, holograms, micro-printing, UV/IR inks). However, easy access to high-quality, low-priced modern desktop publishing technology has rendered most of these technologies ineffective, giving rise to high-quality forged documents. The higher price and restricted usage are further drawbacks of analog document-protection techniques. Digital watermarking and high-capacity storage media such as IC chips and optical data stripes are the modern technologies used in new machine-readable identity-verification documents to ensure content integrity; however, these technologies are either expensive or do not satisfy application needs, motivating the search for more efficient document-protection technologies. In this research, three different high-capacity analog channels are investigated for hidden communication, along with their applications in smart documents: the high-density data stripe (HD-DataStripe), data hiding in printed halftone images (watermarking), and the superposed constant background grayscale image (CBGI). On the way to developing high-capacity analog channels, the noise introduced by the print-and-scan (PS) process is investigated with the objective of recovering digital information encoded at nearly maximum channel utilization. Based on the observed noise behaviour, countermeasures against the noise are taken accordingly in the data-recovery process. The HD-DataStripe is a printed binary image similar to conventional 2-D barcodes (e.g. PDF417), but it offers much higher data-storage capacity and is intended for machine-readable identity-verification documents.
    The capacity offered by the HD-DataStripe is sufficient to store high-quality biometric characteristics, rather than extracted templates, in addition to the conventional bearer-related data contained in a smart ID card. It also eliminates the need for a central database system (except for backup records) and for the other expensive storage media currently in use. In developing the novel data-reading technique for the HD-DataStripe, to account for the unavoidable geometrical distortions, the registration-mark pattern is chosen so that it yields accurate sampling points (a necessary condition for reliable data recovery at a higher data-encoding rate). For the more sophisticated distortions caused by physical dot-gain effects (intersymbol interference), countermeasures such as application of the sampling theorem, adaptive binarization, and post-processing of the data are given, each providing only a necessary condition for reliable data recovery. Finally, by combining the filters corresponding to these countermeasures, a novel data-reading technique for the HD-DataStripe is obtained; it outperforms existing techniques intended for data recovery from printed media. In another scenario, a small HD-DataStripe with maximum entropy is used as a copy-detection pattern by exploiting the information loss encountered at nearly maximum channel capacity. When applied to hardcopy documents (contracts, official letters etc.), and unlike existing work [Zha04], it allows one-to-one content matching and does not depend on hash functions or OCR technology, constraints mainly imposed by the low data-storage capacity of existing analog media.
    For printed halftone images carrying hidden information, the higher capacity is mainly attributed to the data-reading technique for the HD-DataStripe, which allows data recovery at a higher printing resolution, a key requirement for a high-quality watermarking technique in the spatial domain. Digital halftoning and data-encoding techniques are the other factors that contribute to the data-hiding technique given in this research. Regarding security, the new technique allows content integrity and authenticity verification in a scenario in which a certain amount of error is unavoidable, which restricts the use of existing techniques designed for digital content. Finally, a superposed constant background grayscale image, obtained by the repeated application of a specially designed small binary pattern, is used as a channel for hidden communication; it allows up to 33 pages of A4-size foreground text to be encoded in one CBGI. The higher capacity derives from the data-encoding symbols and the data-reading technique.
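    The adaptive binarization step mentioned in the abstract can be illustrated with a minimal sketch. The stripe geometry, block size, and noise model below are hypothetical, not the thesis's actual parameters; the sketch only shows why a local threshold survives uneven illumination where a single global threshold would fail.

```python
import numpy as np

def adaptive_binarize(scan, block=16):
    """Recover binary module values from a grayscale scan by thresholding
    each pixel against the mean of its local block. This compensates for
    uneven illumination that a single global threshold cannot handle."""
    h, w = scan.shape
    bits = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = scan[y:y+block, x:x+block]
            # dark pixels (below the local mean) decode as bit 1
            bits[y:y+block, x:x+block] = (tile < tile.mean()).astype(np.uint8)
    return bits

# toy stripe: random bits printed dark-on-light, plus a brightness gradient
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(64, 64)).astype(np.uint8)
scan = 255.0 - 155.0 * data                   # ideal print: bit 1 -> dark pixel
scan += np.linspace(0, 60, 64)[None, :]       # uneven illumination across columns
recovered = adaptive_binarize(scan)
print((recovered == data).mean())             # fraction of bits recovered
```

    A global threshold at, say, 177 would misread the brightest columns of this toy scan; the per-block mean adapts to the gradient.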

    Robustness of a DFT based image watermarking method against am halftoning

    In this paper, the robustness of a Discrete Fourier Transform (DFT) based image watermarking scheme to amplitude modulation (AM) halftoning is evaluated. Halftoning is used for the reproduction of continuous-tone images, so it is important that a watermarking method be robust to it. Three shapes of clustered AM halftone dots are used (round, ellipse, and line) with five halftone frequencies (10, 13, 15, 40, and 60 lines/cm). The tests were done on a dataset of 1000 images. As metrics of robustness, the watermark detection rate, the distribution of detection values, and ROC (Receiver Operating Characteristic) curves were used. The results show that the watermarking scheme is robust to halftoning for halftone frequencies greater than 15 lines/cm, and that the dot shape has almost no effect on the detection rate.
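    A minimal sketch of AM clustered-dot halftoning makes the setup concrete. The 4x4 threshold matrix below is a hypothetical round-dot screen, not the one used in the paper, and the block-averaging step is a crude stand-in for the scanner; it shows why low-frequency (DFT-domain) content, where such watermarks typically live, survives halftoning.

```python
import numpy as np

# A 4x4 clustered-dot threshold matrix: dots grow from the centre outward,
# mimicking a round AM screen element (values here are hypothetical).
CLUSTER = np.array([[13,  9, 10, 14],
                    [ 8,  1,  2, 11],
                    [12,  4,  3,  7],
                    [16,  6,  5, 15]], dtype=float) / 17.0

def am_halftone(img):
    """AM halftoning: compare each pixel (0..1) against a tiled
    clustered-dot threshold matrix."""
    h, w = img.shape
    ty, tx = CLUSTER.shape
    thresh = np.tile(CLUSTER, (h // ty + 1, w // tx + 1))[:h, :w]
    return (img > thresh).astype(float)

# smooth test image: a low-frequency horizontal gradient
img = np.linspace(0.1, 0.9, 128)[None, :] * np.ones((128, 1))
ht = am_halftone(img)

# local 4x4 averaging (a crude scanner/eye model) recovers the tone,
# which is why low-frequency watermark energy can survive halftoning
recovered = ht.reshape(32, 4, 32, 4).mean(axis=(1, 3))
orig_low = img.reshape(32, 4, 32, 4).mean(axis=(1, 3))
print(np.abs(recovered - orig_low).mean())    # small mean tone error
```

    Increasing the screen frequency (smaller cells relative to image detail) tightens this approximation, consistent with the paper's finding that detection degrades only at coarse screens.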

    Intelligent Numerical Software for MIMD Computers

    For most scientific and engineering problems simulated on computers, solving problems of computational mathematics with approximately given initial data constitutes an intermediate or final stage. The basic problems of computational mathematics include the investigation and solution of linear algebraic systems, the evaluation of eigenvalues and eigenvectors of matrices, the solution of systems of non-linear equations, and the numerical integration of initial-value problems for systems of ordinary differential equations.
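    The phrase "approximately given initial data" can be made concrete with the standard perturbation bound for linear systems; the matrix and noise level below are arbitrary illustrations, not taken from the work.

```python
import numpy as np

# Effect of approximately given input data on a linear solve,
# bounded by the condition number of the matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # a well-conditioned system
x_true = np.ones(5)
b = A @ x_true

eps = 1e-6
b_noisy = b + eps * rng.standard_normal(5)         # approximately given right-hand side
x = np.linalg.solve(A, b_noisy)

# standard bound: rel. error in x <= cond(A) * rel. error in b
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
bound = np.linalg.cond(A) * np.linalg.norm(b_noisy - b) / np.linalg.norm(b)
print(rel_err, bound)
```

    For ill-conditioned systems the same bound explains why tiny input errors can dominate the answer, which is exactly the regime such numerical software must detect and report.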

    ID Photograph hashing : a global approach

    This thesis addresses the question of the authenticity of identity photographs, part of the documents required for controlled access. Since sophisticated means of reproduction are publicly available, new methods and techniques should prevent tampering and unauthorized reproduction of the photograph. This thesis proposes a hashing method for the authentication of identity photographs that is robust to print-and-scan. The study also focuses on the effects of digitization at the hash level. The developed algorithm performs a dimension reduction based on independent component analysis (ICA). In the learning stage, the subspace projection is obtained by applying ICA and then reduced according to an original entropic selection strategy. In the extraction stage, the coefficients obtained by projecting the identity image onto the subspace are quantized and binarized to obtain the hash value. The study reveals the effects of scanning noise on the hash values of identity photographs and shows that the proposed method is robust to the print-and-scan attack. This approach, focusing on robust hashing of a restricted class of images (identity photographs), differs from classical approaches that address arbitrary images.
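    The learn-project-quantize pipeline described above can be sketched as follows. To keep the sketch NumPy-only, a plain PCA projection stands in for the thesis's ICA-plus-entropic-selection step, and the training set, hash length, and print-and-scan noise model are all hypothetical.

```python
import numpy as np

def learn_subspace(train, k=16):
    """Learning stage (sketch): the thesis derives the projection with ICA
    plus an entropic component selection; plain PCA via SVD stands in here
    so the sketch needs only NumPy."""
    X = train - train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]                                 # k basis vectors, one per hash bit

def hash_image(img_vec, basis):
    """Extraction stage: project onto the subspace, then binarize the
    coefficients by sign to obtain the hash bits."""
    return (basis @ img_vec > 0).astype(np.uint8)

rng = np.random.default_rng(2)
train = rng.standard_normal((200, 64))            # 200 training "photos" (8x8, toy)
basis = learn_subspace(train)

photo = rng.standard_normal(64)
scanned = photo + 0.05 * rng.standard_normal(64)  # crude print-and-scan noise model
h1 = hash_image(photo, basis)
h2 = hash_image(scanned, basis)
print((h1 != h2).mean())                          # Hamming distance, ideally small
```

    Robustness here means a small Hamming distance between the hash of the original and the hash of its scanned copy, while tampered or different photographs should land far away.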

    Joceli Mayer


    Application and Theory of Multimedia Signal Processing Using Machine Learning or Advanced Methods

    This Special Issue is a book collecting peer-reviewed articles on various advanced technologies related to applications and theories of signal processing for multimedia systems using machine learning or other advanced methods. Multimedia signals include image, video, audio, character recognition, and the optimization of communication channels for networks. The specific topics covered in this book are data hiding, encryption, object detection, image classification, and character recognition. Academics and colleagues interested in these topics will find it an interesting read.

    Digital Image Segmentation and On–line Print Quality Diagnostics

    During the electrophotographic (EP) process of a modern laser printer, object-oriented halftoning is sometimes used, which renders an input raster page with different halftone screen frequencies according to an object map; this approach reduces print artifacts in smooth areas while preserving the fine details of a page. The object map can be extracted directly from the page description language (PDL), but in many cases it is not generated correctly. In the first part of this thesis, we introduce a new object-map generation algorithm that builds an object map from scratch, purely from a raster image. The algorithm is intended for ASIC implementation. To be hardware-friendly and memory-efficient, it buffers only two strips of the image at a time. A novel two-pass connected-component algorithm is designed that runs through all pixels in raster order, collects features and classifies components on the fly, and recycles unused components to save memory for future strips. The algorithm is implemented as a C program. For 10 test pages, with object maps of similar quality, the number of connected components used is reduced by over 97% on average compared to the classic two-pass connected-component algorithm, which buffers a whole page of pixels. The connected-component algorithm used here for document segmentation is also potentially applicable to a wide variety of other tasks. The second part of the thesis proposes a new way to diagnose print quality. In contrast to traditional print-quality diagnostics, which print a specially designed test page to be examined by an expert or checked against a user manual, the proposed system can diagnose a customer's printer automatically, without human intervention. The system relies on scanning printouts from the user's printer. Print defects such as banding and streaking are reflected in the scanned page and can be captured by comparison with the master image, the digitally generated original from which the page was printed. Once print quality drops below a specified acceptance level, the system notifies the user of the presence of print-quality issues. Among the many print defects, color fading, caused by low toner in the cartridge, is the focus of this work. Our image-processing pipeline first uses a feature-based image-registration algorithm to align the scanned page with the master page spatially, and then calculates the color difference between corresponding color clusters of the two pages. Finally, it predicts which cartridge is depleted.
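    The classic two-pass connected-component algorithm that the thesis's strip-based variant improves on can be sketched as follows. This whole-page version is the memory-hungry baseline, not the thesis's implementation (which buffers only two strips and recycles finished components), and the test image is a toy example.

```python
import numpy as np

def two_pass_ccl(img):
    """Classic two-pass connected-component labeling (4-connectivity)
    over a whole binary image, using union-find for label equivalences."""
    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]                              # union-find forest; index 0 unused

    def find(a):                              # root lookup with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    next_label = 1
    for y in range(h):                        # pass 1: provisional labels
        for x in range(w):
            if not img[y, x]:
                continue
            up = labels[y - 1, x] if y else 0
            left = labels[y, x - 1] if x else 0
            if up and left:                   # merge the two equivalence classes
                labels[y, x] = min(find(up), find(left))
                parent[max(find(up), find(left))] = labels[y, x]
            elif up or left:
                labels[y, x] = find(up or left)
            else:                             # new component
                labels[y, x] = next_label
                parent.append(next_label)
                next_label += 1
    for y in range(h):                        # pass 2: resolve equivalences
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels

img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 0, 0, 1],
                [1, 0, 1, 1]])
out = two_pass_ccl(img)
print(len(np.unique(out[out > 0])))           # number of components found
```

    The baseline needs labels for the whole page at once; the thesis's contribution is to finalize and recycle component records as soon as a strip boundary shows they cannot grow further.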

    Bayesian Dictionary Learning for Single and Coupled Feature Spaces

    Over-complete bases offer the flexibility to represent a much wider range of signals, with more elementary basis atoms than the signal dimension. The use of over-complete dictionaries for sparse representation has become a recent trend and is increasingly recognized as providing high performance in applications such as denoising, image super-resolution, inpainting, compression, blind source separation, and linear unmixing. This dissertation studies dictionary learning for single and coupled feature spaces and its application to image-restoration tasks. A Bayesian strategy using a beta process prior is applied to both problems. First, we show how to generalize the existing beta process dictionary learning method (BP) to learn a dictionary for a single feature space. The advantage of this approach is that the number of dictionary atoms and their relative importance can be inferred non-parametrically. Next, we propose a new beta process joint dictionary learning method (BP-JDL) for coupled feature spaces, where the learned dictionaries also reflect the relationship between the two spaces. Compared to previous dictionary-learning algorithms for coupled feature spaces, our algorithm not only provides dictionaries customized to each feature space but also yields a more consistent and accurate mapping between the two spaces. This is due to a unique property of the beta process model: the sparse representation decomposes into values and dictionary-atom indicators. The proposed algorithm learns sparse representations that use the same dictionary atoms, with the same sparsity but different values, in the coupled feature spaces, thus providing a consistent and accurate mapping between them. Two applications, single-image super-resolution and inverse halftoning, are chosen to evaluate the performance of the proposed Bayesian approach. In both cases, the Bayesian approach, whether for a single feature space or for coupled feature spaces, outperforms state-of-the-art methods in the domains compared.
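    The decomposition into values and dictionary-atom indicators can be sketched with a toy example. The dimensions, dictionaries, and indicator vector below are arbitrary, and no Bayesian inference is performed; the sketch only shows how a shared binary indicator ties the two coupled feature spaces together.

```python
import numpy as np

rng = np.random.default_rng(3)
n_atoms = 8
# Two coupled feature spaces (e.g. low-res / high-res patch spaces),
# each with its own dictionary but a shared atom-usage pattern.
D_x = rng.standard_normal((16, n_atoms))
D_y = rng.standard_normal((64, n_atoms))

# Beta-process-style factorization: weights = z * s, with a binary
# indicator z shared across the two spaces and space-specific values s.
z = np.array([1, 0, 0, 1, 0, 1, 0, 0])        # same atoms active in both spaces
s_x = rng.standard_normal(n_atoms)
s_y = rng.standard_normal(n_atoms)

x = D_x @ (z * s_x)                           # feature in the first space
y = D_y @ (z * s_y)                           # coupled feature in the second space

# The shared z is what makes the cross-space mapping consistent: knowing
# which atoms represent x tells us which atoms represent y.
active = np.flatnonzero(z)
y_hat = D_y[:, active] @ s_y[active]
print(np.allclose(y, y_hat))
```

    In super-resolution, for instance, the indicator inferred from the low-resolution patch selects the high-resolution atoms, while the values are re-estimated per space.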