
    Intelligent watermarking of long streams of document images

    Digital watermarking has numerous applications in the imaging domain, including (but not limited to) fingerprinting, authentication, and tampering detection. Because of the trade-off between watermark robustness and image quality, the heuristic parameters of digital watermarking systems need to be optimized. A common strategy for this optimization problem, known as intelligent watermarking (IW), is to employ evolutionary computing (EC) to optimize the parameters for each image, but at a computational cost that is infeasible for practical applications. However, in industrial applications involving streams of document images, instances of similar problems can be expected to reappear over time. Computational cost can therefore be saved by preserving the knowledge of previous optimization problems in a separate archive (memory) and employing that memory to speed up, or even replace, optimization for future similar problems. That is the basic principle behind the research presented in this thesis. Although similarity in the image space can lead to similarity in the problem space, there is no guarantee of that; for this reason, knowledge about the image space is not employed at all. Instead, strategies to appropriately represent, compare, store, and sample from problem instances are investigated. The objective of these strategies is to represent a stream of optimization problems comprehensively, so that re-optimization can be avoided whenever a previously seen problem provides solutions as good as those that re-optimization would obtain, but at a fraction of its cost. A further objective is to give IW systems a predictive capability that allows costly fitness evaluations to be replaced with cheaper regression models whenever re-optimization cannot be avoided.
To this end, IW of streams of document images is first formulated as the optimization of a stream of recurring problems, and a Dynamic Particle Swarm Optimization (DPSO) technique is proposed to tackle it. This technique is based on a two-tiered memory of static solutions. Memory solutions are re-evaluated for every new image, and the re-evaluated fitness distribution is compared with the stored fitness distribution as a means of measuring the similarity between the two problem instances (change detection). In simulations involving homogeneous streams of bi-tonal document images, the proposed approach decreased the computational burden by 95% with little impact on watermarking performance: optimization cost was cut sharply by replacing re-optimizations with recall of previously seen solutions. After that, the problem of representing the stream of optimization problems in a compact manner is addressed, so that new optimization concepts can be incorporated into previously learned ones in an incremental fashion. The proposed strategy is based on a Gaussian Mixture Model (GMM) representation, trained on the parameter and fitness data of all intermediate (candidate) solutions of a given problem instance. GMM sampling replaces the selection of individual memory solutions during change detection. Simulation results demonstrate that such a memory of GMMs is more adaptive and can thus better tackle the optimization of embedding parameters for heterogeneous streams of document images than the approach based on a memory of static solutions. Finally, the knowledge provided by the memory of GMMs is employed to decrease the computational cost of re-optimization: the GMM is used in regression mode during re-optimization, replacing part of the costly fitness evaluations in a strategy known as surrogate-based optimization.
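The change-detection step described above — re-evaluating memory solutions on each new image and comparing the resulting fitness distribution with the stored one — can be sketched as follows. This is a minimal illustration, not the thesis's actual criterion: the choice of a Kolmogorov-Smirnov two-sample test, the function name, and the demo data are all assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def needs_reoptimization(stored_fitness, reevaluated_fitness, alpha=0.05):
    """Change detection: compare the fitness that memory solutions achieved
    on the stored problem with their fitness re-evaluated on the new image.
    A significant distribution shift suggests a different problem instance,
    so re-optimization is triggered; otherwise stored solutions are recalled."""
    _, p_value = ks_2samp(stored_fitness, reevaluated_fitness)
    return p_value < alpha  # reject "same distribution" -> re-optimize

# Hypothetical demo data: fitness of 50 memory solutions.
rng = np.random.default_rng(0)
stored = rng.normal(0.8, 0.05, size=50)       # fitness on the stored problem
same = stored + rng.normal(0, 0.01, size=50)  # re-evaluated on a similar image
shifted = stored - 0.3                        # re-evaluated on a changed image
```

When the re-evaluated fitness values stay close to the stored ones, the test does not reject and the costly EC run is skipped; a clear shift triggers full re-optimization.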
Optimization is split into two levels: the first relies primarily on regression, while the second relies primarily on exact fitness values and provides a safeguard for the whole system. Simulation results demonstrate that the use of surrogates allows for better adaptation in situations involving significant variations in problem representation, such as when the set of attacks employed in the fitness function changes. In general, the intelligent watermarking system proposed in this thesis is well adapted to the optimization of streams of recurring optimization problems. The quality of the resulting solutions for both homogeneous and heterogeneous image streams is comparable to that obtained through full optimization, but at a fraction of its computational cost. More specifically, the number of fitness evaluations is 97% smaller than that of full optimization for homogeneous streams and 95% smaller for highly heterogeneous streams of document images. The proposed method is general and can easily be adapted to other applications involving streams of recurring problems.
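The surrogate idea — a GMM over joint (parameter, fitness) data used in regression mode to stand in for exact fitness evaluations — can be illustrated with the standard Gaussian-mixture-regression construction below. All names, the component count, and the demo data are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(X, f, n_components=3, seed=0):
    """Fit a GMM on the joint (embedding parameters, fitness) data of
    candidate solutions, to serve as a cheap regression surrogate."""
    joint = np.column_stack([X, f])
    return GaussianMixture(n_components=n_components, random_state=seed).fit(joint)

def gmm_regress(gmm, x):
    """Gaussian mixture regression: each component contributes its
    conditional mean E[f | x], weighted by the responsibility of x
    under that component's marginal in parameter space."""
    d = x.size
    cond_means = np.empty(gmm.n_components)
    log_w = np.empty(gmm.n_components)
    for k in range(gmm.n_components):
        mu, cov = gmm.means_[k], gmm.covariances_[k]
        mu_x, mu_f = mu[:d], mu[d]
        S_xx, S_fx = cov[:d, :d], cov[d, :d]
        sol = np.linalg.solve(S_xx, x - mu_x)
        cond_means[k] = mu_f + S_fx @ sol      # E[f | x] for component k
        _, logdet = np.linalg.slogdet(S_xx)
        log_w[k] = (np.log(gmm.weights_[k])    # log pi_k * N(x; mu_x, S_xx)
                    - 0.5 * (d * np.log(2 * np.pi) + logdet + (x - mu_x) @ sol))
    w = np.exp(log_w - log_w.max())
    return (w / w.sum()) @ cond_means

# Hypothetical demo: fitness is (approximately) the sum of two parameters.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 2))
f = X.sum(axis=1) + rng.normal(0, 0.01, 200)
surrogate = fit_joint_gmm(X, f)
pred = gmm_regress(surrogate, np.array([0.5, 0.5]))  # close to the true 1.0
```

In a surrogate-assisted run, such predictions would replace a fraction of the exact fitness evaluations, with the second optimization level falling back on exact evaluations as a safeguard.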

    High Capacity Analog Channels for Smart Documents

    Widely used valuable hardcopy documents such as passports, visas, driving licenses, educational certificates, and entrance passes for entertainment events are conventionally protected against counterfeiting and data-tampering attacks by analog security technologies (e.g. KINEGRAMS®, holograms, micro-printing, UV/IR inks). However, easy access to high-quality, low-price modern desktop publishing technology has rendered most of these technologies ineffective, giving rise to high-quality false documents. Higher price and restricted usage are further drawbacks of analog document protection techniques. Digital watermarking and high-capacity storage media such as IC chips and optical data stripes are the modern technologies used in new machine-readable identity verification documents to ensure content integrity; however, these technologies are either expensive or do not satisfy application needs, creating demand for more efficient document protection technologies. In this research, three different high-capacity analog channels are investigated for hidden communication, along with their applications in smart documents: the high-density data stripe (HD-DataStripe), data hiding in printed halftone images (watermarking), and the superposed constant background grayscale image (CBGI). To develop high-capacity analog channels, the noise introduced by the printing and scanning (PS) process is investigated with the objective of recovering the digital information encoded at nearly maximum channel utilization. Based on the observed noise behaviour, countermeasures against the noise are taken accordingly in the data recovery process. The HD-DataStripe is a printed binary image similar to conventional 2-D barcodes (e.g. PDF417), but it offers much higher data storage capacity and is intended for machine-readable identity verification documents.
The capacity offered by the HD-DataStripe is sufficient to store high-quality biometric characteristics rather than extracted templates, in addition to the conventional bearer-related data contained in a smart ID card. It also eliminates the need for a central database system (except for backup records) and other expensive storage media currently in use. In developing a novel data-reading technique for the HD-DataStripe, and to account for unavoidable geometrical distortions, the registration-mark pattern is chosen so that it yields accurate sampling points (a necessary condition for reliable data recovery at higher data encoding rates). For the more sophisticated distortions caused by physical dot-gain effects (intersymbol interference), countermeasures such as the application of the sampling theorem, adaptive binarization, and post-processing of the data are given, each providing only a necessary condition for reliable data recovery. Finally, by combining the filters corresponding to these countermeasures, a novel data-reading technique for the HD-DataStripe is obtained. This technique outperforms existing techniques for data recovery from printed media. In another scenario, a small HD-DataStripe with maximum entropy is used as a copy detection pattern, exploiting the information loss encountered at nearly maximum channel capacity. For the application of the HD-DataStripe to hardcopy documents (contracts, official letters, etc.), unlike existing work [Zha04], it allows one-to-one content matching and does not depend on hash functions and OCR technology, constraints mainly imposed by the low data storage capacity of existing analog media.
For printed halftone images carrying hidden information, the higher capacity is mainly attributed to the data-reading technique developed for the HD-DataStripe, which allows data recovery at higher printing resolution, a key requirement for a high-quality watermarking technique in the spatial domain. Digital halftoning and data encoding techniques are the other factors contributing to the data hiding technique given in this research. Regarding security, the new technique allows content integrity and authenticity verification in a scenario in which a certain amount of error is unavoidable, which restricts the use of existing techniques intended for digital content. Finally, a superposed constant background grayscale image, obtained by the repeated application of a specially designed small binary pattern, is used as a channel for hidden communication; it allows up to 33 pages of A4-size foreground text to be encoded in one CBGI. The higher capacity is contributed by the data encoding symbols and the data-reading technique.
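The adaptive-binarization countermeasure mentioned above can be illustrated with a toy sketch: sample the scan at the expected symbol centres (as located via the registration marks) and threshold each sample against its local neighbourhood mean, which tolerates uneven illumination and dot gain better than a single global threshold. Function and parameter names here are hypothetical, and the margin-based rule is an assumption for illustration.

```python
import numpy as np

def adaptive_binarize(scan, points, radius=4, margin=10.0):
    """Recover bits from a scanned data stripe: sample each expected symbol
    centre and binarize it against the mean of its local neighbourhood.
    The margin keeps flat background regions from being read as dark."""
    h, w = scan.shape
    bits = []
    for r, c in points:
        r0, r1 = max(r - radius, 0), min(r + radius + 1, h)
        c0, c1 = max(c - radius, 0), min(c + radius + 1, w)
        local_mean = scan[r0:r1, c0:c1].mean()
        bits.append(1 if scan[r, c] < local_mean - margin else 0)  # dark = 1
    return bits

# Synthetic "scan": light background, two dark 3x3 cells, and a brightness
# gradient simulating uneven scanner illumination.
scan = np.full((40, 40), 200.0)
scan += np.linspace(0, 30, 40)   # column-wise illumination gradient
scan[9:12, 9:12] -= 140          # dark cell centred at (10, 10)
scan[29:32, 29:32] -= 140        # dark cell centred at (30, 30)
bits = adaptive_binarize(scan, [(10, 10), (10, 30), (30, 30), (30, 10)])
```

Because each sample is compared only with its immediate surroundings, the gradient cancels out and the dark cells are recovered correctly at all positions.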

    A critical survey of the materials and techniques of Charles Henry Sims RA (1873-1928) with special reference to egg tempera media and works of art on paper

    This thesis collates and provides new knowledge about the working practices, and the dissemination of the materials and techniques, of a leading Edwardian painter. Charles Sims RA (1873-1928) represents a neglected body of British artists who were responding to and assimilating certain new tendencies within early modernism, yet at the same time were conscious and respectful of traditional practices and training methods. The study makes consistent reference to the extensive studio archive at Northumbria University, whose existence has provided a unique opportunity to map Sims’ own informal working notes and observations against the retrospective account Picture Making (1934) by his son, and against instrumental and technical analyses performed on some works. The significance of this specific period for the development of new materials and techniques, and the role that instruction manuals and teaching played in developing Sims' stylistic and at times thematic approaches to practice, are also discussed. Of particular interest are those sources which focus on drawing, watercolour, and egg tempera techniques, media which perfectly suited Sims' temperament and arguably featured in and formed his best works. The thesis also compares Sims' working practices with those of his better-known contemporaries such as Augustus John, Philip Wilson Steer, and William Orpen (all from the Slade), as well as members of the Tempera Revival movement, by cross-referencing reports held in national and international collections with hitherto unseen material. As a consequence, the research will have a much wider application beyond the field of conservation, and will illuminate early 20th-century artistic inheritance and intent.

    Volume 5, Number 1 (1981)

    4 News and Notes
    6 Ding Dong Daddy, by John Sommers
    7 Fluorescent Inks: Color Phenomena for Lithography, by William Walmsley
    8 My Ten Years in Lithography, Part I, by Bolton Brown, with Introduction and Notes by Clinton Adams
    25 Positive-Working Plates: Further Comment, by William Lagattuta, with Susan von Glahn
    27 Information Exchange, by John Sommers
    32 Directory of Suppliers

    Mechanisms of controlling colour and aesthetic appearance of the photographic salt print

    The salt print is an important part of photography, both for its historic value and for the tonal range it can provide. This tonal range is greater than that of any other photographic printing process available to date, attributable to the inherent masking ability of the metallic silver. However, intrinsic production problems have made it a 'forgotten' process. There are five key problems: 1. the difficulty of achieving the potential extensive tonal range; 2. the varying colour of the print; 3. staining that appears in the print, during and after processing; 4. the instability and limited longevity of the salt print; 5. contradictory and inaccurate information in material published on the salt print. Although the emphasis of the research is on exploring and controlling the colour and tonal range, the staining problems and the stability of the print are also addressed. The materials used for contact negatives today vary in both capture and output, from analogue film processed in the traditional wet darkroom to a variety of transparent films printed from digital files. Inadequate density and tonal range can affect all types of negatives. To provide sufficient exposure time for the salt print's extended tonal range, adjustments to the negative were necessary. These long exposures then converted sufficient silver salts to the image-making metallic silver, utilising the intrinsic self-masking process. Ultimately this research has uncovered ways to control the colour, tonal range, and certain aesthetic qualities of the salt print, while simultaneously resolving some of the conflicts in published information. Accurate and consistent methods of processing eliminate staining, providing some stability to the print. The activities and steps carried out to make a salt print are manual; precise duplication is therefore almost unattainable. Nevertheless, although tests on a densitometer may display numeric differences, visual differences are barely noticeable.

    Variations and Application Conditions Of the Data Type »Image« - The Foundation of Computational Visualistics

    A few years ago, the department of computer science of the University of Magdeburg introduced a completely new diploma programme called 'computational visualistics', a curriculum dealing with all aspects of computational pictures. Only isolated aspects had been studied before in computer science, particularly in the independent domains of computer graphics, image processing, information visualization, and computer vision. Is there, then, indeed a coherent domain of research behind such a curriculum? The answer to that question depends crucially on a data structure that acts as a mediator between general visualistics and computer science: the data structure "image". The present text investigates that data structure, its components, and its application conditions, and thus elaborates the very foundations of computational visualistics as a unique and homogeneous field of research. Before concentrating on that data structure, the theory of pictures in general, and the definition of pictures as perceptoid signs in particular, are closely examined. This includes an act-theoretic consideration of resemblance as the crucial link between image and object, of the communicative function of context building as the central concept for comparing pictures and language, and of several modes of reflection underlying the relation between image and image user. In the main chapter, the data structure "image" is analyzed at length from the perspectives of syntax, semantics, and pragmatics. While syntactic aspects mostly concern image processing, semantic questions form the core of computer graphics and computer vision. Pragmatic considerations are particularly involved with interactive pictures, but also extend to the field of information visualization and even to computer art. Four case studies provide practical applications of various aspects of the analysis.

    On Rearing an Ugly Head: Joel-Peter Witkin and the Mysticism of the “Ugly Aesthetic”

    The contemporary photographer Joel-Peter Witkin has described his remaking of some of the most iconic paintings in the history of art as a “divine revolt”. However, there have been no attempts to unravel the meaning of this project, nor to analyse the visual changes that Witkin has made. This thesis argues that Witkin's re-creations serve to subvert the negation or diminishment of ugliness in art history's depictions of the mystical, and to present the experience of ugliness as instead inherently Godly. Engaging with problems in philosophical aesthetics, it contrasts the notion of the “aesthetically ugly” (a quality that cannot be objectively identified and studied because it ascribes aesthetic non-worth) with the “ugly aesthetic”, which refers to the “perceptive-felt” experience of an object. By integrating descriptions of this experience of the ugly aesthetic with those of the early developmental stage of the “psychoanalytic pre-symbolic”, it provides heuristics with which to identify perceptual markers of ugly objects, ugly worlds, and the expression of ugly feelings in the mystical invocations of paintings from three chosen art-historical periods and in Witkin's re-creations. In his reconstruction of the heavenly realms given in the Renaissance paintings Leda and the Swan (1510-1515) and The Birth of Venus (1485), Witkin makes a “pre-symbolic” space with ugly objects to present a contrary vision of an ugly dwelling place for God. In amending the Catholic Baroque's Little Fur (1638) and the Protestant Baroque's Still Life of Game, Fish, Fruit and Kitchen Utensils (1646), the artist replaces the mystical feelings that imbue scenes of ugly objects with an expression of ugly feelings themselves, thereby guiding the viewer into full immersion in these objects as the real site of Godly experience.
This theoretical formulation and its application to the works at hand evidence that Witkin's work points to the mystical power of the ugly aesthetic to unleash a personal and collective memory of Godly reality as ontologically formless and mysterious, thereby making a case for the value of ugliness.

    The portrait drawings of Hans Holbein the Younger: function and use explored through materials and techniques

    This thesis examines the materials and techniques of sixteenth century artist Hans Holbein the Younger, with particular reference to his portrait drawings. The research reinstates the drawings as the primary source-material for investigation, thereby demonstrating the link between the materials and techniques chosen by Holbein, and the function or end-use of the drawings. Although around one hundred Holbein portrait drawings survive, the focus of this research is the eighteen that relate to currently attributed oil and miniature paintings. By focusing the research in this manner, it is possible to establish how Holbein constructed and used the drawings in the preparation of the finished oil painting. Furthermore, it explains how his choice and use of materials and techniques can help to establish the original context and function of the drawings. An important outcome of this research is a detailed description of the eighteen drawings that relate to a painted portrait. Having developed an effective method of examining and describing Holbein’s drawings, this research provides a thorough analysis of the materials and techniques used by him. This not only increases our understanding of his drawing processes, but also broadens the limitations of traditional connoisseurship by offering a more accessible tool, allowing objective visual analysis of an artist’s technique. This method of investigation can be applied to drawings in a wider context of sixteenth century artistic production. Moreover, it can also be used as a potential model for how to effectively ‘read’ a drawing in order to better understand its function and method of production. The results inform art historical and conservation research. A comprehensive, systematic visual examination of the drawings has helped to reveal new information on Holbein’s methods and materials, and offers insights into 16th century workshop practice. 
In many cases, examination has clarified the sequence in which the media were laid down. Holbein's emphasis on the contours that define sitters' features has been much disputed, and their role, media, and application methods were unclear. What had previously been described as metalpoint marks were discovered by the author to be indentations that have become filled with loose media, thereby giving the appearance of a drawn line. The indentations in fact show evidence of the tracing, for transfer, of the salient lines that capture likeness. The research has also revealed that red chalk was the preliminary medium for defining features, and that Holbein developed standardised techniques for rendering flesh tones, making the drawing process more efficient. It is apparent that Holbein chose techniques to fulfill a particular role, and that there are clear links between these techniques and their location on a drawing.

    Volume 8 (1985)

    4 Editor's Note, by Clinton Adams
    5 The Tamarind Citation: Gustave von Groschwitz
    6 Théophile Steinlen and Louis Legrand: Contrasts in Social Ideology, by Gabriel P. Weisberg
    15 Unsung Heroes: Barnett Freedman, by Pat Gilmour
    25 Tamarind in Canada, by Charlotte Baxter
    31 The Scottish Printmaking Workshops, by Marjorie Devon
    34 Life and Work: Thoughts of an Artist-Printer, by Irwin Hollander, Clinton Adams, and Gustave von Groschwitz
    44 Artist and Printer: Some Matches are Made in Heaven and Others..., by Leonard Lehrer
    50 Into the Crystal Ball: The Future of Lithography, a Panel Discussion
    61 Information Exchange, by John Sommers
    68 Brown, Pennell, and Whistler: Eradicating Errors and Presenting a Non-Partisan View, by Nicholas Smale
    70 Books & Catalogues in Review
    72 Directory of Suppliers