14 research outputs found

    Neugebauer and Demichel: dependence and independence in n-screen superpositions for colour printing

    The Neugebauer equations, and the Demichel equations on which they are based, are among the basic tools for modeling colour printing systems that use the halftoning technique. However, these equations implicitly assume that the colour ink distributions in the screen superposition are statistically independent. We show that this condition is not satisfied in the conventional screen superposition used for colour printing, and we discuss the consequences of this fact. Furthermore, we give a precise criterion that determines, for any number of superposed regular screens, in which cases the Demichel (and hence the Neugebauer) equations are satisfied and in which cases they fail: the Demichel equations fail in all cases where the screen superposition is singular, and they are satisfied in all nonsingular screen superpositions. We illustrate our results with several examples of both cases.
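    For reference, the independence assumption under scrutiny is exactly what the classical Demichel equations encode. As usually stated for three inks with fractional dot coverages c, m and y, they give the area coverages of the eight Neugebauer primaries, and the Neugebauer model then predicts the printed colour as their area-weighted average:

```latex
% Demichel area coverages for three statistically independent ink layers
% with fractional coverages c, m, y:
\begin{align*}
a_w &= (1-c)(1-m)(1-y), & a_{cm}  &= c\,m\,(1-y),\\
a_c &= c\,(1-m)(1-y),   & a_{cy}  &= c\,(1-m)\,y,\\
a_m &= (1-c)\,m\,(1-y), & a_{my}  &= (1-c)\,m\,y,\\
a_y &= (1-c)(1-m)\,y,   & a_{cmy} &= c\,m\,y.
\end{align*}
% Neugebauer prediction: area-weighted average of the primaries' reflectances.
\[
\hat{R}(\lambda) \;=\; \sum_{i\in\{w,c,m,y,cm,cy,my,cmy\}} a_i\,R_i(\lambda)
\]
% The paper's point is that these coverages are only correct when the screen
% superposition is nonsingular; in singular superpositions the products above
% no longer equal the true overlap areas.
```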

    Halftoning for Multi-Channel Printing: Algorithm Development, Implementation and Verification

    Reproducing color images using custom inks

    We investigate the general problem of reproducing color images on an offset press using custom inks in any combination and number. While this problem has been explored previously for the case of two inks, a number of new mathematical and algorithmic challenges arise as the number of inks increases. These challenges include more complex gamut mapping strategies, more efficient ink selection strategies, and fast and numerically accurate methods for computing ink separations in situations that may be either over- or under-constrained. In addition, the demands of high-quality color printing require an accurate physical model of the colors that result from overprinting multiple inks using halftoning, including the effects of trapping, dot gain, and the interreflection of light between ink layers. In this paper, we explore these issues related to printing with multiple custom inks and address them with new algorithms and physical models. Finally, we present some printed examples demonstrating the promise of our methods.
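    The separation step the abstract alludes to can be pictured as inverting a forward printer model. The sketch below is only an illustration of that idea, not the paper's algorithm: `forward_model` is a hypothetical function mapping ink coverages to a CIELAB colour, and bounded least squares stands in for whatever solver handles the over- and under-constrained cases.

```python
# Hypothetical sketch of ink separation by inverting a forward printer model.
# `forward_model` is assumed, not taken from the paper: it maps n ink
# coverages in [0, 1] to a CIELAB colour (e.g. a Neugebauer-style model with
# dot gain and trapping corrections).
import numpy as np
from scipy.optimize import least_squares

def separate(target_lab, forward_model, n_inks):
    """Find ink coverages whose predicted colour best matches target_lab."""
    def residual(coverages):
        return forward_model(coverages) - target_lab

    x0 = np.full(n_inks, 0.5)                      # start from mid coverage
    fit = least_squares(residual, x0, bounds=(0.0, 1.0))
    return fit.x                                   # coverages within [0, 1]
```

    With more than three inks the problem is under-constrained, so a practical solver would add a regulariser (for example, minimum total ink) to choose among the many colorimetrically equivalent separations.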

    Digital halftoning and the physical reconstruction function

    Originally presented as the author's thesis (Ph.D., Massachusetts Institute of Technology, 1986). Bibliography: p. 397-405. "This work has been supported by the Digital Equipment Corporation." By Robert A. Ulichney.

    Analysis of the superposition of periodic layers and their moiré effects through the algebraic structure of their Fourier spectrum

    A new approach is presented for investigating the superposition of any number of periodic structures, and the moiré effects which may result. This approach, which is based on an algebraic analysis of the Fourier spectrum using concepts from the theory of the geometry of numbers, fully explains the properties of the superposition of periodic layers and of their moiré effects. It provides the fundamental notations and tools for investigating, both in the spectral domain and in the image domain, properties of the superposition as a whole (such as periodicity or almost-periodicity), and properties of each of the individual moirés generated in the superposition (such as their profile forms and intensity levels, their singular states, etc.). This new, rather unexpected combination of Fourier theory and the geometry of numbers proves very useful, and it offers a profound insight into the structure of the spectrum of the layer superposition and the corresponding properties back in the image domain.
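    The spectral fact that underlies this algebraic approach can be stated compactly: multiplying periodic layers in the image domain convolves their spectra, so every impulse of the superposition spectrum lies on the integer lattice spanned by the layers' fundamental frequency vectors.

```latex
% Impulse locations in the spectrum of a superposition of m periodic layers
% with fundamental frequency vectors f_1, ..., f_m:
\[
\mathbf{f}_{k_1,\dots,k_m} \;=\; \sum_{i=1}^{m} k_i\,\mathbf{f}_i,
\qquad k_i \in \mathbb{Z}.
\]
% A (k_1, ..., k_m)-moire becomes visible when this combined frequency falls
% inside the visibility circle, i.e. when its magnitude is small enough for
% the eye to resolve the corresponding period.
```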

    Image quality analysis of the reproductions of black and white photographs obtained from a desktop publishing system

    This research project was directed at characterizing the variables that govern the black and white reproduction process in a desktop publishing (DTP) environment, which in this case consisted of a Macintosh II computer, a Mycrotek 300A scanner as the input device, and a LaserWriter as the output device. The specific goal of this research was to find the ideal settings of cell size and resolution for each type of black and white photographic image that would produce the best reproduction possible from the system. Two different experiments were performed. The first, using a gray scale and a resolution target as originals, showed the influence of the variables on the tone reproduction curve and the resolution of the system. The second test used three different images as originals, and their corresponding reproductions were rated by a selected group of judges using the pair-comparison technique. In addition to obtaining the best settings for each image, an analysis of the application of information theory to image evaluation was performed. The results showed that the current mathematical model needs to incorporate other factors, such as the visual response of a human observer, to be considered a valuable tool for quality assessment of a reproduction system.
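    The cell-size versus resolution trade-off the experiments probe has a simple textbook form (assumed here; the thesis measures the actual system rather than relying on it): an n × n clustered-dot cell on a binary printer trades reproducible gray levels against screen frequency.

```python
# Textbook relation between halftone cell size, gray levels and screen
# frequency for a clustered-dot screen on a binary printer; the 300 dpi
# figure matches the LaserWriter used in the study.
def halftone_tradeoff(dpi, cell_size):
    gray_levels = cell_size * cell_size + 1   # distinct dot areas, incl. white
    screen_frequency_lpi = dpi / cell_size    # halftone cells per inch
    return gray_levels, screen_frequency_lpi

print(halftone_tradeoff(300, 8))   # (65, 37.5)  smoother tones, coarser detail
print(halftone_tradeoff(300, 4))   # (17, 75.0)  finer detail, fewer tones
```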

    Hardware-accelerated algorithms in visual computing

    This thesis presents new parallel algorithms which accelerate computer vision methods by the use of graphics processors (GPUs) and evaluates them with respect to their speed, scalability, and the quality of their results. It covers the fields of homogeneous and anisotropic diffusion processes, diffusion image inpainting, optic flow, and halftoning. In turn, it compares different solvers for homogeneous diffusion and presents a novel 'extended' box filter. Moreover, it suggests using the fast explicit diffusion scheme (FED) as an efficient and flexible solver for nonlinear and in particular anisotropic parabolic diffusion problems on graphics hardware. For elliptic diffusion-like processes, it recommends cascadic FED or fast Jacobi schemes. The presented optic flow algorithm represents one of the fastest yet very accurate techniques. Finally, it presents a novel halftoning scheme which yields state-of-the-art results for many applications in image processing and computer graphics.
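    To make the FED recommendation concrete, the sketch below shows one FED cycle for plain homogeneous diffusion using the published varying step sizes; the array layout, the periodic boundary handling and the parameter names are assumptions for illustration, not code from the thesis. On a GPU each step is a trivially parallel stencil, which is what makes the scheme attractive there.

```python
# Minimal sketch of one FED (fast explicit diffusion) cycle for homogeneous
# diffusion on a 2D image. Individual steps may violate the explicit stability
# limit tau_max; only the cycle as a whole is provably stable.
import numpy as np

def fed_step_sizes(n, tau_max=0.25):
    """Varying step sizes of one FED cycle; they sum to tau_max*(n*n + n)/3."""
    i = np.arange(n)
    return tau_max / (2.0 * np.cos(np.pi * (2 * i + 1) / (4 * n + 2)) ** 2)

def fed_cycle(u, n, tau_max=0.25):
    """Run one FED cycle of explicit homogeneous diffusion (periodic borders)."""
    for tau in fed_step_sizes(n, tau_max):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u = u + tau * lap
    return u
```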

    Portal-s: High-resolution real-time 3D video telepresence

    The goal of telepresence is to allow a person to feel as if they are present in a location other than their true location; a common application of telepresence is video conferencing, in which live video of a user is transmitted to a remote location for viewing. In conventional two-dimensional (2D) video conferencing, loss of correct eye gaze commonly occurs due to a disparity between the capture and display optical axes. Newer systems are being developed which allow for three-dimensional (3D) video conferencing, circumventing issues with this disparity, but new challenges arise in the capture, delivery, and redisplay of 3D content across existing infrastructure. To address these challenges, a novel system is proposed which allows for 3D video conferencing across existing networks while delivering full-resolution 3D video and establishing correct eye gaze. During the development of Portal-s, many innovations to the field of 3D scanning and its applications were made; specifically, this dissertation research achieved the following: a technique to realize 3D video processing entirely on a graphics processing unit (GPU), methods to compress 3D videos on a GPU, and the combination of these innovations with a special holographic display hardware system to enable the novel 3D telepresence system entitled Portal-s. The first challenge this dissertation addresses is the cost of real-time 3D scanning technology, from both a monetary and a computing-power perspective. Advancements in 3D scanning and computation technology continue to simplify the acquisition and display of 3D data, allowing users new methods of interaction with, and analysis of, the 3D world around them. Although the acquisition of static 3D geometry is becoming easy, the same cannot be said of dynamic geometry, since all aspects of the 3D processing pipeline (capture, processing, and display) must be realized in real-time simultaneously. Conventional approaches to these problems utilize workstation computers with powerful central processing units (CPUs) and GPUs to supply the large amount of processing power required for a single 3D frame. A challenge arises when trying to realize real-time 3D scanning on commodity hardware such as a laptop computer. To address the cost of a real-time 3D scanning system, an entirely parallel 3D data processing pipeline that makes use of a multi-frequency phase-shifting technique is presented. This pipeline achieves simultaneous 3D data capture, processing, and display at 30 frames per second (fps) on a laptop computer. Because the pipeline is implemented in the OpenGL Shading Language (GLSL), nearly any modern computer with a dedicated graphics device can run it. By using multiple threads that share GPU resources and direct memory access transfers, high frame rates can be achieved on low-compute-power devices. This technique is not without challenges, however: the main one is selecting frequencies that allow for high-quality phase yet do not introduce phase jumps in the equivalent frequencies. To address this issue, a modified multi-frequency phase-shifting technique was developed that allows phase jumps to be introduced in the equivalent frequencies yet unwrapped in parallel, increasing phase quality and reducing reconstruction error.
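    For orientation, phase-shifting profilometry of this kind starts from a per-pixel wrapped-phase computation such as the textbook three-step formula below; the dissertation's modified multi-frequency scheme builds on this idea but differs in how the frequencies are chosen and unwrapped, so the snippet is only an illustration of why the work maps so well onto a GPU.

```python
# Textbook three-step phase-shifting formula: three fringe images shifted by
# 2*pi/3 yield the wrapped phase independently at every pixel, which is why
# the computation is a natural fit for a GPU shader.
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Per-pixel wrapped phase in (-pi, pi] from three phase-shifted images."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```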
    Utilizing these techniques, a real-time 3D scanner was developed that captures 3D geometry at 30 fps with a root mean square error (RMSE) of 0.00081 mm for a measurement area of 100 mm × 75 mm at a resolution of 800 × 600 on a laptop computer. With the above-mentioned pipeline the CPU is nearly idle, freeing it to perform additional tasks such as image processing and analysis. The second challenge this dissertation addresses is delivering huge amounts of 3D video data in real-time across existing network infrastructure. As the speed of 3D scanning continues to increase, and real-time scanning is achieved on low-compute-power devices, a way of compressing the massive amounts of 3D data being generated is needed. At a scan resolution of 800 × 600, streaming a 3D point cloud at 30 fps would require a throughput of over 1.3 Gbps. This amount of throughput is large for a PCIe bus and too much for most commodity network cards. Conventional approaches involve serializing the data into a compressible state such as a polygon file format (PLY) or Wavefront object (OBJ) file. While this technique works well for structured 3D geometry, such as that created with computer-aided drafting (CAD) or 3D modeling software, it does not hold true for 3D scanned data, which is inherently unstructured. A challenge arises when trying to compress this unstructured 3D information in such a way that it can be easily utilized with existing infrastructure. To address the need for real-time 3D video compression, new techniques entitled Holoimage and Holovideo are presented, which compress 3D geometry and 3D video, respectively, into 2D counterparts and apply both lossless and lossy encoding. Similar to the aforementioned 3D scanning pipeline, these techniques make use of a completely parallel pipeline for encoding and decoding; this affords high-speed processing on the GPU, as well as compression before streaming the data over the PCIe bus. Once in the compressed 2D state, the information can be streamed and saved until the 3D information is needed, at which point the 3D geometry can be reconstructed while maintaining a low amount of reconstruction error. Further enhancements of the technique allow additional information, such as texture, to be encoded by reducing the bit rate of the data through image dithering. This allows both the 3D video and the associated 2D texture information to be interlaced and compressed into a single 2D video, synchronizing the streams automatically. The third challenge this dissertation addresses is achieving correct eye gaze in video conferencing. In 2D video conferencing, loss of correct eye gaze commonly occurs due to a disparity between the capture and display optical axes. Conventional approaches to mitigate this issue involve either reducing the angle of disparity between the axes by increasing the distance of the user from the system, or merging the axes through the use of beam splitters. Newer approaches make use of 3D capture and display technology, as the angle of disparity can be corrected through transforms of the 3D data. Challenges arise when trying to create such novel systems, as all aspects of the pipeline (capture, transmission, and redisplay) must be simultaneously achieved in real-time with the massive amounts of 3D data.
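    The quoted throughput follows directly from the scan geometry if one assumes an uncompressed point cloud of three 32-bit coordinates per pixel:

```latex
% Uncompressed point-cloud throughput at 800 x 600 with three 32-bit floats
% per point, streamed at 30 fps:
\[
800 \times 600 \times 3 \times 4\,\text{B} \times 30\,\text{s}^{-1}
  \;=\; 172.8\,\text{MB/s} \;\approx\; 1.38\,\text{Gb/s},
\]
% consistent with the "over 1.3 Gbps" figure quoted above.
```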
    Finally, the Portal-s system is presented: an integration of all the aforementioned technologies into a holistic software and hardware system that enables real-time 3D video conferencing with correct mutual eye gaze. To overcome the loss of eye contact in conventional video conferencing, Portal-s makes use of dual structured-light scanners that capture through the same optical axis as the display. The real-time 3D video frames generated on the GPU are then compressed using the Holovideo technique. This allows the 3D video to be streamed across a conventional network or the Internet and redisplayed at a remote node, for another user, on the holographic display glass. Utilizing two connected Portal-s nodes, users of the system can engage in 3D video conferencing with natural eye gaze established. In conclusion, this dissertation research substantially advances the field of real-time 3D scanning and its applications. Contributions of this research span both academic and industrial practice, where the use of this work allows users new methods of interaction with, and analysis of, the 3D world around them.

    CAPS--Computer-aided plastic surgery

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Architecture, 1992. Includes bibliographical references (leaves 166-173). By Steven Donald Pieper.