    Dithered Color Quantization

    Image quantization and digital halftoning are fundamental problems in computer graphics that arise when displaying high-color images on non-truecolor devices. Both steps are generally performed sequentially and, in most cases, independently of each other. Color quantization with a pixel-wise defined distortion measure and the dithering process with its local neighborhood optimize different quality criteria or, frequently, follow a heuristic without reference to any quality measure. In this paper we propose a new method to simultaneously quantize and dither color images. The method is based on a rigorous cost-function approach which optimizes a quality criterion derived from a generic model of human perception. A highly efficient optimization algorithm, based on a multiscale method, is developed for the dithered color quantization cost function. The quality criterion and the optimization algorithms are evaluated on a representative set of artificial and real-world images as well as on a collection of icons. A significant image quality improvement is observed compared to standard color reduction approaches.
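    The core idea the abstract describes is that quantization error should be judged after modeling how a human viewer blurs fine dither patterns. The sketch below is a minimal illustration of such a perceptually weighted cost, assuming a simple Gaussian low-pass filter as the perception model and a random palette; it is not the paper's actual cost function or optimizer.

```python
# Minimal sketch: compare original and quantized images *after* low-pass
# filtering, so the error that counts is the one a viewer would perceive.
# Palette, filter width, and test image are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def perceptual_cost(original, quantized, sigma=1.5):
    """Sum of squared differences after low-pass filtering each channel."""
    cost = 0.0
    for c in range(original.shape[2]):
        diff = gaussian_filter(original[..., c] - quantized[..., c], sigma)
        cost += np.sum(diff ** 2)
    return cost

def quantize_to_palette(image, palette):
    """Naive pixel-wise nearest-palette-color assignment (no dithering)."""
    flat = image.reshape(-1, 3)
    dists = np.linalg.norm(flat[:, None, :] - palette[None, :, :], axis=2)
    return palette[np.argmin(dists, axis=1)].reshape(image.shape)

# A joint method would search over the palette *and* the per-pixel assignment
# to minimise perceptual_cost, instead of fixing the assignment first.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
palette = rng.random((8, 3))
print(perceptual_cost(img, quantize_to_palette(img, palette)))
```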

    A subjective evaluation of texture synthesis methods

    This paper presents the results of a user study which quantifies the relative and absolute quality of example-based texture synthesis algorithms. To allow such an evaluation, a list of texture properties is compiled, and a minimal representative set of textures is selected to cover them. Six texture synthesis methods are compared against each other and against a reference on a selection of twelve textures by non-expert participants (N = 67). The results demonstrate that certain algorithms successfully solve the problem of texture synthesis for certain textures, but that no satisfactory results exist for other types of texture properties. The presented textures and results make it possible for future work to be subjectively compared, thus facilitating the development of future texture synthesis methods.

    High quality texture synthesis

    Texture synthesis is a core process in computer graphics and design. It is used extensively in a wide range of applications, including computer games, virtual environments, manufacturing, and rendering. This thesis investigates a novel approach to texture synthesis in order to significantly improve speed, memory requirements, and quality. An analysis of texture properties is created to enable the gathering of a representative dataset and a qualitative evaluation of texture synthesis algorithms. A new algorithm that makes non-repeating, on-the-fly texture synthesis possible is developed, tested, and evaluated. This parallel patch-based method allows repeatable sampling without a cache and without creating visually noticeable repetitions, as confirmed by an objective perceptual study on quality. In order to quantify the quality of existing algorithms and to facilitate further development in the field, desired texture properties are classified and analysed, and a minimal set of textures is created according to these properties to allow subjective evaluation of texture synthesis algorithms. This dataset is then used in a user study which evaluates the quality of texture synthesis algorithms. For the first time in the field of texture synthesis, statistically significant findings quantify the quality of selected repeatable algorithms and make it possible to evaluate new, improved methods. Finally, in an effort to make these findings applicable in the British tile manufacturing industry, the developed texture synthesis technology is made available to Johnson Tiles.
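    The key property claimed for the synthesis algorithm is repeatable sampling without a cache: any part of the output texture can be regenerated on demand and always comes out the same. The sketch below illustrates that general idea with a deterministic hash over output tile coordinates; the tile size, hash choice, and absence of patch blending are illustrative assumptions, not the thesis's actual method.

```python
# Minimal sketch of cache-free, repeatable patch sampling: the source patch
# used at output tile (i, j) is chosen by a deterministic hash of the
# coordinates, so any region of the output can be re-generated identically.
import hashlib
import numpy as np

def patch_index(i, j, n_patches, seed=42):
    """Deterministically map an output tile coordinate to a source patch."""
    key = f"{seed}:{i}:{j}".encode()
    h = int.from_bytes(hashlib.blake2b(key, digest_size=8).digest(), "big")
    return h % n_patches

def synthesize_tile(exemplar, i, j, patch=32):
    """Return the patch-sized tile for output position (i, j)."""
    ph = exemplar.shape[0] // patch
    pw = exemplar.shape[1] // patch
    k = patch_index(i, j, ph * pw)
    y, x = (k // pw) * patch, (k % pw) * patch
    return exemplar[y:y + patch, x:x + patch]

# Requesting the same tile twice yields identical pixels, with no cache.
exemplar = np.arange(128 * 128 * 3, dtype=np.uint8).reshape(128, 128, 3)
assert np.array_equal(synthesize_tile(exemplar, 5, 9),
                      synthesize_tile(exemplar, 5, 9))
```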

    Text Segmentation in Web Images Using Colour Perception and Topological Features

    The research presented in this thesis addresses the problem of text segmentation in Web images. Text is routinely created in image form (headers, banners, etc.) on Web pages as an attempt to overcome the stylistic limitations of HTML. This text, however, has a potentially high semantic value for indexing and searching the corresponding Web pages. As current search engine technology does not allow for text extraction and recognition in images, text in image form is ignored. Moreover, it is desirable to obtain a uniform representation of all visible text of a Web page (for applications such as voice browsing or automated content analysis). This thesis presents two methods for text segmentation in Web images using colour perception and topological features. The nature of Web images and the problems they pose for text segmentation are described, and a study is performed to assess the magnitude of the problem and establish the need for automated text segmentation methods. Two segmentation methods are subsequently presented: the Split-and-Merge segmentation method and the Fuzzy segmentation method. Although each method approaches the problem in a distinctly different way, both are founded on the safe assumption that a human being should be able to read the text in any given Web image. This anthropocentric character, along with the use of topological features of connected components, constitutes the underlying working principle of both methods. An approach for classifying the connected components resulting from the segmentation methods as either characters or parts of the background is also presented.
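    Both methods end with a decision over connected components. The sketch below shows a minimal version of that stage, assuming a binary colour layer has already been produced; the specific features and the toy input are illustrative, not the thesis's actual feature set or classifier.

```python
# Minimal sketch of the connected-component stage: label a binary colour
# layer and compute simple size/shape features that a later character-vs-
# background decision could use. Thresholds and features are illustrative.
import numpy as np
from scipy.ndimage import label, find_objects

def component_features(binary_layer):
    """Label a binary colour layer and return simple per-component features."""
    labels, n = label(binary_layer)
    feats = []
    for k, sl in enumerate(find_objects(labels), start=1):
        mask = labels[sl] == k
        h, w = mask.shape
        feats.append({
            "area": int(mask.sum()),
            "bbox": (sl[0].start, sl[1].start, h, w),
            "aspect": w / h,      # characters tend to have bounded aspect ratios
            "fill": mask.mean(),  # fraction of the bounding box that is foreground
        })
    return feats

layer = np.zeros((40, 120), dtype=bool)
layer[10:30, 10:20] = True            # a toy "character"-like component
print(component_features(layer))
```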

    Blur perception: An evaluation of focus measures

    Since the middle of the 20th century, the technological development of conventional photographic cameras has taken advantage of advances in electronics and signal processing. One specific area that has benefited from these developments is auto-focus: the ability for a camera's optical arrangement to be altered so as to ensure that the subject of the scene is in focus. However, whilst the precise focus point can be known for a single point in a scene, the method for selecting a best focus for the entire scene is an unsolved problem. Many focus algorithms have been proposed and compared, though no overall comparison between all algorithms has been made, nor have the results been compared with human observers. This work describes a methodology that was developed to benchmark focus algorithms against human results. Experiments that capture quantitative metrics about human observers were developed and conducted with a large set of observers on a diverse range of equipment. From these experiments, it was found that human observers showed a high degree of consensus in their responses. The human results were then used as a benchmark against which equivalent experiments were performed by each of the candidate focus algorithms. A second set of experiments, conducted in a controlled environment, captured the underlying human psychophysical blur discrimination thresholds in natural scenes. The resulting thresholds were then characterised and compared against equivalent discrimination thresholds obtained by using the candidate focus algorithms as automated observers. The results of this comparison, and how they should guide the selection of an auto-focus algorithm, are discussed, with comments on how focus algorithms may need to change to cope with future imaging techniques.
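    A focus measure is simply a function that maps an image to a sharpness score which an auto-focus loop can maximise. The sketch below shows two widely used examples of such measures (variance of the Laplacian and the Tenengrad gradient measure); they are common illustrations and are not claimed to be the specific candidate set evaluated in this work.

```python
# Minimal sketch of two classic focus measures: a sharper image yields a
# higher score, so an auto-focus loop picks the lens position that
# maximises the measure.
import numpy as np
from scipy.ndimage import laplace, sobel, gaussian_filter

def variance_of_laplacian(gray):
    """Focus score: variance of the Laplacian response."""
    return float(laplace(gray.astype(float)).var())

def tenengrad(gray):
    """Focus score: mean squared Sobel gradient magnitude."""
    g = gray.astype(float)
    gx, gy = sobel(g, axis=1), sobel(g, axis=0)
    return float(np.mean(gx ** 2 + gy ** 2))

# A sharp random texture scores higher than a blurred copy of itself.
rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
blurred = gaussian_filter(sharp, 2.0)
assert variance_of_laplacian(sharp) > variance_of_laplacian(blurred)
assert tenengrad(sharp) > tenengrad(blurred)
```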

    Generative Mesh Modeling

    Generative modeling is an alternative approach to the description of three-dimensional shape. The basic idea is to represent a model not, as usual, by an agglomeration of geometric primitives (triangles, point clouds, NURBS patches), but by functions. The paradigm change from objects to operations allows for a procedural representation of procedural shapes, such as most man-made objects. Instead of storing only the result of a 3D construction, the construction process itself is stored in a model file. The generative approach opens truly new perspectives in many ways, among others for 3D knowledge management. It permits, for instance, drawing on a repository of already solved modeling problems in order to re-use this knowledge in different, slightly varied situations. The construction knowledge can be collected in digital libraries containing domain-specific parametric modeling tools. A concrete realization of this approach is a new general description language for 3D models, the "Generative Modeling Language" (GML). As a Turing-complete "shape programming language", it is a true generalization of existing, primitive-based 3D model formats. Together with its runtime engine, the GML permits storing highly complex 3D models in a compact form, evaluating the description within fractions of a second, adaptively tessellating and interactively displaying the model, and even changing the model's high-level parameters at runtime.
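    The paradigm is easiest to see in a small example: the model file stores a parameterized construction rather than its evaluated geometry. The sketch below illustrates this idea in plain Python rather than in GML itself; the prism construction and its parameters are purely illustrative.

```python
# Minimal sketch of the generative idea (not GML): store the construction
# as a function with high-level parameters and re-evaluate it on demand,
# instead of storing the resulting triangle soup.
import math

def ngon_prism(n_sides, radius, height):
    """Construct an n-sided prism as vertices plus side quads on demand."""
    ring = [(radius * math.cos(2 * math.pi * i / n_sides),
             radius * math.sin(2 * math.pi * i / n_sides)) for i in range(n_sides)]
    verts = [(x, y, 0.0) for x, y in ring] + [(x, y, height) for x, y in ring]
    faces = [(i, (i + 1) % n_sides,
              n_sides + (i + 1) % n_sides, n_sides + i) for i in range(n_sides)]
    return verts, faces

# The "model file" is just the call with its parameters; changing them at
# runtime re-runs the construction instead of editing stored geometry.
verts, faces = ngon_prism(n_sides=6, radius=1.0, height=2.5)
print(len(verts), "vertices,", len(faces), "side quads")
```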
