
    Perceptually-tuned grayscale characters based on parametrisable component fonts

    Our component-based parametrisable font system is a newly developed font description and reproduction technology. It incorporates for each basic character shape a software method responsible for the synthesis of an instance of that character. A given font is synthesized by providing appropriate font parameters to these character synthesis methods. Numerous concrete fonts can be derived by simply varying the parameters. Such variations offer high flexibility for synthesizing derived fonts (variations in condensation, weight and contrast) and enable saving a considerable amount of storage space. We show that with component-based parametrisable fonts, high quality perceptually-tuned grayscale characters can be generated without requiring hinting information. Generating perceptually-tuned grayscale characters with parametrized component-based fonts consists of automatically adapting the phase of some of the character's parameters with respect to the underlying grid, and of ensuring that thin character parts are strong enough not to disappear (weight control). The presented method is especially powerful for generating high-quality characters on LCD displays (cellular phones, pen-computers, electronic books, etc.).
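    The weight-control step described above can be illustrated with a minimal sketch: a high-resolution bilevel glyph bitmap is box-filtered down to grayscale, and any output pixel that carries some ink but falls below a coverage floor is boosted so thin strokes do not vanish. The function name, the 4x factor and the 0.25 floor are illustrative assumptions, not the paper's actual algorithm (which also adapts parameter phases to the grid).

```python
def downsample_with_weight_control(bitmap, factor=4, min_gray=0.25):
    """Box-filter a high-resolution bilevel glyph bitmap down to grayscale,
    boosting very light pixels that still carry ink so thin strokes stay
    visible (a hypothetical stand-in for the paper's weight control)."""
    h, w = len(bitmap), len(bitmap[0])
    out = []
    for by in range(0, h, factor):
        row = []
        for bx in range(0, w, factor):
            ink = sum(bitmap[y][x]
                      for y in range(by, by + factor)
                      for x in range(bx, bx + factor))
            g = ink / (factor * factor)      # ink coverage in [0, 1]
            if 0 < g < min_gray:             # thin part: keep it visible
                g = min_gray
            row.append(round(g, 3))
        out.append(row)
    return out
```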

    Font Rasterization, the State of the Art

    Modern personal computers and workstations enable text, graphics and images to be visualized in a resolution-independent manner. Office documents can be visualized and printed in the same way on displays, page-printers and photocomposers. Personal computers like the PC and the Macintosh incorporate advanced rasterization algorithms for the rendering of outline characters and graphics. In the nineties, advanced workstations will provide facilities for the generation of finely tuned gray-scale characters. This tutorial provides a survey of the basic algorithms for representing and rendering outline characters. Fast scan-conversion and filling algorithms as well as basic and advanced character outline grid-fitting techniques are presented. The philosophy and functionality of Adobe's Type 1 and Apple's TrueType typographic rendering systems are discussed.
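    The fast filling step surveyed above is classically an even-odd scanline fill: for each scanline, collect the x-coordinates where polygon edges cross it, sort them, and fill between alternating pairs. This is a textbook sketch sampling at pixel centres, not the Type 1 or TrueType production code.

```python
def scanline_fill(polygon, width, height):
    """Even-odd scanline fill of a closed polygon given as (x, y) vertex
    pairs; returns a bilevel bitmap as a row-major list of lists."""
    bitmap = [[0] * width for _ in range(height)]
    n = len(polygon)
    for y in range(height):
        yc = y + 0.5                  # sample at the pixel centre
        xs = []
        for i in range(n):
            x0, y0 = polygon[i]
            x1, y1 = polygon[(i + 1) % n]
            if (y0 <= yc < y1) or (y1 <= yc < y0):   # edge crosses scanline
                xs.append(x0 + (yc - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        # fill between alternating (enter, exit) crossing pairs
        for left, right in zip(xs[0::2], xs[1::2]):
            for x in range(max(0, int(left + 0.5)), min(width, int(right + 0.5))):
                bitmap[y][x] = 1
    return bitmap
```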

    Digital document imaging systems: An overview and guide

    This is an aid to NASA managers in planning the selection of a Digital Document Imaging System (DDIS) as a possible solution for document information processing and storage. Intended to serve as a manager's guide, this document contains basic information on digital imaging systems, technology, equipment standards, issues of interoperability and interconnectivity, and issues related to selecting appropriate imaging equipment based upon well-defined needs.

    Reconstructing vectorised photographic images

    We address the problem of representing captured images in the continuous mathematical space more usually associated with certain forms of drawn ('vector') images. Such an image is resolution-independent and so can be used as a master for varying resolution-specific formats. We briefly describe the main features of a vectorising codec for photographic images, whose significance is that drawing programs can access images and image components as first-class vector objects. This paper focuses on the problem of rendering from the isochromic contour form of a vectorised image and demonstrates a new fill algorithm which could also be used in drawing generally. The fill method is described in terms of level set diffusion equations for clarity. Finally, we show that image warping is both simplified and enhanced in this form and that we can demonstrate real histogram equalisation with genuinely rectangular histograms.
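    A diffusion-based fill of the kind described above can be sketched as Jacobi iteration on Laplace's equation: every unknown pixel is repeatedly replaced by the mean of its four neighbours while the contour pixels hold their colours, so boundary colours diffuse smoothly into the region. This is a generic discrete stand-in, assumed for illustration; it is not the paper's exact level-set scheme.

```python
def diffusion_fill(grid, fixed, iters=100):
    """Fill non-fixed pixels by repeatedly averaging their 4-neighbours
    while fixed (contour) pixels keep their values -- Jacobi iteration
    converging to a harmonic interpolation of the boundary colours."""
    h, w = len(grid), len(grid[0])
    for _ in range(iters):
        nxt = [row[:] for row in grid]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if not fixed[y][x]:
                    nxt[y][x] = (grid[y - 1][x] + grid[y + 1][x]
                                 + grid[y][x - 1] + grid[y][x + 1]) / 4.0
        grid = nxt
    return grid
```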

    Legibility of condensed perceptually-tuned grayscale fonts

    The authors analyze the quality of condensed text on LCD displays, generated with unhinted and hinted bilevel characters, with traditional anti-aliased characters, and with perceptually-tuned grayscale characters. Hinted bilevel characters and perceptually-tuned grayscale characters improve the quality of displayed small-size characters (8pt, 6pt) up to a line condensation factor of 80%. At higher condensation factors, the text becomes partly illegible; in such situations, traditional anti-aliased grayscale characters seem to be the most robust variant. The authors explore the utility of perceptually-tuned grayscale fonts for improving the legibility of condensed text. A small advantage was found for text searching, compared to bilevel fonts. This advantage is consistent with human vision models applied to reading.

    Analysis of Digital Logic Schematics Using Image Recognition

    This thesis presents the results of research in the area of automated recognition of digital logic schematics. The adaptation of a number of existing image processing techniques for use with this kind of image is discussed, and the concept of using sets of tokens to represent the overall drawing is explained in detail. Methods are given for using tokens to describe schematic component shapes, to represent the connections between components, and to provide sufficient information to a parser so that an equation can be generated. A Microsoft Windows-based test program which runs under Windows 95 or Windows NT has been written to implement the ideas presented. This program accepts either scanned images of digital schematics, or computer-generated images in Microsoft Windows bitmap format as input. It analyzes the input schematic image for content, and produces a corresponding logical equation as output. It also provides the functionality necessary to build and maintain an image token library.
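    The token-to-equation idea above can be sketched as a small graph of recognised elements that a parser walks recursively. The token fields and the gate vocabulary here are hypothetical illustrations, not the thesis's actual token library format.

```python
from dataclasses import dataclass, field

@dataclass
class Token:
    """One recognised drawing element: a gate symbol or a named net.
    Fields are illustrative, not the thesis's actual token structure."""
    kind: str                                    # 'AND', 'OR', 'NOT', or 'NET'
    name: str = ''
    inputs: list = field(default_factory=list)   # upstream Token objects

def to_equation(tok):
    """Walk the token graph and emit a logical equation string."""
    if tok.kind == 'NET':
        return tok.name
    if tok.kind == 'NOT':
        return f"!{to_equation(tok.inputs[0])}"
    op = ' & ' if tok.kind == 'AND' else ' | '
    return '(' + op.join(to_equation(i) for i in tok.inputs) + ')'

# Tokens for a schematic computing F = (A & B) | !C
a, b, c = (Token('NET', n) for n in 'ABC')
f = Token('OR', inputs=[Token('AND', inputs=[a, b]), Token('NOT', inputs=[c])])
```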

    Text Detection in Natural Scenes and Technical Diagrams with Convolutional Feature Learning and Cascaded Classification

    An enormous number of digital images are being generated and stored every day. Understanding text in these images is an important challenge with large impacts for academic, industrial and domestic applications. Recent studies address the difficulty of separating text targets from noise and background, all of which vary greatly in natural scenes. To tackle this problem, we develop a text detection system to analyze and utilize visual information in a data-driven, automatic and intelligent way. The proposed method incorporates features learned from data, including patch-based coarse-to-fine detection (Text-Conv), connected component extraction using region growing, and graph-based word segmentation (Word-Graph). Text-Conv is a sliding window-based detector, with convolution masks learned using the Convolutional k-means algorithm (Coates et al., 2011). Unlike convolutional neural networks (CNNs), a single vector/layer of convolution mask responses is used to classify patches. An initial coarse detection considers both local and neighboring patch responses, followed by refinement using varying aspect ratios and rotations for a smaller local detection window. Different levels of visual detail from ground truth are utilized in each step, first using constraints on bounding box intersections, and then a combination of bounding box and pixel intersections. Combining masks from different Convolutional k-means initializations, e.g., seeded using random vectors and then support vectors, improves performance. The Word-Graph algorithm uses contextual information to improve word segmentation and prune false character detections based on visual features and spatial context. Our system obtains pixel, character, and word detection f-measures of 93.14%, 90.26%, and 86.77% respectively for the ICDAR 2015 Robust Reading Focused Scene Text dataset, out-performing state-of-the-art systems, and producing highly accurate text detection masks at the pixel level.
    To investigate the utility of our feature learning approach for other image types, we perform tests on 8-bit greyscale USPTO patent drawing diagram images. An ensemble of AdaBoost classifiers with different convolutional features (MetaBoost) is used to classify patches as text or background. The Tesseract OCR system is used to recognize characters in detected labels and enhance performance. With appropriate pre-processing and post-processing, f-measures of 82% for part label location, and 73% for valid part label locations and strings, are obtained, which are the best obtained to date for the USPTO patent diagram data set used in our experiments. To sum up, an intelligent refinement of convolutional k-means-based feature learning and novel automatic classification methods are proposed for text detection, which obtain state-of-the-art results without the need for strong prior knowledge. Different ground truth representations, along with features including edges, color, shape and spatial relationships, are used coherently to improve accuracy. Different variations of feature learning are explored, e.g. support vector-seeded clustering and MetaBoost, with results suggesting that increased diversity in learned features benefits convolution-based text detectors.
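    The mask-learning step attributed to Coates et al. above is commonly implemented as spherical k-means over flattened, L2-normalised image patches: patches are assigned to the mask with the largest dot product, and each mask becomes the normalised sum of its assigned patches. This is a compact sketch under that assumption; the full pipeline also whitens patches and supports support-vector seeding, both omitted here.

```python
import math
import random

def convolutional_kmeans(patches, k, iters=10, seed=0):
    """Learn k unit-norm convolution masks from flat patch vectors via
    spherical k-means (dot-product assignment, normalised mean update)."""
    rng = random.Random(seed)

    def normalise(v):
        n = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / n for x in v]

    data = [normalise(p) for p in patches]
    # random Gaussian seeding (support-vector seeding is an alternative)
    masks = [normalise([rng.gauss(0, 1) for _ in data[0]]) for _ in range(k)]
    for _ in range(iters):
        sums = [[0.0] * len(data[0]) for _ in range(k)]
        for v in data:
            # assign the patch to the mask with the largest response
            j = max(range(k), key=lambda m: sum(a * b for a, b in zip(masks[m], v)))
            sums[j] = [s + x for s, x in zip(sums[j], v)]
        # keep the old mask when a cluster received no patches
        masks = [normalise(s) if any(s) else masks[m] for m, s in enumerate(sums)]
    return masks
```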

    Photorealistic Texturing for Modern Video Games

    Simulating realism has become a standard for many games in the industry. While real-time rendering requires considerable rendering resources, texturing defines the physical parameters of surfaces at a lower computational cost. The objective of this thesis was to study the evolution of texture mapping and to define a workflow for approaching photorealism with modern instruments for video game production. All the textures were created using Agisoft Photoscan, Substance Designer & Painter, Adobe Photoshop and Pixologic ZBrush. Combining theory with practical approaches, this thesis explores how textures are used and which applications can help to build them for a better result. Each workflow is introduced with the main points of its purpose as the author's suggestion, which can be used as a guideline by many companies, including Ringtail Studios OÜ. In conclusion, the thesis summarizes the outcome of the textures and their workflows. The results were successfully established by the author, with the aim of introducing methods for material production.

    Energy-Based Evaluation of Digital Halftones

    The purpose of this study was to determine the validity of the energy measure developed by Geist, Reynolds, and Suggs when used as an evaluator of digitally halftoned images. The energy measure was found to be a valid, useful tool for the evaluation of binary digital halftone quality. Data resulting from the analysis and visual comparison of fifteen different halftones support this conclusion. Using linear regression, the coefficient of correlation between the energy measure and visual quality ratings was -0.606 using all images, and -0.936 using average results for each halftone method. These figures indicate the strong relationship between image energy and image quality. Although the energy measure was found to be accurate for different halftones of the same continuous-tone image, there is an inherent difficulty when comparing the quality of halftones of different image content. Geist, Reynolds, and Suggs' algorithm does not produce values within a fixed range. A simple approximation for normalizing the energy values is proposed and used for the study, but further development is needed to obtain absolute quality rankings using this technique.
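    The coefficient of correlation reported above (r = -0.606 per image, r = -0.936 per halftone method) is the standard Pearson r between energy values and quality ratings. A minimal sketch of that computation (the energy measure itself is not reproduced here):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples,
    e.g. halftone energy values versus visual quality ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A strongly negative r, as in the study, means higher image energy goes with lower visual quality ratings.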

    Interactive topographic web mapping using scalable vector graphics

    Large scale topographic maps portray detailed information about the landscape. They are used for a wide variety of purposes. USGS large scale topographic maps at 1:24,000 have been traditionally distributed in paper form. With the advent of the Internet, these maps can now be distributed electronically. Instead of common raster format presentation, the solution presented here is based on a vector approach. The vector format provides many advantages compared to the use of a raster-based presentation. This research shows that Scalable Vector Graphics (SVG) is a promising technology for delivering high quality interactive topographic maps via the Internet, both in terms of graphic quality and interactivity. A possible structure for the SVG map document is proposed. Interactive features such as toggling thematic layers on and off, and UTM coordinate readout for x, y, and z (elevation), were developed as well. Adding this type of interactivity can help to better extract information from a topographic map. A focus group analysis with the online SVG topographic map shows a high level of user acceptance.
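    A layer structure of the kind proposed above typically places each thematic layer in its own <g> group, so client-side script can toggle a layer by flipping its display attribute. The sketch below builds such a document; the layer names and geometry are illustrative, not the thesis's USGS data or its actual document structure.

```python
import xml.etree.ElementTree as ET

def svg_with_layers(layers, width=400, height=400):
    """Build an SVG document in which each thematic layer is a separate
    <g> group identified by id, toggleable via its display attribute."""
    svg = ET.Element('svg', xmlns='http://www.w3.org/2000/svg',
                     width=str(width), height=str(height))
    for name, elements in layers.items():
        g = ET.SubElement(svg, 'g', id=name, display='inline')
        for tag, attrs in elements:
            ET.SubElement(g, tag, **attrs)
    return ET.tostring(svg, encoding='unicode')

# Two hypothetical thematic layers of a topographic sheet
doc = svg_with_layers({
    'hydrography': [('path', {'d': 'M0 10 L50 60', 'stroke': 'blue'})],
    'contours':    [('path', {'d': 'M5 5 L90 90', 'stroke': 'brown'})],
})
```

In the browser, a script would hide a layer with `document.getElementById('contours').setAttribute('display', 'none')`.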