Legibility of perceptually-tuned grayscale fonts
Perceptually-tuned grayscale fonts are generated from character outline descriptions by applying a set of modifications specifically conceived to strengthen thin character parts, obtain well-contrasted bars and preserve important relationships between character shape parts. The present study compares the legibility of perceptually-tuned grayscale and bilevel display fonts at small and very small sizes (6, 8 and 10 pt). The study confirms the results of previous studies indicating that reading speed is to a large extent independent of the typography (bilevel or grayscale) and of the font size. However, perceptually-tuned grayscale characters perform better than bilevel characters in an italic-string search task in meaningless text. Regarding the subjective preferences of the test subjects, perceptually-tuned grayscale fonts at the 8 and 10 point sizes received higher ratings than bilevel fonts at the same sizes.
Legibility of condensed perceptually-tuned grayscale fonts
The authors analyze the quality of condensed text on LCD displays, generated with unhinted and hinted bilevel characters, with traditional anti-aliased characters and with perceptually-tuned grayscale characters. Hinted bilevel characters and perceptually-tuned grayscale characters improve the quality of displayed small-size characters (8 pt, 6 pt) up to a line condensation factor of 80%. At higher condensation factors, the text becomes partly illegible; in such situations, traditional anti-aliased grayscale characters seem to be the most robust variant. The authors explore the utility of perceptually-tuned grayscale fonts for improving the legibility of condensed text. A small advantage over bilevel fonts was found for text searching, consistent with human vision models applied to reading.
Perceptually-tuned grayscale characters based on parametrisable component fonts
Our component-based parametrisable font system is a newly developed font description and reproduction technology. For each basic character shape, it incorporates a software method responsible for synthesizing an instance of that character. A given font is synthesized by providing appropriate font parameters to these character synthesis methods, and numerous concrete fonts can be derived by simply varying the parameters. Such variations offer high flexibility for synthesizing derived fonts (variations in condensation, weight and contrast) and save a considerable amount of storage space. We show that with component-based parametrisable fonts, high-quality perceptually-tuned grayscale characters can be generated without requiring hinting information. Generating perceptually-tuned grayscale characters with parametrized component-based fonts consists in automatically adapting the phase of some of the character's parameters with respect to the underlying grid and in ensuring that thin character parts are strong enough not to disappear (weight control). The presented method is especially powerful for generating high-quality characters on LCD displays (cellular phones, pen computers, electronic books, etc.).
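As a rough illustration of these two adaptations, the following sketch (our own, not the paper's code) tunes a single vertical stem: weight control widens it to an assumed minimum thickness, phase adaptation re-centres it on the pixel grid, and a grayscale coverage value is then computed per pixel. The 0.8 px minimum weight and the snapping rule are illustrative assumptions.

    import math

    def tune_stem(left_edge: float, width: float, min_weight: float = 0.8):
        """Per-pixel gray coverage for a vertical stem given in pixel units."""
        # Weight control: widen thin parts up to an assumed minimum thickness
        # so they do not fade out at small sizes.
        width = max(width, min_weight)

        # Phase adaptation: shift the stem so its centre sits on a pixel
        # centre, concentrating ink in few pixels for a well-contrasted bar.
        centre = left_edge + width / 2.0
        snapped_centre = round(centre - 0.5) + 0.5
        left_edge += snapped_centre - centre

        # Grayscale rendering: each pixel's gray level is the fraction of
        # the pixel covered by the adapted stem.
        first, last = math.floor(left_edge), math.ceil(left_edge + width)
        coverage = []
        for px in range(first, last):
            overlap = min(px + 1, left_edge + width) - max(px, left_edge)
            coverage.append(max(0.0, min(1.0, overlap)))
        return coverage

    # A 0.6 px stem at phase 0.7 would otherwise smear into two faint
    # pixels (0.3 and 0.3); after tuning it becomes one strong ~0.8 pixel.
    print(tune_stem(0.7, 0.6))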
Human interaction with digital ink: legibility measurement and structural analysis
Literature suggests that it is possible to design and implement pen-based computer interfaces that resemble the use of pen and paper. These interfaces appear to allow users freedom in expressing ideas and seem to be familiar and easy to use. Different ideas have been put forward concerning this type of interface; however, despite the commonality of aims and problems faced, there does not appear to be a common approach to their design and implementation.

This thesis aims to progress the development of pen-based computer interfaces that resemble the use of pen and paper. To do this, a conceptual model is proposed for interfaces that enable interaction with "digital ink". This conceptual model is used to organize and analyse the broad range of literature related to pen-based interfaces, and to identify topics that are not sufficiently addressed by published research. Two issues highlighted by the model, digital ink legibility and digital ink structuring, are then investigated.

In the first investigation, methods are devised to objectively and subjectively measure the legibility of handwritten script. These methods are then piloted in experiments that vary the horizontal rendering resolution of handwritten script displayed on a computer screen. Script legibility is shown to decrease with rendering resolution once it drops below a threshold value.

In the second investigation, the clustering of digital ink strokes into words is addressed. A method of rating the accuracy of clustering algorithms is proposed: the percentage of words spoiled. For a clustering algorithm using the geometric features of both ink strokes and the gaps between them, the clustering error rate is found to vary among different writers.

The work contributes a conceptual interface model, methods of measuring digital ink legibility, and techniques for investigating stroke clustering features to the field of digital ink interaction research.
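As a rough illustration of how such a rating could be computed, the sketch below scores predicted stroke clusters against ground-truth words, counting a word as spoiled unless some predicted cluster contains exactly its strokes. The input format and this exact spoiling criterion are our assumptions, not the thesis's definition.

    def percent_words_spoiled(true_words, predicted_clusters):
        """true_words, predicted_clusters: collections of sets of stroke IDs."""
        clusters = {frozenset(c) for c in predicted_clusters}
        # A word counts as spoiled unless one cluster matches it exactly.
        spoiled = sum(1 for w in true_words if frozenset(w) not in clusters)
        return 100.0 * spoiled / len(true_words)

    # The second word is split across two clusters, so 50% of words are spoiled.
    truth = [{1, 2, 3}, {4, 5}]
    pred = [{1, 2, 3}, {4}, {5}]
    print(percent_words_spoiled(truth, pred))  # 50.0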
AutoGraff: towards a computational understanding of graffiti writing and related art forms.
The aim of this thesis is to develop a system that generates letters and pictures in a style immediately recognizable as graffiti art or calligraphy. The proposed system can be used similarly to, and in tight integration with, conventional computer-aided geometric design tools, to generate synthetic graffiti content for urban environments in games and movies, and to guide robotic or fabrication systems that materialise its output with physical drawing media. The thesis is divided into two main parts. The first part describes a set of stroke primitives: building blocks that can be combined to generate different designs resembling graffiti or calligraphy. These primitives mimic the process typically used to design graffiti letters and exploit well-known principles of motor control to model the way in which an artist moves when incrementally tracing stylised letter forms. The second part demonstrates how these stroke primitives can be automatically recovered from input geometry defined in vector form, such as the digitised traces of writing made by a user or the glyph outlines in a font. This procedure converts the input geometry into a seed that can be transformed into a variety of calligraphic and graffiti stylisations, which depend on parametric variations of the strokes.
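To make the motor-control idea concrete, here is a minimal sketch (our own, not the AutoGraff system) that chains strokes through a sequence of via points using a minimum-jerk profile, one well-known model of smooth hand movement. The via points and the specific profile are illustrative assumptions.

    import numpy as np

    def min_jerk(p0, p1, n=50):
        """Minimum-jerk interpolation from 2D point p0 to p1."""
        tau = np.linspace(0.0, 1.0, n)
        # Smooth 0 -> 1 position profile with zero start/end velocity
        # and acceleration, characteristic of minimum-jerk movements.
        s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
        return np.outer(1 - s, p0) + np.outer(s, p1)

    def trace(via_points):
        """Chain minimum-jerk strokes through successive via points."""
        pts = [np.asarray(p, float) for p in via_points]
        return np.vstack([min_jerk(a, b) for a, b in zip(pts, pts[1:])])

    # A crude 'N'-like path traced through four via points:
    path = trace([(0, 0), (0, 1), (1, 0), (1, 1)])
    print(path.shape)  # (150, 2): three strokes of 50 samples each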
Adaptive Methods for Robust Document Image Understanding
A vast amount of digital document material is continuously being produced as part of major digitization efforts around the world. In this context, generic and efficient automatic solutions for document image understanding are a pressing necessity. We propose a generic framework for document image understanding systems, usable for practically any document type available in digital form. Following the introduced workflow, we address each of the following processing stages in turn: quality assurance, image enhancement, color reduction and binarization, skew and orientation detection, page segmentation, and logical layout analysis. We review the state of the art in each area, identify current deficiencies, point out promising directions and give specific guidelines for future investigation. We address some of the identified issues by means of novel algorithmic solutions, putting special focus on generality, computational efficiency and the exploitation of all available sources of information. More specifically, we introduce the following original methods: fully automatic detection of color reference targets in digitized material, accurate foreground extraction from color historical documents, font enhancement for hot-metal typeset prints, a solution to the document binarization problem that is theoretically optimal with respect to both computational complexity and threshold selection, layout-independent skew and orientation detection, a robust and versatile page segmentation method, a semi-automatic front page detection algorithm, and a complete framework for article segmentation in periodical publications. The proposed methods are experimentally evaluated on large datasets consisting of real-life heterogeneous document scans. The obtained results show that a document understanding system combining these modules is able to robustly process a wide variety of documents with good overall accuracy.
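The staged workflow might be pictured as a simple pipeline in which each module transforms a shared document state. The stage names below follow the abstract, but the interface is an illustrative assumption rather than the authors' API.

    from typing import Callable, Dict, List

    Stage = Callable[[Dict], Dict]

    def run_pipeline(document: Dict, stages: List[Stage]) -> Dict:
        """Apply each processing stage in turn to the document state."""
        for stage in stages:
            document = stage(document)
        return document

    # Placeholder stages standing in for the modules named in the abstract:
    stages = [
        lambda doc: {**doc, "quality_ok": True},  # quality assurance
        lambda doc: {**doc, "enhanced": True},    # image enhancement
        lambda doc: {**doc, "binary": True},      # color reduction & binarization
        lambda doc: {**doc, "skew_deg": 0.0},     # skew / orientation detection
        lambda doc: {**doc, "regions": []},       # page segmentation
        lambda doc: {**doc, "articles": []},      # logical layout analysis
    ]

    result = run_pipeline({"image": "scan.png"}, stages)
    print(sorted(result))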
Words Matter: The Work of Lawrence Weiner
This dissertation explores the practice of contemporary artist Lawrence Weiner. From 1968 onwards, Weiner has presented his work using language and, as such, is historically regarded as one of the pioneering practitioners of Conceptual art. The artist himself categorically rejects that designation, preferring to focus on the material aspects of his work. Nevertheless, his oeuvre has been received largely in terms of a predominantly linguistic intervention. Craig Dworkin encapsulates this position when, discussing the Conceptual wager of Weiner's statements, he writes: "Having tested the propositions that the art object might be nominal, linguistic, invisible, and on a par with its abstract initial description, the next step was to venture that it could be dispensed with altogether." By focusing equally on the linguistic and material aspects of Weiner's practice, this dissertation argues, conversely, that Weiner's work is primarily an object strategy, and not a dematerialized linguistic presentation.

The first part of the discussion deals with Weiner's ground-breaking work from the mid-1960s to the early 1970s, analyzing the full implications of his extraordinary decision to present materials through language. Close comparisons are drawn with the profoundly materialist practices of contemporary artists such as Robert Rauschenberg, Carl Andre, Richard Serra and Robert Smithson. Weiner's use of language is also distinguished from the text-based works of Conceptual artists Joseph Kosuth and Douglas Huebler, problematizing the degree to which Weiner's statements can stand as an exemplar of postmodern textuality, inasmuch as their referential content remains of primary consequence.

Several chapters of the dissertation focus on drawings, and in particular the artist's notebooks, an aspect of Weiner's practice that has remained largely unstudied. Crucially, the notebooks present a model of thinking that is wholly corporeal as opposed to purely analytical. Furthermore, they raise the problem of the visual in relation to a body of work that has been credited with the suppression of a traditional (optical) aesthetic. Conceived by the artist as "maps," Weiner's drawings also invite an analysis of spatial considerations, and are thus linked to the artist's own designation of his work, not as art in general, but specifically as sculpture. Finally, the notebooks, like Weiner's films, practically dissolve the categories of reality and fiction. Indeed, Weiner himself would insist that every presentation of his essentially "realist" work is nonetheless inherently "theatrical."

One long-standing criticism of Conceptual art was that while it made aspects of circulation and distribution part of the work, thereby testing the limits of institutional constraint and expanding art's potential to engage in collective reception, it failed to achieve truly democratic access, in large part by neglecting issues of desire. Thus, Conceptual art's promise of collective accessibility was purportedly foreclosed by an art whose theoretical propositions lacked a democratic content. In closely considering the generic content of Weiner's work, this dissertation develops a picture not only of the concrete relationship between word and thing, but of the ways in which Weiner uses signs (drawings, text, films) to "objectify" desire, demonstrating that his "sculptures" must be seen as both conceptual and sensual, fully immersed in politicized questions of imaginary and bodily experience.
Music Encoding Conference Proceedings 2021, 19–22 July 2021, University of Alicante (Spain): Onsite & Online
This document includes the articles and posters presented at the Music Encoding Conference 2021, held in Alicante from 19 to 22 July 2021. Funded by project Multiscore, MCIN/AEI/10.13039/50110001103