
    The LaTeX project: A case study of open-source software

    This is a case study of TeX, typesetting software developed by Donald E. Knuth in the late 1970s. Released under an open-source license, it has become a reference in scientific publishing and is now used to typeset and publish much of the world's scientific literature in physics and mathematics. The case study is part of a wider effort by academics to understand the open-source phenomenon. That development model resembles the organization of knowledge production in academia: there is no fixed hierarchy, but free collaboration that is coordinated spontaneously and ends up generating complex products which are the property of all who can understand how they work. The study was conducted by gathering qualitative data through interviews with TeX developers and quantitative data on the TeX community -- the program's code, the software included in the TeX distribution, the newsgroups dedicated to the software, and many other indicators of the evolution and activity of this open-source project. It is aimed at economists who want to develop models to understand and analyze the open-source phenomenon, at policy-makers who would like to encourage or regulate open source, and at open-source developers who wonder which strategies make an open-source project successful.
    Keywords: TeX, LaTeX, case study, open source, software, innovation, organisational structure, economic history, knowledge production, knowledge diffusion.

    Competition between open-source and proprietary software: the (La)TeX case study

    The paper examines competition between two development models, proprietary and open source. It first defines and compares the two models and then analyzes the influence the development of one type of software has on the development of the other. The paper is based on the (La)TeX case study, in which the features, users, and development patterns of the (La)TeX software were compared with those of its proprietary equivalents. The models presented in the paper describe some aspects of the strategic interactions between proprietary and open-source software, and show that the two cannot be analyzed independently: the decisions of one class of agents (open-source developers) are affected by those of the other class of agents (private entrepreneurs).
    Keywords: open source, software, proprietary software, BSD, GPL, public domain, intellectual production, licensing, patents, TeX, LaTeX.

    An Investigation into the profitability of prepress operations in digital printing when using the Xeikon DCP-32/D print engine

    Digital printing technologies introduced in 1994 opened new possibilities for printing high-quality, low-quantity, four-color documents. Documents could be manufactured faster and at a more affordable cost than ever before. Along with the introduction of digital presses such as the Xeikon DCP-1 and Indigo E-Print 1000 came the expectation that this new printing technology would grow at a highly accelerated rate. To the disappointment of many companies that invested in this technology, growth has been much slower than anticipated. The lack of market growth is attributed to an uninformed client base, one unable to understand how this technology can serve its needs. When clients require only 50 to 1000 copies of a document, the question of profitability arises. Digital presses are certainly more efficient with shorter run lengths, since there are no films or plates, and they require little of the makeready associated with conventional presses, such as mounting plates and running the press until the substrate is optimally inked. Unlike the conventional printing workflow, proofs are generated on the digital press only when required. The inefficiency of digital printing resides in the prepress function, since the workflow is the same as traditional prepress with the exception of the proofing process, and the apportioned cost of prepress is high in relation to the short run length when compared to traditional printing. The purpose of this paper was to develop an operational model that predicts the profitability of prepress operations when printing digitally with a Xeikon engine. Experience indicates that digital printing can be worthwhile despite the cost of prepress. The initial model assumed that all documents introduced into the workflow were press ready. Because of the variety of difficulties encountered in prepress production, a more accurate model could only be created after recording and analyzing relevant prepress data. The revised model provided better insight into prepress efficiency and the profitability of digital printing using a Xeikon engine, with some interesting results. Prepress operations, including printing, are responsible for about 25% of the selling cost, well under the typical 40% goal set by industry standards. Since the average run was over 550 impressions, the additional time required to ensure that submitted jobs are press ready, even if it doubled the prepress time, has little impact on the final cost of a job. This is due to the efficiency of the tools used to prepare jobs. The average length of time to prepare a job is 20 minutes; using 40 minutes would therefore typically add less than seven dollars to the manufacturing cost of production. When prepress requires corrections, the cost of prepress increases by less than half a percent; prepress is therefore efficient, since the cost of the typical additional labor does not dramatically affect profitability.
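
    As a rough illustration of the cost arithmetic above, the sketch below apportions prepress labor over a short digital run. It is a minimal example only: the hourly labor rate and the per-impression running cost are assumed placeholder values (chosen so the figures are of the same order as those quoted), not data from the thesis.

        # Hypothetical illustration of apportioning prepress labor over a short run.
        # The labor rate and per-impression running cost are assumed values, not
        # figures taken from the study.
        PREPRESS_RATE_PER_HOUR = 20.0  # assumed labor/overhead rate, dollars

        def prepress_cost(minutes: float) -> float:
            """Labor cost of a prepress session of the given length."""
            return minutes / 60.0 * PREPRESS_RATE_PER_HOUR

        def cost_per_impression(prepress_minutes: float, impressions: int,
                                press_cost_per_impression: float = 0.25) -> float:
            """Manufacturing cost per impression, with prepress apportioned over the run."""
            return press_cost_per_impression + prepress_cost(prepress_minutes) / impressions

        base    = cost_per_impression(20, 550)   # average 20-minute prepress, 550 impressions
        doubled = cost_per_impression(40, 550)   # prepress time doubled
        print(f"extra cost for the whole job: ${prepress_cost(40) - prepress_cost(20):.2f}")
        print(f"change per impression:        ${doubled - base:.4f}")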

    Revisiting a summer vacation: digital restoration and typesetter forensics

    In 1979 the Computing Science Research Center (‘Center 127’) at Bell Laboratories bought a Linotron 202 typesetter from the Mergenthaler company. This was a ‘third generation’ digital machine that used a CRT to image characters onto photographic paper. The intent was to use existing Linotype fonts and also to develop new ones to exploit the 202’s line-drawing capabilities. Use of the 202 was hindered by Mergenthaler’s refusal to reveal the inner structure and encoding mechanisms of the font files. The particular 202 was further dogged by extreme hardware and software unreliability. A memorandum describing the experience was written in early 1980 but was deemed too “sensitive” to release. The original troff input for the memorandum exists and now, more than 30 years later, the memorandum can be released. However, the only available record of its visual appearance was a poor-quality scanned photocopy of the original printed version. This paper details our efforts to rebuild a faithful retypeset replica of the original memorandum, given that the Linotron 202 disappeared long ago and that this episode at Bell Labs occurred five years before the dawn of PostScript (and later PDF) as de facto standards for digital document preservation. The paper concludes with some lessons for digital archiving policy drawn from this rebuilding exercise.

    A Newsletter for MegaSource


    In-house Preparation of Examination Papers using troff, tbl, and eqn.

    Starting in December 1982 the University of Nottingham decided to phototypeset almost all of its examination papers `in house' using the troff, tbl and eqn programs running under UNIX. This tutorial lecture highlights the features of the three programs, with particular reference to their strengths and weaknesses in a production environment. The following issues are particularly addressed:
    Standards -- all three packages require the embedding of commands and the invocation of pre-written macros, rather than `what you see is what you get'; this can help to enforce standards in the absence of traditional compositor skills.
    Hardware and software -- the requirements are analysed for an inexpensive preview facility and a low-level interface to the phototypesetter.
    Mathematical and technical papers -- the fine-tuning of eqn to impose a standard house style.
    Staff skills and training -- systems of this kind do not require operators to have had previous experience of phototypesetting; of much greater importance is willingness and flexibility in learning how to use computer systems.
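
    For readers unfamiliar with the toolchain, the following sketch shows one way such a production run might be driven, piping a source file through the classic tbl, eqn and troff sequence. The file names and the -ms macro package are illustrative assumptions, not Nottingham's actual configuration.

        # Minimal sketch of driving the tbl | eqn | troff pipeline from Python.
        # File names and the -ms macro package are assumptions for illustration;
        # a real installation would invoke its own pre-written house-style macros
        # and a low-level driver for the phototypesetter.
        import subprocess

        def typeset(source: str, output: str) -> None:
            """Pipe an examination-paper source through tbl, eqn and troff."""
            with open(source, "rb") as src, open(output, "wb") as out:
                tbl   = subprocess.Popen(["tbl"], stdin=src, stdout=subprocess.PIPE)
                eqn   = subprocess.Popen(["eqn"], stdin=tbl.stdout, stdout=subprocess.PIPE)
                troff = subprocess.Popen(["troff", "-ms"], stdin=eqn.stdout, stdout=out)
                tbl.stdout.close()   # let SIGPIPE propagate if troff exits early
                eqn.stdout.close()
                troff.wait()

        typeset("exam_paper.tr", "exam_paper.out")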

    The use of synthesized images to evaluate the performance of OCR devices and algorithms

    This thesis attempts to establish whether synthesized images can be used to predict the performance of Optical Character Recognition (OCR) algorithms and devices. The value of this research lies in reducing the considerable costs associated with preparing test images for OCR research. The paper reports on a series of experiments in which synthesized images of text files in nine different fonts and sizes are input to eight commercial OCR devices. The method used to create the images is explained, and a detailed analysis of the character and word confusion between the output and the true text files is presented. The synthesized images are then printed and scanned to mechanically introduce noise. The resulting images are also input to the devices and the same analysis is performed. A high correlation was found between the output from the printed and scanned images and the output from real-world images.
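
    The character-level comparison described above can be sketched with a generic alignment-based error measure; the code below illustrates that kind of analysis and is not the exact metric used in the thesis.

        # Illustrative character-error measure between OCR output and ground truth,
        # using a standard alignment from the Python standard library.
        # This is a generic sketch, not the thesis's actual scoring method.
        import difflib

        def character_error_rate(truth: str, ocr_output: str) -> float:
            """Fraction of ground-truth characters not matched by the OCR output."""
            matcher = difflib.SequenceMatcher(None, truth, ocr_output)
            matched = sum(block.size for block in matcher.get_matching_blocks())
            return 1.0 - matched / max(len(truth), 1)

        truth = "The quick brown fox jumps over the lazy dog."
        ocr   = "The qu1ck brovvn fox jumps over the 1azy dog."
        print(f"character error rate: {character_error_rate(truth, ocr):.3f}")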

    The Design and Use of a Multiple-Alphabet Font with Ω

    The Ω project aims to offer open and flexible means for typesetting different scripts. By working at several different levels, it is possible to offer natural support for different languages and scripts, and to strictly respect the typographical traditions of each of them. This is illustrated with a large PostScript Type 1 font for the commonly used left-to-right non-cursive alphabets, called omlgc (Ω Latin-Greek-Cyrillic). This font, which more than covers the Unicode sections pertaining to those alphabets, as well as those of IPA, Armenian, Georgian and Tifinagh (Berber), is built, virtually, out of smaller glyph banks. The Ω typesetting engine, based on that of TeX, is used to print documents using this font. The characters can be accessed either directly or through the use of filters, called Ω Typesetting Processes (ΩTPs), which are applied to the input stream.
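
    The idea of a filter applied to the input stream can be illustrated with a toy transliteration pass; the mapping table below is invented for illustration and is neither the syntax nor the content of a real ΩTP.

        # Toy illustration, in the spirit of an input-stream filter: transliterated
        # ASCII input is mapped to Unicode characters before typesetting.
        # The mapping is a made-up example, not an actual OTP definition.
        GREEK_TRANSLIT = {
            "a": "\u03b1",  # alpha
            "b": "\u03b2",  # beta
            "g": "\u03b3",  # gamma
        }

        def apply_filter(stream: str, table: dict) -> str:
            """Replace each mapped input character with its target glyph."""
            return "".join(table.get(ch, ch) for ch in stream)

        print(apply_filter("abg", GREEK_TRANSLIT))  # prints the Greek letters alpha, beta, gamma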