50 research outputs found

    BUGS and UPDATES for the TeX Live CD-ROM (version 6)

    The inaugural meeting of TUG India

    TUG 2000: Call for papers

    The TeX Live Manual, 5th Edition

    This article presents a translation of the TeX Live manual into Slovak.

    The LaTeX Web companion: integrating TEX, HTML, and XML

    Challenging the Myth of Presentation in Digital Editions

    Are the data of an edition a means to a particular and privileged presentation, or is the presentation a side effect? Because of the changing nature of computer systems, with constant progression in hardware and software, the encoded texts (the representation of the knowledge) are the most important long-term outcome of the project, while presentation within a particular application is destined to become obsolete relatively quickly. However, it is most often the presentation output, rather than the source data, which is published and shared.

    We believe this is largely because there is currently no way of expressing, in the source encoding, aspects of presentation which editors see as a crucial part of their work. Given a framework for encoding processing expectations for a variety of output formats, editors would be much more inclined to share the encoded files as their prime output, and intentions for presentation would be much more likely to survive repeated technology transitions as processing tools develop and change.

    We believe the collision between the individuality of research and the quest for common tools that aid in the creation of digital editions will be solved not by creating another piece of specialized publishing software, but rather by creating a general framework for processing TEI documents, together with similar modular solutions for other tasks in the publishing workflow. Such an abstraction layer admittedly still requires some fluency in computer technologies, but far less than setting up a publication system from scratch in a general-purpose programming language.

    Tracr: Compiled Transformers as a Laboratory for Interpretability

    We show how to "compile" human-readable programs into standard decoder-only transformer models. Our compiler, Tracr, generates models with known structure. This structure can be used to design experiments; for example, we use it to study "superposition" in transformers that execute multi-step algorithms. Additionally, the known structure of Tracr-compiled models can serve as ground truth for evaluating interpretability methods. Commonly, because the "programs" learned by transformers are unknown, it is unclear whether an interpretation has succeeded. We demonstrate our approach by implementing and examining programs including computing token frequencies, sorting, and parenthesis checking. We provide an open-source implementation of Tracr at https://github.com/google-deepmind/tracr. Presented at NeurIPS 2023 (Spotlight).
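    The "computing token frequencies" program mentioned in the abstract is the classic RASP "hist" example: for each position, count how many positions in the sequence hold the same token. A minimal plain-Python sketch of that computation is below; the `select` and `selector_width` helpers only mimic RASP-style primitives for illustration and are not Tracr's actual API.

    ```python
    # Illustrative sketch of the RASP-style "hist" (token frequency) program.
    # NOT Tracr's API: select/selector_width are hand-rolled stand-ins for
    # the attention-like primitives such a program would be built from.

    def select(tokens, predicate):
        """For each query position, mark which key positions the
        predicate chooses, based on the pair of tokens."""
        return [[predicate(tq, tk) for tk in tokens] for tq in tokens]

    def selector_width(selector):
        """Count the selected key positions at each query position."""
        return [sum(row) for row in selector]

    def token_frequencies(tokens):
        """For each position, the number of positions holding the same token."""
        same_token = select(tokens, lambda q, k: q == k)
        return selector_width(same_token)

    print(token_frequencies(list("hello")))  # -> [1, 1, 2, 2, 1]
    ```

    A compiler like Tracr turns a program written against such primitives into concrete attention and MLP weights, so the resulting transformer computes the same function by construction.
    
    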