
    Threaded intermediate code


    Theory and practice in the construction of efficient interpreters

    Various characteristics of a programming language, or of the hardware on which it is to be implemented, may make interpretation a more attractive implementation technique than compilation into machine instructions. Many interpretive techniques can be employed; this thesis is mainly concerned with an efficient and flexible technique using a form of interpretive code known as indirect threaded code (ITC). An extended example of its use is given by the Setl-s implementation of Setl, a programming language based on mathematical set theory. The ITC format, in which pointers to system routines are embedded in the code, is described, together with its extension to cope with polymorphic operators. The operand formats and some of the system routines are described in detail to illustrate the effect of the language design on the interpreter. Setl must be compiled into indirect threaded code, and its elaborate syntax demands the use of a sophisticated parser. In Setl-s an LR(1) parser is implemented as a data structure which is interpreted in a way resembling that in which ITC is interpreted at runtime. Qualitative and quantitative aspects of the compiler, interpreter and system as a whole are discussed. The semantics of a language can be defined mathematically using denotational semantics. By setting up a suitable domain structure, it is possible to devise a semantic definition which embodies the essential features of ITC. This definition can be related, on the one hand, to the standard semantics of the language and, on the other, to its implementation as an ITC-based interpreter. This is done for a simple language known as X10. Finally, an indication is given of how this approach could be extended to describe Setl-s, and of the insight gained from such a description. Some possible applications of the theoretical analysis in the building of ITC-based interpreters are suggested.
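
    To make the ITC format concrete, the sketch below shows the general shape of an indirect-threaded inner interpreter in C. It is a minimal illustration under assumptions of mine: the toy word set (lit, add, print, halt), the struct layout and all names are invented for this example and are not taken from the Setl-s implementation. The compiled program is an array of pointers to word records; each record's first cell holds the address of the system routine that implements it, so dispatch goes through two levels of indirection, and literal operands are embedded directly in the code stream.

        #include <stdio.h>
        #include <stdint.h>

        /* A word record: its first cell points to the system routine
         * that implements the word (this is the "indirect" step). */
        typedef struct word {
            void (*code)(struct word *self);
        } word;

        static word **ip;                    /* instruction pointer into the threaded code */
        static int stack[32], *sp = stack;   /* tiny operand stack */

        static void do_lit(word *self)   { (void)self; *sp++ = (int)(intptr_t)*ip++; }
        static void do_add(word *self)   { (void)self; sp -= 1; sp[-1] += sp[0]; }
        static void do_print(word *self) { (void)self; printf("%d\n", *--sp); }
        static void do_halt(word *self)  { (void)self; ip = NULL; }

        static word w_lit = { do_lit }, w_add = { do_add },
                    w_print = { do_print }, w_halt = { do_halt };

        int main(void) {
            /* Threaded code for "print (2 + 3)": pointers to word records,
             * with literal operands embedded in the instruction stream. */
            static word *program[] = {
                &w_lit, (word *)2, &w_lit, (word *)3,
                &w_add, &w_print, &w_halt
            };

            ip = program;
            while (ip) {             /* the inner interpreter */
                word *w = *ip++;     /* fetch the next cell of threaded code */
                w->code(w);          /* dispatch through the word's code field */
            }
            return 0;
        }

    In a direct-threaded scheme the code array would hold the routine addresses themselves; the extra indirection through the word record is what characterises the ITC format described above, and it gives the implementation a natural place to select behaviour at runtime, for instance when handling polymorphic operators.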

    Pistol.


    Performance analysis and optimizations of the ArchC simulators

    Advisors: Edson Borin, Rodolfo Jardim de Azevedo. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação. Abstract: Automatic generation has the great advantage of automating a process, reducing the time spent on that step and avoiding common mistakes. However, what is the advantage of reducing the time of one step if there is the possibility of increasing the time of the remaining steps? In digital circuit design, architecture description languages emerged to make possible tools that automatically generate simulators, compilers and other tools, which are used to evaluate an architecture before any hardware exists. Automatically generated simulators run applications so that the behavior of those applications, and of the architecture being designed, can be examined. However, if the generated simulator is not efficient, the simulation time increases and can exceed the gain achieved by automatic generation, cancelling its benefits. How can the efficiency of the generated simulator be checked in this case? A common option is to compare the generated simulator with existing simulators; the alternative is to write a simulator by hand for comparison. The first requires that the simulators be similar, and the second defeats the purpose of automatic generation. In this context, we developed a methodology for evaluating automatically generated simulators through code profiling. This allowed the identification of performance bottlenecks and, consequently, the development of optimizations in code generation. With these optimizations, we generated a MIPS simulator 1.48 times better. Master's degree in Computer Science; grants 01-P-3951/2011 and 01-P-1965/2012 (CAPES).

    Upgrade of a concatenative programming language interpreter for monitoring of an executing program

    This graduation thesis describes the design, upgrade and testing of computer software that supports monitoring of a running program written in a concatenative programming language. We used the Forth programming language, studied its design, selected an interpreter, upgraded it, created the supporting software, and conducted elementary tests that confirmed that the created software is appropriate. The purpose of the software is to provide an environment for the observation and analysis of decision problems.
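
    The abstract does not give the interpreter internals, but the general idea of monitoring a running Forth-style program can be sketched as follows. This is an illustrative assumption of mine, not the thesis's actual design: a hypothetical trace hook is called by the inner interpreter before each word executes, so an observer can log the word's name and the current stack depth while the program runs.

        #include <stdio.h>

        typedef void (*prim_fn)(void);

        /* A named word in a tiny Forth-like dictionary (illustrative only). */
        typedef struct { const char *name; prim_fn fn; } word;

        static int stack[32], *sp = stack;

        static void w_dup(void)  { *sp = sp[-1]; sp++; }       /* DUP */
        static void w_star(void) { sp--; sp[-1] *= sp[0]; }    /* *   */
        static void w_dot(void)  { printf("%d ", *--sp); }     /* .   */

        /* Monitoring hook: called before every word executes, so the
         * monitoring layer can observe execution without changing it. */
        static void trace(const word *w) {
            fprintf(stderr, "[trace] %s depth=%ld\n", w->name, (long)(sp - stack));
        }

        static void run(const word **prog, int n) {
            for (int i = 0; i < n; i++) {
                trace(prog[i]);   /* the "upgrade": observe before executing */
                prog[i]->fn();    /* then execute the word as usual */
            }
        }

        int main(void) {
            static const word DUP = {"DUP", w_dup}, STAR = {"*", w_star}, DOT = {".", w_dot};
            const word *square[] = { &DUP, &STAR, &DOT };   /* : SQUARE  DUP * . ; */
            *sp++ = 7;                                      /* push 7 */
            run(square, 3);                                 /* prints 49 */
            printf("\n");
            return 0;
        }

    A real monitoring layer would record richer state (return stack, dictionary changes, timing), but the placement of the hook inside the inner interpreter loop is the essential point.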

    The construction of high-performance virtual machines for dynamic languages

    Dynamic languages, such as Python and Ruby, have become more widely used over the past decade. Despite this, the standard virtual machines for these languages have disappointing performance. These virtual machines are slow, not because methods for achieving better performance are unknown, but because their implementation is hard. What makes the implementation of high-performance virtual machines difficult is not that they are large pieces of software, but that there are fundamental and complex interdependencies between their components. In order to work together correctly, the interpreter, just-in-time compiler, garbage collector and library must all conform to the same precise low-level protocols. In this dissertation I describe a method for constructing virtual machines for dynamic languages, and explain how to design a virtual machine toolkit by building it around an abstract machine. The design and implementation of such a toolkit, the Glasgow Virtual Machine Toolkit (GVMT), is described. The GVMT automatically generates a just-in-time compiler, integrates precise garbage collection into the virtual machine, and automatically manages the complex interdependencies between all the virtual machine components. Two different virtual machines have been constructed using the GVMT. One is a minimal implementation of Scheme, implemented in under three weeks to demonstrate that toolkits like the GVMT can enable the easy construction of virtual machines. The second, the HotPy VM for Python, is a high-performance virtual machine; it demonstrates that a virtual machine built with a toolkit can be fast and that the use of a toolkit does not overly constrain the high-level design. Evaluation shows that HotPy outperforms the standard Python interpreter, CPython, by a large margin, and has performance on a par with PyPy, the fastest Python VM currently available.

    Thinking FORTH: a language and philosophy for solving problems

    XIV, 313 p.; 24 cm. Electronic book. Thinking Forth is a book about the philosophy of problem solving and programming style, applied to the unique programming language Forth. First published in 1984, it could be among the timeless classics of computer books, such as Fred Brooks' The Mythical Man-Month and Donald Knuth's The Art of Computer Programming. Many software engineering principles discussed here have been rediscovered in eXtreme Programming, including (re)factoring, modularity, bottom-up and incremental design. Here you'll find all of those and more - such as the value of analysis and design - described in Leo Brodie's down-to-earth, humorous style, with illustrations, code examples, practical real-life applications, illustrative cartoons, and interviews with Forth's inventor, Charles H. Moore, as well as other Forth thinkers. If you program in Forth, this is a must-read book. If you don't, the fundamental concepts are universal: Thinking Forth is meant for anyone interested in writing software to solve problems. The concepts go beyond Forth, but the simple beauty of Forth throws those concepts into stark relief. So flip open the book, and read all about the philosophy of Forth, analysis, decomposition, problem solving, style and conventions, factoring, handling data, and minimizing control structures. But be prepared: you may not be able to put it down. This book has been scanned, OCR'd, typeset in LaTeX, and brought back to print (and your monitor) by a collaborative effort under a Creative Commons license. http://thinking-forth.sourceforge.net/
    Contents: The Philosophy of Forth (An Armchair History of Software Elegance; The Superficiality of Structure; Looking Back, and Forth; Component Programming; Hide From Whom?; Hiding the Construction of Data Structures; But Is It a High-Level Language?; The Language of Design; The Language of Performance; Summary; References). Analysis (The Nine Phases of the Programming Cycle; The Iterative Approach; The Value of Planning; The Limitations of Planning; The Analysis Phase; Defining the Interfaces; Defining the Rules; Defining the Data Structures; Achieving Simplicity; Budgeting and Scheduling; Reviewing the Conceptual Model; References). Preliminary Design/Decomposition (Decomposition by Component; Example: A Tiny Editor; Maintaining a Component-based Application; Designing and Maintaining a Traditional Application; The Interface Component; Decomposition by Sequential Complexity; The Limits of Level Thinking; Summary; For Further Thinking). Detailed Design/Problem Solving (Problem-Solving Techniques; Interview with a Software Inventor; Detailed Design; Forth Syntax; Algorithms and Data Structures; Calculations vs. Data Structures vs. Logic; Solving a Problem: Computing Roman Numerals; Summary; References; For Further Thinking). Implementation: Elements of Forth Style (Listing Organization; Screen Layout; Comment Conventions; Vertical Format vs. Horizontal Format; Choosing Names: The Art; Naming Standards: The Science; More Tips for Readability; Summary; References). Factoring (Factoring Techniques; Factoring Criteria; Compile-Time Factoring; The Iterative Approach in Implementation; References). Handling Data: Stacks and States (The Stylish Stack; The Stylish Return Stack; The Problem With Variables; Local and Global Variables/Initialization; Saving and Restoring a State; Application Stacks; Sharing Components; The State Table; Vectored Execution; Using DOER/MAKE; Summary; References). Minimizing Control Structures (What's So Bad about Control Structures?; How to Eliminate Control Structures; A Note on Tricks; Summary; References; For Further Thinking). Forth's Effect on Thinking. Appendix A: Overview of Forth (For Newcomers); Appendix B: Defining DOER/MAKE; Appendix C: Other Utilities Described in This Book; Appendix D: Answers to "Further Thinking" Problems; Appendix E: Summary of Style Conventions; Index.

    The application of message passing to concurrent programming

    The development of concurrency in computer systems will be critically reviewed and an alternative strategy proposed. This is a programming language designed along semantic principles, and it is based upon the treatment of concurrent processes as values within that language's universe of discourse. An asynchronous polymorphic message system is provided to enable co-existent processes to communicate freely. This is presented as a fundamental language construct, and it is completely general purpose, as all values, however complex, can be passed as messages. Various operations are also built into the language to permit processes to discover and examine one another. These operations permit the development of robust systems, where localised failures can be detected and action taken to recover. The orthogonality of the design is discussed, and its implementation in terms of an incremental compiler and abstract machine interpreter is outlined in some detail. This thesis hopes to demonstrate that message-oriented communication in a highly parallel system of processes is not only a natural form of expression, but is eminently practical, so long as the entities performing the communication are values in the language.
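
    The thesis's own notation is not reproduced in the abstract, but its central mechanism, co-existent processes exchanging asynchronous messages, with processes themselves being ordinary values that can travel inside messages, can be sketched with POSIX threads. Everything below (the process and msg structures, send, receive, and the echo example) is an assumption made for illustration, not the language described in the thesis.

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* A "process value": a thread plus its mailbox.  Because a process
         * is just a value, it can be named inside a message (m->from). */
        typedef struct msg { struct process *from; int payload; struct msg *next; } msg;

        typedef struct process {
            pthread_t thread;
            pthread_mutex_t lock;
            pthread_cond_t nonempty;
            msg *head, *tail;
        } process;

        /* Asynchronous send: enqueue the message and return immediately. */
        static void send(process *to, process *from, int payload) {
            msg *m = malloc(sizeof *m);
            m->from = from; m->payload = payload; m->next = NULL;
            pthread_mutex_lock(&to->lock);
            if (to->tail) to->tail->next = m; else to->head = m;
            to->tail = m;
            pthread_cond_signal(&to->nonempty);
            pthread_mutex_unlock(&to->lock);
        }

        /* Blocking receive: wait until the mailbox holds a message. */
        static msg *receive(process *self) {
            pthread_mutex_lock(&self->lock);
            while (!self->head) pthread_cond_wait(&self->nonempty, &self->lock);
            msg *m = self->head;
            self->head = m->next;
            if (!self->head) self->tail = NULL;
            pthread_mutex_unlock(&self->lock);
            return m;
        }

        static process main_proc, echo_proc;

        /* The echo process replies to whichever process arrived inside the
         * message: the sender's identity travels as an ordinary value. */
        static void *echo(void *arg) {
            process *self = arg;
            msg *m = receive(self);
            send(m->from, self, m->payload + 1);
            free(m);
            return NULL;
        }

        static void init(process *p) {
            p->head = p->tail = NULL;
            pthread_mutex_init(&p->lock, NULL);
            pthread_cond_init(&p->nonempty, NULL);
        }

        int main(void) {
            init(&main_proc); init(&echo_proc);
            pthread_create(&echo_proc.thread, NULL, echo, &echo_proc);
            send(&echo_proc, &main_proc, 41);       /* our own process travels in the message */
            msg *reply = receive(&main_proc);
            printf("reply: %d\n", reply->payload);  /* prints 42 */
            free(reply);
            pthread_join(echo_proc.thread, NULL);
            return 0;
        }

    The payload here is a plain integer for brevity; the language described above goes further by allowing any value, including other processes, to be sent, which is what its polymorphic message system provides.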

    Microcomputer Based Simulation

    Digital simulation is a useful tool in many scientific areas. Interactive simulation can provide the user with a better appreciation of a problem area. With the introduction of large-scale integrated circuits, and in particular the advent of the microprocessor, a large amount of computing power is available at low cost. The aim of this project was therefore to investigate the feasibility of producing a minimum-cost, easy-to-use, interactive digital simulation system. A hardware microcomputer system was constructed to test simulation program concepts, and an interactive program was designed and developed for this system. By the use of a set of commands and subsequent interactive dialogue, the program allows the user to enter and perform simulation tasks. The simulation program is unusual in that it does not require a sophisticated operating system or other system programs such as compilers. The program does not require any backup memory devices such as magnetic disc or tape, and indeed could be stored in ROM or EPROM. The program is designed to be flexible and extendable and could easily be modified to run with a variety of hardware configurations. The highly interactive nature of the system means that its operation requires very little programming experience. The microcomputer hardware system uses two microprocessors together with specially designed interfaces: one is dedicated to the implementation of the simulation equations, and the other provides an input/output capability, including a low-cost CRT display.