Introduction to the Literature on Programming Language Design
This is an introduction to the literature on programming language design and related topics. It is intended to cite the most important work, and to provide a place for students to start a literature search.
A Survey of Parallel Programming Constructs
This paper surveys the types of parallelism found in Functional, Lisp and Object-Oriented languages. In particular, it concentrates on the addition of high-level parallel constructs to these types of languages. The traditional area of the automatic extraction of parallelism by a compiler [39] is ignored here in favor of the addition of new constructs, because the long history of such automatic techniques has shown that they are not sufficient to allow the massive parallelism promised by modern computer architectures [26, 58]. The problem, simply stated, is this: given that it is now possible for us to build massively parallel machines, and given that our current compilers seem incapable of generating sufficient parallelism automatically, what should the language designer do? A reasonable answer seems to be to add constructs to languages that allow the expression of additional parallelism in a natural way. Indeed, that is what the designers of the Functional, Lisp, and Object-Oriented languages described below have attempted to do. These three particular programming formalisms were picked because most of the initial ideas seem to have been generated by the designers of the functional languages, and most of the current activity seems to be in the Lisp and Object-Oriented domains. There is also a great deal of activity in the Logic programming area, but this activity is more in the area of executing the existing constructs in parallel as opposed to adding constructs specifically designed to increase parallelism.
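The distinction the abstract draws, between parallelism a compiler extracts automatically and parallelism the programmer expresses through an explicit construct, can be sketched as follows. This is a minimal, hedged illustration using Rust's standard fork/join primitive; the `square` function and the work items are invented for the example and do not come from the surveyed paper.

```rust
use std::thread;

// An invented unit of work for the example.
fn square(x: u64) -> u64 {
    x * x
}

// An explicit high-level parallel construct (fork/join): the program
// text itself expresses the parallelism, rather than relying on the
// compiler to extract it from a sequential loop.
fn parallel_map(items: Vec<u64>) -> Vec<u64> {
    // Fork: spawn one thread per item.
    let handles: Vec<_> = items
        .into_iter()
        .map(|x| thread::spawn(move || square(x)))
        .collect();
    // Join: wait for each thread and collect results in order.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let result = parallel_map(vec![1, 2, 3, 4]);
    assert_eq!(result, vec![1, 4, 9, 16]);
    println!("{:?}", result);
}
```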
Introduction to the Literature on Semantics
An introduction to the literature on semantics. Included are pointers to the literature on axiomatic semantics, denotational semantics, operational semantics, and type theory.
Objects and polymorphism in system programming languages: a new approach
A low-level data structure always has a predefined representation which does not fit into an object of traditional object-oriented languages, where an explicit type tag denotes its dynamic type. This is the main reason why the advanced features of object-oriented programming cannot be fully used at the lowest level. On the other hand, the hierarchy of low-level data structures is very similar to class-trees, but instead of an explicit tag-field the value of the object determines its dynamic type. Another peculiar requirement in system programming is that some classes have to be polymorphic by-value with their ancestor: objects must fit into the space of a superclass instance. In our paper we show language constructs which enable the system programmer to handle all data structures as objects, and exploit the advantages of object-oriented programming even at the lowest level. Our solution is based on Predicate Dispatching, but adapted to the special needs of system programming. The techniques we show also allow for some classes to be polymorphic by-value with their super. We also describe how to implement these features without losing modularity.
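The two key ideas in this abstract, a dynamic type determined by the object's value rather than a tag field, and subtypes that fit into the space of a superclass instance, can be sketched roughly as below. This is a hedged illustration in the spirit of predicate dispatch, not the paper's actual constructs; `PageEntry` and the bit layout are invented for the example.

```rust
// A low-level descriptor with a fixed, predefined representation.
// There is no separate type tag: the *value* of `bits` determines the
// dynamic type, and every "variant" occupies the same 8-byte space,
// i.e. it is polymorphic by-value with its "super".
#[repr(C)]
#[derive(Clone, Copy)]
struct PageEntry {
    bits: u64,
}

impl PageEntry {
    // Predicate over the value: the low bit decides the dynamic type.
    fn is_present(self) -> bool {
        self.bits & 1 == 1
    }

    // Dispatch on a predicate of the value instead of a tag field.
    fn describe(self) -> &'static str {
        if self.is_present() {
            "present page"
        } else {
            "absent page"
        }
    }
}

fn main() {
    let a = PageEntry { bits: 0x1000 | 1 };
    let b = PageEntry { bits: 0 };
    assert_eq!(a.describe(), "present page");
    assert_eq!(b.describe(), "absent page");
    // Both dynamic types fit into the space of the "superclass" instance.
    assert_eq!(std::mem::size_of::<PageEntry>(), 8);
}
```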
Debugging Trait Errors as Logic Programs
Rust uses traits to define units of shared behavior. Trait constraints build
up an implicit set of first-order hereditary Harrop clauses which is executed
by a powerful logic programming engine in the trait system. But that power
comes at a cost: the number of traits in Rust libraries is increasing, which
puts a growing burden on the trait system to help programmers diagnose errors.
Beyond a certain size of trait constraints, compiler diagnostics fall off the
edge of a complexity cliff, leading to useless error messages. Crate
maintainers have created ad-hoc solutions to diagnose common domain-specific
errors, but the problem of diagnosing trait errors in general is still open. We
propose a trait debugger as a means of getting developers the information
necessary to diagnose trait errors in any domain and at any scale. Our proposed
tool will extract proof trees from the trait solver, and it will interactively
visualize these proof trees to facilitate debugging of trait errors.
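The claim that trait constraints form clauses "executed by a logic programming engine" can be made concrete with a small sketch. The conditional impl below reads as a Horn clause, `Vec<T>: Encode :- T: Encode`, which the trait solver applies recursively; deeply nested obligations of this kind are where, per the abstract, diagnostics degrade. The `Encode` trait and its types are invented for illustration (this is not serde's API).

```rust
trait Encode {
    fn encode(&self) -> String;
}

// Fact: "u32: Encode."
impl Encode for u32 {
    fn encode(&self) -> String {
        self.to_string()
    }
}

// Conditional clause: "Vec<T>: Encode :- T: Encode."
// The trait solver chains applications of this clause like a
// logic-programming engine resolving a goal.
impl<T: Encode> Encode for Vec<T> {
    fn encode(&self) -> String {
        let parts: Vec<String> = self.iter().map(|x| x.encode()).collect();
        format!("[{}]", parts.join(","))
    }
}

fn main() {
    // Solving `Vec<Vec<u32>>: Encode` takes two clause applications,
    // building exactly the kind of proof tree the proposed tool extracts.
    let v: Vec<Vec<u32>> = vec![vec![1, 2], vec![3]];
    assert_eq!(v.encode(), "[[1,2],[3]]");
}
```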
Mutation, Aliasing, Viewpoints, Modular Reasoning, and Weak Behavioral Subtyping
Existing work on behavioral subtyping either ignores aliasing or restricts the behavior of additional methods in a subtype and only allows one to use invariants and history constraints in reasoning. This prevents many useful subtype relationships; for example, a type with immutable objects (e.g., immutable sequences) cannot have a behavioral subtype with mutable objects (e.g., mutable arrays). Furthermore, the associated reasoning principle is not very useful, since one cannot use the pre- and postconditions of methods. Weak behavioral subtyping permits more behavioral subtype relationships, does not restrict the behavior of additional methods in subtypes, and allows the use of pre- and postconditions in reasoning. The only cost is the need to restrict aliases so that objects cannot be manipulated through the view of more than one type.
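The shape of the trade-off, an immutable supertype with a mutable subtype, made sound by forbidding manipulation through more than one typed view at a time, can be loosely sketched as below. This is a hedged analogy, not the paper's formalism: Rust traits are not behavioral subtypes, but its borrow rules enforce a similar single-view restriction. `Seq` and `MutArray` are invented names.

```rust
// An immutable "sequence" interface (the supertype's view).
trait Seq {
    fn len(&self) -> usize;
    fn get(&self, i: usize) -> i32;
}

// A mutable implementor (the "subtype").
struct MutArray {
    data: Vec<i32>,
}

impl Seq for MutArray {
    fn len(&self) -> usize {
        self.data.len()
    }
    fn get(&self, i: usize) -> i32 {
        self.data[i]
    }
}

impl MutArray {
    // An additional mutating method; under weak behavioral subtyping
    // its behavior is not restricted by the immutable supertype.
    fn set(&mut self, i: usize, v: i32) {
        self.data[i] = v;
    }
}

fn main() {
    let mut a = MutArray { data: vec![10, 20] };
    {
        // While this immutable `Seq` view is alive, calling `a.set(..)`
        // would be rejected by the borrow checker: the object cannot be
        // manipulated through a second view at the same time.
        let view: &dyn Seq = &a;
        assert_eq!(view.get(1), 20);
        assert_eq!(view.len(), 2);
    }
    // With the immutable view gone, mutation through the subtype is fine.
    a.set(1, 99);
    assert_eq!(a.get(1), 99);
}
```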
Workload characterization of JVM languages
Although developed with a single language in mind, namely Java, the Java Virtual Machine (JVM) is nowadays targeted by numerous programming languages. Automatic memory management, Just-In-Time (JIT) compilation, and adaptive optimizations provided by the JVM make it an attractive target for different language implementations. Yet even though it is targeted by so many languages, the JVM has been tuned with respect to characteristics of Java programs only: its heuristics for the garbage collector and compiler optimizations are focused on Java programs. In this dissertation, we aim at contributing to the understanding of the workloads imposed on the JVM by both dynamically-typed and statically-typed JVM languages. We introduce a new set of dynamic metrics and an easy-to-use toolchain for collecting them. We apply our toolchain to applications written in six JVM languages: Java, Scala, Clojure, Jython, JRuby, and JavaScript. We identify differences and commonalities between the examined languages and discuss their implications. Moreover, we take a close look at one of the most effective compiler optimizations, method inlining. We present the decision tree of the HotSpot JVM's JIT compiler and analyze how well the JVM performs in inlining the workloads written in different JVM languages.