22 research outputs found
Micro Virtual Machines: A Solid Foundation for Managed Language Implementation
Today new programming languages proliferate, but many of them suffer from poor performance and inscrutable semantics. We assert that the root of many of the performance and semantic problems of today's languages is that language implementation is extremely difficult. This thesis addresses the fundamental challenges of efficiently developing high-level managed languages.
Modern high-level languages provide abstractions over execution, memory management and concurrency. It requires enormous intellectual capability and engineering effort to properly manage these concerns. Lacking such resources, developers usually choose naive implementation approaches in the early stages of language design, a strategy which too often has long-term consequences, hindering the future development of the language. Existing language development platforms have failed to provide the right level of abstraction, and forced implementers to reinvent low-level mechanisms in order to obtain performance.
My thesis is that the introduction of micro virtual machines will allow the development of higher-quality, high-performance managed languages.

The first contribution of this thesis is the design of Mu, with the specification of Mu as the main outcome. Mu is the first micro virtual machine, a robust, performant, and light-weight abstraction over just three concerns: execution, concurrency and garbage collection. Such a foundation attacks three of the most fundamental and challenging issues that face existing language designs and implementations, leaving language implementers free to focus on the higher levels of their language design.
The second contribution is an in-depth analysis of on-stack replacement and its efficient implementation. This low-level mechanism underpins run-time feedback-directed optimisation, which is key to the efficient implementation of dynamic languages.
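The core idea of on-stack replacement can be illustrated in miniature: a profiling interpreter detects a hot loop, captures the live state at a safepoint, and continues in optimised code without restarting the loop. The sketch below is a hypothetical illustration only; the threshold, the state capture, and the compile_loop helper are assumptions of this sketch, not part of Mu's API.

```python
# Hypothetical OSR sketch: none of these names come from the Mu specification.
HOT_THRESHOLD = 1000  # assumed hotness threshold for triggering OSR

def compile_loop(n):
    # Stand-in for a JIT compiler: returns an "optimised" version of the
    # remaining loop iterations, entered with the captured live state.
    def optimised(i, acc):
        while i < n:          # same loop semantics, minus profiling overhead
            acc += i * i
            i += 1
        return acc
    return optimised

def interpreted_sum_of_squares(n):
    i = acc = iterations = 0
    while i < n:
        iterations += 1
        if iterations == HOT_THRESHOLD:
            # OSR point: capture the live state (i, acc) mid-loop and hand
            # control to optimised code instead of restarting the function.
            return compile_loop(n)(i, acc)
        acc += i * i
        i += 1
    return acc

assert interpreted_sum_of_squares(10_000) == sum(k * k for k in range(10_000))
```

The essential point is the state transfer: execution resumes in the optimised code at the exact loop iteration where the interpreter stopped, which is what makes feedback-directed optimisation pay off for long-running loops.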
The third contribution is demonstrating the viability of Mu through RPython, a real-world, non-trivial language implementation. We also conducted preliminary research into GHC as a Mu client.
We have created the Mu specification and its reference implementation, both of which are open-source. We show that Mu's on-stack replacement API can gracefully support dynamic languages such as JavaScript, and that it is implementable on concrete hardware. Our RPython client has been able to translate and execute non-trivial RPython programs, and can run the RPySOM interpreter and the core of the PyPy interpreter.
With micro virtual machines providing a low-level substrate, language developers now have the option to build their next language on a micro virtual machine. We believe that the quality of programming languages will be improved as a result.
TTSS'11 - 5th International Workshop on Harnessing Theories for Tool Support in Software
The aim of the workshop is to bring together practitioners and researchers from academia, industry and government to present and discuss ideas about:
• How to deal with the complexity of software projects by multi-view modeling and separation of concerns about the design of functionality, interaction, concurrency, scheduling, and nonfunctional requirements;
• How to ensure correctness and dependability of software by integrating formal methods and tools for modeling, design, verification and validation into design and development processes and environments; and
• Case studies and experience reports about harnessing static analysis tools such as model checking, theorem proving, and testing, as well as runtime monitoring.
A computer-aided design for digital filter implementation
Tools and Algorithms for the Construction and Analysis of Systems
This open access two-volume set constitutes the proceedings of the 26th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. A total of 60 regular papers presented in these volumes were carefully reviewed and selected from 155 submissions. The papers are organized in topical sections as follows: Part I: Program Verification; SAT and SMT; Timed and Dynamical Systems; Verifying Concurrent Systems; Probabilistic Systems; Model Checking and Reachability; and Timed and Probabilistic Systems. Part II: Bisimulation; Verification and Efficiency; Logic and Proof; Tools and Case Studies; Games and Automata; and SV-COMP 2020.
3rd Many-core Applications Research Community (MARC) Symposium. (KIT Scientific Reports; 7598)
This manuscript includes recent scientific work regarding the Intel Single-chip Cloud Computer and describes novel approaches to programming and run-time organization.
Construction of a support tool for the design of the activity structures based computer system architectures
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University. This thesis is a rapprochement of diverse design concepts, brought to bear upon the computer system engineering problem of identification and control of highly constrained multiprocessing (HCM) computer machines. It contributes to the area of meta/general systems methodology, and brings new insight into the design formalisms and results afforded by bringing together various design concepts that can be used for the construction of highly constrained computer system architectures. A unique point of view is taken by assuming the process of identification and control of HCM computer systems to be the process generated by the Activity Structures Methodology (ASM). Research in ASM emerged from neuroscience research, aiming at providing techniques for combining the diverse knowledge sources that capture the 'deep knowledge' of this application field in an effective, formal, and computer-representable form. To apply the ASM design guidelines in the realm of distributed computer system design, we provide new design definitions for the identification and control of such machines in terms of realisations. These realisation definitions characterise the various classes of the identification and control problem. The classes covered consist of:
1. the identification of the designer activities,
2. the identification and control of the machine's distributed structures of behaviour,
3. the identification and control of the conversational environment activities (i.e. the randomised/adaptive activities and interactions of both the user and the machine environments),
4. the identification and control of the substrata needed for the realisation of the machine, and
5. the identification of the admissible design data, both user-oriented and machine-oriented, that can force the conversational environment to act in a self-regulating manner.
All extant results are considered in this context, allowing the development of both necessary conditions for machine identification, in terms of their distributed behaviours and the substrata structures of the unknown machine, and sufficient conditions, in terms of experiments on the unknown machine, to achieve the self-regulating behaviour.
We provide a detailed description of the design and implementation of the support software tool which can be used for aiding the process of constructing effective HCM computer systems, based on various classes of identification and control. The design data of a highly constrained system, the NUKE, are used to verify the tool logic as well as the various identification and control procedures. Possible extensions as well as future work implied by the results are considered.
Increasing the Performance and Predictability of the Code Execution on an Embedded Java Platform
This thesis explores the execution of object-oriented code on an embedded Java platform. It presents established approaches and derives new ones for the implementation of high-level object-oriented functionality and commonly expected system services. The goal of the developed techniques is the provision of the architectural base for efficient and predictable code execution.
The research vehicle of this thesis is the Java-programmed SHAP platform. It consists of a platform tool chain and the highly customizable SHAP bytecode processor. SHAP offers a fully operational embedded CLDC environment, in which the proposed techniques have been implemented, verified, and evaluated.
Two strands are followed to achieve the goal of this thesis. First of all, the sequential execution of bytecode is optimized through a joint effort of an optimizing offline linker and an on-chip application loader. Additionally, SHAP pioneers a reference coloring mechanism, which enables constant-time interface method dispatch that need not be backed by a large sparse dispatch table.
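The dispatch idea can be sketched in software terms: assign each interface a color (a table slot) such that no class implements two interfaces of the same color, then index a small per-class table by that color. The sketch below illustrates only this general coloring principle, not SHAP's concrete reference-coloring hardware; the class and interface names are hypothetical.

```python
# Sketch of coloring-based constant-time interface dispatch (illustrative
# only; SHAP's actual reference-coloring scheme is a hardware mechanism).

def color_interfaces(classes):
    """Assign each interface a slot ('color') such that no class implements
    two interfaces sharing a color; greedy first-fit assignment."""
    colors = {}
    for ifaces in classes.values():
        for iface in ifaces:
            if iface in colors:
                continue
            # Colors already used by interfaces co-implemented with iface.
            taken = {colors[other]
                     for impl in classes.values() if iface in impl
                     for other in impl if other in colors}
            color = 0
            while color in taken:
                color += 1
            colors[iface] = color
    return colors

# Hypothetical class -> implemented-interfaces mapping.
classes = {
    "ArrayList": ["List", "Iterable"],
    "HashSet":   ["Set", "Iterable"],
}
colors = color_interfaces(classes)

# Per-class dispatch tables indexed by color: lookup is one array access,
# and colors are shared, so tables stay dense rather than large and sparse.
tables = {cls: {colors[i]: f"itable for {i} in {cls}" for i in impl}
          for cls, impl in classes.items()}
print(tables["ArrayList"][colors["Iterable"]])  # constant-time dispatch
```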
Secondly, this thesis explores the implementation of essential system services within designated concurrent hardware modules. This effort is necessary to decouple the computational progress of the user application from the interference induced by time-sharing software implementations of these services. The concrete contributions comprise a spill-free on-chip stack, a predictable method cache, and concurrent garbage collection.
Each approach is described and evaluated after the relevant state of the art has been reviewed. This review is not limited to preceding small embedded approaches but also includes techniques that have proven successful on larger-scale platforms. Conversely, the chances that these platforms may benefit from the techniques developed for SHAP are also discussed.
Implementation and performance aspects of Kahn process networks: an investigation of problem modeling, implementation techniques, and scheduling strategies
To save power and reduce heat, modern processors run at lower frequencies than earlier processors. Manufacturers compensate for the performance loss by packing several cores into one chip, which can then run several programs simultaneously. Although multi-core processors have greater total computing power than earlier processors, most existing programs nevertheless run more slowly than on the older processors. This happens because most programs are written in a way that allows them to exploit only one of the available cores. For a program to exploit multiple cores, it must be rewritten almost from scratch, which is time-consuming and expensive. Not least, developers must learn an entirely new way of thinking. In this work, carried out between 2005 and 2009 at the Department of Informatics and Simula, we investigated how to make it easier to develop parallel programs that use multiple cores. We took as our starting point the mathematical framework of Kahn process networks, which dates from the 1970s, and implemented a library that makes it possible to easily extend existing programs to use multiple cores. Using our library, programs can automatically use all available cores in a machine, without any changes. Our experiments have also shown that adapting existing programs to our library requires minimal changes to existing code.
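A Kahn process network consists of deterministic processes communicating only through unbounded FIFO channels with blocking reads. The sketch below shows the model using plain Python threads and queues; it is not the API of the thesis's library, and the end-of-stream marker is a convention assumed by this sketch.

```python
# Minimal Kahn-process-network sketch: processes are threads communicating
# only through FIFO channels; reads block until data is available.
import threading
from queue import Queue  # unbounded by default, matching KPN channels

def producer(out):
    for i in range(5):
        out.put(i)        # write to the channel
    out.put(None)         # end-of-stream marker (a convention of this sketch)

def doubler(inp, out):
    while (x := inp.get()) is not None:   # blocking read: waits for data
        out.put(2 * x)
    out.put(None)

def consumer(inp):
    while (x := inp.get()) is not None:
        print(x)

a, b = Queue(), Queue()   # FIFO channels connecting the processes
threads = [threading.Thread(target=producer, args=(a,)),
           threading.Thread(target=doubler, args=(a, b)),
           threading.Thread(target=consumer, args=(b,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each process only blocks on reads of its own input channels, the network's output is deterministic regardless of how the scheduler interleaves the threads, which is what makes the KPN model attractive for parallelising existing sequential code.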