154 research outputs found

    Run-time compilation techniques for wireless sensor networks

    No full text
    Wireless sensor network research in the past decade has seen substantial initiative, support and potential. The true adoption and deployment of such technology depends heavily on the workforce available to implement such solutions. However, embedded systems programming for severely resource-constrained devices, such as those used in typical wireless sensor networks (with tens of kilobytes of program space and around ten kilobytes of memory), is a daunting task usually left to experienced embedded developers. Recent initiatives to support higher-level programming abstractions for wireless sensor networks by adopting a Java programming paradigm for resource-constrained devices demonstrate clear development benefits. However, results have shown that an interpreter approach suffers greatly from execution overheads. Run-time compilation techniques are often used in traditional computing to make up for such overheads, but the general consensus in the field is that they are impractical, impossible, overly complex, or too resource hungry for such resource-limited devices. In this thesis, I propose techniques to enable run-time compilation for these severely resource-constrained devices. I show not only that run-time compilation is both practical and possible, using simple techniques that require no more resources than an interpreter, but also that run-time compilation substantially increases execution efficiency compared to an interpreter.
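    One simple run-time compilation technique that fits interpreter-sized resources is template (macro-expansion) compilation, in which each bytecode is replaced by a pre-assembled native snippet so the dispatch loop runs once at translation time rather than on every executed instruction. The Java sketch below is purely illustrative of that idea; the class and method names are invented here, and a real implementation would also patch operands and fix up branch targets.

```java
// Illustrative template-based run-time compiler: each opcode maps to a canned
// native code template that is copied into a linear output buffer.
import java.io.ByteArrayOutputStream;
import java.util.Map;

public class TemplateCompiler {
    // Pre-assembled native snippets, indexed by bytecode opcode (contents are device-specific).
    private final Map<Integer, byte[]> templates;

    public TemplateCompiler(Map<Integer, byte[]> templates) {
        this.templates = templates;
    }

    /** Translate a method's bytecode into one linear native sequence. */
    public byte[] compile(byte[] bytecode) {
        ByteArrayOutputStream nativeCode = new ByteArrayOutputStream();
        for (byte op : bytecode) {
            byte[] template = templates.get(op & 0xFF);
            if (template == null) {
                throw new IllegalArgumentException("Unsupported opcode: " + (op & 0xFF));
            }
            nativeCode.write(template, 0, template.length); // copy the snippet verbatim
        }
        return nativeCode.toByteArray(); // hand off to the device's code memory
    }
}
```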

    A selective dynamic compiler for embedded Java virtual machine targeting ARM processors

    Get PDF
    Honour roll of the Faculté des études supérieures et postdoctorales, 2004-2005. This work presents a new selective dynamic compilation technique targeting ARM 16/32-bit embedded processors. The compiler is built inside the J2ME/CLDC (Java 2 Micro Edition for Connected Limited Device Configuration) platform. The primary objective of our work is to come up with an efficient, lightweight and low-footprint accelerated Java virtual machine ready to be executed on embedded machines. This is achieved by implementing a selective ARM dynamic compiler, called Armed E-Bunny, inside Sun's Kilobyte Virtual Machine (KVM). We first present the Java platform, Java 2 Micro Edition (J2ME) for embedded systems, and the components of the Java virtual machine. Then we discuss the different acceleration techniques for the Java virtual machine and detail the principle of dynamic compilation. After that, we illustrate the architecture, design, implementation and experimental results of our selective dynamic compiler Armed E-Bunny. The modified KVM was ported to a handheld PDA and tested using standard J2ME benchmarks. The experimental results demonstrate a speedup of 360% over the latest version of Sun's KVM, with a footprint overhead that does not exceed 119 kilobytes.
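    Selective compilation of this kind typically relies on lightweight per-method profiling: the interpreter counts invocations and hands a method to the compiler only once it crosses a hotness threshold. The sketch below shows that selection logic in Java; the class name, the threshold value and the placeholder code generator are assumptions for illustration, not details of Armed E-Bunny.

```java
// Minimal sketch of selective compilation driven by per-method invocation counters.
import java.util.HashMap;
import java.util.Map;

public class HotMethodSelector {
    private static final int HOT_THRESHOLD = 1000;   // assumed tuning value
    private final Map<String, Integer> invocationCounts = new HashMap<>();
    private final Map<String, byte[]> nativeCodeCache = new HashMap<>();

    /** Called on every invocation; returns native code once the method becomes hot. */
    public byte[] onInvoke(String methodId, byte[] bytecode) {
        byte[] compiled = nativeCodeCache.get(methodId);
        if (compiled != null) {
            return compiled;                          // already compiled: run native code
        }
        int count = invocationCounts.merge(methodId, 1, Integer::sum);
        if (count >= HOT_THRESHOLD) {
            compiled = compileToNative(bytecode);     // compile only the hot method
            nativeCodeCache.put(methodId, compiled);
            return compiled;
        }
        return null;                                  // stay in the interpreter for cold code
    }

    private byte[] compileToNative(byte[] bytecode) {
        // Placeholder for an ARM code generator; returns a copy for the sketch.
        return bytecode.clone();
    }
}
```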

    Java Dust: How Small Can Embedded Java Be?

    Get PDF
    Java is slowly being accepted as a language and platform for embedded devices. However, the memory requirements of the Java library and runtime are still troublesome. A Java system is considered small when it requires less than 1 MB, yet within the embedded domain small microcontrollers with a few KB of on-chip flash memory and even less on-chip RAM are very common. For such small devices Java is clearly challenging. In this paper we present the combination of the Muvium Java compiler for microcontrollers with the tiny Leros soft core for an FPGA. To the best of our knowledge, the presented embedded Java system is the smallest Java system available. The Leros processor consumes less than 5% of the logic cells of the smallest FPGA from Altera, and the Muvium compiler produces a JVM, including the Java application, that can execute in a few KB of ROM and less than 1 KB of RAM. The Leros processor is available as open source, and the Leros port of Muvium is freely available.

    Proxy compilation for Java via a code migration technique

    Get PDF
    There is an increasing trend for intermediate representations (IRs) to be used to deliver programs in more and more languages, such as Java. Although Java provides many advantages, including wider portability and better optimisation opportunities at execution time, it introduces extra overhead by requiring IR translation during program execution. For maximum execution performance, an optimising compiler is placed in the runtime to selectively optimise code regions regarded as “hotspots”. This common approach has been deployed effectively in many programming language implementations. However, the computational resources demanded by this approach make it less efficient, or even difficult, to deploy directly in a resource-constrained environment. One implementation approach is to use a remote compilation technique to support compilation during execution. The work presented in this dissertation supports the thesis that execution performance can be improved by efficient optimising compilation using a proxy dynamic optimising compiler. After surveying various approaches to the design and implementation of remote compilation, a proxy compilation system called Apus is defined. To demonstrate the effectiveness of using a dynamic optimising compiler as a proxy compiler, a complete proxy compilation system is written on top of a research-oriented Java Virtual Machine (JVM). The proxy compilation system is discussed in detail, showing how remote binaries are delivered and how a cache of binaries is managed using a code migration approach. The proxy compilation client shows how the proxy compilation service is integrated with the selective optimisation system to maximise execution performance. The results of empirical measurements of the system are given, showing the efficiency of code optimisation from either the proxy compilation service or a local binary cache. The conclusion of this work is that Java execution performance can be improved by efficient optimising compilation with a proxy compilation service using a code migration technique.
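    The client side of such a scheme amounts to migrating a hot method's bytecode to a remote compiler, receiving the optimised binary back, and caching it locally for reuse. The Java sketch below illustrates that flow; the host name, port and wire format are assumptions for the sketch, not the Apus protocol.

```java
// Illustrative proxy-compilation client: ship a hot method's bytecode to a
// remote compilation service and cache the returned binary.
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;
import java.util.HashMap;
import java.util.Map;

public class ProxyCompileClient {
    private final Map<String, byte[]> binaryCache = new HashMap<>();

    public byte[] compileRemotely(String methodId, byte[] bytecode) throws IOException {
        byte[] cached = binaryCache.get(methodId);
        if (cached != null) {
            return cached;                            // reuse a previously migrated binary
        }
        try (Socket socket = new Socket("compile-proxy.example", 9000);
             DataOutputStream out = new DataOutputStream(socket.getOutputStream());
             DataInputStream in = new DataInputStream(socket.getInputStream())) {
            out.writeUTF(methodId);                   // identify the method being migrated
            out.writeInt(bytecode.length);
            out.write(bytecode);                      // send the code to the proxy compiler
            out.flush();
            byte[] binary = new byte[in.readInt()];   // receive the compiled binary
            in.readFully(binary);
            binaryCache.put(methodId, binary);        // populate the local binary cache
            return binary;
        }
    }
}
```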

    Java for Cost Effective Embedded Real-Time Software

    Get PDF

    Acceleration and semantic foundations of embedded Java platforms

    Get PDF
    Honour roll of the Faculté des études supérieures et postdoctorales, 2006-200

    ShareJIT: JIT Code Cache Sharing across Processes and Its Practical Implementation

    Get PDF
    Just-in-time (JIT) compilation coupled with code caching is widely used to improve performance in dynamic programming language implementations. These code caches, along with the associated profiling data for the hot code, however, consume significant amounts of memory, and they incur extra JIT compilation time for their creation. On Android, the current standard JIT compiler and its code caches are not shared among processes: the runtime system maintains a private code cache, and its associated data, for each runtime process. However, applications running on the same platform tend to share multiple libraries in common. Sharing cached code across multiple applications and multiple processes can reduce memory use, directly reduce compile time, and also reduce the cumulative amount of time spent interpreting code. All three of these effects can improve actual runtime performance. In this paper, we describe ShareJIT, a global code cache for JITs that can share code across multiple applications and multiple processes. We implemented ShareJIT in the context of the Android Runtime (ART), a widely used, state-of-the-art system. To increase sharing, our implementation constrains the amount of context that the JIT compiler can use to optimize the code. This exposes a fundamental tradeoff: increased specialization to a single process's context decreases the extent to which the compiled code can be shared. In ShareJIT, we limit some optimization to increase shareability. To evaluate ShareJIT, we tested 8 popular Android apps in a total of 30 experiments. ShareJIT improved overall performance by 9% on average, while decreasing memory consumption by 16% on average and JIT compilation time by 37% on average. Comment: OOPSLA 201
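    Conceptually, a cross-process shared code cache keys compiled code by the originating code unit and method rather than by the requesting process, so an identical library method is compiled once and reused everywhere. The Java sketch below shows that idea; the class, key fields and placeholder compiler are invented for illustration, whereas the real ShareJIT lives inside ART's native runtime.

```java
// Conceptual sketch of a cross-process shared JIT code cache keyed by code unit
// and method index, deliberately excluding per-process context to keep entries shareable.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SharedJitCache {
    /** Key that omits process-specific context so compiled code stays shareable. */
    public record MethodKey(long dexChecksum, int methodIndex) {}

    private final Map<MethodKey, byte[]> cache = new ConcurrentHashMap<>();

    /** Return shared compiled code, or compile it once and publish it for other processes. */
    public byte[] lookupOrCompile(MethodKey key, byte[] bytecode) {
        return cache.computeIfAbsent(key, k -> compileWithoutProcessContext(bytecode));
    }

    private byte[] compileWithoutProcessContext(byte[] bytecode) {
        // Placeholder: a real JIT would generate machine code here, restricted to
        // optimizations that do not depend on a single process's runtime state.
        return bytecode.clone();
    }
}
```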