    Learned Garbage Collection

    Several programming languages use garbage collectors (GCs) to automatically manage memory for the programmer. Such collectors must decide when to look for unreachable objects to free, a decision that can have a large performance impact on some applications. In this preliminary work, we propose a design for a learned garbage collector that autonomously learns over time when to perform collections. By using reinforcement learning, our design can incorporate user-defined reward functions, allowing an autonomous garbage collector to learn to optimize the exact metric the user desires (e.g., request latency or queries per second). We conduct an initial experimental study on a prototype, demonstrating that an approach based on tabular Q-learning may be promising.
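    To make the approach concrete, the sketch below implements a plain tabular Q-learning loop for the collect/defer decision. It is only an illustrative sketch: the state discretization (heap-occupancy buckets), the two actions, and the reward value are assumptions made for this example, not the paper's actual design.

        import java.util.Random;

        // Minimal tabular Q-learning sketch for deciding when to trigger a collection.
        // States, actions, and rewards are illustrative assumptions, not the paper's design.
        public class GcQLearner {
            static final int DEFER = 0, COLLECT = 1;

            private final double[][] q;          // q[state][action]
            private final double alpha = 0.1;    // learning rate
            private final double gamma = 0.9;    // discount factor
            private final double epsilon = 0.1;  // exploration rate
            private final Random rng = new Random(42);

            public GcQLearner(int numStates) {
                q = new double[numStates][2];
            }

            // Map heap occupancy in [0, 1] to a discrete state bucket.
            public int state(double heapOccupancy) {
                int s = (int) (heapOccupancy * q.length);
                return Math.min(s, q.length - 1);
            }

            // Epsilon-greedy action selection over the two actions.
            public int chooseAction(int s) {
                if (rng.nextDouble() < epsilon) return rng.nextInt(2);
                return q[s][COLLECT] > q[s][DEFER] ? COLLECT : DEFER;
            }

            // Standard one-step Q-learning update.
            public void update(int s, int a, double reward, int sNext) {
                double best = Math.max(q[sNext][DEFER], q[sNext][COLLECT]);
                q[s][a] += alpha * (reward + gamma * best - q[s][a]);
            }

            public static void main(String[] args) {
                GcQLearner learner = new GcQLearner(10);
                int s = learner.state(0.73);       // heap is 73% full
                int a = learner.chooseAction(s);
                double reward = -5.0;              // e.g. negative request latency since last decision (made up)
                int sNext = learner.state(0.35);   // occupancy observed after the step
                learner.update(s, a, reward, sNext);
                System.out.println("chose " + (a == COLLECT ? "COLLECT" : "DEFER"));
            }
        }

    The reward would come from whatever user-defined metric is being optimized, for example negative request latency or queries per second observed since the previous decision.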

    ACTiCLOUD: Enabling the Next Generation of Cloud Applications

    Despite their proliferation as a dominant computing paradigm, cloud computing systems lack effective mechanisms to manage their vast amounts of resources efficiently. Resources are stranded and fragmented, ultimately limiting the applicability of cloud systems to large classes of critical applications with substantial resource demands. Eliminating the current technological barriers to true fluidity and scalability of cloud resources is essential to strengthen cloud computing's role as a critical cornerstone of the digital economy. ACTiCLOUD proposes a novel cloud architecture that breaks the existing scale-up and share-nothing barriers and enables the holistic management of physical resources both at the local cloud site and at distributed levels. Specifically, it advances the cloud resource management stack by extending state-of-the-art hypervisor technology and localized cloud management systems beyond the physical server boundary, providing holistic resource management within a rack, within a site, and across distributed cloud sites. On top of this, ACTiCLOUD will adapt and optimize system libraries and runtimes (e.g., the JVM) as well as ACTiCLOUD-native applications: extremely demanding, critical classes of applications that currently face severe difficulties in matching their resource requirements to state-of-the-art cloud offerings.

    A Co-Processor Approach for Efficient Java Execution in Embedded Systems

    This thesis deals with a hardware-accelerated Java virtual machine, named REALJava. The REALJava virtual machine is targeted at resource-constrained embedded systems. The goal is to attain increased computational performance with reduced power consumption. While these objectives are often seen as trade-offs, in this context both of them can be attained simultaneously by using dedicated hardware. The initial target for the computational performance of the REALJava virtual machine is to be as fast as the currently available full-custom ASIC Java processors. As a secondary goal, all of the components of the virtual machine are designed so that the resulting system can be scaled to support multiple co-processor cores. The virtual machine is designed using the hardware/software co-design paradigm. The partitioning between the two domains is flexible, allowing customizations to the resulting system; for instance, floating-point support can be omitted from the hardware in order to decrease the size of the co-processor core. The communication between the hardware and software domains is encapsulated into modules. This allows the REALJava virtual machine to be easily integrated into any system, simply by redesigning the communication modules. Besides the virtual machine and the related co-processor architecture, several performance-enhancing techniques are presented. These include techniques related to instruction folding, stack handling, method invocation, constant loading, and control in the time domain. The REALJava virtual machine is prototyped using three different FPGA platforms. The original pipeline structure is modified to suit the FPGA environment. The performance of the resulting Java virtual machine is evaluated against existing Java solutions in the embedded systems field. The results show that the goals are attained, both in terms of computational performance and power consumption. The computational performance in particular is evaluated thoroughly, and the results show that REALJava is more than twice as fast as the fastest full-custom ASIC Java processor. In addition to standard Java virtual machine benchmarks, several new Java applications are designed to both verify the results and broaden the spectrum of the tests.
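    Instruction folding, one of the techniques listed above, merges a short stack-manipulation sequence into a single register-style operation. As a hedged illustration (the exact folding patterns used by REALJava are not described here), the bytecode of a trivial Java method shows a typically foldable sequence:

        public class Fold {
            // javap -c shows this method compiles to the stack sequence:
            //   iload_0    // push a
            //   iload_1    // push b
            //   iadd       // pop both, push a + b
            //   ireturn    // return top of stack
            // A folding front end can recognize the load-load-add-return pattern and
            // issue it as one "add local0, local1 and return" operation instead of
            // four separate stack steps.
            static int add(int a, int b) {
                return a + b;
            }

            public static void main(String[] args) {
                System.out.println(add(2, 3)); // prints 5
            }
        }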

    Improving Energy Consumption Of Java Programs

    Information and communications technology (ICT) accounts for 10% of the world's energy consumption, a share that will keep growing, and for 3% of the overall carbon footprint, which now exceeds the CO2 emissions of the aviation industry. For many years, the focus of optimizing the energy consumption of ICT systems was on hardware. This includes dynamic hardware adaptation techniques such as fine-grain clock gating, power gating, and dynamic voltage/frequency scaling. However, the recent demands of exascale computation, as well as the increasing carbon footprint, require new breakthroughs to make ICT systems more energy-efficient. This is not possible by making the hardware alone energy-efficient. As a result, the focus is now shifting to software. Software is one of the most critical bottlenecks when optimizing the energy consumption of any ICT system. Software energy consumption can be optimized in several ways, such as choosing the most energy-efficient option within a programming language, using an energy-efficient programming language, or choosing energy-efficient compilation options. In this work, we concentrate on energy-efficient language options and command-line options to optimize software energy consumption. Today's programming languages provide software developers with several options to perform the same task. For example, in Java an array can be copied to another array either manually or using library methods. However, not every available option is energy-efficient, and software developers often lack the knowledge to choose the most energy-efficient one. We perform various analyses to determine the best option for different components of the Java programming language. These components include data types, operators, control statements, strings, exceptions, objects, and arrays. Java also has various command-line options that can be used to tune the JVM. These options can significantly affect the energy behavior of Java applications. We conduct a comprehensive study to evaluate the energy efficiency of Java command-line options. We first stabilize the idle energy consumption of two ICT systems and then evaluate the active energy consumption of the SPECjvm2008 benchmarks using different JDKs (OpenJDK and Oracle JDK) and Java command-line options. The Java command-line options include client, server, Xbatch, Xcomp, Xfuture, Xint, Xmixed, Xrs, AggressiveOpts, AggressiveHeap, Inline, AlwaysPreTouch, Xnoclassgc, UseSerialGC, UseParallelGC, UseConcMarkSweepGC, and UseG1GC. Next, we present the Java Energy Profiler and Optimizer (JEPO) tool to help software developers write energy-efficient code. This tool is an Eclipse IDE plugin that provides energy-efficiency suggestions for the Java programming language. It can provide suggestions dynamically while code is being written, or statically to refactor already written code. To provide suggestions, it analyzes each line of a Java file and matches it against a pool of suggestions. JEPO can also help software developers automatically measure energy consumption at method granularity to determine the energy-hungry Java methods in software. We hope our findings and tool can help software developers write energy-efficient code in the future.
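    The array-copy example from the abstract can be made concrete. The sketch below shows the two alternatives a developer chooses between; which one consumes less energy on a given JVM and platform is precisely what has to be measured, so no outcome is implied here.

        import java.util.Arrays;

        // Two functionally equivalent ways to copy an array; their energy cost may differ.
        public class ArrayCopyOptions {
            // Manual element-by-element copy.
            static int[] copyManually(int[] src) {
                int[] dst = new int[src.length];
                for (int i = 0; i < src.length; i++) {
                    dst[i] = src[i];
                }
                return dst;
            }

            // Library copy via java.util.Arrays (backed by System.arraycopy).
            static int[] copyWithLibrary(int[] src) {
                return Arrays.copyOf(src, src.length);
            }

            public static void main(String[] args) {
                int[] data = {1, 2, 3, 4, 5};
                System.out.println(Arrays.equals(copyManually(data), copyWithLibrary(data))); // prints true
            }
        }

    The command-line options studied above are passed as JVM flags when launching a benchmark, for example java -server -XX:+UseParallelGC -jar benchmark.jar.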

    Some Notes on the Past and Future of Lisp-Stat

    Lisp-Stat was originally developed as a framework for experimenting with dynamic graphics in statistics. To support this use, it evolved into a platform for more general statistical computing. The choice of the Lisp language as the basis of the system was in part coincidence and in part a very deliberate decision. This paper describes the background behind the choice of Lisp, as well as the advantages and disadvantages of this choice. The paper then discusses some lessons that can be drawn from experience with Lisp-Stat and with the R language to guide future development of Lisp-Stat, R, and similar systems.

    Run-time compilation techniques for wireless sensor networks

    Wireless sensor networks research in the past decade has seen substantial initiative, support, and potential. The true adoption and deployment of such technology is highly dependent on the workforce available to implement such solutions. However, embedded systems programming for severely resource-constrained devices, such as those used in typical wireless sensor networks (with tens of kilobytes of program space and around ten kilobytes of memory), is a daunting task which is usually left to experienced embedded developers. Recent initiatives to support higher-level programming abstractions for wireless sensor networks by utilizing a Java programming paradigm for resource-constrained devices demonstrate the development benefits achieved. However, results have shown that an interpreter approach greatly suffers from execution overheads. Run-time compilation techniques are often used in traditional computing to make up for such execution overheads. However, the general consensus in the field is that run-time compilation techniques are either impractical, impossible, complex, or too resource-hungry for such resource-limited devices. In this thesis, I propose techniques to enable run-time compilation for such severely resource-constrained devices. Moreover, I show not only that run-time compilation is in fact both practical and possible, using simple techniques which do not require any more resources than an interpreter, but also that run-time compilation substantially increases execution efficiency when compared to an interpreter.
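    The interpreter overhead referred to above can be illustrated with a toy dispatch loop. This is only a generic sketch, unrelated to the thesis's actual virtual machine or compiler: every bytecode pays a fetch-and-dispatch cost inside the loop, which is exactly what a run-time compiler avoids by translating the sequence to native code once.

        // Toy bytecode interpreter showing per-instruction dispatch overhead.
        // Opcodes and encoding are invented for this sketch only.
        public class TinyInterpreter {
            static final int PUSH = 0, ADD = 1, HALT = 2;

            static int run(int[] code) {
                int[] stack = new int[16];
                int sp = 0, pc = 0;
                while (true) {
                    int op = code[pc++];              // fetch
                    switch (op) {                     // decode + dispatch, paid for every instruction
                        case PUSH: stack[sp++] = code[pc++]; break;
                        case ADD:  { int b = stack[--sp]; stack[sp - 1] += b; break; }
                        case HALT: return stack[sp - 1];
                        default: throw new IllegalStateException("bad opcode " + op);
                    }
                }
            }

            public static void main(String[] args) {
                // PUSH 2, PUSH 3, ADD, HALT  ->  prints 5
                System.out.println(run(new int[]{PUSH, 2, PUSH, 3, ADD, HALT}));
            }
        }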

    On Optimizations of Virtual Machine Live Storage Migration for the Cloud

    Virtual Machine (VM) live storage migration is widely performed in the data centers of the Cloud for the purposes of load balancing, reliability, availability, hardware maintenance, and system upgrades. It entails moving all the state information of the VM being migrated, including memory state, network state, and storage state, from one physical server to another within the same data center or across different data centers. To minimize its performance impact, this migration process is required to be transparent to applications running within the migrating VM, meaning that applications will keep running inside the VM as if there were no migration operations at all. In this dissertation, a thorough literature review is conducted to provide a big picture of the VM live storage migration process, its problems, and existing solutions. After an in-depth examination, we observe that severe IO interference between the VM IO threads and migration IO threads exists and causes both types of IO threads to suffer performance degradation. This interference stems from the fact that both types of IO threads share the same critical IO path by reading from and writing to the same shared storage system. Owing to IO resource contention and interference between the two different types of IO requests, not only does the IO request queue in the storage system lengthen, but the time-consuming disk seek operations also become more frequent. Based on this fundamental observation, this dissertation research presents three related but orthogonal solutions that tackle the IO interference problem in order to improve VM live storage migration performance. First, we introduce the Workload-Aware IO Outsourcing scheme, called WAIO, to improve VM live storage migration efficiency. Second, we address this problem by proposing a novel scheme, called SnapMig, to improve VM live storage migration efficiency and eliminate its performance impact on user applications at the source server by effectively leveraging the existing VM snapshots in the backup servers. Third, we propose the IOFollow scheme to improve both VM performance and migration performance simultaneously. Finally, we outline the direction for future research work. Advisor: Hong Jian