
    Static Data Flow Analysis for Realtime Java Runtime Error Detection

    The strict and clear semantics of Java make it an ideal language for static analysis. Nevertheless, the use of program-wide pointer analysis for proving the absence of Java runtime error conditions such as null pointer uses or illegal array indices is still not widespread. Current uses of program-wide pointer analysis focus on extracting information for optimisations in compilers. In that case, imprecise analysis results only in less aggressive optimisation, which is often tolerable. Existing implementations of program-wide data flow analysis either lack the accuracy required to prove the absence of sufficiently many errors or cause an explosion in analysis time and space requirements for non-trivial applications. Low accuracy leads to large numbers of “false positives”, i.e., code for which the analysis fails to prove that a certain error condition does not occur. An explosion in the analysis effort prevents the analysis from producing any useful results within reasonable time. The approach presented in this paper applies the analysis to create a correctness proof of safety-critical Java applications. This includes the absence of runtime errors (null pointer use, division by zero, etc.), the absence of potential deadlocks, the correctness of region-based memory management, and the determination of resource constraints (heap and stack use).
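
    For illustration, a minimal Java sketch (hypothetical, not taken from the paper) of the kind of error conditions such a proof must rule out:

        // Hypothetical fragment: to verify this method, the analysis must
        // establish that items != null, items.length > 0, and
        // items[0] != null on every path reaching the call.
        class RuntimeErrorDemo {
            static int firstLength(String[] items) {
                String first = items[0];   // potential null pointer use / illegal array index
                return first.length();     // potential null pointer use
            }
        }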

    Probabilistic Points-to Analysis for Java

    Probabilistic points-to analysis is an analysis technique for computing the probabilities of the points-to relations in a program. It provides the compiler with optimization opportunities such as speculative dead store elimination, speculative redundancy elimination, and speculative code scheduling. Although several static probabilistic points-to analysis techniques have been developed for the C language, they cannot be applied directly to Java because they do not handle classes, objects, inheritance, and invocations of virtual methods. In this paper, we propose a context-insensitive and flow-sensitive probabilistic points-to analysis for Java (JPPA) for statically predicting the probability of points-to relations at all program points (i.e., points before or after statements) of a Java program. JPPA first constructs an interprocedural control flow graph (ICFG) for a Java program, whose edges are labeled with probabilities calculated by an algorithm based on static branch prediction, and then calculates the probabilistic points-to relations of the program based upon the ICFG. We have also developed a tool called Lukewarm to support JPPA and conducted an experiment comparing JPPA with a traditional context-insensitive and flow-sensitive points-to analysis. The experimental results show that JPPA is a precise and effective probabilistic points-to analysis technique for Java.
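
    As a rough illustration of the idea, the sketch below (hypothetical names, not JPPA's actual API) merges points-to facts at a control flow join, weighting each incoming edge by its statically predicted branch probability:

        import java.util.*;

        // Sketch of the core merge step in a probabilistic points-to analysis:
        // at a control flow join, the points-to facts of the incoming edges are
        // combined, weighted by the predicted branch probabilities.
        class ProbPointsTo {
            // var -> (abstract object -> probability that var points to it)
            static Map<String, Map<String, Double>> merge(
                    Map<String, Map<String, Double>> thenFacts, double pThen,
                    Map<String, Map<String, Double>> elseFacts, double pElse) {
                Map<String, Map<String, Double>> out = new HashMap<>();
                Set<String> vars = new HashSet<>(thenFacts.keySet());
                vars.addAll(elseFacts.keySet());
                for (String v : vars) {
                    Map<String, Double> merged = new HashMap<>();
                    accumulate(merged, thenFacts.get(v), pThen);
                    accumulate(merged, elseFacts.get(v), pElse);
                    out.put(v, merged);
                }
                return out;
            }

            private static void accumulate(Map<String, Double> into,
                                           Map<String, Double> from, double weight) {
                if (from == null) return;
                from.forEach((obj, p) -> into.merge(obj, p * weight, Double::sum));
            }

            public static void main(String[] args) {
                // if (cond) x = new A(); else x = new B();  with P(then) = 0.7
                var thenFacts = Map.of("x", Map.of("A@then", 1.0));
                var elseFacts = Map.of("x", Map.of("B@else", 1.0));
                // prints {x={A@then=0.7, B@else=0.3}} (map order may vary)
                System.out.println(merge(thenFacts, 0.7, elseFacts, 0.3));
            }
        }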

    Transparent pointer compression for linked data structures

    64-bit address spaces are increasingly important for modern applications, but they come at a price: pointers use twice as much memory, reducing the effective cache capacity and memory bandwidth of the system (compared to 32-bit address spaces). This paper presents a sophisticated, automatic transformation that shrinks pointers from 64 bits to 32 bits. The approach is “macroscopic”, i.e., it operates on an entire logical data structure in the program at a time. It allows an individual data structure instance, or even a subset thereof, to grow up to 2^32 bytes in size, and can compress pointers to some data structures but not others. Together, these properties allow efficient use of a large (64-bit) address space. We also describe (but have not implemented) a dynamic version of the technique that can transparently expand the pointers in an individual data structure if it exceeds the 4GB limit. For a collection of pointer-intensive benchmarks, we show that the transformation reduces peak heap sizes substantially (by 20% to 2x) for several of these benchmarks and improves overall performance significantly in some cases.
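
    The transformation itself operates on C/LLVM code, but the underlying idea can be sketched in plain Java (illustrative only): keep a structure's nodes in a dedicated pool and link them with 32-bit indices rather than full 64-bit references.

        import java.util.Arrays;

        // Rough analogue of macroscopic pointer compression: all nodes of one
        // linked structure live in a dedicated pool and refer to each other by
        // 32-bit indices instead of full 64-bit pointers.
        class CompressedList {
            private static final int NIL = -1;   // compressed null pointer
            private long[] value = new long[16];
            private int[] next = new int[16];    // 32-bit "pointer" per node
            private int size = 0, head = NIL;

            int push(long v) {                   // returns the node's compressed pointer
                if (size == next.length) {       // grow the pool on demand
                    value = Arrays.copyOf(value, size * 2);
                    next = Arrays.copyOf(next, size * 2);
                }
                value[size] = v;
                next[size] = head;               // prepend to the list
                head = size;
                return size++;
            }

            long sum() {                         // traversal follows int indices
                long s = 0;
                for (int n = head; n != NIL; n = next[n]) s += value[n];
                return s;
            }
        }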

    Polymorphic versus monomorphic flow-insensitive points-to analysis for C

    When constructing a constraint-based program analysis, the analysis designer must weigh the costs and benefits of many possible design points. Two important tradeoffs are:

    - Is the analysis polymorphic or monomorphic? A polymorphic analysis separates analysis information by call site, while a monomorphic analysis conflates all call sites (illustrated in the sketch after this list). A polymorphic analysis is more precise but also more expensive than a corresponding monomorphic analysis.
    - What is the underlying constraint relation? Possibilities include equalities.
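
    A small hypothetical Java example (not from the paper) makes the first tradeoff concrete:

        import java.util.ArrayList;

        // A monomorphic analysis conflates both call sites of id(), concluding
        // that a and b may each point to either object; a polymorphic analysis
        // keeps the two call sites separate.
        class CallSiteDemo {
            static Object id(Object o) { return o; }

            public static void main(String[] args) {
                Object x = new StringBuilder();  // allocation site S1
                Object y = new ArrayList<>();    // allocation site S2
                Object a = id(x);  // polymorphic: a -> {S1}; monomorphic: a -> {S1, S2}
                Object b = id(y);  // polymorphic: b -> {S2}; monomorphic: b -> {S1, S2}
            }
        }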

    COMRADE: A High Level Compiler for Adaptive Computer Systems

    Traditionally, computer performance is increased by using faster or additional processors. An adaptive computer system (ACS) instead accelerates applications by offloading compute-intensive parts to reconfigurable hardware (HW). Because programming an ACS by hand demands both software (SW) and hardware design expertise and is time-consuming, the COMRADE compiler was developed. COMRADE partitions C source code into a SW part and HW configurations for an ACS: regions suitable for HW acceleration are identified automatically, and the HW configurations are linked with the SW into a runnable whole. COMRADE is easily adapted to new target architectures through new HW module libraries and configuration files, and it supports the full C language, including dusty-deck code. Improved HW/SW partitioning increases the synthesizable HW share of a translated application. Heuristics in COMRADE mitigate the high reconfiguration times of current reconfigurable logic devices by merging several HW parts into a single configuration, eliminating many of the reconfigurations otherwise needed during program execution. The HW is controlled by a newly developed data flow machine that supports speculative execution alongside hardware pipelining: the correct result is selected from several values computed in parallel, and this novel execution model yields a runtime gain over conventional speculative mechanisms. Applying various high-level optimizations had a substantial effect on the size and speed of the generated HW: each individual optimization was shown to be beneficial, and combinations of optimizations were more effective than any single one. The optimizations implemented in COMRADE substantially reduced both HW consumption and the number of memory accesses.
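
    As a toy illustration of the configuration-merging idea (names and heuristic hypothetical, not COMRADE's actual algorithm): hardware kernels, taken in execution order, are packed greedily into shared configurations bounded by the device's logic capacity, so fewer reconfigurations occur at run time.

        import java.util.*;

        // Greedy first-fit packing of HW kernels into configurations.
        class ConfigPacker {
            record Kernel(String name, int area) {}

            static List<List<Kernel>> pack(List<Kernel> inOrder, int capacity) {
                List<List<Kernel>> configs = new ArrayList<>();
                List<Kernel> current = new ArrayList<>();
                int used = 0;
                for (Kernel k : inOrder) {
                    if (used + k.area() > capacity && !current.isEmpty()) {
                        configs.add(current);        // close the full configuration
                        current = new ArrayList<>();
                        used = 0;
                    }
                    current.add(k);
                    used += k.area();
                }
                if (!current.isEmpty()) configs.add(current);
                return configs;
            }

            public static void main(String[] args) {
                List<Kernel> order = List.of(new Kernel("fir", 40),
                        new Kernel("dct", 50), new Kernel("fft", 70));
                // fir and dct share one configuration; fft gets its own:
                // two reconfigurations at run time instead of three.
                System.out.println(pack(order, 100));
            }
        }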