9 research outputs found

    Static dependence analysis for java

    Many applications use sensitive or untrusted data, such as data provided by a user. For this reason, it is useful to be able to discern the flow of such data inside an application, as well as the parts of the application whose execution depends on it. We present a dependence analysis which, given some source instructions in a Java program, computes an overapproximation of the parts of the program whose execution can be influenced by these instructions. The central idea is that user-supplied data should not be able to influence critical parts of the program and that, conversely, sensitive data in the program should not leak information to the user. The analysis was written in the Datalog language and is based on the Doop framework. Specifically, the analysis logic was implemented in about 200 lines of Datalog code, something that demonstrates the capabilities of Doop in the creation of compact and expressive static analyses.
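    The dependence computation described above can be sketched as transitive reachability from the source instructions over a dependence graph. The following Python sketch is a hypothetical analogue of such Datalog-style rules, not the actual Doop implementation; the instruction names and graph are illustrative.

```python
# Hypothetical sketch: the influenced program parts are everything
# reachable from the source instructions along dependence edges.
def influenced(deps, sources):
    """deps maps an instruction to the instructions that depend on it;
    returns the overapproximated set of influenced instructions."""
    seen = set(sources)
    work = list(sources)
    while work:
        node = work.pop()
        for succ in deps.get(node, ()):
            if succ not in seen:
                seen.add(succ)
                work.append(succ)
    return seen

# Example: user input at i1 flows to i2, which controls i3; i4 is unaffected.
deps = {"i1": ["i2"], "i2": ["i3"], "i4": []}
print(sorted(influenced(deps, {"i1"})))  # ['i1', 'i2', 'i3']
```

    In Datalog this is the classic two-rule transitive closure; the worklist loop above computes the same least fixed point imperatively.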

    The Java system dependence graph

    The Program Dependence Graph was introduced by Ottenstein and Ottenstein in 1984 [14]. It was suggested to be a suitable internal program representation for monolithic programs, for the purpose of carrying out certain software engineering operations such as slicing and the computation of program metrics. Since then, Horwitz et al. have introduced its multi-procedural equivalent, the System Dependence Graph [9]. Many authors have proposed object-oriented dependence graph construction approaches [11, 10, 20, 12]. Every approach provides its own benefits, some of which are language-specific. This paper is based on Java and combines the most important benefits from a range of approaches. The result is a Java System Dependence Graph, which summarises the key benefits offered by different approaches and adapts them (if necessary) to the Java language.

    A sound dependency analysis for secure information flow (extended version)

    In this paper we present a flow-sensitive analysis for secure information flow for Java bytecode. Our approach consists of computing, at different program points, a dependency graph which tracks how input values of a method may influence its outputs. This computation subsumes a points-to analysis (reflecting how objects depend on each other) by also addressing dependencies arising from data of primitive type and from the control flow of the program. Our graph construction is proved sound by establishing a non-interference theorem stating that an output value is unrelated to an input one in the dependency graph if the output remains unchanged when the input is modified. In contrast with many type-based information flow techniques, our approach does not require security levels to be known during the computation of the graph: security aspects of information flow are checked by labeling the dependency graph "a posteriori" with security levels.

    Flow-sensitive, context-sensitive, and object-sensitive information flow control based on program dependence graphs

    Information flow control (IFC) checks whether a program can leak secret data to public ports, or whether critical computations can be influenced from outside. But many IFC analyses are imprecise, as they are flow-insensitive, context-insensitive, or object-insensitive, resulting in false alarms. We argue that IFC must better exploit modern program analysis technology, and present an approach based on program dependence graphs (PDG). PDGs have been developed over the last 20 years as a standard device to represent information flow in a program, and today can handle realistic programs. In particular, our dependence graph generator for full Java bytecode is used as the basis for an IFC implementation which is more precise and needs fewer annotations than traditional approaches. We explain PDGs for sequential and multi-threaded programs, and explain precision gains due to flow-, context-, and object-sensitivity. We then augment PDGs with a lattice of security levels and introduce the flow equations for IFC. We describe algorithms for flow computation in detail and prove their correctness. We then extend the flow equations to handle declassification, and prove that our algorithm respects monotonicity of release. Finally, examples demonstrate that our implementation can check realistic sequential programs in full Java bytecode.
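    The lattice-labeling idea can be illustrated with a minimal sketch: propagate security levels along PDG edges under a two-level lattice and flag flows that carry High information into nodes required to be Low. The node names and the tiny graph are hypothetical, assumed only for illustration; this is not the paper's algorithm.

```python
# Illustrative two-level lattice LOW < HIGH. We iterate the flow
# equations to a fixed point, joining levels along PDG edges, then
# report nodes that must stay LOW but received HIGH information.
LOW, HIGH = 0, 1

def check_ifc(edges, source_levels, sink_levels):
    levels = dict(source_levels)
    changed = True
    while changed:  # fixed-point iteration over the flow equations
        changed = False
        for u, v in edges:
            joined = max(levels.get(u, LOW), levels.get(v, LOW))  # lattice join
            if levels.get(v, LOW) < joined:
                levels[v] = joined
                changed = True
    # a violation: a sink annotated LOW ends up carrying HIGH information
    return [n for n, required in sink_levels.items()
            if required == LOW and levels.get(n, LOW) == HIGH]

# Hypothetical PDG: a password flows into a hash, which flows into a log line.
edges = [("password", "hash"), ("hash", "log_msg")]
print(check_ifc(edges, {"password": HIGH}, {"log_msg": LOW}))  # ['log_msg']
```

    Declassification, in this picture, would amount to resetting the level at a designated node (e.g. treating the hash as Low), which is where the monotonicity-of-release result mentioned above becomes relevant.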

    Dynamic optimization through the use of automatic runtime specialization

    Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (leaves 99-115). By John Whaley, S.B. and M.Eng.

    Guided Testing of Concurrent Programs Using Value Schedules

    Testing concurrent programs remains a difficult task due to the non-deterministic nature of concurrent execution. Many approaches have been proposed to tackle the complexity of uncovering potential concurrency bugs. Static analysis tackles the problem by analyzing a concurrent program, looking for situations/patterns that might lead to possible errors during execution. In general, static analysis cannot precisely locate all possible concurrency errors. Dynamic testing examines and controls a program during its execution, also looking for situations/patterns that might lead to possible errors during execution. In general, dynamic testing needs to examine all possible execution paths to detect all errors, which is intractable. Motivated by these observations, a new testing technique is developed that uses a collaboration between static analysis and dynamic testing to find the first potential error while using less time and space. In the new collaboration scheme, static analysis and dynamic testing interact iteratively throughout the testing process. Static analysis provides coarse-grained flow information to guide the dynamic testing through the relevant search space, while dynamic testing collects concrete runtime information during the guided exploration. The concrete runtime information provides feedback to the static analysis to refine its analysis, which is then fed forward to provide more precise guidance of the dynamic testing. The new collaborative technique is able to uncover the first concurrency-related bug in a program faster and using less storage than the state-of-the-art dynamic testing tool Java PathFinder. The implementation of the collaborative technique consists of a static-analysis module based on Soot and a dynamic-analysis module based on Java PathFinder.

    Hybrid analysis of memory references and its application to automatic parallelization

    Executing sequential code in parallel on a multithreaded machine has been an elusive goal of the academic and industrial research communities for many years. It has recently become more important due to the widespread introduction of multicores in PCs. Automatic multithreading has not been achieved because classic, static compiler analysis was not powerful enough and program behavior was found to be, in many cases, input dependent. Speculative thread-level parallelization was a welcome avenue for advancing parallelization coverage, but its performance was not always optimal due to the sometimes unnecessary overhead of checking every dynamic memory reference. In this dissertation we introduce a novel analysis technique, Hybrid Analysis, which unifies static and dynamic memory reference techniques into a seamless compiler framework which extracts almost all available parallelism from scientific codes and incurs close to the minimum necessary run-time overhead. We present how to extract maximum information from the quantities that could not be sufficiently analyzed through static compiler methods, and how to generate sufficient conditions which, when evaluated dynamically, can validate optimizations. Our techniques have been fully implemented in the Polaris compiler and resulted in whole-program speedups on a large number of industry-standard benchmark applications.
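    The hybrid idea — when static analysis cannot prove independence, emit a cheap sufficient condition and evaluate it at run time to choose between a parallel and a sequential schedule — can be sketched as follows. This is a hypothetical Python illustration of the principle, not Polaris-generated code (Polaris targets Fortran); the loop and the predicate are assumed for the example.

```python
# Sketch of hybrid analysis: the compiler cannot statically prove that
# writes a[0:n] and reads a[stride:stride+n] are disjoint, so it emits a
# sufficient condition checked once at run time before dispatching.
def independent(n, stride):
    # Sufficient condition: with stride >= n, the read region never
    # overlaps the written region, so iterations are independent.
    return stride >= n

def run_loop(a, n, stride):
    if independent(n, stride):
        # iterations may run in any order (simulated "parallel" schedule)
        order = reversed(range(n))
    else:
        order = range(n)  # fall back to the sequential schedule
    for i in order:
        a[i] = a[i + stride] + 1
    return a

print(run_loop(list(range(10)), 4, 6))
```

    The point is that the predicate costs one comparison per loop invocation, instead of the per-reference checks that make speculative parallelization expensive.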