424 research outputs found

    Reducing Complexity of Java Source Codes in Structural Testing by Using Program Slicing

    Get PDF
    Structural testing is a software testing technique that exercises the structure of the source code while comparing expected and actual results. In general, structural testing takes a long time and can become infeasible for large programs, yet often only a small portion of the program is relevant; this portion can be extracted by program slicing. Program slicing decomposes a program into smaller units based on the different kinds of dependencies between program statements. Varieties of program slicing include forward slicing, backward slicing, complete slicing, and dynamic and static slicing. In addition, tree slicing is a key technique for slicing and merging different Symbolic Execution (SE) sub-trees under certain conditions. In this paper, we combine the tree slicing technique with Indus and Kaveri, where Indus is a robust framework for analyzing and slicing concurrent Java programs and Kaveri is a feature-rich Eclipse-based GUI front end for Indus slicing. We then present experimental results on reducing the complexity of Java source code
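The backward-slicing idea the abstract relies on can be sketched in a few lines. The statement numbering and dependence edges below are invented for illustration and are unrelated to the Indus/Kaveri implementation.

```python
# Minimal sketch of static backward slicing over a tiny dependence graph.
# The statement IDs and dependencies are hypothetical examples.

def backward_slice(deps, criterion):
    """Return all statements the slicing criterion transitively depends on.

    deps maps a statement ID to the IDs it data- or control-depends on.
    """
    slice_set = set()
    worklist = [criterion]
    while worklist:
        stmt = worklist.pop()
        if stmt in slice_set:
            continue
        slice_set.add(stmt)
        worklist.extend(deps.get(stmt, []))
    return slice_set

# Statements: 1: x = input(); 2: y = 2; 3: z = x + 1; 4: print(z)
deps = {2: [], 3: [1], 4: [3]}
print(sorted(backward_slice(deps, 4)))  # statement 2 is sliced away
```

Testing only the statements in the slice, rather than the whole program, is what reduces the effort of structural testing.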

    Timing Sensitive Dependency Analysis and its Application to Software Security

    Get PDF
    I present new methods for the static analysis of timing-sensitive information flow control in software systems. I apply these methods to the analysis of concurrent Java programs and to the analysis of execution-time side channels in implementations of cryptographic primitives. Information flow control methods aim to restrict the flow of information (e.g., between the different external interfaces of a software component) according to explicit policies; they can therefore be used to enforce both confidentiality and integrity. The goal of sound static program analyses in this setting is to prove that all executions of a given program comply with the associated policies. Such a proof requires a security criterion that formalizes under which conditions this is the case. Every formal security criterion implicitly corresponds to a program model and an attacker model. The simplest noninterference criteria, for example, describe only non-interactive programs, i.e., programs that accept inputs and produce outputs only at the start and end of execution. In the corresponding attacker model, the attacker knows the program but observes or supplies only certain (public) outputs and inputs. A program is noninterferent if the attacker cannot draw any conclusions about the secret inputs and outputs of terminating executions from these observations; from non-terminating executions, however, this model permits the attacker to infer secret inputs. Side channels arise when an attacker can draw conclusions about confidential information from observations of real systems that would be impossible in the formal model.
    Typical side channels (that is, ones left unmodeled by many formal security criteria) include, besides nontermination, the energy consumption and the execution time of programs. If the execution time depends on secret inputs, an attacker can infer those inputs (e.g., the values of individual secret parameters) from the observed execution time. In my dissertation I present new dependency analyses that also account for nontermination and execution-time channels. Regarding nontermination channels, I introduce new algorithms for computing program dependencies. To this end I develop a unifying framework in which both nontermination-sensitive and nontermination-insensitive dependencies arise from mutually dual notions of postdominance. For execution-time channels I develop new notions of dependency and corresponding algorithms for computing them. In two applications I substantiate the thesis that execution-time-sensitive dependencies enable sound static information flow analysis that accounts for execution-time channels. Based on execution-time-sensitive dependencies, I design new analyses for concurrent programs; there, execution-time-sensitive dependencies are relevant even for execution-time-insensitive attacker models, since internal timing channels between different threads can become externally observable. My implementation for concurrent Java programs is based on the program analysis system JOANA. I also present new analyses for execution-time channels caused by micro-architectural dependencies. As a case study I examine implementations of AES256 block encryption. For some implementations, data caches make the execution time depend on the key and the ciphertext, so that both can be inferred from the execution time.
    For other implementations, my automatic static analysis proves (assuming a simple concrete cache micro-architecture) the absence of such channels
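A minimal illustration of the kind of execution-time channel the dissertation analyzes (my own sketch, not code from the thesis): a secret-dependent early exit leaks how many prefix bytes of a guess are correct, while the constant-time variant always scans every byte.

```python
# Illustrative timing channel: the early return makes execution time
# depend on the secret, which a timing-sensitive dependency analysis
# would flag. The function names are hypothetical.

def leaky_equals(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:          # early exit: time reveals matching prefix length
            return False
    return True

def constant_time_equals(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    diff = 0
    for s, g in zip(secret, guess):
        diff |= s ^ g       # always scans every byte, regardless of secret
    return diff == 0
```

Both functions compute the same result, but only the second has an execution time that is independent of how much of the secret the guess matches.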

    Program Tailoring: Slicing by Sequential Criteria

    Get PDF
    Protocol and typestate analyses often report sequences of statements ending at a program point P that needs to be scrutinized, since P may be erroneous or imprecisely analyzed. Program slicing focuses on the behavior at P by computing a slice of the program affecting the values at P. In this paper, we propose to restrict our attention to the subset of that behavior at P affected by one or several statement sequences, called a sequential criterion (SC). By leveraging the ordering information in an SC, e.g., the temporal order in a few valid/invalid API method invocation sequences, we introduce a new technique, program tailoring, to compute a tailored program that comprises the statements on all possible execution paths passing through at least one sequence in the SC in the given order. With a prototype implementation, Tailor, we show why tailoring is practically useful by conducting two case studies on seven large real-world Java applications. For program debugging and understanding, Tailor can complement program slicing by removing SC-irrelevant statements. For program analysis, Tailor can enable a pointer analysis that does not scale to a whole program to perform a more focused, and therefore potentially scalable, analysis of the specific parts containing hard language features such as reflection
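The core of the tailoring idea can be approximated as follows. The names `tailor` and `contains_in_order` are mine, not part of the Tailor tool, and the real technique works on a static abstraction of paths rather than an explicit path enumeration.

```python
# Sketch: keep only statements that lie on some execution path containing
# the sequential criterion (SC) as an ordered subsequence.

def contains_in_order(path, sequence):
    """True if `sequence` occurs in `path` as an ordered subsequence."""
    it = iter(path)
    return all(stmt in it for stmt in sequence)

def tailor(paths, sc):
    """Union of statements on paths matching the sequential criterion."""
    kept = set()
    for path in paths:
        if contains_in_order(path, sc):
            kept.update(path)
    return kept

paths = [
    ["open", "read", "close"],
    ["open", "close", "read"],   # invalid API order: filtered out
]
sc = ["open", "read", "close"]
print(sorted(tailor(paths, sc)))
```

A downstream analysis can then be run on the tailored statement set instead of the whole program.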

    Non-Intrusive Online Timing Analysis of Large Embedded Applications

    Get PDF
    A thorough understanding of the timing behavior of embedded systems software has become very important. With the advent of ever more complex embedded software, e.g. in autonomous driving, the size of this software is growing at a fast pace. Execution time profiles (ETPs) have proven to be a useful way to understand the timing behavior of embedded software. Until now, collecting these ETPs was either limited to small applications or required multiple runs of the same software for calibration. In this contribution, we present a novel method for collecting very high-quality ETPs in a single run of the software, even for large applications
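As a rough sketch of what an ETP represents (an assumed structure for illustration, not the paper's collection method): the execution times observed for one code region during a single run are bucketed into a histogram.

```python
# Hypothetical sketch of an execution time profile (ETP): the frequency
# distribution of observed execution times of one code region.
from collections import Counter

def build_etp(durations_us, bucket_us=10):
    """Bucket observed durations (in microseconds) into a histogram."""
    etp = Counter()
    for d in durations_us:
        etp[(d // bucket_us) * bucket_us] += 1
    return dict(etp)

# Durations observed for one task across its activations in a single run.
samples = [12, 14, 18, 23, 25, 41]
print(build_etp(samples))  # {10: 3, 20: 2, 40: 1}
```

The shape of such a histogram (and in particular rare, long outliers) is what makes ETPs useful for understanding worst-case timing behavior.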

    Dynamic Slicing for Deep Neural Networks

    Full text link
    Program slicing has been widely applied in a variety of software engineering tasks. However, existing program slicing techniques only deal with traditional programs that are constructed from instructions and variables, rather than neural networks that are composed of neurons and synapses. In this paper, we propose NNSlicer, the first approach for slicing deep neural networks based on data flow analysis. Our method characterizes the reaction of each neuron to an input as the difference between its activation on that input and its average activation over the whole dataset. We then quantify the neuron contributions to the slicing criterion by recursively backtracking from the output neurons, and compute the slice as the neurons and synapses with the largest contributions. We demonstrate the usefulness and effectiveness of NNSlicer with three applications: adversarial input detection, model pruning, and selective model protection. In all applications, NNSlicer significantly outperforms baselines that do not rely on data flow analysis
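The contribution measure described above can be sketched as follows. This is a simplification under my own assumptions (one layer, a fixed threshold instead of recursive backtracking), not the NNSlicer implementation.

```python
# Simplified sketch of NNSlicer's reaction measure: a neuron's reaction
# to an input is its activation minus its average activation over the
# profiling dataset; neurons with large |reaction| enter the slice.

def neuron_contributions(activation, dataset_activations):
    n = len(dataset_activations)
    means = [sum(col) / n for col in zip(*dataset_activations)]
    return [a - m for a, m in zip(activation, means)]

def slice_neurons(activation, dataset_activations, threshold=0.5):
    contrib = neuron_contributions(activation, dataset_activations)
    return [i for i, c in enumerate(contrib) if abs(c) > threshold]

dataset = [[0.1, 0.9, 0.5], [0.3, 1.1, 0.5]]   # profiled activations
x       =  [1.2, 1.0, 0.5]                     # activations for one input
print(slice_neurons(x, dataset))  # only neuron 0 reacts strongly
```

In the full technique, this per-neuron measure is propagated backwards from the output neurons so that the slice also selects the synapses connecting high-contribution neurons.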

    Slicing of Concurrent Programs and its Application to Information Flow Control

    Get PDF
    This thesis presents a practical technique for information flow control for concurrent programs with threads and shared-memory communication. The technique guarantees confidentiality of information with respect to a reasonable attacker model and utilizes program dependence graphs (PDGs), a language-independent representation of information flow in a program
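In its simplest form, the PDG-based check reduces to graph reachability: confidentiality is violated if a secret source can reach a public sink along dependence edges. The sketch below is illustrative and far coarser than the actual analysis for concurrent programs.

```python
# Toy PDG check: information may flow wherever a secret source node
# reaches a public sink node along dependence edges. Node names are
# hypothetical.

def reaches(pdg, src, dst):
    """Depth-first reachability over the dependence graph."""
    seen, stack = set(), [src]
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(pdg.get(node, []))
    return False

# Edges point from a statement to the statements that depend on it.
pdg = {"read_secret": ["tmp"], "tmp": ["print_public"], "read_public": []}
print(reaches(pdg, "read_secret", "print_public"))  # True: a potential leak
```

For threads with shared memory, the real analysis must additionally add interference edges between threads, which is exactly what makes slicing concurrent programs hard.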

    Program analysis of temporal memory mismanagement

    Full text link
    In C/C++ programs, the performance benefits of flexible low-level memory access and management come at the cost of language-level support for memory safety and garbage collection. Memory-related programming mistakes are introduced as a result, rendering C/C++ programs prone to memory errors. A common category of programming mistakes is defined by the misplacement of deallocation operations, also known as temporal memory mismanagement, which can generate two types of bugs: (1) use-after-free (UAF) bugs and (2) memory leaks. The former are severe security vulnerabilities that expose programs to both data and control-flow exploits, while the latter are critical performance bugs that compromise software availability and reliability. In the case of UAF bugs, existing solutions that almost exclusively rely on dynamic analysis suffer from limitations, including low code coverage, binary incompatibility, and high overheads. In the case of memory leaks, detection techniques are abundant; however, fixing techniques have been poorly investigated. In this thesis, we present three novel program analysis frameworks to address temporal memory mismanagement in C/C++. First, we introduce Tac, the first static UAF detection framework to combine typestate analysis with machine learning. Tac identifies representative features to train a Support Vector Machine to classify likely true/false UAF candidates, thereby providing guidance for the typestate analysis used to locate bugs with precision. We then present CRed, a pointer analysis-based framework for UAF detection with a novel context-reduction technique and a new demand-driven path-sensitive pointer analysis to boost scalability and precision. A major advantage of CRed is its ability to substantially and soundly reduce the search space without losing bug-finding ability. This is achieved by utilizing must-not-alias information to truncate unnecessary segments of calling contexts. 
Finally, we propose AutoFix, an automated memory leak fixing framework based on value-flow analysis and static instrumentation that can safely and precisely fix all leaks reported by any front-end detector, with negligible overhead. AutoFix tolerates false leak reports with a shadow memory data structure carefully designed to track the allocation and deallocation of potentially leaked memory objects. The contribution of this thesis is threefold. First, we advance the existing state of the art by proposing a series of novel program analysis techniques to address temporal memory mismanagement. Second, corresponding prototype tools are fully implemented in the LLVM compiler framework. Third, an extensive evaluation on open-source C/C++ benchmarks is conducted to validate the effectiveness of the proposed techniques
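The typestate view of temporal memory errors that underlies tools like Tac can be illustrated with a toy automaton over an event trace. This is my sketch of the concept; the thesis's analyses are static and considerably more involved.

```python
# Toy typestate automaton for temporal memory safety: track each pointer
# through live -> freed and flag any use or second free after
# deallocation, plus allocations never freed (leaks).

def check_trace(events):
    """events: (op, ptr) pairs with op in {'alloc', 'use', 'free'}."""
    state, bugs = {}, []
    for op, ptr in events:
        if op == "alloc":
            state[ptr] = "live"
        elif op == "free":
            if state.get(ptr) == "freed":
                bugs.append(("double-free", ptr))
            state[ptr] = "freed"
        elif op == "use":
            if state.get(ptr) == "freed":
                bugs.append(("use-after-free", ptr))
    # anything still live at program exit is a leak
    bugs += [("leak", p) for p, s in state.items() if s == "live"]
    return bugs

trace = [("alloc", "p"), ("free", "p"), ("use", "p"), ("alloc", "q")]
print(check_trace(trace))  # [('use-after-free', 'p'), ('leak', 'q')]
```

A static analysis must conservatively explore all possible traces rather than a single observed one, which is why the thesis combines typestate with pointer analysis and machine-learned filtering.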