
    DAG-Based Attack and Defense Modeling: Don't Miss the Forest for the Attack Trees

    This paper presents the current state of the art on attack and defense modeling approaches that are based on directed acyclic graphs (DAGs). DAGs allow for a hierarchical decomposition of complex scenarios into simple, easily understandable and quantifiable actions. Methods based on threat trees and Bayesian networks are two well-known approaches to security modeling. However, more than 30 DAG-based methodologies exist, each with different features and goals. The objective of this survey is to present a complete overview of graphical attack and defense modeling techniques based on DAGs. This consists of summarizing the existing methodologies, comparing their features, and proposing a taxonomy of the described formalisms. The article also supports the selection of an adequate modeling technique depending on user requirements.
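    As a toy illustration of the hierarchical, quantifiable decomposition described above (not an example drawn from the survey, which covers many distinct formalisms), the following Java sketch evaluates an AND/OR attack tree bottom-up to obtain the minimal attacker cost of reaching the root goal; the node types, action names, and cost figures are all hypothetical:

    // Hypothetical sketch: bottom-up evaluation of an AND/OR attack tree.
    // Leaves carry an attacker cost; AND nodes need all children, OR nodes
    // need any one child. None of these names come from the survey itself.
    import java.util.List;

    abstract class AttackNode {
        abstract double minCost();
    }

    class LeafAction extends AttackNode {
        final String name;
        final double cost;
        LeafAction(String name, double cost) { this.name = name; this.cost = cost; }
        @Override double minCost() { return cost; }
    }

    class AndNode extends AttackNode {      // all sub-goals required: costs add up
        final List<AttackNode> children;
        AndNode(List<AttackNode> children) { this.children = children; }
        @Override double minCost() {
            return children.stream().mapToDouble(AttackNode::minCost).sum();
        }
    }

    class OrNode extends AttackNode {       // any sub-goal suffices: take the cheapest
        final List<AttackNode> children;
        OrNode(List<AttackNode> children) { this.children = children; }
        @Override double minCost() {
            return children.stream().mapToDouble(AttackNode::minCost)
                           .min().orElse(Double.POSITIVE_INFINITY);
        }
    }

    public class AttackTreeDemo {
        public static void main(String[] args) {
            AttackNode root = new OrNode(List.of(
                new LeafAction("bribe administrator", 5000),
                new AndNode(List.of(
                    new LeafAction("phish credentials", 200),
                    new LeafAction("bypass second factor", 1500)))));
            System.out.println("Minimal attack cost: " + root.minCost()); // 1700.0
        }
    }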

    Formalizing and verifying software architectures

    Master's thesis (Master of Science)

    Detection of Lightweight Directory Access Protocol Query Injection Attacks in Web Applications

    The Lightweight Directory Access Protocol (LDAP) is a common protocol used in organizations for directory services. LDAP is popular because of features such as the representation of data objects in hierarchical form, its status as an open standard, and its reliance on TCP/IP, which is necessary for Internet access. However, with LDAP being used in a large number of web applications, different types of LDAP injection attacks are becoming common. The idea behind LDAP injection attacks is to take advantage of an application not validating inputs before using them as part of LDAP queries. An attacker can provide inputs that alter the intended LDAP query structure. LDAP injection attacks can lead to various types of security breaches, including (i) login bypass, (ii) information disclosure, (iii) privilege escalation, and (iv) information alteration. Despite many research efforts focused on traditional SQL injection attacks, most of the proposed techniques cannot be suitably applied to mitigating LDAP injection attacks due to syntactic and semantic differences between LDAP and SQL queries. Many deployed web applications remain vulnerable to LDAP injection attacks, and in particular, little attention has been paid to testing web applications for the presence of LDAP query injection vulnerabilities. The aim of this thesis is twofold. First, it studies the various types of LDAP injection attacks and vulnerabilities reported in the literature, critically examining and evaluating existing injection mitigation techniques on a set of open-source applications reported to be vulnerable to LDAP query injection attacks. Second, it proposes an approach to detect LDAP injection attacks by generating test cases when developing secure web applications. In particular, the thesis focuses on specifying signatures for detecting LDAP injection attack types using the Object Constraint Language (OCL) and evaluates the proposed approach on PHP web applications. We also measure the effectiveness of the generated test cases using a metric named Mutation Score.
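    To make the attack concrete, here is a minimal Java sketch of the classic login-bypass pattern and two common mitigations (the thesis itself evaluates PHP applications, so this is an illustrative transposition; the directory layout and attribute names are hypothetical):

    import javax.naming.NamingEnumeration;
    import javax.naming.NamingException;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    public class LdapAuthExample {

        // VULNERABLE: user input is concatenated into the filter. The input
        //   user = "*)(uid=*))(|(uid=*"
        // rewrites the filter so that it matches every entry, bypassing login.
        static String vulnerableFilter(String user, String pass) {
            return "(&(uid=" + user + ")(userPassword=" + pass + "))";
        }

        // Mitigation 1: escape the RFC 4515 filter metacharacters by hand.
        static String escapeFilterValue(String s) {
            StringBuilder sb = new StringBuilder();
            for (char c : s.toCharArray()) {
                switch (c) {
                    case '\\': sb.append("\\5c"); break;
                    case '*':  sb.append("\\2a"); break;
                    case '(':  sb.append("\\28"); break;
                    case ')':  sb.append("\\29"); break;
                    case '\0': sb.append("\\00"); break;
                    default:   sb.append(c);
                }
            }
            return sb.toString();
        }

        // Mitigation 2: let JNDI encode the values via {n} placeholders.
        static NamingEnumeration<SearchResult> safeSearch(DirContext ctx,
                String user, String pass) throws NamingException {
            return ctx.search("ou=people,dc=example,dc=com",
                    "(&(uid={0})(userPassword={1}))",
                    new Object[] { user, pass },
                    new SearchControls());
        }
    }

    The placeholder form is generally preferable, since escaping is delegated to the library rather than reimplemented in each application.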

    Defect Prediction using Exception Handling Method Call Structures

    The main purpose of exception handling mechanisms is to improve software robustness by handling exceptions when they occur. However, empirical evidence indicates that improper implementation of exception handling code can be a source of faults in software systems, and there is still limited empirical knowledge about the relationship between exception handling code and defects. In this dissertation, we present three case studies investigating the defect density of exception handling code. The results show that in every system under study, the defect density of exception handling code was significantly higher than that of the overall source code and of normal code. The ability to predict the location of faults can help direct quality enhancement efforts to modules that are likely to contain faults; this information can be used to guide test plans, narrow the test space, and improve software quality. We hypothesize that complicated exception handling structure is a predictive factor associated with defects. To the best of our knowledge, no study has addressed the relationship between the attributes of exception handling method call structures and defect occurrence, nor has prior work used these attributes for fault prediction. We extract exception-based software metrics from the structural attributes of exception handling call graphs. To find out whether there are patterns relating exception-based software metrics to fault-proneness, we propose a defect prediction model built on exception handling call structures. We apply the J48 algorithm, the Java implementation of the C4.5 algorithm, to build exception defect prediction models. In two of the three systems under study, the results reveal logical patterns of relationships between most class-level exception metrics and fault-proneness. The accuracy of our prediction models is comparable to the results of defect prediction studies in the literature, although we observed somewhat lower predictive accuracy on a system with a low average number of defects per class.
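    As a sketch of how such a model can be trained, the following Java fragment cross-validates a J48 tree with the Weka toolkit, which ships the J48 implementation named above; the ARFF file and its exception-metric attributes are hypothetical stand-ins for the dissertation's data:

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class ExceptionDefectModel {
        public static void main(String[] args) throws Exception {
            // Hypothetical dataset: one row per class, holding its exception
            // handling call-structure metrics plus a faulty/clean label.
            Instances data = DataSource.read("exception-metrics.arff");
            data.setClassIndex(data.numAttributes() - 1);  // label is the last attribute

            J48 tree = new J48();                // Weka's C4.5 implementation
            tree.setConfidenceFactor(0.25f);     // default C4.5 pruning level

            // 10-fold cross-validation, as is conventional in defect prediction.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(tree, data, 10, new Random(1));
            System.out.println(eval.toSummaryString());
        }
    }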

    On-the-Fly Reduction of Execution Trace Volume for the Analysis of Multimedia Applications on Embedded Systems

    The consumer electronics market is dominated by embedded systems due to their ever-increasing processing power and the large number of functionalities they offer. To provide such features, the architectures of embedded systems have grown in complexity: they rely on several heterogeneous processing units and allow concurrent task execution. This complexity degrades the programmability of embedded system architectures and makes application execution on such systems difficult to understand. The most common approach for analyzing application execution on embedded systems consists in capturing execution traces (sequences of events, such as system call invocations or context switches, generated during application execution). This approach is used in application testing, debugging, and profiling. In some use cases, however, the generated execution traces can become very large, up to several hundred gigabytes. This is the case for endurance and validation tests, which consist in tracing the execution of an application on an embedded system over long periods, from several hours to several days. Current tools and methods for analyzing execution traces are not designed to handle such amounts of data. We propose an approach for monitoring an application's execution by analyzing traces on the fly in order to reduce the volume of the recorded trace. Our approach is based on the characteristics of multimedia applications, which contribute the most to the success of popular devices such as set-top boxes and smartphones. It consists in automatically identifying the suspicious periods of an application's execution in order to record only the parts of the trace that correspond to these periods. The proposed approach comprises two steps: a learning step, which discovers the regular behaviors of an application from its execution trace, and an anomaly detection step, which identifies behaviors deviating from the regular ones. Extensive experiments, performed on synthetic and real-life datasets, show that our approach reduces the trace size by an order of magnitude while maintaining good performance in detecting suspicious behaviors.
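    The following Java sketch conveys the two-step idea under simplifying assumptions (it does not reproduce the thesis's actual algorithms): regular behavior is modeled as the set of event k-grams observed in a reference trace, and during capture a window is recorded only when it ends in a previously unseen k-gram:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class TraceFilter {
        private final int k;
        private final Set<String> regular = new HashSet<>();
        private final Deque<String> window = new ArrayDeque<>();

        TraceFilter(int k) { this.k = k; }

        // Learning step: record every k-gram of the reference trace as regular.
        void learn(List<String> referenceTrace) {
            Deque<String> w = new ArrayDeque<>();
            for (String event : referenceTrace) {
                w.addLast(event);
                if (w.size() > k) w.removeFirst();
                if (w.size() == k) regular.add(String.join(",", w));
            }
        }

        // Detection step: returns true iff this event completes a k-gram never
        // observed during learning, i.e. the current window looks suspicious
        // and this part of the trace should be written to storage.
        boolean isSuspicious(String event) {
            window.addLast(event);
            if (window.size() > k) window.removeFirst();
            return window.size() == k && !regular.contains(String.join(",", window));
        }
    }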

    Concurrency and static analysis

    The thesis describes three contributions developed during my doctoral course, all involving the use and verification of concurrent Java code.

    Binary decision diagrams, or BDDs, are data structures for the representation of Boolean functions, which are of great importance in many fields. BDDs are the state-of-the-art representation for Boolean functions, and virtually all real-world applications use a BDD library to represent and manipulate them. It can be desirable to perform Boolean operations from different threads at the same time; to allow this, the BDD library in use must let threads access BDD data safely, avoiding race conditions. We developed a Java BDD library that is fast in both single-threaded and multi-threaded applications, and we use it in the Julia static program analyzer.

    We defined a sound static analysis that identifies if and where a Java bytecode program lets data flow from tainted user input (including servlet requests) into critical operations that might give rise to injections. Data flow is a prerequisite to injections, but the user of the analysis must later gauge the actual risk of the flow: analysis approximations might lead to false alarms, and proper input validation might make actual flows harmless. Our analysis works by translating Java bytecode into Boolean formulas that express all possible explicit flows of tainted data. The choice of Java bytecode simplifies the semantics and its abstraction (many high-level constructs need not be explicitly considered) and lets us analyze programs whose source code is not available, as is typically the case in industrial contexts that use software developed by third parties, such as banks.

    The standard approach to preventing data races is to follow a locking discipline while accessing shared data: always hold a given lock when accessing a given shared datum. It is all too easy for a programmer to violate the locking discipline, so tools are desirable for formally expressing the discipline and verifying adherence to it. The book Java Concurrency in Practice (JCIP) proposed the @GuardedBy annotation to express a locking discipline. The original @GuardedBy annotation was designed for simple intra-class synchronization policy declaration: @GuardedBy fields and methods are supposed to be accessed only when holding the appropriate lock, referenced by another field, in the body of the class (or this). In simple cases, a quick visual inspection of the class code is sufficient for the programmer to verify the correctness of the synchronization policy. However, when we think more deeply about the meaning of this annotation, and when we try to check and infer it, ambiguities arise. Given these ambiguities in the specification of @GuardedBy, different tools interpret it in different ways; moreover, it does not prevent data races, thus not satisfying its design goals. We provide a formal specification that satisfies its design goals and prevents data races. We have also implemented our specification in the Julia analyzer, which uses abstract interpretation to infer valid @GuardedBy annotations for unannotated programs. It is not the goal of this implementation to detect data races or to guarantee that they do not exist: Julia determines what locking discipline a program uses, without judging whether the discipline is too strict or too lax for a particular purpose.
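    For readers unfamiliar with the annotation, here is a minimal illustration of the @GuardedBy discipline (it assumes the JCIP annotations jar on the classpath; the class is hypothetical and not taken from the thesis):

    import net.jcip.annotations.GuardedBy;

    public class Counter {
        private final Object lock = new Object();

        @GuardedBy("lock")            // 'count' may only be accessed while holding 'lock'
        private long count;

        public void increment() {
            synchronized (lock) {     // discipline respected: the lock is held
                count++;
            }
        }

        public long get() {
            return count;             // VIOLATION: read without holding 'lock';
                                      // the kind of access a checker for the
                                      // formal specification should reject
        }
    }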

    Abstracts on Radio Direction Finding (1899 - 1995)

    The files on this record represent the various databases that originally composed the CD-ROM issue of the "Abstracts on Radio Direction Finding" database, which is now part of the Dudley Knox Library's Abstracts and Selected Full Text Documents on Radio Direction Finding (1899 - 1995) Collection. (See Calhoun record https://calhoun.nps.edu/handle/10945/57364 for further information on this collection and the bibliography.) Because technological obsolescence prevented current and future audiences from accessing the bibliography, DKL exported the various databases contained on the CD-ROM and converted them into the three files on this record:
    1) RDFA_CompleteBibliography_xls.zip: RDFA_CompleteBibliography.xls (metadata for the complete bibliography), RDFA_Glossary.xls (glossary of terms), and RDFA_Biographies.xls (biographies of leading figures), all in Excel 97-2003 Workbook format;
    2) RDFA_CompleteBibliography_csv.zip: RDFA_CompleteBibliography.TXT (metadata for the complete bibliography), RDFA_Glossary.TXT (glossary of terms), and RDFA_Biographies.TXT (biographies of leading figures), all in CSV format;
    3) RDFA_CompleteBibliography.pdf: a human-readable display of the bibliographic data, as a means of double-checking any possible deviations due to conversion.