
    A Praise for Defensive Programming: Leveraging Uncertainty for Effective Malware Mitigation

    A promising avenue for improving the effectiveness of behavioral-based malware detectors is to combine fast traditional machine learning detectors with high-accuracy but time-consuming deep learning models. The main idea is to place software that receives borderline classifications from traditional machine learning methods in an environment where uncertainty is added while the software is analyzed by the more time-consuming deep learning models. The goal of the uncertainty is to rate-limit the actions of potential malware during the time-consuming deep analysis. In this paper, we present a detailed description of the analysis and implementation of CHAMELEON, a framework for realizing this uncertain environment for Linux. CHAMELEON offers two environments for software: (i) standard, for any software identified as benign by conventional machine learning methods, and (ii) uncertain, for software receiving borderline classifications from these methods. The uncertain environment adds obstacles to software execution through random perturbations applied probabilistically to selected system calls. We evaluated CHAMELEON with 113 applications and 100 malware samples for Linux. Our results showed that at a 10% threshold, intrusive and non-intrusive strategies caused approximately 65% of malware to fail to accomplish their tasks, while approximately 30% of the analyzed benign software met with various levels of disruption. With a dynamic, per-system-call threshold, CHAMELEON caused 92% of the malware to fail, and only 10% of the benign software to be disrupted. We also found that I/O-bound software was three times more affected by uncertainty than CPU-bound software. Further, we analyzed the logs of software that crashed under non-intrusive strategies and found that some crashes were due to bugs in the software itself.
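
    The perturbation mechanism can be pictured as a probabilistic interposer on selected system calls. The sketch below is a minimal user-space illustration of that idea, assuming an LD_PRELOAD interposer on write() and a fixed 10% perturbation probability; CHAMELEON itself applies its strategies inside the Linux kernel, and all names and values here are illustrative.

        /* Minimal sketch of probabilistic system-call perturbation in the
         * spirit of CHAMELEON's uncertain environment. Build as a shared
         * library: gcc -shared -fPIC -o perturb.so perturb.c -ldl */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <stdlib.h>
        #include <time.h>
        #include <unistd.h>

        #define PERTURB_THRESHOLD 0.10   /* perturb 10% of intercepted calls */

        static ssize_t (*real_write)(int, const void *, size_t);

        ssize_t write(int fd, const void *buf, size_t count)
        {
            if (!real_write)
                real_write = (ssize_t (*)(int, const void *, size_t))
                             dlsym(RTLD_NEXT, "write");

            if (drand48() < PERTURB_THRESHOLD) {
                /* Non-intrusive strategy: add a random delay (up to ~50 ms)
                 * to rate-limit the caller while deep analysis runs. */
                struct timespec ts = { 0, (rand() % 50) * 1000000L };
                nanosleep(&ts, NULL);
                /* An intrusive strategy might instead truncate the buffer
                 * or return an error code here. */
            }
            return real_write(fd, buf, count);
        }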

    Diagnosys: Automatic Generation of a Debugging Interface to the Linux Kernel

    The Linux kernel does not export a stable, well-defined kernel interface, complicating the development of kernel-level services such as device drivers and file systems. While there does exist a set of functions that are exported to external modules, this set of functions frequently changes, and the functions have implicit, ill-documented preconditions. No specific debugging support is provided. We present Diagnosys, an approach for automatically constructing a debugging interface for the Linux kernel. First, a designated kernel maintainer uses Diagnosys to identify constraints on the use of the exported functions. Based on this information, developers of kernel services can then use Diagnosys to generate a debugging interface specialized to their code. When a service including this interface is tested, it records information about potential problems. This information is preserved following a kernel crash or hang. Our experiments show that the generated debugging interface provides useful log information and incurs a low performance penalty.
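
    Conceptually, the generated interface wraps each exported function used by a service with checks of its inferred usage preconditions and a log entry that survives a crash. The sketch below shows what such a generated wrapper might look like for the kernel's skb_put(); the wrapper name and the debug_log() macro are illustrative assumptions, not Diagnosys's actual output.

        /* Hypothetical Diagnosys-style wrapper: the service calls
         * checked_skb_put() instead of skb_put(), so precondition
         * violations are logged before the kernel can crash. */
        #include <linux/kernel.h>
        #include <linux/skbuff.h>

        /* Stand-in for Diagnosys's crash-resilient log buffer. */
        #define debug_log(fmt, ...) \
                printk(KERN_WARNING "diagnosys: " fmt, ##__VA_ARGS__)

        static inline unsigned char *checked_skb_put(struct sk_buff *skb,
                                                     unsigned int len)
        {
            /* Inferred preconditions: skb must not be NULL and must have
             * enough tail room; violating either crashes the kernel. */
            if (skb == NULL)
                debug_log("skb_put: NULL skb at %s:%d\n", __FILE__, __LINE__);
            else if (skb_tailroom(skb) < (int)len)
                debug_log("skb_put: overflow, need %u have %d\n",
                          len, skb_tailroom(skb));
            return skb_put(skb, len);
        }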

    The Effect of Applying Design of Experiments Techniques to Software Performance Testing

    Effective software performance testing is essential to the development and delivery of quality software products. Many software testing investigations have reported software performance testing improvements, but few have quantitatively validated measurable software testing performance improvements across an aggregate of studies. This study addressed that gap by conducting a meta-analysis to assess the relationship between applying Design of Experiments (DOE) techniques in the software testing process and the reported software performance testing improvements. Software performance testing theories and DOE techniques composed the theoretical framework for this study. Software testing studies (n = 96) were analyzed, where half had DOE techniques applied and the other half did not. Five research hypotheses were tested, where findings were measured in (a) the number of detected defects, (b) the rate of defect detection, (c) the phase in which the defect was detected, (d) the total number of hours it took to complete the testing, and (e) an overall hypothesis which included all measurements for all findings. The data were analyzed by first computing standardized difference in means effect sizes, then through the Z test, the Q test, and the t test in statistical comparisons. Results of the meta-analysis showed that applying DOE techniques in the software testing process improved software performance testing (p < .05). These results have social implications for the software testing industry and software testing professionals, providing another empirically validated testing methodology. Software organizations can use this methodology to differentiate their software testing process, to create more quality products, and to benefit the consumer and society in general.
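
    The central quantity in such a meta-analysis is the standardized difference in means between the studies that applied DOE techniques and those that did not, computed per measurement (e.g., number of detected defects). The sketch below shows that computation as Cohen's d with a pooled standard deviation; the group statistics in main() are made up for illustration and are not the study's data.

        /* Effect size (Cohen's d) for one measurement, comparing studies
         * with and without DOE techniques. Build: gcc -o d d.c -lm */
        #include <math.h>
        #include <stdio.h>

        static double cohens_d(double mean_doe, double sd_doe, int n_doe,
                               double mean_ctl, double sd_ctl, int n_ctl)
        {
            /* Pooled standard deviation across the two groups. */
            double pooled = sqrt(((n_doe - 1) * sd_doe * sd_doe +
                                  (n_ctl - 1) * sd_ctl * sd_ctl) /
                                 (n_doe + n_ctl - 2));
            return (mean_doe - mean_ctl) / pooled;
        }

        int main(void)
        {
            /* Illustrative numbers: mean defects detected per study group. */
            double d = cohens_d(42.0, 8.0, 48, 35.0, 9.0, 48);
            printf("effect size d = %.2f\n", d);  /* positive d favors DOE */
            return 0;
        }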

    Regression testing framework for test cases generation and prioritization

    A regression test is a significant part of software testing. It is used to find the maximum number of faults in software applications. Test Case Prioritization (TCP) is an approach to prioritize and schedule test cases, and it is used to detect faults in the earlier stages of testing. Code coverage is one of the features of a Regression Test (RT) that detects more faults in a software application. However, code coverage and fault detection reduce the performance of existing test case prioritization by consuming a lot of time scanning the entire code. The process of generating test cases plays an important role in the prioritization of test cases. Existing automated generation and prioritization techniques produce insufficient test cases, which causes a low fault detection rate or consumes more computation time to detect more faults. Unified Modelling Language (UML) based test case generation techniques can extract test cases from UML diagrams, covering the maximum part of a module of an application. Therefore, UML-based test case generation can support a test case prioritization technique in finding a greater number of faults with shorter execution time. A multi-objective optimization technique able to handle multiple objectives supports RT in generating a greater number of test cases as well as increasing the fault detection rate, producing a better result. The aim of this research is to develop a framework that detects the maximum number of faults with less execution time, improving the RT. The performance of the RT can be improved by an efficient test case generation and prioritization method based on a multi-objective optimization technique that handles both test cases and the rate of fault detection. This framework consists of two important models: Test Case Generation (TCG) and TCP. The TCG model requires a UML use case diagram to extract test cases; a metaheuristic approach that uses tokens is employed to generate them. TCP then receives the extracted test cases with faults as input to produce the prioritized set of test cases. The proposed research modifies the existing Hill Climbing based TCP by altering its test case swapping feature to detect faults in a reasonable execution time. The proposed framework intends to improve the performance of regression testing by generating and prioritizing test cases in order to find a greater number of faults in an application. Two case studies were conducted in the research to gather Test Cases (TC) and faults for multiple modules. The proposed framework yielded a 92.2% Average Percentage of Fault Detection with less testing time than other artificial intelligence based TCP techniques. The findings proved that the proposed framework produced a sufficient number of test cases and found the maximum number of faults in less time.
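
    The swap-based hill climbing at the core of the prioritization step can be sketched as follows: repeatedly swap adjacent test cases and keep any swap that improves the Average Percentage of Fault Detection (APFD). This is a minimal illustration under a made-up fault matrix, not the thesis's actual algorithm or data.

        /* Adjacent-swap hill climbing that maximizes APFD over a small,
         * made-up fault matrix. Build: gcc -o tcp tcp.c */
        #include <stdio.h>

        #define N_TESTS  5
        #define N_FAULTS 4

        /* covers[t][f] == 1 if test t detects fault f (illustrative). */
        static const int covers[N_TESTS][N_FAULTS] = {
            {1, 0, 0, 0}, {0, 1, 0, 0}, {1, 1, 0, 0},
            {0, 0, 1, 0}, {0, 0, 1, 1},
        };

        static double apfd(const int order[])
        {
            int sum_first = 0;     /* sum of positions of first detection */
            for (int f = 0; f < N_FAULTS; f++)
                for (int pos = 0; pos < N_TESTS; pos++)
                    if (covers[order[pos]][f]) { sum_first += pos + 1; break; }
            return 1.0 - (double)sum_first / (N_TESTS * N_FAULTS)
                       + 1.0 / (2 * N_TESTS);
        }

        int main(void)
        {
            int order[N_TESTS] = {0, 1, 2, 3, 4};
            int improved = 1;
            while (improved) {                 /* climb until no swap helps */
                improved = 0;
                for (int i = 0; i + 1 < N_TESTS; i++) {
                    double before = apfd(order);
                    int tmp = order[i]; order[i] = order[i + 1]; order[i + 1] = tmp;
                    if (apfd(order) > before) {
                        improved = 1;          /* keep the improving swap */
                    } else {
                        tmp = order[i]; order[i] = order[i + 1]; order[i + 1] = tmp;
                    }
                }
            }
            printf("prioritized order:");
            for (int i = 0; i < N_TESTS; i++) printf(" T%d", order[i]);
            printf("  (APFD = %.3f)\n", apfd(order));
            return 0;
        }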

    Techniques for Identifying Elusive Corner-Case Bugs in Systems Software

    Modern software is plagued by elusive corner-case bugs (e.g., security bugs). Because there are no scalable, automated ways of finding them, such bugs can remain hidden until the software is deployed in production. This thesis proposes approaches to solve this problem. First, we present black-box and white-box fault injection mechanisms, which allow developers to test the behavior of their own code in the presence of failures in external components, e.g., in libraries, in the kernel, or in remote nodes of a distributed system. We describe how to make black-box fault injection more efficient by prioritizing tests based on their estimated impact. For white-box testing, we propose and implement a technique to find Trojan messages in distributed systems, i.e., messages that are accepted as valid by receiver nodes, yet cannot be sent by any correct sender node. We show that Trojan messages can lead to subtle semantic bugs. We used fault injection techniques to find new bugs in systems such as the MySQL database, the Apache HTTP server, the FSP file service protocol suite, and the PBFT Byzantine-fault-tolerant replication library. Testing can find bugs and build confidence in the correctness of a system. However, exhaustive testing is often infeasible, and therefore testing may not discover all bugs before a system is deployed. In the second part of this thesis, we describe how to automatically harden production systems, reducing the impact of any corner-case bugs missed by testing. We present a framework that reduces the overhead cost of instrumentation tools such as memory error detectors. Lowering the cost enables system developers to use such tools in production to harden their systems, reducing the impact of any remaining corner-case bugs. We used our framework to generate a version of the Linux kernel hardened with Address Sanitizer. Our hardened kernel has most of the benefit of full instrumentation: it detects the same vulnerabilities as full instrumentation (7 out of 11 privilege escalation exploits from 2013-2014 can be detected using instrumentation tools). Yet, it obtains these benefits at only a quarter of the overhead.

    Contributions for improving debugging of kernel-level services in a monolithic operating system

    Despite the existence of an overwhelming amount of research on the quality of system software, operating systems are still plagued by reliability issues mainly caused by defects in kernel-level services such as device drivers and file systems. Studies have shown that each release of the Linux kernel contains between 600 and 700 faults, and that the propensity of device drivers to contain errors is up to seven times higher than any other part of the kernel. These numbers suggest that kernel-level service code is not sufficiently tested and that many faults remain unnoticed or are hard to fix by non-expert programmers, who account for the majority of service developers. This thesis proposes a new approach to the debugging and testing of kernel-level services, focused on the interaction between the services and the core kernel. The approach tackles the issue of safety holes in the implementation of kernel API functions. For Linux, we have instantiated the Diagnosys automated approach, which relies on static analysis of kernel code to identify, categorize, and expose the different safety holes of API functions that can turn into runtime faults when the functions are used in service code by developers with limited knowledge of the intricacies of kernel code. To illustrate our approach, we have implemented Diagnosys for Linux 2.6.32 and shown its benefits in supporting developers in their testing and debugging tasks.

    Efficient Testing of Recovery Code Using Fault Injection

    A critical part of developing a reliable software system is testing its recovery code. This code is traditionally difficult to test in the lab, and, in the field, it rarely gets to run; yet, when it does run, it must execute flawlessly in order to recover the system from failure. In this article, we present a library-level fault injection engine that enables the productive use of fault injection for software testing. We describe automated techniques for reliably identifying errors that applications may encounter when interacting with their environment, for automatically identifying high-value injection targets in program binaries, and for producing efficient injection test scenarios. We present a framework for writing precise triggers that inject desired faults, in the form of error return codes and corresponding side effects, at the boundary between applications and libraries. These techniques are embodied in LFI, a new fault injection engine that we are distributing at http://lfi.epfl.ch. This article includes a report of our initial experience using LFI. Most notably, LFI found 12 serious, previously unreported bugs in the MySQL database server, the Git version control system, the BIND name server, the Pidgin IM client, and the PBFT replication system, with no developer assistance and no access to source code. LFI also increased recovery-code coverage from virtually zero up to 60%, entirely automatically, without requiring new tests or human involvement.
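
    The boundary-level injection LFI performs can be pictured as an interposer that returns a realistic error code, together with its side effects, when a trigger fires. Below is a minimal sketch of that idea, assuming an LD_PRELOAD interposer on fopen() and a simple call-count trigger; it is not LFI's actual trigger mechanism or engine.

        /* Inject a failure at the application/library boundary: the third
         * fopen() call fails with EMFILE, exercising the caller's recovery
         * code. Build: gcc -shared -fPIC -o inject.so inject.c -ldl */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <errno.h>
        #include <stdio.h>

        static FILE *(*real_fopen)(const char *, const char *);
        static int calls;

        FILE *fopen(const char *path, const char *mode)
        {
            if (!real_fopen)
                real_fopen = (FILE *(*)(const char *, const char *))
                             dlsym(RTLD_NEXT, "fopen");

            if (++calls == 3) {     /* trigger: fail on the third call only */
                errno = EMFILE;     /* error return code plus errno side effect */
                return NULL;
            }
            return real_fopen(path, mode);
        }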