8 research outputs found

    Reducing the Software Risk in Ground Systems

    Presentation providing an overview of software's role in ground systems, why the security of that software matters, and how it can be improved

    Information Leakage Analysis of Complex C Code and Its application to OpenSSL

    This research was supported by EPSRC grant EP/K032011/

    Development of a static analysis tool to find security vulnerabilities in Java applications

    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2010. Includes bibliographical references (leaves 57-60). Text in English; abstract in Turkish and English. ix, 77 leaves.

    The scope of this thesis is to enhance a static analysis tool in order to find security vulnerabilities in Java applications, contributing to the removal of some existing limitations in checking Java source code. Commonly used static analysis tools include FindBugs, Jlint, PMD, ESC/Java2, and Checkstyle. This study builds on the PMD static analysis tool, which already detects possible bugs (empty try/catch/finally/switch statements), dead code (unused local variables, parameters, and private methods), suboptimal code (wasteful String/StringBuffer usage), overcomplicated expressions (unnecessary if statements, for loops that could be while loops), and duplicate code (copied/pasted code means copied/pasted bugs). On the other hand, checks for possible unexpected exceptions, lengths that may be less than zero, division by zero, streams not closed on all paths, and classes that should be static inner classes were not implemented by PMD. PMD performs syntactic checks and dataflow analysis on program source code. In addition to detecting some clearly erroneous code, many of the "bugs" PMD looks for are stylistic conventions whose violation might be suspicious under some circumstances. For example, a try statement with an empty catch block might indicate that the caught error is incorrectly discarded. Because PMD includes many detectors for bugs that depend on programming style, it supports selecting which detectors or groups of detectors should be run. While PMD's main structure was preserved, boundary overflow vulnerability rules were added to PMD.
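
    To make these rule categories concrete, here is a small illustrative Java fragment (hypothetical, not from the thesis) containing patterns that PMD-style checks flag: an empty catch block, an unused private method, and an off-by-one array access of the boundary-overflow kind the thesis adds rules for.

        import java.io.FileReader;
        import java.io.IOException;

        public class PmdExamples {

            // Possible bug: the caught exception is silently discarded
            // (PMD's empty-catch-block style of rule fires here).
            static void readConfig(String path) {
                try {
                    new FileReader(path).close();
                } catch (IOException e) {
                    // empty catch block
                }
            }

            // Dead code: private method never called from this class.
            private int unusedHelper(int x) {
                return x * 2;
            }

            // Boundary overflow: the loop runs one step past the last valid index.
            static int sum(int[] values) {
                int total = 0;
                for (int i = 0; i <= values.length; i++) { // should be i < values.length
                    total += values[i]; // throws ArrayIndexOutOfBoundsException at i == values.length
                }
                return total;
            }
        }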

    Towards Better Static Analysis Security Testing Methodologies

    Software vulnerabilities have been a significant attack surface used in cyberattacks, which have been escalating recently. Software vulnerabilities have caused substantial damage, and thus there are many techniques to guard against them. Nevertheless, detecting and eliminating software vulnerabilities in the source code is the best and most effective solution in terms of protection and cost. Static Analysis Security Testing (SAST) tools spot vulnerabilities and help programmers remove them. The fundamental problem is that modern software continues to evolve and shift, making detecting vulnerabilities more difficult. Hence, this thesis takes a step toward highlighting the features SAST tools need in order to address vulnerabilities in modern software. The thesis's end goal is to introduce SAST methods and tools that detect the dominant type of software vulnerabilities in modern software.

    The investigation first focuses on state-of-the-art SAST tools when working with large-scale modern software. The research examines how different state-of-the-art SAST tools react to different types of warnings over time, and measures the tools' precision on different types of warnings. The study's presumption is that SAST tools' precision can be obtained by studying real-world projects' history and the warnings the tools generated over time. The empirical analysis then takes a further step and looks at the problem from a different angle, starting from real-world vulnerabilities detected by individuals and published in well-known vulnerability databases. Android application vulnerabilities are used as an example of modern software vulnerabilities. This study aims to measure the recall of SAST tools when they work with modern software vulnerabilities and to understand how software vulnerabilities manifest in the real world. We find that buffer errors, which belong to the input validation and representation class of vulnerabilities, dominate modern software. We also find that the studied state-of-the-art SAST tools failed to identify real-world vulnerabilities.

    To address the issue of detecting vulnerabilities in modern software, we introduce two methodologies. The first is a coarse-grained method that helps taint static analysis tackle two aspects of the complexity of modern software: a single vulnerability can be scattered across different languages in one application, making the analysis harder to achieve, and the number of sources and sinks is high and increasing over time, making it hard for taint analysis to cover them all. We implement the proposed methodology in a tool called Source Sink (SoS) that filters out the source and sink pairs that do not have feasible paths. The second, fine-grained methodology focuses on discovering buffer errors that occur in modern software. The method performs taint analysis to examine the reachability between sources and sinks and looks for "validators" that validate the untrusted input. We implemented this methodology in a tool called Buffer Error Finder (BEFinder).
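
    As a rough sketch of the source/sink/"validator" idea (illustrative only; the SoS and BEFinder tools are not shown, and the method names here are hypothetical), a taint analysis treats untrusted input as a source, a bounds-sensitive operation as a sink, and a range check as the validator that breaks the vulnerable path:

        public class TaintSketch {

            static final int[] TABLE = new int[16];

            // Source: attacker-controlled input (e.g., a request parameter
            // or an Android intent extra).
            static int untrustedIndex(String raw) {
                return Integer.parseInt(raw);
            }

            // Vulnerable: the tainted value reaches the array-index sink with
            // no validator, so taint analysis reports a feasible path.
            static int lookupUnsafe(String raw) {
                int i = untrustedIndex(raw);
                return TABLE[i];
            }

            // Safe: the bounds check acts as the validator; a validator-aware
            // analysis like the one described above can suppress the warning.
            static int lookupSafe(String raw) {
                int i = untrustedIndex(raw);
                if (i < 0 || i >= TABLE.length) {
                    throw new IllegalArgumentException("index out of range: " + i);
                }
                return TABLE[i];
            }
        }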

    Static Analysis in Practice

    Static analysis tools search software looking for defects that may cause an application to deviate from its intended behavior. These include defects that compute incorrect values, cause runtime exceptions or crashes, expose applications to security vulnerabilities, or lead to performance degradation. In an ideal world, the analysis would precisely identify all possible defects. In reality, it is not always possible to infer the intent of a software component or code fragment, and static analysis tools sometimes output spurious warnings or miss important bugs. As a result, tool makers and researchers focus on developing heuristics and techniques to improve speed and accuracy.

    But, in practice, speed and accuracy are not sufficient to maximize the value received by software makers using static analysis. Software engineering teams need to make static analysis an effective part of their regular process. In this dissertation, I examine the ways static analysis is used in practice by commercial and open source users. I observe that effectiveness is hampered not only by false warnings, but also by true defects that do not affect software behavior in practice. Indeed, mature production systems are often littered with true defects that do not prevent them from functioning mostly correctly. To understand why this occurs, observe that developers inadvertently create both important and unimportant defects when they write software, but most quality assurance activities are directed at finding the important ones. By the time the system is mature, there may still be a few consequential defects that can be found by static analysis, but they are drowned out by the many true but low-impact defects that were never fixed. An exception to this rule is certain classes of subtle security, performance, or concurrency defects that are hard to detect without static analysis.

    Software teams can use static analysis to find defects very early in the process, when they are cheapest to fix, and in so doing increase the effectiveness of later quality assurance activities. But this effort comes with costs that must be managed to ensure static analysis is worthwhile. The cost effectiveness of static analysis also depends on the nature of the defect being sought, the nature of the application, the infrastructure supporting the tools, and the policies governing their use.

    Through this research, I interact with real users through surveys, interviews, lab studies, and community-wide reviews, to discover their perspectives and experiences, and to understand the costs and challenges incurred when adopting static analysis tools. I also analyze the defects found in real systems and make observations about which ones are fixed, why some seemingly serious defects persist, and what considerations static analysis tools and software teams should make to increase effectiveness. Ultimately, my interaction with real users confirms that static analysis is well received and useful in practice, but the right environment is needed to maximize its return on investment.
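
    A hypothetical Java fragment of the kind of "true but low-impact" defect described above: the warning a static analyzer would raise is correct, yet fixing it would not change the program's observable behavior.

        import java.util.logging.Logger;

        public class LowImpactDefect {

            private static final Logger LOG = Logger.getLogger("app");

            static int parsePort(String raw) {
                int port = 8080; // dead store: overwritten on every path below
                try {
                    port = Integer.parseInt(raw);
                } catch (NumberFormatException e) {
                    port = 8080; // the defect is real but harmless: the default is restored
                }
                LOG.fine("using port " + port);
                return port;
            }
        }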

    Análise estática de segurança de código-fonte: abordagem para avaliação de ferramentas (Static security analysis of source code: an approach for evaluating tools)

    Undergraduate monograph--Universidade de Brasília, Faculdade UnB Gama, Software Engineering program, 2015.

    One technique to reduce the number of vulnerabilities present in software is the detection and resolution of flaws in its source code. The detection of these flaws can be automated with source code security static analysis tools. This work presents an approach for evaluating source code security static analysis tools, from the selection of the tools to be evaluated to the presentation of the evaluation results. It then presents usage examples of the proposed approach: the ability of some Free (as in freedom) static analyzers to detect infinite loops is evaluated, as is the soundness of the analysis of a tool that claims to be sound.
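
    As an illustration of one evaluation target (a hypothetical example, not taken from the monograph), the infinite-loop checks ask whether a tool can prove that a loop condition never becomes false:

        public class LoopExamples {

            // Infinite: the loop variable is never modified in the body, so
            // i < n stays true forever for n > 0; a sound analyzer can flag this.
            static int spin(int n) {
                int i = 0;
                int total = 0;
                while (i < n) {
                    total += i; // missing i++ makes this loop non-terminating
                }
                return total;
            }

            // Terminating variant that a precise tool should not flag.
            static int sum(int n) {
                int total = 0;
                for (int i = 0; i < n; i++) {
                    total += i;
                }
                return total;
            }
        }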

    Tools and Experiments for Software Security

    The computer security problems that we face begin in the computer programs that we write. The exploitation of vulnerabilities that leads to the theft of private information and other nefarious activities often begins with a vulnerability accidentally created in a computer program by that program's author. What are the factors that lead to the creation of these vulnerabilities? Software development and programming is in part a synthetic activity that we can control with technology, i.e., different programming languages and software development tools. Does changing the technology used to program software help programmers write more secure code? Can we create technology that will help programmers make fewer mistakes? This dissertation examines these questions.

    We start with the Build It Break It Fix It project, a security-focused programming competition. This project provides data on software security problems by allowing contestants to write security-focused software in any programming language. We discover that using C leads to memory safety issues that can compromise security. Next, we consider making C safer. We develop and examine the Checked C programming language, a strict superset of C that adds types for spatial safety. We also introduce an automatic rewriting tool that can convert C code into Checked C code. We evaluate the approach on benchmarks used by prior work on making C safer.

    We then consider static analysis. After an examination of different parameters of numeric static analyzers, we develop a disjunctive abstract domain that uses a novel merge heuristic based on a notion of volumetric difference, either approximated via MCMC sampling or precisely computed via conical decomposition. This domain is implemented in a static analyzer for C programs and evaluated.

    After static analysis, we consider fuzzing. We consider what it takes to perform a good evaluation of a fuzzing technique, with our own experiments and a review of recent fuzzing papers. We develop a checklist for conducting new fuzzing research and a general strategy for identifying root causes of failures found during fuzzing. We evaluate new root cause analysis approaches using coverage information as inputs to statistical clustering algorithms.
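
    To illustrate why the disjunctive numeric domain mentioned above helps (a toy Java sketch; the dissertation's analyzer targets C, and its volumetric-difference merge heuristic is not reproduced here), a plain convex interval join invents values that a disjunction of intervals would keep separate:

        public class IntervalJoinDemo {

            record Interval(long lo, long hi) {
                // Convex join: the smallest single interval covering both operands.
                Interval join(Interval other) {
                    return new Interval(Math.min(lo, other.lo), Math.max(hi, other.hi));
                }
            }

            public static void main(String[] args) {
                Interval thenBranch = new Interval(0, 0);     // x == 0
                Interval elseBranch = new Interval(100, 100); // x == 100

                // Merging to [0, 100] admits 99 values that never occur; a
                // disjunctive domain avoids this loss by keeping the set
                // {[0,0], [100,100]} as separate disjuncts.
                System.out.println(thenBranch.join(elseBranch)); // Interval[lo=0, hi=100]
            }
        }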