ROPocop - Dynamic Mitigation of Code-Reuse Attacks
Control-flow attacks, usually achieved by exploiting a buffer-overflow
vulnerability, have been a serious threat to system security for over fifteen
years. Researchers have answered the threat with various mitigation techniques,
yet new exploits that successfully bypass these defenses still appear on a
regular basis.
In this paper, we propose ROPocop, a novel approach for detecting and
preventing the execution of injected code and for mitigating code-reuse attacks
such as return-oriented programming (ROP). ROPocop uses dynamic binary
instrumentation, requiring neither source code nor debug symbols, nor any
changes to the operating system. It mitigates attacks by monitoring the
program counter at potentially dangerous points and by detecting suspicious
program flows.
We have implemented ROPocop for Windows x86 using PIN, a dynamic program
instrumentation framework from Intel. Benchmarks using the SPEC CPU2006 suite
show an average overhead of 2.4x, comparable to that of similar approaches
that give weaker guarantees. Real-world applications exhibit only an input
lag that is noticeable at first, and no stutter. In our evaluation, our tool
successfully detected all 11 of the latest real-world code-reuse exploits,
with no false alarms. Therefore, despite the overhead, it is a viable
temporary solution for securing critical systems against exploits when a
vendor patch is not yet available.
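To give an intuition for this kind of runtime check, the following minimal Java sketch mimics a shadow-stack monitor that validates return targets. It is an invented illustration of dynamic control-flow monitoring in general, not ROPocop's actual mechanism, which instruments native x86 code through PIN.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Invented sketch: a shadow stack that mirrors call/return behaviour.
    // A DBI-based monitor performs the analogous check on the real
    // program counter at every potentially dangerous control transfer.
    public class ShadowStackMonitor {
        private final Deque<Long> shadowStack = new ArrayDeque<>();

        // Invoked by the instrumentation before every call instruction.
        public void onCall(long returnAddress) {
            shadowStack.push(returnAddress);
        }

        // Invoked before every return; a mismatch indicates a hijacked
        // return address, e.g. the start of a ROP chain.
        public void onReturn(long actualTarget) {
            Long expected = shadowStack.poll();
            if (expected == null || expected != actualTarget) {
                throw new SecurityException("Suspicious control flow: return to 0x"
                        + Long.toHexString(actualTarget));
            }
        }
    }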
Analyzing the Gadgets: Towards a Metric to Measure Gadget Quality
Current low-level exploits often rely on code reuse, whereby short sections
of code (gadgets) are chained together into a coherent exploit that can be
executed without the need to inject any code. Several protection mechanisms
attempt to eliminate this attack vector by applying code transformations to
reduce the number of available gadgets. Nevertheless, it has emerged that the
residual gadgets can still be sufficient to conduct a successful attack.
Crucially, the lack of a common metric for "gadget quality" hinders the
effective comparison of current mitigations. This work proposes four metrics
that assign scores to a set of gadgets, measuring quality, usefulness, and
practicality. We apply these metrics to binaries produced when compiling
programs for architectures implementing Intel's recent MPX CPU extensions. Our
results demonstrate a 17% increase in useful gadgets in MPX binaries, and a
decrease in side-effects and preconditions, making them better suited for ROP
attacks.
Comment: International Symposium on Engineering Secure Software and Systems, Apr 2016, London, United Kingdom
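As a rough, invented illustration of what a gadget-quality metric might compute, the Java sketch below scores a gadget by penalizing its side-effects and preconditions. The Gadget type and the weights are hypothetical and do not reproduce the paper's four metrics.

    import java.util.List;

    // Hypothetical gadget-quality score: fewer side-effects (e.g. clobbered
    // registers) and fewer preconditions (e.g. registers that must already
    // hold valid pointers) make a gadget easier to chain into an exploit.
    record Gadget(List<String> instructions, int sideEffects, int preconditions) {}

    class GadgetScorer {
        static double score(Gadget g) {
            double base = 1.0; // baseline usefulness of any usable gadget
            double penalty = 0.2 * g.sideEffects() + 0.3 * g.preconditions();
            return Math.max(0.0, base - penalty);
        }
    }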
Symbol-Specific Sparsification of Interprocedural Distributive Environment Problems
Previous work has shown that one can often greatly speed up static analysis
by computing data flows not for every edge in the program's control-flow graph
but instead only along definition-use chains. This yields a so-called sparse
static analysis. Recent work on SparseDroid has shown that specifically taint
analysis can be "sparsified" with extraordinary effectiveness because the taint
state of one variable does not depend on those of others. This allows one to
soundly omit more flow-function computations than in the general case.
In this work, we assess whether this result carries over to the more
generic setting of so-called Interprocedural Distributive Environment (IDE)
problems. As opposed to taint analysis, IDE comprises distributive problems with
large or even infinitely broad domains, such as typestate analysis or linear
constant propagation. Specifically, this paper presents Sparse IDE, a framework
that realizes sparsification for any static analysis that fits the IDE
framework.
We implement Sparse IDE in SparseHeros, as an extension to the popular Heros
IDE solver, and evaluate its performance on real-world Java libraries by
comparing it to the baseline IDE algorithm. To this end, we design, implement
and evaluate a linear constant propagation analysis client on top of
SparseHeros. Our experiments show that, although IDE analyses can only be
sparsified with respect to symbols and not (numeric) values, Sparse IDE can
nonetheless yield significantly lower runtimes, and often also lower memory
consumption, than the original IDE algorithm.
Comment: To be published in ICSE 2024
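The following invented Java fragment conveys the program-side intuition behind sparsification for a client like linear constant propagation: the fact for a symbol travels straight along its def-use chain, so flow functions need not be computed at statements that neither define nor use that symbol.

    // Example *analyzed* program (invented). A non-sparse IDE solver
    // propagates the environment through every statement; a sparse solver
    // propagates the fact for x directly from its definition to its use,
    // skipping the unrelated statements on y and z.
    class Example {
        static int compute() {
            int x = 21;          // definition of x: fact {x -> 21}
            int y = readInput(); // irrelevant to x: skipped when sparse
            int z = y + 1;       // irrelevant to x: skipped as well
            return x * 2;        // use of x: fact arrives directly, {x -> 42}
        }

        static int readInput() { return 0; }
    }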
CrySL: An Extensible Approach to Validating the Correct Usage of Cryptographic APIs
Various studies have empirically shown that the majority of Java and Android apps misuse cryptographic libraries, causing devastating breaches of data security. It is crucial to detect such misuses early in the development process. To detect cryptography misuses, one must first define secure uses, a process mastered primarily by cryptography experts, and not by developers.
In this paper, we present CrySL, a definition language for bridging the cognitive gap between cryptography experts and developers. CrySL enables cryptography experts to specify the secure usage of the cryptographic libraries that they provide. We have implemented a compiler that translates such CrySL specifications into a context- and flow-sensitive, demand-driven static analysis. The analysis then helps developers by automatically checking a given Java or Android app for compliance with the CrySL-encoded rules.
We have designed an extensive CrySL rule set for the Java Cryptography Architecture (JCA), and empirically evaluated it by analyzing 10,000 current Android apps. Our results show that misuse of cryptographic APIs is still widespread, with 95% of apps containing at least one misuse. Our easily extensible CrySL rule set covers more violations than previous special-purpose tools with hard-coded rules, while our tooling offers a more precise analysis.
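As an illustration (not taken from the paper), the snippet below shows a typical JCA misuse that CrySL-style rules are designed to flag, next to a variant that common recommendations consider compliant.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.SecretKeySpec;

    class CryptoUsage {
        // Misuse: ECB mode leaks plaintext structure, and the key is
        // hard-coded instead of being freshly generated.
        static Cipher insecure() throws Exception {
            SecretKey key = new SecretKeySpec("0123456789abcdef".getBytes(), "AES");
            Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
            c.init(Cipher.ENCRYPT_MODE, key);
            return c;
        }

        // Compliant variant: a randomly generated key and an
        // authenticated mode (GCM).
        static Cipher secure() throws Exception {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, kg.generateKey());
            return c;
        }
    }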
CrySL: An Extensible Approach to Validating the Correct Usage of Cryptographic APIs (Artifact)
In this artefact, we present CrySL, an extensible approach to validating the
correct usage of cryptographic APIs. The artefact contains executables
for CogniCrypt_{SAST}, the CrySL-based analysis, along with the CrySL rules we used in the original paper's experiments. We also provide scripts to re-run the experiments. Finally, we include a tutorial that showcases CogniCrypt_{SAST} on a small Java target program.
Dealing with Variability in API Misuse Specification
APIs are the primary mechanism for developers to gain access to externally defined services and tools. However, previous research has revealed that API misuses, i.e., usages that violate an API's contract, are prevalent. Such misuses can have harmful consequences, especially in the context of cryptographic libraries. Various API-misuse detectors have been proposed to address this issue, including CogniCrypt, one of the most versatile such detectors, which uses the CrySL language to specify cryptographic API usage contracts. Nonetheless, existing approaches to detecting API misuse have not been designed for systematic reuse, ignoring the fact that different versions of a library, different versions of a platform, and different recommendations or guidelines might introduce variability in the correct usage of an API. Yet, little is known about how such variability impacts the specification of correct API usage.
This paper investigates this question by analyzing the impact of various sources of variability on widely used Java cryptographic libraries, including JCA/JCE, Bouncy Castle, and Google Tink. The results of our investigation show that sources of variability like new versions of the API and security standards significantly impact the specifications.
We then use the insights gained from our investigation to motivate an extension to the CrySL language, named MetaCrySL, which builds on meta-programming concepts. We evaluate MetaCrySL by specifying usage rules for a family of Android versions and illustrate that MetaCrySL can model all forms of variability we identified while drastically reducing the size of a family of specifications for the correct usage of cryptographic APIs.
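As a simple invented illustration of such variability, the fragment below shows how even a single parameter, the AES key size, can depend on which security standard a specification targets; the standard labels are hypothetical.

    import javax.crypto.KeyGenerator;

    // Invented illustration: what counts as a "correct" key size depends
    // on the targeted standard, so one hard-coded rule cannot cover all
    // library versions, platform versions, and guidelines.
    class KeySizePolicy {
        static KeyGenerator forStandard(String standard) throws Exception {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            switch (standard) {
                case "legacy"  -> kg.init(128); // accepted by older guidelines
                case "current" -> kg.init(256); // required by stricter ones
                default        -> throw new IllegalArgumentException(standard);
            }
            return kg;
        }
    }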
Lossless, Persisted Summarization of Static Callgraph, Points-To and Data-Flow Analysis
Static analysis is used to automatically detect bugs and security breaches, and aids compiler optimization. Whole-program analysis (WPA) can yield high precision; however, it causes long analysis times and thus does not match common software-development workflows, often making it impractical to use for large, real-world applications.
This paper thus presents the design and implementation of ModAlyzer, a novel static-analysis approach that aims at accelerating whole-program analysis by making the analysis modular and compositional. It shows how to compute lossless, persisted summaries for callgraph, points-to and data-flow information, and it reports under which circumstances this function-level compositional analysis outperforms WPA.
We implemented ModAlyzer as an extension to LLVM and PhASAR, and applied it to 12 real-world C and C++ applications. At analysis time, ModAlyzer modularly and losslessly summarizes the analysis effect of the library code those applications share, hence avoiding its repeated re-analysis. The experimental results show that reusing these summaries can save, on average, 72% of analysis time over WPA. Moreover, because it is lossless, the module-wise analysis fully retains precision and recall. Surprisingly, as our results show, it sometimes even yields precision superior to WPA. The initial summary generation, on average, takes about 3.67 times as long as WPA.
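The schematic Java sketch below conveys the analyze-once, reuse-everywhere idea behind persisted summaries. The types are invented; ModAlyzer itself is implemented on top of LLVM and PhASAR and summarizes callgraph, points-to, and data-flow effects at function level.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Invented sketch of compositional analysis: each library function is
    // analyzed once, its lossless summary is persisted, and later analyses
    // reuse the stored summary instead of re-analyzing the function body.
    class SummaryCache<F, S> {
        private final Map<F, S> persisted = new ConcurrentHashMap<>();

        S summarize(F function, Function<F, S> analyze) {
            // computeIfAbsent = "analyze on first use, reuse afterwards"
            return persisted.computeIfAbsent(function, analyze);
        }
    }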
Boomerang: Demand-Driven Flow- and Context-Sensitive Pointer Analysis for Java (Artifact)
Evaluating pointer analyses with respect to soundness and precision has been a tedious task. Within this artifact we present PointerBench, the benchmark suite used in the paper to compare the pointer analysis Boomerang with two other demand-driven pointer analyses, SB [Sridharan and Bodik, 2006] and DA [Yan et al., 2011]. We show that PointerBench can be used to test different pointer analyses.
In addition, the artifact contains usage examples for Boomerang on simple test programs. The test programs and the inputs to Boomerang for these programs can be changed to experiment with the algorithm and its features.
Additionally, the artifact contains the integration of Boomerang, SB, and DA into FlowDroid, which can then be executed on arbitrary Android applications.
Boomerang: Demand-Driven Flow- and Context-Sensitive Pointer Analysis for Java
Many current program analyses require highly precise pointer
information about small, targeted parts of a given program. This
motivates the need for demand-driven pointer analyses that compute
information only where required. Pointer analyses generally compute
points-to sets of program variables or answer Boolean alias
queries. However, many client analyses require richer pointer
information. For example, taint and typestate analyses often need to
know the set of all aliases of a given variable under a certain
calling context. With most current pointer analyses, clients must
compute such information through repeated points-to or alias queries,
which increases their complexity and computation time.
This paper presents Boomerang, a demand-driven, flow-, field-, and
context-sensitive pointer analysis for Java programs. Boomerang
computes rich results that include both the possible allocation sites of a given pointer (points-to information) and all pointers that can point to those allocation sites (alias information). For increased precision and scalability, clients can query Boomerang with respect to particular calling contexts of interest.
Our experiments show that Boomerang is more precise than existing
demand-driven pointer analyses. Additionally, using Boomerang, the
taint analysis FlowDroid issues up to 29.4x fewer pointer queries
compared to using other pointer analyses that return simpler pointer
information. Furthermore, the search space of Boomerang can be
significantly reduced by requesting calling contexts from the client
analysis.
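The following invented Java program illustrates why such rich alias information matters to a client like a taint analysis:

    import java.util.ArrayList;
    import java.util.List;

    // Invented example program under analysis. To decide whether the value
    // returned by source() reaches sink(), a taint analysis must know that
    // list and alias point to the same object, i.e. it needs the full alias
    // set of list, not just a points-to set or a single yes/no alias answer.
    class AliasDemo {
        public static void main(String[] args) {
            List<String> list = new ArrayList<>();
            List<String> alias = list;  // alias of list
            alias.add(source());        // taint flows in through the alias
            sink(list.get(0));          // ...and out through list itself
        }

        static String source() { return "secret"; }
        static void sink(String s) { System.out.println(s); }
    }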