On the Incomparability of Cache Algorithms in Terms of Timing Leakage
Modern computer architectures rely on caches to reduce the latency gap
between the CPU and main memory. While indispensable for performance, caches
pose a serious threat to security because they leak information about memory
access patterns of programs via execution time.
In this paper, we present a novel approach for reasoning about the security
of cache algorithms with respect to timing leaks. The basis of our approach is
the notion of leak competitiveness, which compares the leakage of two cache
algorithms on every possible program. Based on this notion, we prove the
following two results:
First, we show that leak competitiveness is symmetric in the cache
algorithms. This implies that no cache algorithm dominates another in terms of
leakage via a program's total execution time. This is in contrast to
performance, where it is known that such dominance relationships exist.
Second, when restricted to caches with finite control, the
leak-competitiveness relationship between two cache algorithms is either
asymptotically linear or constant; no other shapes are possible.
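The program-by-program comparison underlying leak competitiveness can be made concrete by simulating two replacement policies on the same memory-access trace. A minimal sketch (the trace, cache size, and the choice of LRU vs. FIFO are illustrative assumptions, not taken from the paper):

```python
from collections import OrderedDict, deque

def misses_lru(trace, size):
    """Count misses of an LRU cache with 'size' lines on an access trace."""
    cache = OrderedDict()  # keys ordered from least- to most-recently used
    misses = 0
    for block in trace:
        if block in cache:
            cache.move_to_end(block)       # hit: refresh recency
        else:
            misses += 1
            if len(cache) == size:
                cache.popitem(last=False)  # evict least recently used
            cache[block] = None
    return misses

def misses_fifo(trace, size):
    """Count misses of a FIFO cache with 'size' lines on the same trace."""
    cache, order, misses = set(), deque(), 0
    for block in trace:
        if block not in cache:
            misses += 1
            if len(cache) == size:
                cache.discard(order.popleft())  # evict oldest insertion
            cache.add(block)
            order.append(block)
    return misses
```

On the classic trace `[1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]` with three cache lines, LRU incurs 10 misses and FIFO only 9, which illustrates why comparing algorithms requires quantifying over every possible program rather than a single trace.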
Transformational typing and unification for automatically correcting insecure programs
Before starting a rigorous security analysis of a given software system, the most likely outcome is often already clear, namely that the system is not entirely secure. Modifying a program so that it passes the analysis is a difficult problem and is usually left entirely to the programmer. In this article, we show that, and how, unification can be used to compute such program transformations. This opens a new perspective on the problem of correcting insecure programs. We also demonstrate that integrating our approach into an existing transforming type system can improve both the precision of the analysis and the quality of the resulting program.
A language-theoretic view on network protocols
Input validation is the first line of defense against malformed or malicious inputs. It is therefore critical that the validator (which is often part of the parser) is free of bugs. To build dependable input validators, we propose using parser generators for context-free languages. In the context of network protocols, various works have argued that context-free languages fall short of specifying precisely or concisely common idioms found in protocols. We review those assessments and perform a rigorous, language-theoretic analysis of several common protocol idioms. We then demonstrate the practical value of our findings by developing a modular, robust, and efficient input validator for HTTP that relies on context-free grammars and regular expressions.
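The split the abstract describes, regular expressions for token syntax and a grammar for message structure, can be sketched on a toy header block. The grammar and token definitions below are simplified illustrations, not the paper's actual HTTP validator:

```python
import re

# Assumed toy grammar (for illustration only):
#   message -> header* CRLF
#   header  -> TOKEN ":" VALUE CRLF
# Tokens are validated with regular expressions; message structure is
# validated by a hand-written parser standing in for generated code.
TOKEN = re.compile(r"[!#$%&'*+.^_`|~0-9A-Za-z-]+")
VALUE = re.compile(r"[ \t]*[\x21-\x7e]([ \t\x21-\x7e]*[\x21-\x7e])?[ \t]*")

def validate_headers(data: str) -> bool:
    """Accept a block of 'name: value' lines terminated by a blank line."""
    lines = data.split("\r\n")
    # Structure check: the block must end with an empty line plus the
    # empty string produced by the trailing CRLF.
    if len(lines) < 2 or lines[-1] != "" or lines[-2] != "":
        return False
    for line in lines[:-2]:
        name, sep, value = line.partition(":")
        if not sep or not TOKEN.fullmatch(name) or not VALUE.fullmatch(value):
            return False
    return True
```

For example, `validate_headers("Host: example.com\r\n\r\n")` accepts, while a line without a colon or with whitespace inside the field name is rejected before any downstream component sees it.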
Non-Uniform Distributions in Quantitative Information-Flow
Quantitative information-flow analysis (QIF) determines the amount of information that a program leaks about its secret inputs. For this, QIF requires an assumption about the distribution of the secret inputs. Existing techniques either consider the worst case over a (sub-)set of all input distributions and thereby over-approximate the amount of leaked information; or they are tailored to reasoning about uniformly distributed inputs and are hence not directly applicable to non-uniform use cases; or they deal with explicitly represented distributions, for which suitable abstraction techniques are only now emerging. In this paper we propose a novel approach for precise QIF with respect to non-uniform input distributions: we present a reduction technique that transforms the problem of QIF w.r.t. non-uniform distributions into the problem of QIF for the uniform case. This reduction enables us to directly apply existing techniques for uniform QIF to the non-uniform case. We furthermore show that quantitative information flow is robust with respect to variations of the input distribution. This result allows us to perform QIF based on approximate input distributions, which can significantly simplify the analysis. Finally, we perform a case study where we illustrate our techniques by using them to analyze an integrity check on non-uniformly distributed PINs, as they are used in banking.
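The kind of quantity QIF computes can be illustrated on the simplest secret-dependent program, an equality check against a guess (in the spirit of, but not taken from, the paper's PIN case study). The sketch below computes standard Shannon leakage, prior entropy minus expected posterior entropy, for an arbitrary input distribution; the distributions in the usage note are made up for illustration:

```python
from math import log2

def shannon_entropy(dist):
    """Shannon entropy (in bits) of a distribution given as {value: prob}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def leakage_of_check(dist, guess):
    """Shannon leakage of the program 'output = (secret == guess)' when
    the secret is drawn from 'dist' (assumes dist[guess] < 1)."""
    p_hit = dist.get(guess, 0.0)
    # On a hit the secret is fully determined (posterior entropy 0);
    # on a miss the prior is renormalized over the remaining values.
    miss = {s: p / (1 - p_hit) for s, p in dist.items() if s != guess}
    return shannon_entropy(dist) - (1 - p_hit) * shannon_entropy(miss)
```

For a uniform 2-bit secret and one guess this yields `2 - 0.75 * log2(3)`, about 0.81 bits; skewing the distribution toward the guessed value increases the leakage, which is exactly why the input distribution matters for QIF.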