Stronger computational modelling of signalling pathways using both continuous and discrete-state methods
Starting from a biochemical signalling pathway model expressed in a process algebra enriched with quantitative information, we automatically derive both continuous-space and discrete-space representations suitable for numerical evaluation. We compare the results against those obtained using approximate stochastic simulation, thereby exposing a flaw in the use of the differentiation procedure that produces misleading results.
Probabilistic data flow analysis: a linear equational approach
Speculative optimisation relies on the estimation of the probabilities that
certain properties of the control flow are fulfilled. Concrete or estimated
branch probabilities can be used for searching and constructing advantageous
speculative and bookkeeping transformations.
We present a probabilistic extension of the classical equational approach to
data-flow analysis that can be used to this purpose. More precisely, we show
how the probabilistic information introduced in a control flow graph by branch
prediction can be used to extract a system of linear equations from a program
and present a method for calculating correct (numerical) solutions. (In Proceedings GandALF 2013, arXiv:1307.416)
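The equational idea can be illustrated with a small sketch (a hypothetical four-node CFG with invented branch probabilities; the paper's actual formulation is more general): branch prediction labels each edge of the control flow graph with a probability, and the expected execution frequency of each node then satisfies a linear equation over its predecessors. For an acyclic CFG the system can be solved by substitution in topological order; a CFG with loops needs a general linear solver.

```python
# Hypothetical four-node CFG, entry = node 0; edge weights are
# branch probabilities supplied by a (fictional) predictor.
prob = {
    (0, 1): 0.7,   # then-branch
    (0, 2): 0.3,   # else-branch
    (1, 3): 1.0,
    (2, 3): 1.0,
}

# Each node's expected frequency x[n] satisfies the linear equation
#   x[n] = entry(n) + sum over predecessors m of prob[(m, n)] * x[m].
order = [0, 1, 2, 3]           # topological order of this acyclic CFG
x = {n: 0.0 for n in order}
x[0] = 1.0                     # the entry node is executed exactly once
for n in order:
    for (m, kdst), p in prob.items():
        if kdst == n:
            x[n] += p * x[m]
# x now holds the solution of the linear system.
```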
The Scalable and Accountable Binary Code Search and Its Applications
The past decade has witnessed an explosion of applications and devices.
This big-data era challenges existing security technologies: new analysis techniques
should be scalable enough to handle a “big data” scale codebase; they should become smart
and proactive by using the data to understand what the vulnerable points are and where
they are located; and effective protection should be provided for the dissemination and analysis of data
involving sensitive information on an unprecedented scale.
In this dissertation, I argue that code search techniques can boost existing security analysis techniques (vulnerability identification and memory analysis) in terms of scalability and accuracy. To demonstrate these benefits, I address two issues of code search using code analysis: scalability and accountability. I further demonstrate the benefit of code search by applying it to the scalable vulnerability identification [57] and cross-version memory analysis problems [55, 56].
Firstly, I address the scalability problem of code search by learning “higher-level” semantic features from code [57]. Instead of conducting fine-grained testing on a single device or program, it becomes much more crucial to achieve quick vulnerability scanning across devices and programs at a “big data” scale. However, discovering vulnerabilities in “big code” is like finding a needle in a haystack, even when dealing with known vulnerabilities. This new challenge demands a scalable code search approach. To this end, I leverage successful techniques from image search in the computer vision community and propose a novel code encoding method for scalable vulnerability search in binary code. The evaluation results show that this approach can achieve comparable or even better accuracy and efficiency than the baseline techniques.
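A minimal sketch of the indexing idea such scalable search rests on (all data, dimensions, and numbers here are invented; the actual encoding in [57] is more sophisticated): each binary function is reduced to a fixed-length numeric feature vector, and the corpus is indexed with random-hyperplane locality-sensitive hashing so that a query probes one bucket instead of scanning everything.

```python
import random

random.seed(0)

# Hypothetical corpus: each binary function reduced to a fixed-length
# numeric feature vector (e.g. counts of instruction categories).
DIM = 16
corpus = [[random.random() for _ in range(DIM)] for _ in range(1000)]
query = corpus[42]      # re-occurrence of a known-vulnerable function

# Random-hyperplane LSH: the hash is the sign pattern of 32 projections,
# so near-identical vectors land in the same bucket with high probability.
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(32)]

def lsh(v):
    return tuple(sum(p * x for p, x in zip(plane, v)) > 0 for plane in planes)

buckets = {}
for i, v in enumerate(corpus):
    buckets.setdefault(lsh(v), []).append(i)

# Search only the query's bucket instead of scanning the whole corpus.
candidates = buckets.get(lsh(query), [])
```

The trade-off is the usual one for approximate search: more hyperplanes mean smaller buckets and faster probes, but a higher chance that a near-duplicate flips one sign bit and is missed.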
Secondly, I tackle the accountability issues left in the vulnerability search problem
by designing vulnerability-oriented raw features [58]. Similar code does not always
represent a similar vulnerability, so feature engineering for code
search should focus on semantic-level features rather than syntactic ones. I propose to
extract conditional formulas as higher-level semantic features from the raw binary code to
conduct the code search. A conditional formula explicitly captures two cardinal factors
of a vulnerability: 1) erroneous data dependencies and 2) missing or invalid condition
checks. As a result, the binary code search on conditional formulas produces significantly
higher accuracy and provides meaningful evidence for human analysts to further examine
the search results. The evaluation results show that this approach can further improve
the search accuracy of existing bug search techniques with very reasonable performance
overhead.
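A toy illustration of why condition checks make discriminative features (the pseudocode and the regex stand-in are invented; real extraction of conditional formulas would operate on lifted binary code, not source text): comparing the guard conditions of two versions of a routine surfaces exactly the missing check an analyst would want to see.

```python
import re

def conditions(pseudocode):
    # Extract the guard of each `if (...)` as a raw "conditional formula".
    # A regex is only a stand-in for proper formula extraction.
    return set(re.findall(r"if\s*\(([^)]*)\)", pseudocode))

# Hypothetical decompiled guards around a buffer copy.
vulnerable = "if (len > 0) memcpy(dst, src, len);"
patched    = "if (len > 0) if (len < 256) memcpy(dst, src, len);"

# The condition present only in the patched version is the missing
# bound check, i.e. human-readable evidence for the analyst.
missing = conditions(patched) - conditions(vulnerable)
```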
Finally, I demonstrate the potential of code search in the memory analysis
field by applying it to the cross-version issue in memory forensics
[55, 56]. Memory analysis techniques for COTS software usually rely on
so-called “data structure profiles” for their binaries. Constructing such profiles requires
expert knowledge about the internal workings of a specific software version, yet
it remains a cumbersome manual effort most of the time. I propose to leverage code search
to enable a notion named “cross-version memory analysis”, which can update a
profile for new versions of a piece of software by transferring knowledge from a model that
has already been trained on an old version. The evaluation results show that the code-search-based approach advances existing memory analysis methods by reducing
manual effort while maintaining reasonable accuracy. With the help of collaborators, I
further developed two plugins for the Volatility memory forensics framework [2], and show
that each plugin can construct a localized profile to perform specific memory
forensic tasks on the same memory dump, without manual effort in creating the corresponding profile.
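A hypothetical sketch of the profile-transfer step (field names, offsets, and accessor names are all invented): once code search has matched each profile-relevant accessor function in the old binary to its counterpart in the new one, the new field offsets can be read off the matched code, yielding an updated profile without manual reverse engineering.

```python
# Old profile: field name -> byte offset, known for the old version.
old_profile = {"pid": 0x180, "comm": 0x5c0}

# Code-search result (hypothetical): each old accessor matched to its
# counterpart in the new binary, whose memory operand reveals the
# possibly shifted offset of the same field.
accessor_match = {
    "get_pid":  ("get_pid_v2",  0x188),
    "get_comm": ("get_comm_v2", 0x5d0),
}
field_of = {"get_pid": "pid", "get_comm": "comm"}

# Transfer: rebuild the profile for the new version from the matches.
new_profile = {field_of[f]: off for f, (_, off) in accessor_match.items()}
```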
Understanding Uncertainty in Static Pointer Analysis
Institute for Computing Systems Architecture

For programs that make extensive use of pointers, pointer analysis is often critical
for the effectiveness of optimising compilers and tools for reasoning about program
behaviour and correctness. Static pointer analysis has been extensively studied and
several algorithms have been proposed, but these only provide approximate solutions.
As such inaccuracy may hinder further optimisations, it is important to understand
how far these algorithms fall short of providing accurate information about the points-to
relations.
This thesis attempts to quantify the amount of uncertainty of the points-to relations
that remains after a state-of-the-art context- and flow-sensitive pointer analysis algorithm
is applied to a collection of programs from two well-known benchmark suites:
SPEC integer and MediaBench. This remaining static uncertainty is then compared
to the run-time behaviour. Unlike previous work, which compared run-time behaviour
against less accurate context- and flow-insensitive algorithms, the goal of this work is
to quantify the amount of uncertainty that is intrinsic to the applications and defeats
even the most accurate static analyses.
As a first step towards quantifying these uncertainties, a compiler framework was proposed and
implemented. It is based on the SUIF1 research compiler framework and the SPAN
pointer analysis package. This framework was then used to collect extensive data
from the static points-to analysis. It was also used to drive a profiled execution of the
programs in order to collect the real run-time points-to data. Finally, the static and the
run-time data were compared.
Experimental results show that often the static pointer analysis is very accurate, but
for some benchmarks a significant fraction, up to 25%, of their accesses via pointer dereferences
cannot be statically fully disambiguated. We find that some 27% of these
dereferences turn out to access a single memory location at run time, but many do
access several different memory locations. We find that the main reasons for this are
the use of pointer arithmetic and the fact that some control paths are not taken. The
latter is an example of a source of uncertainty that is intrinsic to the application.
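The static/run-time comparison described above can be sketched as follows (the points-to data here are invented, not the thesis's measurements): for each dereference site, the statically ambiguous sites are those with more than one possible target, and among those we count how many collapse to a single location under profiling.

```python
# Hypothetical data per pointer dereference site: the static points-to
# set and the set of locations actually observed at run time.
static_pts = {
    "p1": {"a"},             # fully disambiguated statically
    "p2": {"a", "b"},        # ambiguous statically...
    "p3": {"b", "c", "d"},
}
runtime_pts = {
    "p1": {"a"},
    "p2": {"a"},             # ...but hits a single location at run time
    "p3": {"b", "c"},        # genuinely accesses several locations
}

ambiguous = [s for s in static_pts if len(static_pts[s]) > 1]
single_at_runtime = [s for s in ambiguous if len(runtime_pts[s]) == 1]

frac_ambiguous = len(ambiguous) / len(static_pts)        # cf. "up to 25%"
frac_single = len(single_at_runtime) / len(ambiguous)    # cf. "some 27%"
```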
Probabilistic Modelling of Classical and Quantum Systems
While probabilistic modelling has been widely used in recent decades, quantitative prediction in the stochastic modelling of real physical problems remains a great challenge and requires sophisticated mathematical models and advanced numerical algorithms.
In this study, we developed the mathematical tools for solving three long-standing problems in Polymer Science and Quantum Measurement theory.
The question “Why can kinetic models not reproduce experimental observations in Controlled Radical Polymerization (CRP)?” has been answered by introducing a delay into the kinetic model and treating CRP as a non-Markovian process. An efficient stochastic simulation (SS) approach allowing for an accurate description of CRP has been formulated, theoretically grounded, and tested against experimental data and less advanced SS algorithms.
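A minimal sketch of one way a delay can be grafted onto a stochastic simulation algorithm (a single hypothetical A → B channel with Markovian initiation rate k and fixed completion delay tau; the actual CRP model and SS approach are far richer): initiated events are queued, and the clock advances either to the next initiation or to the next queued completion, whichever comes first.

```python
import heapq
import random

random.seed(1)

k, tau = 1.0, 0.5        # initiation rate and fixed completion delay
A, B, t = 100, 0, 0.0    # 100 A molecules, none converted yet
pending = []             # min-heap of completion times of in-flight events

while A > 0 or pending:
    a = k * A                                         # total propensity
    dt = random.expovariate(a) if a > 0 else float("inf")
    if pending and pending[0] <= t + dt:
        t = heapq.heappop(pending)                    # delayed completion
        B += 1
    else:
        t += dt                                       # initiate an event;
        A -= 1                                        # it completes at
        heapq.heappush(pending, t + tau)              # time t + tau
```

Re-drawing the exponential waiting time after each completion is valid here by memorylessness; the delay is what makes the overall process non-Markovian in the state (A, B) alone.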
Accurate prediction of morphology development in multi-phase polymers is vital for the synthesis of new materials but is still not feasible due to its complexity. We propose a Population Balance Equation (PBE)-based model and derive a conceptually new and computationally tractable numerical approach for its solution, providing a systematic tool for morphology prediction in composite polymers.
Finally, we designed a stochastic simulation framework for continuous measurements performed on quantum systems of theoretical and experimental interest, which helped us to re-examine the “fuzzy continuous measurements” theory by Audretsch and Mensky (1997) and expose some of its deficiencies, while making amendments where necessary.
All the developed modelling approaches are general enough to be applied to a broad range of physical applications and thus ultimately to contribute to the understanding and prediction of complex chemical and physical processes.
BES-2014-06864, MTM2013-46553-C3-1-