EF/CF: High Performance Smart Contract Fuzzing for Exploit Generation
Smart contracts are increasingly being used to manage large numbers of
high-value cryptocurrency accounts. There is a strong demand for automated,
efficient, and comprehensive methods to detect security vulnerabilities in a
given contract. While the literature features a plethora of analysis methods
for smart contracts, existing proposals do not keep pace with the increasing
complexity of today's contracts: analysis tools suffer from false alarms and
missed bugs in smart contracts that are increasingly defined by complexity and
interdependencies. To scale accurate analysis to modern smart
contracts, we introduce EF/CF, a high-performance fuzzer for Ethereum smart
contracts. In contrast to previous work, EF/CF efficiently and accurately
models complex smart contract interactions, such as reentrancy and
cross-contract interactions, at a very high fuzzing throughput rate. To achieve
this, EF/CF transpiles smart contract bytecode into native C++ code, thereby
enabling the reuse of existing, optimized fuzzing toolchains. Furthermore,
EF/CF increases fuzzing efficiency by employing a structure-aware mutation
engine for smart contract transaction sequences and using a contract's ABI to
generate valid transaction inputs. In a comprehensive evaluation, we show that
EF/CF scales better -- without compromising accuracy -- to complex contracts
compared to state-of-the-art approaches, including other fuzzers,
symbolic/concolic execution, and hybrid approaches. Moreover, we show that
EF/CF can automatically generate transaction sequences that exploit reentrancy
bugs to steal Ether.
Comment: To be published at Euro S&P 202
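The ABI-guided input generation described above can be pictured with a small sketch: given a contract's ABI, a fuzzer can always emit syntactically valid calldata (a 4-byte function selector followed by 32-byte-aligned arguments) instead of random bytes. The mini-ABI below uses the well-known selectors of three WETH/ERC-20-style functions; the value pools and sequence length are illustrative assumptions, not EF/CF's actual mutation engine.

```python
import random

# Hypothetical mini-ABI: function name -> (4-byte selector, argument types).
# The selectors are the well-known ones for these common signatures.
TOY_ABI = {
    "deposit":  (bytes.fromhex("d0e30db0"), []),
    "withdraw": (bytes.fromhex("2e1a7d4d"), ["uint256"]),
    "transfer": (bytes.fromhex("a9059cbb"), ["address", "uint256"]),
}

def encode_arg(typ: str) -> bytes:
    """ABI-encode one random argument as a 32-byte word."""
    if typ == "uint256":
        # Prefer boundary-ish values, as structure-aware fuzzers tend to.
        v = random.choice([0, 1, 2**255, 2**256 - 1, random.getrandbits(64)])
        return v.to_bytes(32, "big")
    if typ == "address":
        return random.getrandbits(160).to_bytes(20, "big").rjust(32, b"\0")
    raise NotImplementedError(typ)

def random_tx() -> bytes:
    """Build one syntactically valid calldata blob from the ABI."""
    selector, arg_types = random.choice(list(TOY_ABI.values()))
    return selector + b"".join(encode_arg(t) for t in arg_types)

tx_sequence = [random_tx() for _ in range(5)]  # a transaction sequence to fuzz
```

Because every generated blob decodes cleanly against the ABI, the fuzzer spends its budget exploring contract logic rather than being rejected by the calldata decoder.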
A COLLISION AVOIDANCE SYSTEM FOR AUTONOMOUS UNDERWATER VEHICLES
The work in this thesis concerns the development of a novel and practical collision
avoidance system for autonomous underwater vehicles (AUVs). Advanced stochastic
motion planning methods, dynamics quantisation approaches, multivariable tracking
controller designs, sonar data processing and workspace representation are combined
synergistically to significantly enhance the survivability of modern AUVs.
The recent proliferation of AUV deployments for missions such as seafloor
surveying, scientific data gathering and mine hunting has demanded a substantial
increase in vehicle autonomy. One requirement of such missions is
that the AUV navigate safely in a dynamic and unstructured environment.
It is therefore vital that a robust and effective collision avoidance system be
in place to preserve the structural integrity of the vehicle while simultaneously
increasing its autonomy.
This thesis provides not only a holistic framework but also an arsenal of computational
techniques for the design of a collision avoidance system for AUVs. The
design of an obstacle avoidance system is addressed first. The core paradigm is the
application of the Rapidly-exploring Random Tree (RRT) algorithm, and a newly
developed version of it, as a motion planning tool. This technique is then merged
with the Manoeuvre Automaton (MA) representation to address the inherent disadvantages
of the RRT. A novel multi-node version that can also handle a time-varying
final state is proposed. Clearly, the reference trajectory generated by this
embedded planner must be tracked. Hence, the feasibility of employing the
linear quadratic Gaussian (LQG) and the nonlinear kinematics-based state-dependent
Riccati equation (SDRE) controllers as trajectory trackers is explored.
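The RRT paradigm above can be sketched in a few lines: sample a random point, extend the nearest tree node one step toward it, keep the new node if it is collision-free, and stop once the goal is within tolerance. The 2D workspace, step size and goal bias below are illustrative stand-ins for the thesis's AUV-specific planner, not its actual implementation.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iters=5000):
    """Grow a tree from `start` toward `goal`; return a path or None."""
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Sample a random point, biased occasionally toward the goal.
        sample = goal if random.random() < 0.1 else (
            random.uniform(0, 10), random.uniform(0, 10))
        # Extend from the nearest tree node by one step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample) or 1e-9
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d)
        if not is_free(new):          # reject nodes inside obstacles
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk parents back to the root to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

random.seed(1)  # deterministic demo run
path = rrt((0.0, 0.0), (9.0, 9.0),
           is_free=lambda p: 0 <= p[0] <= 10 and 0 <= p[1] <= 10)
```

The MA merge described in the thesis would replace the straight-line extension with a library of feasible vehicle manoeuvres, which is what removes the RRT's dynamic-infeasibility drawback.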
The obstacle detection module, which comprises sonar processing and workspace
representation submodules, is developed and tested on actual sonar data acquired
in a sea trial via a prototype forward-looking sonar (AT500). The sonar processing
techniques applied are fundamentally derived from an image processing perspective.
Likewise, a novel occupancy grid using a nonlinear function is proposed for the
workspace representation of the AUV. Results are presented that demonstrate the
ability of an AUV to navigate a complex environment.
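The occupancy-grid idea can be illustrated with the standard log-odds formulation, in which repeated sonar returns are fused additively and mapped back to probability through a nonlinear (logistic) function. The sensor model below (0.7/0.3) is a generic textbook choice, not the thesis's specific nonlinear function.

```python
import math

# Inverse sensor model: how strongly one hit/miss shifts our belief.
L_OCC, L_FREE = math.log(0.7 / 0.3), math.log(0.3 / 0.7)

def update(log_odds: float, hit: bool) -> float:
    """Bayesian update of one grid cell from one sonar return."""
    return log_odds + (L_OCC if hit else L_FREE)

def probability(log_odds: float) -> float:
    """Map log-odds back to occupancy probability (the nonlinear step)."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

cell = 0.0            # unknown: p = 0.5
for _ in range(3):    # three consecutive sonar hits on the same cell
    cell = update(cell, hit=True)
p = probability(cell)  # confidence that the cell is occupied
```

Because updates are additive in log-odds space, noisy single returns barely move a cell, while consistent returns quickly saturate it toward occupied or free.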
To the author's knowledge, this is the first time the above newly developed
methodologies have been applied to an AUV collision avoidance system; the work
therefore constitutes a contribution to knowledge in this area.
7. GI/ITG KuVS Fachgespräch Drahtlose Sensornetze
These proceedings collect the contributions to the Fachgespräch Drahtlose Sensornetze (expert meeting on wireless sensor networks) 2008. The aim of this meeting is to give researchers in this field the opportunity for an informal exchange; participants from industrial research are always welcome as well, and are again taking part this year.

The Fachgespräch is a deliberately informal event of the GI/ITG special interest group "Kommunikation und Verteilte Systeme" (www.kuvs.de). It is explicitly not yet another conference, with the large overhead and the expectation of presenting finished, preferably "watertight" results; rather, it serves quite explicitly to discuss with newcomers still searching for their topic, and to find out where the challenges for future research actually lie.

The Fachgespräch Drahtlose Sensornetze 2008 takes place in Berlin, in the rooms of Freie Universität Berlin, in cooperation with ScatterWeb GmbH. This, too, is a first, and it shows that the Fachgespräch is clearly much more than a pleasant get-together under a shared motto.

For organising the venue and the evening event, thanks are due to the two members of the organising committee, Kirsten Terfloth and Georg Wittenburg, as well as to Stefanie Bahe, who took on the editorial supervision of the proceedings, to many other members of the AG Technische Informatik at FU Berlin and, of course, to its head, Prof. Jochen Schiller.
THE SCALABLE AND ACCOUNTABLE BINARY CODE SEARCH AND ITS APPLICATIONS
The past decade has witnessed an explosion of applications and devices.
This big-data era challenges existing security technologies: new analysis techniques
should scale to handle codebases of "big data" size; they should become smart
and proactive, using the data to understand what the vulnerable points are and
where they are located; and they should provide effective protection for the
dissemination and analysis of data involving sensitive information on an
unprecedented scale.
In this dissertation, I argue that code search techniques can boost existing security
analysis techniques (vulnerability identification and memory analysis) in terms of scalability and accuracy. To demonstrate these benefits, I address two issues of code search using code analysis: scalability and accountability. I further demonstrate the benefit of code search by applying it to the scalable vulnerability identification [57] and
cross-version memory analysis problems [55, 56].
Firstly, I address the scalability problem of code search by learning "higher-level" semantic
features from code [57]. Instead of conducting fine-grained testing on a single device
or program, it becomes much more crucial to achieve quick vulnerability scanning
across devices and programs at a "big data" scale. However, discovering vulnerabilities in "big
code" is like finding a needle in a haystack, even when dealing with known vulnerabilities. This new challenge demands a scalable code search approach. To this end, I leverage successful techniques from image search in the computer vision community and propose a novel code encoding method for scalable vulnerability search in binary code. The evaluation results show that this approach achieves comparable or even better accuracy and efficiency than the baseline techniques.
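The encode-then-search idea can be sketched as follows: each binary function is summarised as a fixed-length feature vector and ranked by cosine similarity to a query, so search cost no longer depends on fine-grained per-pair analysis. The instruction categories and toy corpus below are made up for illustration; the actual system learns its encoding in the style of image search rather than counting hand-picked categories.

```python
import math
from collections import Counter

# Hypothetical coarse instruction categories used as raw features.
CATEGORIES = ["arith", "logic", "mem", "call", "branch"]

def encode(instr_categories):
    """Map a function's instruction stream to a unit-length vector."""
    counts = Counter(instr_categories)
    v = [counts.get(c, 0) for c in CATEGORIES]
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def search(query, corpus, top_k=3):
    """Rank corpus functions by cosine similarity to the query function."""
    q = encode(query)
    scored = [(sum(a * b for a, b in zip(q, encode(f))), name)
              for name, f in corpus.items()]
    return sorted(scored, reverse=True)[:top_k]

corpus = {
    "memcpy_like": ["mem"] * 8 + ["branch"] * 2,
    "crypto_like": ["arith"] * 6 + ["logic"] * 4,
    "dispatch":    ["call"] * 5 + ["branch"] * 5,
}
hits = search(["mem"] * 7 + ["branch"] * 3, corpus)  # query: a memory loop
```

Once functions are fixed-length vectors, standard nearest-neighbour indexing from the image search literature applies directly, which is what makes the approach scale.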
Secondly, I tackle the accountability issues left in the vulnerability search problem
by designing vulnerability-oriented raw features [58]. Similar code does not always
represent a similar vulnerability, so feature engineering for code
search should focus on semantic-level features rather than syntactic ones. I propose to
extract conditional formulas as higher-level semantic features from raw binary code to
conduct the code search. A conditional formula explicitly captures two cardinal factors
of a vulnerability: 1) erroneous data dependencies and 2) missing or invalid condition
checks. As a result, binary code search on conditional formulas produces significantly
higher accuracy and provides meaningful evidence for human analysts to further examine
the search results. The evaluation results show that this approach further improves
the search accuracy of existing bug search techniques with very reasonable performance
overhead.
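The intuition behind searching on conditional formulas can be illustrated with a toy set comparison: representing each function by the condition checks it performs makes a patched off-by-one visible as a single changed element, and the shared elements double as evidence an analyst can inspect. Real conditional formulas are symbolic expressions extracted from the binary; the strings below are stand-ins.

```python
def jaccard(a: set, b: set) -> float:
    """Set similarity between two collections of conditional formulas."""
    return len(a & b) / len(a | b) if a | b else 1.0

# Stand-in "conditional formulas" for three functions.
vulnerable = {"len < 0x100", "ptr != NULL", "idx <= count"}
patched    = {"len < 0x100", "ptr != NULL", "idx < count"}   # fixed off-by-one
unrelated  = {"fd >= 0", "flags & O_RDONLY"}

s1 = jaccard(vulnerable, patched)    # high overlap: same code, one changed check
s2 = jaccard(vulnerable, unrelated)  # no overlap: different logic entirely
```

A syntactic diff of the two binaries might differ in dozens of instructions after recompilation; the condition-level view isolates exactly the check that defines the vulnerability.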
Finally, I demonstrate the potential of the code search technique in the memory analysis
field by applying it to the cross-version issue in memory forensics
[55, 56]. Memory analysis techniques for COTS software usually rely on
so-called "data structure profiles" of the binaries. Constructing such profiles requires
expert knowledge of the internal workings of a specific software version, and
it remains a cumbersome manual effort most of the time. I propose to leverage the code search
technique to enable a notion named "cross-version memory analysis", which can update a
profile for a new version of a piece of software by transferring knowledge from the model
already trained on its old version. The evaluation results show that the code-search-based approach advances existing memory analysis methods by reducing
manual effort while maintaining reasonable accuracy. With the help of collaborators, I
further developed two plugins for the Volatility memory forensics framework [2], and show
that each plugin can construct a localized profile to perform specific memory
forensic tasks on the same memory dump, without manual effort in creating the corresponding profile.
Abstractions and optimisations for model-checking software-defined networks
Software-Defined Networking introduces a new programmatic abstraction layer by shifting distributed network functions (NFs) from silicon chips (ASICs) into a logically centralized controller program. Yet controller programs are a common source of bugs that can cause performance degradation, security exploits and poor reliability in networks. Assuring that a controller program satisfies its specification is thus highly desirable, yet the size of the network and the complexity of the controller make this a challenging effort.
This thesis presents a highly expressive, optimised SDN model (code-named MoCS) that can be reasoned about and verified formally within an acceptable timeframe. In it, we introduce reusable abstractions that (i) come with a rich semantics for capturing subtle real-world bugs that are hard to track down, and (ii) are formally proved correct. In addition, MoCS deals with timeouts of flow table entries, thus supporting automatic state refresh (soft state) in the network. The optimisations are achieved by (1) contextually analysing the model for possible partial-order reductions in view of the concrete control program, network topology and specification property in question, (2) pre-computing packet equivalence classes, and (3) indexing packets and rules that exist in the model and bit-packing (compressing) them.
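Optimisation (2) can be sketched directly: two packets that match exactly the same set of flow rules are indistinguishable to the model checker, so only one representative per equivalence class needs to be explored. The rules and header fields below are simplified stand-ins for real OpenFlow matches.

```python
from collections import defaultdict

# Hypothetical flow rules: name -> match predicate over a packet header.
rules = {
    "fwd_http":  lambda p: p["dst_port"] == 80,
    "fwd_https": lambda p: p["dst_port"] == 443,
    "drop_evil": lambda p: p["src"] == "10.0.0.66",
}

def equivalence_classes(packets):
    """Group packets by the exact set of rules they match."""
    classes = defaultdict(list)
    for p in packets:
        signature = frozenset(n for n, match in rules.items() if match(p))
        classes[signature].append(p)
    return classes

packets = [
    {"src": "10.0.0.1",  "dst_port": 80},
    {"src": "10.0.0.2",  "dst_port": 80},    # same class as the first packet
    {"src": "10.0.0.66", "dst_port": 443},   # matches two rules at once
    {"src": "10.0.0.3",  "dst_port": 22},    # matches no rule
]
classes = equivalence_classes(packets)
representatives = [ps[0] for ps in classes.values()]  # one packet per class
```

The state space then grows with the number of classes rather than the number of concrete packets, which is where the reduction comes from.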
Each of these developments is demonstrated on a set of real-world controller programs, implemented in network topologies of varying size, and publicly released under an open-source license.
CONFPROFITT: A CONFIGURATION-AWARE PERFORMANCE PROFILING, TESTING, AND TUNING FRAMEWORK
Modern computer software systems are complicated. Developers can change the behavior of a software system through software configurations. The large number of configuration options and their interactions make the tasks of software tuning, testing, and debugging very challenging. Performance is one of the key non-functional qualities: performance bugs can cause significant performance degradation and lead to poor user experience. However, performance bugs are difficult to expose, primarily because detecting them requires specific inputs as well as specific configurations. While researchers have developed techniques to analyze, quantify, detect, and fix performance bugs, many of these techniques are not effective in highly-configurable systems. To improve the non-functional qualities of configurable software systems, testing engineers need to be able to understand the performance influence of configuration options, adjust the performance of a system under different configurations, and detect configuration-related performance bugs.
This research provides an automated framework that allows engineers to effectively analyze performance-influencing configuration options, detect performance bugs in highly-configurable software systems, and adjust configuration options to achieve higher long-term performance gains. To understand real-world performance bugs in highly-configurable software systems, we first perform a performance bug characteristics study on three large-scale open-source projects. Many researchers have studied the characteristics of performance bugs from bug reports, but few have reported the experience of trying to replicate confirmed performance bugs from the perspective of non-domain experts such as researchers. This study reports the challenges, and potential workarounds, in replicating confirmed performance bugs. We also share a performance benchmark that provides real-world performance bugs for evaluating future performance testing techniques. Inspired by our performance bug study, we propose a performance profiling approach that helps developers understand how configuration options and their interactions influence the performance of a system. The approach uses a combination of dynamic analysis and machine learning techniques, together with configuration sampling techniques, to profile program execution and to identify configuration options relevant to performance. Next, the framework leverages natural language processing and information retrieval techniques to automatically generate test inputs and configurations that expose performance bugs. Finally, the framework combines reinforcement learning and dynamic state reduction techniques to guide the subject application towards higher long-term performance gains.
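The performance-influence analysis can be illustrated on a toy configuration space: measure a benchmark under each configuration and estimate each option's main effect as the average runtime change from switching it on. The three options and the cost model (including one interaction term) are hypothetical; real systems sample configurations instead of enumerating them.

```python
import itertools

OPTIONS = ["cache", "compress", "verify"]

def measure(config):
    """Stand-in for running a benchmark under `config` (runtime in ms)."""
    t = 100.0
    if config["cache"]:
        t -= 30.0
    if config["compress"]:
        t += 15.0
    if config["verify"]:
        t += 5.0
    if config["cache"] and config["compress"]:  # interaction effect
        t += 10.0
    return t

def main_effects():
    """Average runtime change from flipping each option on, over all configs."""
    configs = [dict(zip(OPTIONS, bits))
               for bits in itertools.product([False, True], repeat=len(OPTIONS))]
    effects = {}
    for opt in OPTIONS:
        on = [measure(c) for c in configs if c[opt]]
        off = [measure(c) for c in configs if not c[opt]]
        effects[opt] = sum(on) / len(on) - sum(off) / len(off)
    return effects

effects = main_effects()  # negative effect = speed-up, positive = slowdown
```

Note how the cache/compress interaction shifts the estimated main effects away from the raw per-option costs; disentangling such interactions is exactly why configuration-aware profiling needs machine learning and sampling at realistic scale.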