
    Decompiling x86 Deep Neural Network Executables

    Due to their widespread use on heterogeneous hardware devices, deep learning (DL) models are compiled into executables by DL compilers to fully leverage low-level hardware primitives. This approach allows DL computations to be performed at low cost across a variety of computing platforms, including CPUs, GPUs, and various hardware accelerators. We present BTD (Bin to DNN), a decompiler for deep neural network (DNN) executables. BTD takes DNN executables and outputs full model specifications, including types of DNN operators, network topology, dimensions, and parameters that are (nearly) identical to those of the input models. BTD delivers a practical framework to process DNN executables compiled by different DL compilers with full optimizations enabled on x86 platforms. It employs learning-based techniques to infer DNN operators, dynamic analysis to reveal network architectures, and symbolic execution to infer the dimensions and parameters of DNN operators. Our evaluation reveals that BTD enables accurate recovery of full specifications of complex DNNs with millions of parameters (e.g., ResNet). The recovered DNN specifications can be re-compiled into a new DNN executable exhibiting identical behavior to the input executable. We show that BTD can boost two representative attacks, adversarial example generation and knowledge stealing, against DNN executables. We also demonstrate cross-architecture legacy code reuse using BTD, and envision BTD being used for other critical downstream tasks like DNN security hardening and patching.
    Comment: Extended version of a paper to appear in the Proceedings of the 32nd USENIX Security Symposium (USENIX Security '23), 2023; 25 pages.
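    As a concrete illustration of the recompilation step, the sketch below rebuilds a toy model from a hypothetical recovered specification and runs it. The spec format, layer names, and dimensions are illustrative assumptions (BTD's actual output format is not shown here), and PyTorch stands in for a DL compiler front end.

        import torch
        import torch.nn as nn

        # Hypothetical recovered specification: operator types, topology,
        # and dimensions, as a BTD-style decompiler might emit them.
        recovered_spec = [
            {"op": "Conv2d", "in": 3, "out": 16, "kernel": 3},
            {"op": "ReLU"},
            {"op": "Linear", "in": 16 * 30 * 30, "out": 10},
        ]

        def rebuild(spec):
            """Re-instantiate a model from the recovered specification."""
            layers = []
            for node in spec:
                if node["op"] == "Conv2d":
                    layers.append(nn.Conv2d(node["in"], node["out"], node["kernel"]))
                elif node["op"] == "ReLU":
                    layers.append(nn.ReLU())
                elif node["op"] == "Linear":
                    layers.append(nn.Flatten())
                    layers.append(nn.Linear(node["in"], node["out"]))
            return nn.Sequential(*layers)

        model = rebuild(recovered_spec)
        out = model(torch.randn(1, 3, 32, 32))  # 32x32 input -> 3x3 conv -> 30x30
        print(out.shape)  # torch.Size([1, 10])

    From here, the rebuilt model could be fed back into a DL compiler to produce a new executable, mirroring the round trip described above.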

    t|ket> : A retargetable compiler for NISQ devices

    We present t|ket>, a quantum software development platform produced by Cambridge Quantum Computing Ltd. The heart of t|ket> is a language-agnostic optimising compiler designed to generate code for a variety of NISQ devices, with several features that minimise the influence of device error. The compiler has been extensively benchmarked and outperforms most competitors in terms of circuit optimisation and qubit routing.
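    As an illustration of the kind of circuit optimisation described, here is a minimal sketch using pytket, the publicly released Python interface to t|ket>. The specific pass name is an assumption based on the public package, not a claim about the paper's internals.

        from pytket import Circuit
        from pytket.passes import FullPeepholeOptimise

        # Two adjacent identical CX gates cancel to the identity.
        circ = Circuit(2)
        circ.H(0)
        circ.CX(0, 1)
        circ.CX(0, 1)

        before = circ.n_gates
        FullPeepholeOptimise().apply(circ)  # in-place optimisation pass
        print(before, "->", circ.n_gates)   # e.g. 3 -> 1

    The optimised circuit can then be routed onto a device's qubit connectivity graph by a placement/routing pass before execution.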

    Probabilistic Naming of Functions in Stripped Binaries

    Debugging symbols in binary executables carry the names of functions and global variables. When present, they greatly simplify the process of reverse engineering, but they are almost always removed (stripped) for deployment. We present the design and implementation of punstrip, a tool that combines a probabilistic fingerprint of binary code based on high-level features with a probabilistic graphical model to learn the relationship between function names and program structure. Because there are many naming conventions and developer styles, functions from different applications do not necessarily share the exact same name even when they implement exactly the same functionality. We therefore evaluate punstrip across three levels of name matching: exact matching; matching based on natural language processing of name components; and matching using Symbol2Vec, a new embedding of function names based on random walks of function call graphs. We show that our approach recognizes functions compiled across different compilers and optimization levels, and we demonstrate that punstrip can predict semantically similar function names based on code structure. We evaluate our approach over open-source C binaries from the Debian Linux distribution and compare against the state of the art.
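    The random-walk construction behind a Symbol2Vec-style embedding can be sketched generically: treat walks over the call graph as sentences and train a word2vec model on them. The sketch below is a DeepWalk-style approximation under that assumption, not punstrip's implementation; networkx, gensim, and the toy call graph are illustrative choices.

        import random
        import networkx as nx
        from gensim.models import Word2Vec

        def random_walks(callgraph, walks_per_node=10, walk_len=8):
            """Random walks over the call graph; each walk is a 'sentence'
            of function names for word2vec-style training."""
            walks = []
            for start in callgraph.nodes:
                for _ in range(walks_per_node):
                    walk, node = [start], start
                    for _ in range(walk_len - 1):
                        succs = list(callgraph.successors(node))
                        if not succs:
                            break
                        node = random.choice(succs)
                        walk.append(node)
                    walks.append(walk)
            return walks

        # Toy call graph over function names.
        g = nx.DiGraph([("main", "parse_args"), ("parse_args", "strtol"),
                        ("main", "run"), ("run", "read_input"),
                        ("run", "write_output")])
        model = Word2Vec(random_walks(g), vector_size=32, window=3,
                         min_count=1, sg=1)
        print(model.wv.most_similar("run", topn=2))

    Functions that occur in similar call-graph contexts end up with nearby vectors, which is what makes inexact, semantics-aware name matching possible.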

    Verificare: a platform for composable verification with application to SDN-Enabled systems

    Software-Defined Networking (SDN) has become increasingly prevalent in both the academic and industrial communities. A new class of systems built on SDNs, which we refer to as SDN-Enabled, provides programmatic interfaces between the SDN controller and the larger distributed system. Existing tools for SDN verification and analysis are insufficiently expressive to capture this composition of a network and a larger distributed system. Generic verification systems are an infeasible solution due to their monolithic approach to modeling and rapid state-space explosion. In this thesis we present a new compositional approach to system modeling and verification that is particularly appropriate for SDN-Enabled systems. Compositional models may have sub-components (such as switches and end-hosts) modified, added, or removed with only minimal, isolated changes. Furthermore, invariants may be defined over the composed system that restrict its behavior, allowing assumptions to be added or removed and components to be abstracted away into the service guarantees they provide (such as guaranteed packet arrival). Finally, compositional modeling can minimize the size of the state space to be verified by taking advantage of known model structure. We also present the Verificare platform, a tool chain for building compositional models in our modeling language and automatically compiling them to multiple off-the-shelf verification tools. The compiler outputs a minimal, calculus-oblivious formalism, which is accessed by plugins via a translation API. This enables a wide variety of requirements to be verified. As new tools become available, the translator can easily be extended with plugins to support them.
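    The translation API can be pictured as a small plugin registry: each off-the-shelf verifier gets a plugin that renders the intermediate formalism into its input language. The sketch below is purely illustrative; the class and method names are invented, and Verificare's actual API is not shown in the abstract.

        from abc import ABC, abstractmethod

        class TranslatorPlugin(ABC):
            """One plugin per off-the-shelf verification tool."""
            @abstractmethod
            def translate(self, model: dict) -> str: ...

        _PLUGINS: dict[str, TranslatorPlugin] = {}

        def register(name: str, plugin: TranslatorPlugin) -> None:
            _PLUGINS[name] = plugin

        class PromelaPlugin(TranslatorPlugin):
            """Toy backend: emit one SPIN/Promela process per component."""
            def translate(self, model: dict) -> str:
                return "\n".join(f"active proctype {c}() {{ /* ... */ }}"
                                 for c in model["components"])

        register("spin", PromelaPlugin())
        model = {"components": ["switch", "controller", "host"]}
        print(_PLUGINS["spin"].translate(model))

    New verifiers are supported by registering another plugin; the calculus-oblivious intermediate model never changes.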

    Revisiting Binary Code Similarity Analysis using Interpretable Feature Engineering and Lessons Learned

    Binary code similarity analysis (BCSA) is widely used for diverse security applications such as plagiarism detection, software license violation detection, and vulnerability discovery. Despite the surging research interest in BCSA, it is significantly challenging to perform new research in this field for several reasons. First, most existing approaches focus only on the end result, namely increasing the success rate of BCSA, by adopting uninterpretable machine learning. Moreover, they utilize their own benchmarks, sharing neither the source code nor the entire dataset. Finally, researchers often use different terminologies or even use the same technique without properly citing the previous literature, which makes it difficult to reproduce or extend previous work. To address these problems, we take a step back from the mainstream and contemplate fundamental research questions for BCSA: why does a certain technique or feature show better results than others? Specifically, we conduct the first systematic study on the basic features used in BCSA by leveraging interpretable feature engineering on a large-scale benchmark. Our study reveals various useful insights into BCSA. For example, we show that a simple interpretable model with a few basic features can achieve results comparable to those of recent deep learning-based approaches. Furthermore, we show that the way we compile binaries and the correctness of underlying binary analysis tools can significantly affect the performance of BCSA. Lastly, we make all our source code and benchmark public and suggest future directions in this field to help further research.
    Comment: 22 pages; under revision at Transactions on Software Engineering (July 2021).
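    To make the "simple interpretable model" claim concrete, the sketch below trains a transparent classifier on a handful of basic numeric features per function pair. The feature names and random placeholder data are invented for illustration; they are not the paper's benchmark or feature set.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        FEATURES = ["n_insns", "n_calls", "n_blocks", "n_consts"]  # illustrative

        # Placeholder data: 200 pairs of per-function feature vectors; each
        # pair is featurized as absolute differences and labeled similar
        # when the features nearly agree.
        a, b = rng.random((200, 4)), rng.random((200, 4))
        X = np.abs(a - b)
        y = (X.sum(axis=1) < 1.0).astype(int)  # synthetic ground-truth labels

        clf = LogisticRegression().fit(X, y)
        # Interpretability: the learned weights show which basic feature
        # drives the similarity decision.
        print(dict(zip(FEATURES, clf.coef_[0].round(2))))

    Unlike an opaque neural encoder, each weight here can be read directly, which is the kind of feature-level insight the study argues for.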