2 research outputs found
GraphMoco: A Graph Momentum Contrast Model Using Multimodal Structural Information for Large-scale Binary Function Representation Learning
In the field of cybersecurity, the ability to compute similarity scores at
the function level is important. Considering that a single binary file may
contain an extensive number of functions, an effective learning framework must
exhibit both high accuracy and efficiency when handling substantial volumes of
data.
Nonetheless, conventional methods encounter several limitations. Firstly,
accurately annotating different pairs of functions with appropriate labels
poses a significant challenge, thereby making it difficult to employ supervised
learning methods without the risk of overfitting to erroneous labels. Secondly,
while SOTA models often rely on pre-trained encoders or fine-grained graph
comparison techniques, these approaches suffer from drawbacks related to time
and memory consumption. Thirdly, the momentum update algorithm utilized in
graph-based contrastive learning models can result in information leakage.
Surprisingly, none of the existing articles address this issue. This research
focuses on addressing the challenges associated with large-scale binary code
similarity detection (BCSD). To
overcome the aforementioned problems, we propose GraphMoco: a graph momentum
contrast model that leverages multimodal structural information for efficient
binary function representation learning on a large scale. Our approach employs
a CNN-based model and departs from the usage of memory-intensive pre-trained
models. We adopt an unsupervised learning strategy that effectively uses the
intrinsic structural information present in the binary code, eliminating the
need for manual labeling of similar or dissimilar pairs. Importantly, GraphMoco
demonstrates exceptional performance in terms of both efficiency and accuracy
when operating on extensive datasets. Our
experimental results indicate that our method surpasses the current SOTA
approaches in terms of accuracy.
Comment: 22 pages, 7 figures
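The momentum update discussed in the abstract is, in MoCo-style contrastive learning, an exponential-moving-average update of the key encoder's parameters from the query encoder's parameters. A minimal sketch of that update, with the function name, plain-list parameters, and coefficient chosen here for illustration rather than taken from the paper:

```python
def momentum_update(query_params, key_params, m=0.999):
    """MoCo-style momentum (EMA) update of the key encoder.

    Each key parameter moves slowly toward the corresponding query
    parameter: key <- m * key + (1 - m) * query. Parameters are
    represented as flat lists of floats for simplicity.
    """
    return [m * k + (1.0 - m) * q for q, k in zip(query_params, key_params)]


# With m = 0.5, the key parameters move halfway toward the query parameters.
updated = momentum_update([1.0, 2.0], [0.0, 0.0], m=0.5)
```

In the real setting the same rule is applied elementwise to every tensor of the key encoder, and gradients flow only through the query encoder; the slow update keeps the keys in the contrastive dictionary consistent over time.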
Revisiting Binary Code Similarity Analysis using Interpretable Feature Engineering and Lessons Learned
Binary code similarity analysis (BCSA) is widely used for diverse security
applications such as plagiarism detection, software license violation
detection, and vulnerability discovery. Despite the surging research interest
in BCSA, it is significantly challenging to perform new research in this field
for several reasons. First, most existing approaches focus only on the end
results, namely, increasing the success rate of BCSA, by adopting
uninterpretable machine learning. Moreover, they utilize their own benchmark
sharing neither the source code nor the entire dataset. Finally, researchers
often use different terminologies or even use the same technique without citing
the previous literature properly, which makes it difficult to reproduce or
extend previous work. To address these problems, we take a step back from the
mainstream and contemplate fundamental research questions for BCSA: why does a
certain technique or feature show better results than others?
Specifically, we conduct the first systematic study on the basic features used
in BCSA by leveraging interpretable feature engineering on a large-scale
benchmark. Our study reveals various useful insights on BCSA. For example, we
show that a simple interpretable model with a few basic features can achieve a
comparable result to that of recent deep learning-based approaches.
Furthermore, we show that the way we compile binaries or the correctness of
underlying binary analysis tools can significantly affect the performance of
BCSA. Lastly, we make all our source code and benchmark public and suggest
future directions in this field to help further research.
Comment: 22 pages, under revision to Transactions on Software Engineering (July 2021)
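To illustrate the kind of simple, interpretable comparison the abstract argues for, the sketch below scores two binary functions by the ratio of a few basic numeric features (e.g., instruction and call counts). The feature names and the min/max scoring scheme are invented for illustration and are not taken from the paper:

```python
def feature_similarity(feats_a, feats_b):
    """Similarity in [0, 1] from basic numeric function features.

    For each feature present in both functions, score min/max of the two
    values (1.0 when equal, 0.0 when one is zero and the other is not),
    then average the per-feature scores.
    """
    shared = feats_a.keys() & feats_b.keys()
    if not shared:
        return 0.0
    scores = []
    for key in shared:
        a, b = feats_a[key], feats_b[key]
        hi = max(a, b)
        scores.append(1.0 if hi == 0 else min(a, b) / hi)
    return sum(scores) / len(scores)


# Identical feature vectors score 1.0; a halved count scores 0.5 on
# that feature.
same = feature_similarity({"n_insts": 40, "n_calls": 3},
                          {"n_insts": 40, "n_calls": 3})
```

A transparent score like this makes it easy to see which feature drove a match or mismatch, which is the interpretability property the study emphasizes over black-box learned embeddings.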