Performance comparison between Java and JNI for optimal implementation of computational micro-kernels
General purpose CPUs used in high performance computing (HPC) support a
vector instruction set and an out-of-order engine dedicated to increasing
instruction-level parallelism. Hence, related optimizations are currently
critical for improving the performance of applications requiring numerical
computation. Moreover, the use of a Java run-time environment such as the
HotSpot Java Virtual Machine (JVM) in high performance computing is a promising
alternative: it benefits from programming flexibility and productivity, while
performance is ensured by the Just-In-Time (JIT) compiler. However, the JIT
compiler suffers from two main drawbacks. First, the JIT is a black box for
developers: there is no control over the generated code, nor any feedback from
its optimization phases such as vectorization. Second, the time constraint
narrows down the degree of optimization compared to static compilers like GCC
or LLVM. It is therefore compelling to use statically compiled code, since it
benefits from additional optimizations that reduce performance bottlenecks.
Java allows native code from dynamic libraries to be called through the Java
Native Interface (JNI). Nevertheless, JNI methods are not inlined and incur an
additional invocation cost compared to Java methods. Therefore, to benefit from
better static optimization, this call overhead must be amortized by the amount
of computation performed at each JNI invocation. In this paper we tackle this
problem and propose such an analysis for a set of micro-kernels. Our goal is to
select the most efficient implementation given the amount of computation
defined by the calling context. We also investigate the performance impact of
several optimization schemes: vectorization, out-of-order optimization, data
alignment, method inlining, and the use of native memory for JNI methods.

Comment: Part of ADAPT Workshop proceedings, 2015 (arXiv:1412.2347)
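As a rough illustration of the trade-off discussed in this abstract, the sketch below shows a pure-Java dot-product micro-kernel of the kind whose JIT-compiled performance can be compared against a JNI implementation. The commented-out `native` declaration indicates how a statically compiled counterpart would be exposed; the library and method names there are hypothetical, not taken from the paper.

```java
// Sketch of a computational micro-kernel in pure Java, of the kind whose
// JIT-compiled performance is compared against a JNI implementation.
public class DotKernel {
    // Hypothetical JNI counterpart (names are illustrative, not from the paper):
    // static { System.loadLibrary("kernels"); }
    // public static native double dotNative(double[] a, double[] b);

    // Pure-Java kernel: the HotSpot JIT may vectorize and inline this loop.
    public static double dot(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] a = {1.0, 2.0, 3.0, 4.0};
        double[] b = {1.0, 2.0, 3.0, 4.0};
        // For tiny inputs like this, the JNI call overhead would dominate;
        // a native kernel pays off only when each invocation performs
        // enough computation to amortize the invocation cost.
        System.out.println(dot(a, b));
    }
}
```

The key design point the paper studies is exactly this break-even: below some problem size the JIT-compiled Java loop wins, above it the statically optimized native kernel does.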
SAFE: Self-Attentive Function Embeddings for Binary Similarity
The binary similarity problem consists in determining whether two functions
are similar by considering only their compiled form. Advanced techniques for
binary similarity have recently gained momentum, as they can be applied in
several fields, such as copyright disputes, malware analysis, and vulnerability
detection, and thus have an immediate practical impact. Current solutions
compare functions by first transforming their binary code into
multi-dimensional vector representations (embeddings), and then comparing the
vectors through simple and efficient geometric operations. However, embeddings
are usually derived from binary code using manual feature extraction, which may
fail to capture important function characteristics, or may consider features
that are not relevant to the binary similarity problem. In this paper we
propose SAFE, a novel architecture for embedding functions based on a
self-attentive neural network. SAFE works directly on disassembled binary
functions, does not require manual feature extraction, is computationally more
efficient than existing solutions (i.e., it does not incur the computational
overhead of building or manipulating control flow graphs), and is more general,
as it works on stripped binaries and on multiple architectures. We report the
results of a quantitative and qualitative analysis showing that SAFE provides a
noticeable performance improvement with respect to previous solutions.
Furthermore, we show that clusters of our embedding vectors are closely related
to the semantics of the implemented algorithms, paving the way for further
interesting applications (e.g., semantic-based binary function search).

Comment: Published in International Conference on Detection of Intrusions and
Malware, and Vulnerability Assessment (DIMVA) 201
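The "simple and efficient geometric operations" used to compare embeddings typically amount to cosine similarity between vectors. A minimal sketch of that comparison is below; the two vectors are made-up placeholders, not actual SAFE embeddings (whose dimensionality and values are not given in the abstract).

```java
// Minimal sketch: comparing two function embeddings by cosine similarity,
// the kind of simple geometric operation the abstract refers to.
// The vectors used in main() are made-up placeholders, not real SAFE output.
public class EmbeddingSimilarity {
    public static double cosine(double[] u, double[] v) {
        double dot = 0.0, normU = 0.0, normV = 0.0;
        for (int i = 0; i < u.length; i++) {
            dot   += u[i] * v[i];
            normU += u[i] * u[i];
            normV += v[i] * v[i];
        }
        return dot / (Math.sqrt(normU) * Math.sqrt(normV));
    }

    public static void main(String[] args) {
        double[] f1 = {1.0, 2.0, 2.0}; // placeholder embedding of function 1
        double[] f2 = {2.0, 1.0, 2.0}; // placeholder embedding of function 2
        // Functions with similar semantics should map to nearby vectors,
        // i.e. a similarity close to 1.
        System.out.printf("%.4f%n", cosine(f1, f2));
    }
}
```

Because the comparison is a single dot product and two norms per pair, large-scale function search reduces to cheap nearest-neighbor queries over the embedding space, which is what makes the embedding approach practical.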