
    On Using Blockchains for Safety-Critical Systems

    Innovation today is driven mainly by software, and companies need to continuously rejuvenate their product portfolios with new features to stay ahead of their competitors. For example, recent trends explore the application of blockchains to domains other than finance. This paper analyzes the state of the art for safety-critical systems as found in modern vehicles such as self-driving cars, in smart energy systems, and in home automation, focusing on specific challenges where key ideas behind blockchains might be applicable. Next, the potential benefits unlocked by applying such ideas are presented and discussed for each usage scenario. Finally, a research agenda is outlined that summarizes the remaining challenges for successfully applying blockchains to safety-critical cyber-physical systems.

    Is Refactoring Always a Good Egg? Exploring the Interconnection Between Bugs and Refactorings


    Context2Name: A Deep Learning-Based Approach to Infer Natural Variable Names from Usage Contexts

    Most of the JavaScript code deployed in the wild has been minified, a process in which identifier names are replaced with short, arbitrary, and meaningless names. Minified code occupies less space, but it is also extremely difficult to manually inspect and understand. This paper presents Context2Name, a deep learning-based technique that partially reverses the effect of minification by predicting natural identifier names for minified names. The core idea is to predict, from the usage context of a variable, a name that captures the meaning of the variable. The approach combines a lightweight, token-based static analysis with an auto-encoder neural network that summarizes usage contexts and a recurrent neural network that predicts natural names for a given usage context. We evaluate Context2Name on a large corpus of real-world JavaScript code and show that it successfully predicts 47.5% of all minified identifiers while taking only 2.9 milliseconds on average to predict a name. A comparison with the state-of-the-art tools JSNice and JSNaughty shows that our approach performs comparably in terms of accuracy while improving in terms of efficiency. Moreover, Context2Name complements the state of the art by predicting 5.3% additional identifiers that are missed by both existing tools.
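
    To make the pipeline concrete, here is a minimal sketch of the core idea: predict an identifier's name from the tokens surrounding its usages. The paper's actual model pairs an auto-encoder with a recurrent network; the nearest-neighbour lookup over context token counts below is a stand-in for illustration only, and the window size, the tiny corpus, and all helper names are assumptions, not values from the paper.

        from collections import Counter

        WINDOW = 2  # tokens of context kept on each side of a usage (assumed value)

        def usage_contexts(tokens, ident):
            """Collect a bag of the tokens around every usage of `ident`."""
            ctx = Counter()
            for i, tok in enumerate(tokens):
                if tok == ident:
                    ctx.update(tokens[max(0, i - WINDOW):i])  # left context
                    ctx.update(tokens[i + 1:i + 1 + WINDOW])  # right context
            return ctx

        def predict_name(minified_tokens, ident, corpus):
            """Return the corpus name whose contexts best overlap the minified ones."""
            target = usage_contexts(minified_tokens, ident)
            return max(corpus, key=lambda name: sum((target & corpus[name]).values()))

        # Contexts gathered from non-minified code (hypothetical training corpus).
        corpus = {
            "length":   Counter({"for": 2, "<": 2, ".": 2, "array": 2}),
            "callback": Counter({"(": 2, ")": 2, "function": 1}),
        }
        minified = ["for", "(", "i", "<", "a", ".", "b", ")", "b", "(", ")"]
        print(predict_name(minified, "b", corpus))  # -> "callback"

    The real system replaces the overlap score with learned embeddings, which is what lets it generalize to usage contexts it has never seen verbatim.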

    UniASM: Binary Code Similarity Detection without Fine-tuning

    Binary code similarity detection (BCSD) is widely used in binary analysis tasks such as vulnerability search, malware detection, clone detection, and patch analysis. Recent studies have shown that learning-based binary code embedding models outperform traditional feature-based approaches. In this paper, we propose UniASM, a novel transformer-based binary code embedding model that learns representations of binary functions. We design two new training tasks that make the spatial distribution of the generated vectors more uniform, so the embeddings can be used directly for BCSD without any fine-tuning. In addition, we propose a new tokenization approach for binary functions that increases each token's semantic information while mitigating the out-of-vocabulary (OOV) problem. The experimental results show that UniASM outperforms state-of-the-art (SOTA) approaches on the evaluation dataset: the average recall@1 scores on cross-compiler, cross-optimization-level, and cross-obfuscation tasks are 0.72, 0.63, and 0.77 respectively, higher than all existing SOTA baselines. In a real-world task of searching for known vulnerabilities, UniASM outperforms all current baselines.
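
    As a hedged illustration of the workflow the abstract describes (embed each function once, then compare with no fine-tuning), the sketch below ranks candidate functions by cosine similarity between fixed-size vectors. A hashed bag-of-instructions embedding stands in for UniASM's transformer encoder, and the dimension, function names, and instruction sequences are invented examples, not data from the paper.

        import math
        import zlib

        DIM = 64  # embedding size (assumed; the real model's dimension differs)

        def embed(instructions):
            """Map an instruction sequence to a unit-length vector (toy encoder)."""
            vec = [0.0] * DIM
            for ins in instructions:
                vec[zlib.crc32(ins.encode()) % DIM] += 1.0  # hashed bag of tokens
            norm = math.sqrt(sum(x * x for x in vec)) or 1.0
            return [x / norm for x in vec]  # unit norm, so dot product = cosine

        def rank(query_vec, candidates):
            """Sort (name, vec) candidates by cosine similarity to the query."""
            sim = lambda vec: sum(a * b for a, b in zip(query_vec, vec))
            return sorted(candidates, key=lambda nv: sim(nv[1]), reverse=True)

        # Query function and a candidate pool, e.g. the same corpus recompiled
        # at a different optimization level (instructions are invented).
        query = embed(["push rbp", "mov rbp,rsp", "xor eax,eax", "ret"])
        candidates = [
            ("memcpy", embed(["mov rax,rdi", "rep movsb", "ret"])),
            ("noop",   embed(["push rbp", "xor eax,eax", "pop rbp", "ret"])),
        ]
        top1 = rank(query, candidates)[0][0]
        print("recall@1 hit:", top1 == "noop")  # True when the true match ranks first

    The recall@1 metric the abstract reports is exactly this check averaged over a corpus: the fraction of queries whose true counterpart ranks first.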