Towards General Loop Invariant Generation via Coordinating Symbolic Execution and Large Language Models

Abstract

Loop invariants, essential for program verification, are challenging to generate automatically, especially for programs that perform complex memory manipulations. Existing approaches for generating loop invariants rely on fixed sets or templates, which hampers their adaptability to real-world programs. Recent efforts have explored machine learning for loop invariant generation, but the lack of labeled data and the need for efficient generation remain obstacles. We consider that the advent of large language models (LLMs) presents a promising solution: an LLM can analyze the separation logic assertions produced by symbolic execution to infer loop invariants. To overcome the data scarcity issue, we propose a self-supervised learning paradigm to fine-tune the LLM, using the splitting and reassembly of predicates to create an auxiliary task and generate rich synthetic data for offline training. Meanwhile, the proposed interactive system between the LLM and traditional verification tools provides an efficient online querying process for unseen programs. Our framework readily extends to new data structures or multi-loop programs, since it only needs the definitions of the relevant separation logic predicates, aiming to bridge the gap between existing capabilities and the requirements of loop invariant generation in practical scenarios. Experiments across diverse memory-manipulating programs demonstrate the efficiency and effectiveness of our proposed method compared to the baselines.

Comment: Preprint, under review
