AI Chain on Large Language Model for Unsupervised Control Flow Graph Generation for Statically-Typed Partial Code
Control Flow Graphs (CFGs) are essential for visualizing, understanding and
analyzing program behavior. For statically-typed programming languages such as
Java, developers obtain CFGs using bytecode-based methods for compilable
code and Abstract Syntax Tree (AST)-based methods for partially uncompilable
code. However, explicit syntax errors during AST construction and implicit
semantic errors caused by bad coding practices can lead to behavioral loss and
deviation of CFGs. To address this issue, we propose a novel approach that
leverages the error-tolerant and understanding ability of pre-trained Large
Language Models (LLMs) to generate CFGs. Our approach involves a Chain of
Thought (CoT) with four steps: structure hierarchy extraction, nested code
block extraction, CFG generation of nested code blocks, and fusion of all
nested code blocks' CFGs. To address the limitations of the original CoT's
single-prompt approach (i.e., completing all steps in a single generative
pass), which can result in an "epic" prompt with hard-to-control behavior and
error accumulation, we break down the CoT into an AI chain with explicit
sub-steps. Each sub-step corresponds to a separate AI-unit, with an effective
prompt assigned to each unit for interacting with LLMs to accomplish a specific
purpose. Our experiments confirm that our method outperforms existing CFG
tools in terms of node and edge coverage, especially for incomplete or
erroneous code. We also conducted an ablation experiment and confirmed the
effectiveness of AI chain design principles: Hierarchical Task Breakdown, Unit
Composition, and Mix of AI Units and Non-AI Units. Our work opens up new
possibilities for building foundational software engineering tools based on
LLMs, as opposed to traditional program analysis methods.
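The AI-chain structure described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `call_llm` is a stub standing in for a real LLM backend, and the unit names and prompt wording are assumptions modeled on the four sub-steps named in the abstract.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM call; a production chain would
    # query an actual model API here.
    return f"<LLM output for: {prompt.splitlines()[0]}>"

class AIUnit:
    """One sub-step of the AI chain: a focused prompt plus one LLM call."""
    def __init__(self, name: str, template: str):
        self.name = name
        self.template = template

    def run(self, payload: str) -> str:
        return call_llm(self.template.format(input=payload))

# Each CoT sub-step becomes a separate AI unit with its own prompt,
# instead of one "epic" prompt completing all steps in a single pass.
chain = [
    AIUnit("structure_hierarchy", "Extract the structure hierarchy of:\n{input}"),
    AIUnit("nested_blocks", "Extract the nested code blocks from:\n{input}"),
    AIUnit("block_cfgs", "Generate a CFG for each nested block in:\n{input}"),
    AIUnit("fusion", "Fuse the per-block CFGs in:\n{input}"),
]

def generate_cfg(partial_code: str) -> str:
    result = partial_code
    for unit in chain:  # each unit's output feeds the next unit
        result = unit.run(result)
    return result
```

Breaking the chain into explicit units like this keeps each prompt small and lets individual sub-steps be tested or replaced (e.g., swapping an AI unit for a non-AI parser) without touching the rest of the pipeline.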