Guiding Likely Invariant Synthesis on Distributed Systems with Large Language Models

Abstract

Deductive verification is an approach in which the user either constructs a program invariant manually or learns it from program traces, and then uses the invariant for safety verification tasks. Most existing invariant synthesis tools still rely on model checking to certify invariants, use enumerative search to identify and refine candidate invariants, and depend on user insight to seed the set of candidate predicates that may appear in the final (learned) invariant. Meanwhile, pre-trained large language models (LLMs) have recently attracted considerable attention for tasks related to generating program code (e.g., from natural language), and there have been recent efforts to apply LLMs in the program verification context. In this paper, we investigate the capability of LLMs to synthesize program invariants by creating a specialized set of prompts. When LLM invariant synthesis with direct prompting fails, we introduce two revision frameworks that incorporate LLM calls. In the first framework, PSyn, the LLM suggests atomic predicates based on counterexamples, which are then used to reconstruct the invariants. The second framework, ISyn, uses the LLM itself to generate invariants within a revision loop. We evaluated our methods on 8 distributed systems modeled in Promela, verifying the invariants with Spin. Our results show that the state-of-the-art model GPT-o3, when used to generate invariants directly, is less effective than the symbolic invariant synthesis method RunVS. However, our integrated approach PSyn, which employs the LLM as a predicate prompter, significantly outperforms GPT-o3 and ISyn …
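To illustrate the kind of verification task the abstract describes, the following is a minimal, hypothetical sketch (not taken from the paper's benchmarks) of a Promela model with a candidate safety invariant expressed as an LTL claim, which Spin can check:

```promela
/* Hypothetical sketch: two workers share a counter.
   The guards are wrapped in atomic{} so the test and
   update happen as one step, making the invariant hold. */
byte count = 0;

active [2] proctype Worker() {
  do
  :: atomic { count < 2 -> count++ }
  :: atomic { count > 0 -> count-- }
  od
}

/* Candidate invariant: the counter never exceeds 2.
   Spin verifies this claim over all interleavings. */
ltl inv { [] (count <= 2) }
```

A model like this would typically be checked with `spin -a model.pml`, compiling the generated `pan.c`, and running the verifier with `./pan -a`; if the invariant fails, Spin emits a counterexample trace of the sort the PSyn framework feeds back to the LLM.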

Date
February 5, 2026
Authors
Yuan Xia, Aabha Shailesh Pingle, Deepayan Sur, Srivatsan Ravi, Mukund Raghothaman, Jyotirmoy V Deshmukh
Journal
Conference on Formal Methods in Computer-Aided Design (FMCAD) 2025
Pages
226