Proving Differential Privacy with Shadow Execution
Recent work on formal verification of differential privacy shows a trend
toward usability and expressiveness -- generating correctness proofs of
sophisticated algorithms while minimizing the annotation burden on programmers.
Sometimes, combining the two requires substantial changes to program logics:
one recent paper is able to verify Report Noisy Max automatically, but it
involves a complex verification system using customized program logics and
verifiers.
In this paper, we propose a new proof technique, called shadow execution, and
embed it into a language called ShadowDP. ShadowDP uses shadow execution to
generate proofs of differential privacy with very few programmer annotations
and without relying on customized logics and verifiers. In addition to
verifying Report Noisy Max, we show that it can verify a new variant of Sparse
Vector that reports the gap between some noisy query answers and the noisy
threshold. Moreover, ShadowDP reduces the complexity of verification: for all
of the algorithms we have evaluated, type checking and verification in total
takes at most 3 seconds, while prior work takes minutes on the same algorithms.
Comment: 23 pages, 12 figures, PLDI'1
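The two mechanisms named in the abstract can be illustrated with a minimal sketch. The noise scales below (2/epsilon for Report Noisy Max; 2/epsilon and 4k/epsilon for the gap-reporting Sparse Vector variant) follow common textbook presentations and are illustrative assumptions, not the paper's verified parameters.

```python
import random

def laplace(scale):
    # A Laplace(0, scale) sample, drawn as the difference of two exponentials.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def report_noisy_max(answers, epsilon):
    # Add Laplace noise to every query answer; release only the argmax index.
    noisy = [a + laplace(2 / epsilon) for a in answers]
    return max(range(len(noisy)), key=noisy.__getitem__)

def gap_sparse_vector(answers, threshold, epsilon, k):
    # Sparse Vector variant: for up to k above-threshold queries, report the
    # gap between the noisy query answer and the noisy threshold.
    noisy_t = threshold + laplace(2 / epsilon)
    gaps = []
    for i, a in enumerate(answers):
        noisy_a = a + laplace(4 * k / epsilon)
        if noisy_a >= noisy_t:
            gaps.append((i, noisy_a - noisy_t))
            if len(gaps) == k:
                break
    return gaps
```

ShadowDP's contribution is not these mechanisms themselves but type-checking such code to prove the released values (the index, the gaps) satisfy differential privacy.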
VST-A: A Foundationally Sound Annotation Verifier
An interactive program verification tool usually requires users to write
formal proofs in a theorem prover such as Coq or Isabelle, which is an obstacle
for most software engineers. In comparison, annotation verifiers can use
assertions in source files as hints for program verification, but they
themselves do not have a formal soundness proof.
In this paper, we demonstrate VST-A, a foundationally sound annotation
verifier for sequential C programs. On the one hand, users can write
higher-order assertions in C programs' comments. On the other hand, separation
logic proofs are generated in the backend, whose proof rules are formally
proved sound w.r.t. CompCert's Clight semantics. Residual proof goals in Coq
may be generated if some assertion entailments cannot be verified
automatically.
MoPS: A Modular Protection Scheme for Long-Term Storage
Current trends in technology, such as cloud computing, allow outsourcing the
storage, backup, and archiving of data. This provides efficiency and
flexibility, but also poses new risks for data security. In particular, it has
become crucial to develop protection schemes that ensure security even in the
long term, i.e., beyond the lifetime of keys, certificates, and cryptographic
primitives. However, all current solutions fail to provide optimal performance
for different application scenarios. Thus, in this work, we present MoPS, a
modular protection scheme to ensure authenticity and integrity for data stored
over long periods of time. MoPS does not come with any requirements regarding
the storage architecture and can therefore be used together with existing
archiving or storage systems. It supports a set of techniques which can be
plugged together, combined, and migrated in order to create customized
solutions that fulfill the requirements of different application scenarios in
the best possible way. As a proof of concept we implemented MoPS and provide
performance measurements. Furthermore, our implementation provides additional
features, such as guidance for non-expert users and export functionalities for
external verifiers.
Comment: Original Publication (in the same form): ASIACCS 201
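The migration idea, renewing a protection before its underlying primitive weakens, can be sketched with hash-based integrity proofs. The record layout and function names here are hypothetical illustrations, not MoPS's actual API, and real deployments would use signatures and timestamps rather than bare hashes.

```python
import hashlib

def protect(data, algo="sha256"):
    # Create an integrity proof for the data under the given hash algorithm.
    return {"algo": algo, "digest": hashlib.new(algo, data).hexdigest()}

def migrate(data, record, new_algo):
    # Renew the proof before the old algorithm weakens: verify the old proof,
    # then let the new digest cover both the data and the old proof, so
    # integrity evidence is preserved across the migration.
    old = hashlib.new(record["algo"], data).hexdigest()
    if old != record["digest"]:
        raise ValueError("integrity check failed before migration")
    combined = data + record["digest"].encode()
    return {"algo": new_algo,
            "digest": hashlib.new(new_algo, combined).hexdigest(),
            "prev": record}

def verify(data, record):
    # Walk the chain of proofs back to the original protection.
    while "prev" in record:
        prev = record["prev"]
        combined = data + prev["digest"].encode()
        if hashlib.new(record["algo"], combined).hexdigest() != record["digest"]:
            return False
        record = prev
    return hashlib.new(record["algo"], data).hexdigest() == record["digest"]
```

The chained records correspond to MoPS's notion of plugging together and migrating techniques: each migration step is a self-contained module whose output a later verifier can check independently.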
math-PVS: A Large Language Model Framework to Map Scientific Publications to PVS Theories
As artificial intelligence (AI) gains greater adoption in a wide variety of
applications, it has immense potential to contribute to mathematical discovery,
by guiding conjecture generation, constructing counterexamples, assisting in
formalizing mathematics, and discovering connections between different
mathematical areas, to name a few.
While prior work has leveraged computers for exhaustive mathematical proof
search, recent efforts based on large language models (LLMs) aspire to position
computing platforms as co-contributors in the mathematical research process.
Despite their current limitations in logic and mathematical tasks, there is
growing interest in melding theorem proving systems with foundation models.
This work investigates the applicability of LLMs in formalizing advanced
mathematical concepts and proposes a framework that can critically review and
check mathematical reasoning in research papers. Given the noted reasoning
shortcomings of LLMs, our approach synergizes the capabilities of proof
assistants, specifically PVS, with LLMs, enabling a bridge between textual
descriptions in academic papers and formal specifications in PVS. By harnessing
the PVS environment, coupled with data ingestion and conversion mechanisms, we
envision an automated process, called \emph{math-PVS}, to extract and formalize
mathematical theorems from research papers, offering an innovative tool for
academic review and discovery.
An Automated Analyzer for Financial Security of Ethereum Smart Contracts
At present, millions of Ethereum smart contracts are created per year and
attract financially motivated attackers. However, existing analyzers do not
meet the need to precisely analyze the financial security of large numbers of
contracts. In this paper, we propose and implement FASVERIF, an automated
analyzer for fine-grained analysis of smart contracts' financial security. On
the one hand, FASVERIF automatically generates models to be verified against
security properties of smart contracts. On the other hand, our analyzer
automatically generates the security properties themselves, which distinguishes
it from existing formal verifiers for smart contracts. As a result, FASVERIF
can automatically process the source code of smart contracts and uses formal
methods wherever possible to maximize its accuracy.
We evaluate FASVERIF on a vulnerabilities dataset by comparing it with other
automatic tools. Our evaluation shows that FASVERIF greatly outperforms
representative tools based on different technologies in both accuracy and
coverage of vulnerability types.
Challenges and Directions in Formalizing the Semantics of Modeling Languages
Developing software from models is a growing practice, and many model-based tools (e.g., editors, interpreters, debuggers, and simulators) exist to support model-driven engineering. Even though these tools facilitate the automation of software engineering tasks and activities, the tools themselves are typically engineered manually. However, many of them share a common semantic foundation centered on an underlying modeling language, which would make it possible to automate their development if the modeling language specification were formalized. Although there has been much work on formalizing programming languages, with many successful tools built on such formalisms, there has been little work on formalizing modeling languages for the purpose of automation. This paper discusses possible semantics-based approaches to the formalization of modeling languages and describes how such formalisms may be used to automate the construction of modeling tools.