Towards A Practical High-Assurance Systems Programming Language
Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about the correctness of such code adds yet another level of complexity, requiring considerable expertise in both systems programming and formal verification. Without tools that provide abstraction and automation, development can be extremely costly due to the sheer complexity of these systems and the nuances within them.
Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code.
To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof via a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which provides users with a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers into the verification process.
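A property-based test in this setting checks that a low-level implementation agrees with its purely functional specification on randomly generated inputs. The following Python sketch is purely illustrative (Cogent's actual framework tests compiler-produced C against Cogent specifications; the spec, implementation, and property below are invented):

```python
import random

# Illustrative refinement property: a low-level, imperative implementation
# should agree with its purely functional specification on all inputs.
# Both functions here are stand-ins invented for this sketch.

def spec_sum(xs):
    """Purely functional specification: the abstract semantics."""
    return sum(xs)

def impl_sum(xs):
    """Stand-in for a low-level implementation (e.g. generated C)."""
    acc = 0
    for x in xs:
        acc += x
    return acc

# Randomized check of the refinement property over many generated inputs.
random.seed(0)
for _ in range(1000):
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert impl_sum(xs) == spec_sum(xs)
print("1000 random cases passed")
```

A counterexample here would indicate a refinement failure before any formal proof is attempted, which is what makes such testing useful as a progressive step toward full verification.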
Formalizing Chemical Physics using the Lean Theorem Prover
Chemical theory can be made more rigorous using the Lean theorem prover, an
interactive theorem prover for complex mathematics. We formalize the Langmuir
and BET theories of adsorption, making each scientific premise clear and every
step of the derivations explicit. Lean's math library, mathlib, provides
formally verified theorems for infinite geometric series, which are central to
BET theory. While writing these proofs, Lean prompts us to include mathematical
constraints that were not originally reported. We also illustrate how Lean
flexibly enables the reuse of proofs that build on more complex theories
through the use of functions, definitions, and structures. Finally, we
construct scientific frameworks for interoperable proofs, by creating
structures for classical thermodynamics and kinematics, using them to formalize
gas law relationships like Boyle's Law and equations of motion underlying
Newtonian mechanics, respectively. This approach can be extended to other
fields, enabling the formalization of rich and complex theories in science and
engineering.
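As an informal sketch of the kind of derivation being formalized (using the standard textbook presentation of Langmuir's theory, not the paper's actual Lean code): at equilibrium, the rate of adsorption onto free surface sites balances the rate of desorption from occupied sites,

```latex
k_a P (1 - \theta) = k_d \theta
\quad\Longrightarrow\quad
\theta = \frac{K P}{1 + K P}, \qquad K = \frac{k_a}{k_d},
```

where \theta is the fractional surface coverage, P the gas pressure, and k_a, k_d the adsorption and desorption rate constants. In a formalization, each such premise and algebraic step becomes an explicit hypothesis or lemma that the theorem prover checks.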
Automating the Formal Verification of Software
Formally verified correctness is one of the most desirable properties of software systems. Despite great progress toward verification via interactive proof assistants such as Coq and Isabelle/HOL, it remains one of the most effort-intensive (and often prohibitively difficult) software development activities. Recent work has created tools that automatically synthesize proofs, either by reasoning over precomputed facts or by using machine learning to model proofs and then perform biased search through the proof space. However, the models in existing tools fail to capture the richness present in proofs, such as the information the programmer has access to when writing proofs and the natural language contained within variable names. Furthermore, these prior models take advantage neither of variations in the learning process nor of advances in large language models.
In this dissertation, I develop tools to improve proof synthesis and to enable fully automating more verification. I first present TacTok, a proof-synthesis tool that models proofs using both the partial proof written thus far and the semantics of the proof state. I then present Diva, a proof-synthesis tool that controls the learning process to produce a diverse set of models and, due to the unique nature of proof synthesis (the existence of the theorem prover, an oracle that infallibly judges a proof’s correctness), efficiently combines these models to improve the overall proving power. I then present Passport, a proof-synthesis tool that systematically explores different ways of encoding identifiers in proofs to improve synthesis. Finally, I present Baldur, a proof-synthesis tool that uses transformer-based pretrained large language models fine-tuned on proofs to generate and repair whole proofs at once, rather than one step at a time.
This dissertation contributes new ideas for improving automated proof synthesis and empirically demonstrates that the improvement is significant on large benchmarks consisting of open-source software projects.
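The oracle-guided combination of diverse models can be sketched as follows. Everything here (the prover stub, the model interface, the example tactic strings) is a hypothetical illustration of the idea, not the actual Diva implementation:

```python
# Sketch: combining a diverse set of proof-synthesis models with a theorem
# prover acting as an infallible oracle. Because the oracle rejects every
# incorrect proof, the ensemble can only gain proving power by adding models.

def prover_accepts(proof: str) -> bool:
    """Stand-in for the theorem prover oracle (e.g. invoking Coq)."""
    return proof == "intros. reflexivity."  # toy ground truth for this sketch

def synthesize(models, theorem):
    """Query each model in turn; return the first proof the oracle accepts."""
    for model in models:
        candidate = model(theorem)
        if prover_accepts(candidate):
            return candidate
    return None  # no model produced an accepted proof

# Two "models" with different biases: only the second proves this theorem,
# so the ensemble succeeds where a single model would fail.
models = [lambda thm: "auto.", lambda thm: "intros. reflexivity."]
print(synthesize(models, "forall n, n = n"))  # prints "intros. reflexivity."
```

The key design point is that false positives are impossible: a proof is only ever reported after the oracle validates it, so model diversity translates directly into more theorems proven.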
Stochastic Transport in Upper Ocean Dynamics
This open access proceedings volume brings together selected, peer-reviewed contributions presented at the Stochastic Transport in Upper Ocean Dynamics (STUOD) 2021 Workshop, held virtually and in person at Imperial College London, UK, September 20–23, 2021. The STUOD project is supported by an ERC Synergy Grant and led by Imperial College London, the National Institute for Research in Computer Science and Automatic Control (INRIA) and the French Research Institute for Exploitation of the Sea (IFREMER). The project aims to deliver new capabilities for assessing variability and uncertainty in upper ocean dynamics. It will provide decision makers with a means of quantifying the effects of local patterns of sea level rise, heat uptake, carbon storage and change of oxygen content and pH in the ocean. Its multimodal monitoring will enhance the scientific understanding of marine debris transport, tracking of oil spills and accumulation of plastic in the sea. All topics of these proceedings are essential to the scientific foundations of oceanography, which has a vital role in climate science. Studies convened in this volume focus on a range of fundamental areas, including: observations at high resolution of upper ocean properties such as temperature, salinity, topography, wind, waves and velocity; large-scale numerical simulations; data-based stochastic equations for upper ocean dynamics that quantify simulation error; and stochastic data assimilation to reduce uncertainty. These fundamental subjects in modern science and technology are urgently required in order to meet the challenges of climate change faced today by human society. This proceedings volume represents a lasting legacy of crucial scientific expertise to help meet this ongoing challenge, for the benefit of academics and professionals in pure and applied mathematics, computational science, data analysis, data assimilation and oceanography.
LIPIcs, Volume 277, GIScience 2023, Complete Volume
ATHENA Research Book, Volume 2
ATHENA European University is an association of nine higher education institutions with the mission of promoting excellence in research and innovation by enabling international cooperation. The acronym ATHENA stands for Association of Advanced Technologies in Higher Education. Partner institutions are from France, Germany, Greece, Italy, Lithuania, Portugal and Slovenia: University of Orléans, University of Siegen, Hellenic Mediterranean University, Niccolò Cusano University, Vilnius Gediminas Technical University, Polytechnic Institute of Porto and University of Maribor. In 2022, two institutions joined the alliance: the Maria Curie-Skłodowska University from Poland and the University of Vigo from Spain. Also in 2022, an institution from Austria joined the alliance as an associate member: Carinthia University of Applied Sciences. This research book presents a selection of the research activities of the ATHENA partner institutions. It contains an overview of the research activities of individual members, a selection of the most important bibliographic works of members, peer-reviewed student theses, a descriptive list of ATHENA lectures and reports from individual working sections of the ATHENA project. The ATHENA Research Book provides a platform that encourages collaborative and interdisciplinary research projects by advanced and early career researchers.
12th International Conference on Geographic Information Science: GIScience 2023, September 12–15, 2023, Leeds, UK
Modelling uncertainties for measurements of the H → γγ Channel with the ATLAS Detector at the LHC
The Higgs boson to diphoton (H → γγ) branching ratio is only 0.227 %, but this final state has yielded some of the most precise measurements of the particle. As measurements of the Higgs boson become increasingly precise, greater importance is placed on the factors that constitute the uncertainty. Reducing the effects of these uncertainties requires an understanding of their causes. The research presented in this thesis aims to illuminate how uncertainties on simulation modelling are determined, and proposes novel techniques for deriving them.
An upgrade of the FastCaloSim tool is described; this tool simulates events in the ATLAS calorimeter at a rate far exceeding that of the nominal detector simulation, Geant4. The integration of a method that allows the toolbox to emulate the accordion geometry of the liquid-argon calorimeters is detailed. The upgraded tool allows larger samples to be produced while using significantly fewer computing resources.
A measurement of the total Higgs boson production cross-section multiplied by the diphoton branching ratio (σ × Bγγ) is presented; its value was determined to be (σ × Bγγ)obs = 127 ± 7 (stat.) ± 7 (syst.) fb, in agreement with the Standard Model prediction. The signal and background shape modelling is described; the contribution of the background modelling uncertainty to the total uncertainty ranges from 2.4 % to 18 %, depending on the Higgs boson production mechanism.
A method is detailed for estimating the number of events required in a Monte Carlo background sample to model the shape. It was found that the nominal γγ background sample needed to be enlarged by a factor of 3.60 to adequately model the background at a confidence level of 68 %, or by a factor of 7.20 at a confidence level of 95 %. Based on this estimate, 0.5 billion additional simulated events were produced, substantially reducing the background modelling uncertainty.
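As background on why enlarging a simulated sample reduces its statistical contribution to the modelling uncertainty: under Poisson statistics, the relative uncertainty of a bin count with N events scales as 1/√N. A minimal illustration of that scaling (not the thesis's actual estimator, which accounts for the full shape-modelling procedure):

```python
import math

# Poisson statistics: a histogram bin with N simulated events has an
# absolute uncertainty of sqrt(N), hence a relative uncertainty 1/sqrt(N).
# Shrinking the relative uncertainty by a factor f therefore requires
# f**2 times as many events.

def rel_uncertainty(n_events):
    """Relative statistical uncertainty of a bin containing n_events."""
    return 1.0 / math.sqrt(n_events)

print(rel_uncertainty(10_000))   # prints 0.01
print(rel_uncertainty(40_000))   # prints 0.005: 4x the events halves it
```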
A technique is detailed for emulating the effects of differences between Monte Carlo event generators using multivariate reweighting. The technique is used to estimate the event-generator uncertainty on the signal modelling of tHqb events, improving the reliability of the estimated tHqb production cross-section. The multivariate reweighting technique is then used to estimate the generator modelling uncertainties on background V γγ samples for the first time. The estimated uncertainties were found to be covered by the currently assumed background modelling uncertainty
- …
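The thesis's multivariate reweighting is not reproduced here, but the underlying idea, weighting the events of one generator by an estimated density ratio so that they mimic another generator, can be sketched in one dimension. All distributions and numbers below are invented for illustration:

```python
import numpy as np

# Toy one-dimensional sketch of generator reweighting: per-bin weights are
# the ratio of the target generator's density to the nominal generator's.
# Real analyses use multivariate techniques over many kinematic variables.

rng = np.random.default_rng(0)
nominal = rng.exponential(scale=1.0, size=100_000)      # "generator A" events
alternative = rng.exponential(scale=1.2, size=100_000)  # "generator B" events

bins = np.linspace(0.0, 5.0, 26)
dens_a, _ = np.histogram(nominal, bins=bins, density=True)
dens_b, _ = np.histogram(alternative, bins=bins, density=True)

# Per-bin weight: ratio of target to source densities (1.0 for empty bins).
with np.errstate(divide="ignore", invalid="ignore"):
    ratio = np.where(dens_a > 0, dens_b / dens_a, 1.0)

# Assign each nominal event the weight of the bin it falls into.
idx = np.clip(np.digitize(nominal, bins) - 1, 0, len(ratio) - 1)
weights = ratio[idx]

# After reweighting, the nominal sample should resemble the alternative one,
# so its weighted mean shifts toward the alternative sample's mean.
print(round(float(np.average(nominal, weights=weights)), 2))
```

Comparing the reweighted nominal sample to the true alternative sample then gives a handle on the generator modelling difference without simulating a second full sample.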