How Hard is Weak-Memory Testing?
Weak-memory models are standard formal specifications of concurrency across
hardware, programming languages, and distributed systems. A fundamental
computational problem is consistency testing: is the observed execution of a
concurrent program consistent with the specification of the underlying
system? The problem has been studied extensively across Sequential Consistency
(SC) and weak memory, and proven to be NP-complete when some aspect of the
input (e.g., number of threads/memory locations) is unbounded. This
unboundedness has left a natural question open: are there efficient
parameterized algorithms for testing?
The main contribution of this paper is a deep hardness result for consistency
testing under many popular weak-memory models: the problem remains NP-complete
even in its bounded setting, where candidate executions contain a bounded
number of threads, memory locations, and values. This hardness spreads across
several Release-Acquire variants of C11, a popular variant of its Relaxed
fragment, popular Causal Consistency models, and the POWER architecture. To our
knowledge, this is the first result that fully exposes the hardness of
weak-memory testing and proves that the problem admits no parameterization
under standard input parameters. It also yields a computational separation of
these models from SC, x86-TSO, PSO, and Relaxed, for which bounded consistency
testing is either known (for SC), or shown here (for the rest), to be in
polynomial time.
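The consistency-testing problem for SC can be pictured with a brute-force checker (a minimal sketch, not the paper's algorithm; the event encoding and zero-initialized memory are assumptions made here for illustration):

```python
from itertools import permutations

def sc_consistent(threads):
    """Naive SC consistency test: does some interleaving of the
    per-thread event sequences explain every read?
    Events are ('w', loc, val) or ('r', loc, val)."""
    events = [(t, i) for t, seq in enumerate(threads) for i in range(len(seq))]

    def explains(order):
        mem = {}
        for t, i in order:
            kind, loc, val = threads[t][i]
            if kind == 'w':
                mem[loc] = val
            elif mem.get(loc, 0) != val:  # unwritten locations read as 0
                return False
        return True

    def respects_program_order(order):
        pos = {e: k for k, e in enumerate(order)}
        return all(pos[(t, i)] < pos[(t, i + 1)]
                   for t, seq in enumerate(threads)
                   for i in range(len(seq) - 1))

    return any(respects_program_order(o) and explains(o)
               for o in permutations(events))

# Store-buffering litmus outcome: both threads write 1, then each reads
# the other's location as 0 -- observable on x86-TSO, forbidden under SC.
sb = [[('w', 'x', 1), ('r', 'y', 0)],
      [('w', 'y', 1), ('r', 'x', 0)]]
```

The exponential enumeration is exactly what the hardness results say cannot be avoided in general for the weak-memory models listed above, even with bounded threads, locations, and values.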
Overcoming Memory Weakness with Unified Fairness
We consider the verification of liveness properties for concurrent programs
running on weak memory models. To that end, we identify notions of fairness
that preclude demonic non-determinism, are motivated by practical observations,
and are amenable to algorithmic techniques. We provide both logical and
stochastic definitions of our fairness notions and prove that they are
equivalent in the context of liveness verification. In particular, we show that
our fairness allows us to reduce the liveness problem (repeated control state
reachability) to the problem of simple control state reachability. We show that
this is a general phenomenon by developing a uniform framework which serves as
the formal foundation of our fairness definition and can be instantiated to a
wide landscape of memory models. These models include SC, TSO, PSO,
(Strong/Weak) Release-Acquire, Strong Coherence, FIFO-consistency, and RMO.
Comment: 32 pages. To appear in Proc. 35th International Conference on
Computer Aided Verification (CAV) 2023.
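In the finite-state setting, the target problem of the reduction, simple control-state reachability, is an ordinary graph search; a minimal sketch over a hypothetical transition system (the states and edges below are invented for illustration):

```python
from collections import deque

def reachable(initial, target, successors):
    """Breadth-first search for simple control-state reachability."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if state == target:
            return True
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Hypothetical transition system: states are (control, flag) pairs.
edges = {
    ('init', 0): [('loop', 0)],
    ('loop', 0): [('loop', 0), ('loop', 1)],  # the flag may eventually be set
    ('loop', 1): [('done', 1)],
    ('done', 1): [],
}
```

The paper's contribution is that, under its fairness notions, the harder liveness question (repeated reachability) collapses to this simple search, uniformly across the listed memory models.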
The Decidability of Verification under Promising 2.0
In PLDI'20, Lee et al. introduced the \emph{promising} semantics PS 2.0 of
C++ concurrency, which captures most of the common program transformations
while satisfying the DRF guarantee. The reachability problem for finite-state
programs under PS 2.0 with only release-acquire accesses is already known to be
undecidable. Therefore, we address, in this paper, the reachability problem for
programs running under PS 2.0 with relaxed accesses together with promises. We
show that this problem is undecidable even in the case where the input program
has a finite state space. Given this undecidability result, we consider the
fragment of PS 2.0 with only relaxed accesses and a bounded number of
promises. We show that under this restriction, reachability is decidable,
albeit very expensive: it is non-primitive recursive. Given this high
complexity with a bounded number of promises and the undecidability result
for the RA fragment of PS 2.0, we consider a bounded version of the
reachability problem. To this end, we bound both the number of promises and
the "view-switches", i.e., the number of times the processes may switch
their local views of the global memory. We
provide a code-to-code translation from an input program under PS 2.0, with
relaxed and release-acquire memory accesses along with promises, to a program
under SC. This leads to a reduction of the bounded reachability problem under
PS 2.0 to the bounded context-switching problem under SC. We have implemented a
prototype tool and tested it on a set of benchmarks, demonstrating that many
bugs in programs can be found using a small bound.
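The bounded context-switching analysis the problem is reduced to can be pictured with a naive schedule enumerator (a sketch under an assumed encoding of threads as instruction lists, not the paper's code-to-code translation):

```python
def bounded_runs(threads, max_switches):
    """Enumerate interleavings of per-thread instruction sequences that
    use at most `max_switches` context switches (naive recursive sketch)."""
    n = len(threads)
    results = []

    def explore(pcs, current, switches, trace):
        if all(pcs[t] == len(threads[t]) for t in range(n)):
            results.append(tuple(trace))
            return
        for t in range(n):
            if pcs[t] == len(threads[t]):
                continue  # thread t has finished
            cost = 0 if current is None or t == current else 1
            if switches + cost > max_switches:
                continue  # pruned: would exceed the switch budget
            pcs[t] += 1
            trace.append(threads[t][pcs[t] - 1])
            explore(pcs, t, switches + cost, trace)
            trace.pop()
            pcs[t] -= 1

    explore([0] * n, None, 0, [])
    return results
```

With two 2-instruction threads, a budget of one switch admits only the two run-to-completion schedules, while a budget of three admits all six interleavings; the pruning is what makes bounded analyses cheap in practice.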
Correct-by-Construction Reinforcement Learning of Cardiac Pacemakers from Duration Calculus Requirements
As the complexity of pacemaker devices continues to grow, the importance of formally capturing their functional correctness requirements cannot be overstated. The pacemaker system specification document by \emph{Boston Scientific} provides a widely accepted set of specifications for pacemakers.
As these specifications are written in a natural language, they are not amenable for automated verification, synthesis, or reinforcement learning of pacemaker systems. This paper presents a formalization of these requirements for a dual-chamber pacemaker in \emph{duration calculus} (DC), a highly expressive real-time specification language.
The proposed formalization allows us to automatically translate pacemaker requirements into executable specifications as stopwatch automata, which can be used to enable simulation, monitoring, validation, verification and automatic synthesis of pacemaker systems.
The cyclic nature of the pacemaker-heart closed-loop system results in DC requirements that compile to a decidable subclass of stopwatch automata. We present shield reinforcement learning (shield RL), a shield-synthesis-based reinforcement learning algorithm, which automatically constructs safety envelopes from DC specifications.
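The shielding idea can be sketched as a wrapper that filters the learner's actions through a safety envelope (a minimal illustration with a made-up timing rule; in the paper, the envelopes are derived from the DC specifications, not hand-written):

```python
def shielded_step(state, proposed_action, safe_actions):
    """Minimal shield sketch: pass the learner's action through unless the
    safety envelope (the `safe_actions` predicate) rules it out, in which
    case fall back to an action the envelope permits."""
    allowed = safe_actions(state)
    if proposed_action in allowed:
        return proposed_action
    return next(iter(allowed))  # override with any permitted action

# Hypothetical pacemaker-style envelope: if too much time has elapsed since
# the last ventricular event, pacing is the only safe action.
def envelope(state):
    elapsed = state['ms_since_ventricular_event']
    return {'pace'} if elapsed >= 1000 else {'pace', 'wait'}
```

The point of shield RL is that the learning agent can explore freely, while every action it actually executes has already been checked against the formal safety requirement.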