IST Austria Thesis
In this thesis we present a computer-aided programming approach to concurrency. Our approach helps the programmer by automatically fixing concurrency-related bugs, i.e. bugs that occur when the program is executed using an aggressive preemptive scheduler, but not when using a non-preemptive (cooperative) scheduler. Bugs are program behaviours that are incorrect w.r.t. a specification. We consider both user-provided explicit specifications in the form of assertion statements in the code and an implicit specification. The implicit specification is inferred from the non-preemptive behaviour. Consider the sequences of calls that the program makes to an external interface. The implicit specification requires that any such sequence produced under a preemptive scheduler should be included in the set of sequences produced under a non-preemptive scheduler.

We consider several semantics-preserving fixes that go beyond the atomic sections typically explored in the synchronisation synthesis literature. Our synthesis is able to place locks, barriers and wait-signal statements and, last but not least, to reorder independent statements. The latter may be useful if a thread is released too early, e.g., before some initialisation is completed. We guarantee that our synthesis does not introduce deadlocks and that the synchronisation inserted is optimal w.r.t. a given objective function. We dub our solution trace-based synchronisation synthesis; it is loosely based on counterexample-guided inductive synthesis (CEGIS). The synthesis works by discovering a trace that is incorrect w.r.t. the specification and identifying the ordering constraints crucial to triggering the specification violation. Synchronisation may be placed immediately (greedy approach) or delayed until all incorrect traces are found (non-greedy approach). For the non-greedy approach we construct a set of global constraints over synchronisation placements. Each model of the global constraint set corresponds to a correctness-ensuring synchronisation placement. The placement that is optimal w.r.t. the given objective function is chosen as the synchronisation solution.

We evaluate our approach on a number of realistic (albeit simplified) Linux device-driver benchmarks. The benchmarks are versions of the drivers with known concurrency-related bugs. For the experiments with an explicit specification, we added assertions that detect these bugs. Device drivers lend themselves to implicit specification, where the device and the operating system are the external interfaces. Our experiments demonstrate that our synthesis method is precise and efficient. We implemented objective functions for coarse-grained and fine-grained locking and observed that different synchronisation placements are produced for our experiments, favouring, e.g., a minimal number of synchronisation operations or maximum concurrency.
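To make the fix classes concrete, the following is a minimal sketch in C with pthreads, not taken from the thesis or its benchmarks; the names device_ready, init_device and use_device are hypothetical. It illustrates the "thread released too early" pattern: under a non-preemptive scheduler the initialisation completes before the worker runs, while a preemptive scheduler may start the worker first; a wait-signal fix restores the intended ordering.

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical shared state; names are illustrative only. */
static bool device_ready = false;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t ready_cv = PTHREAD_COND_INITIALIZER;

static void init_device(void) { /* ... set up the device ... */ }
static void use_device(void)  { /* ... assumes init_device() has run ... */ }

static void *init_thread(void *arg) {
    (void)arg;
    init_device();
    pthread_mutex_lock(&lock);
    device_ready = true;              /* announce that initialisation is done */
    pthread_cond_signal(&ready_cv);   /* wake a waiting worker */
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *worker_thread(void *arg) {
    (void)arg;
    /* Without this wait, a preemptive scheduler may run use_device()
     * before initialisation has completed (the bug); the wait-signal
     * fix blocks the worker until init_thread signals readiness. */
    pthread_mutex_lock(&lock);
    while (!device_ready)
        pthread_cond_wait(&ready_cv, &lock);
    pthread_mutex_unlock(&lock);
    use_device();
    return NULL;
}

int main(void) {
    pthread_t init, worker;
    pthread_create(&worker, NULL, worker_thread, NULL);
    pthread_create(&init, NULL, init_thread, NULL);
    pthread_join(worker, NULL);
    pthread_join(init, NULL);
    return 0;
}
```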
Verification and Enforcement of Safe Schedules for Concurrent Programs
Automated software verification can prove the correctness of a
program with respect to a given specification and can provide
valuable support in the difficult task of ensuring the quality of
large software systems. However, the automated verification of
concurrent software can be particularly challenging due to the vast
complexity caused by non-deterministic scheduling.
This thesis is concerned with techniques that reduce the complexity
of concurrent programs in order to ease the verification task. We
approach this problem from two orthogonal directions: state space
reduction and reduction of non-determinism in executions of
concurrent programs.
Following the former direction, we present an algorithm for dynamic
partial-order reduction, a state space reduction technique that
avoids the verification of redundant executions. Our algorithm,
EPOR, eagerly creates schedules for program fragments. In
comparison to other dynamic partial-order reduction algorithms, it
avoids redundant race and dependency checks. Our experiments show
that EPOR runs considerably faster than a state-of-the-art
algorithm, which in several cases makes it possible to analyze
programs with more threads within a given timeout.
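The independence that partial-order reduction exploits can be illustrated with a small C/pthreads program. This is only an illustration of the underlying idea, not the EPOR algorithm itself, and the variable names are hypothetical: writes to disjoint variables commute, so only one of the two interleavings needs to be explored, whereas conflicting writes to the same variable do not commute and both orders must be checked.

```c
#include <pthread.h>

int x = 0, y = 0, z = 0;

/* Accesses to x and y are independent: running thread_a before
 * thread_b or vice versa reaches the same state, so a partial-order
 * reduction explores only one of the two interleavings. */
void *thread_a(void *arg) { (void)arg; x = 1; return NULL; }
void *thread_b(void *arg) { (void)arg; y = 1; return NULL; }

/* Accesses to z are dependent (they race): the final value of z
 * differs between the two orders, so both must be explored. */
void *thread_c(void *arg) { (void)arg; z = 1; return NULL; }
void *thread_d(void *arg) { (void)arg; z = 2; return NULL; }

int main(void) {
    pthread_t a, b, c, d;
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_create(&c, NULL, thread_c, NULL);
    pthread_create(&d, NULL, thread_d, NULL);
    pthread_join(a, NULL); pthread_join(b, NULL);
    pthread_join(c, NULL); pthread_join(d, NULL);
    return 0;
}
```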
In the latter direction, we present a formal framework for using
incomplete verification results to extract safe schedulers. As
incomplete verification results do not need to prove the correctness
of all possible executions of a program, their complexity can be
significantly lower than that of complete verification results.
Hence, they can be obtained faster. We constrain the scheduling of
programs but
not their inputs in order to preserve their full functionality. In
our framework, executions under the scheduling constraints of an
incomplete verification result are safe, deadlock-free, and fair. We
instantiate our framework with the Impact model checking algorithm
and find in our evaluation that it can be used to model check
programs that are intractable for monolithic model checkers,
synthesize synchronization via assume statements, and
guarantee fair executions.
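The following sketch hints at what synchronization expressed via assume statements might look like at the level of verifier input; it is an illustration under assumptions, not the encoding used in the thesis. __VERIFIER_assume and __VERIFIER_error follow the SV-COMP convention, and the shared flag `initialized` is hypothetical. The assume statement restricts the interleavings the model checker has to prove safe; safely executing the program then requires enforcing a schedule under which the assumption holds.

```c
#include <pthread.h>

/* Intended as input to a model checker; SV-COMP-style intrinsics. */
extern void __VERIFIER_assume(int cond);
extern void __VERIFIER_error(void);

int initialized = 0;   /* illustrative shared flag */
int data = 0;

void *producer(void *arg) {
    (void)arg;
    data = 42;
    initialized = 1;
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    /* Scheduling constraint expressed as an assumption: the verifier
     * only considers executions in which the producer has already set
     * the flag at this point. A proof under this assumption is an
     * incomplete verification result. */
    __VERIFIER_assume(initialized == 1);
    if (data != 42)
        __VERIFIER_error();   /* unreachable under the assumption */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```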
In order to safely execute a program within the set of executions
covered by an incomplete verification, scheduling needs to be
constrained. We discuss how to extract and encode schedules from
incomplete verification results, for both finite and infinite
executions, and how to efficiently enforce scheduling constraints,
both by reducing the time needed to check whether the next event is
permitted to execute and by executing independent events
concurrently (by applying partial-order reduction).
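As a sketch of one way scheduling constraints could be enforced at run time (assumed infrastructure, not the implementation from the thesis, which also handles partial-order constraints and infinite executions), each instrumented event asks a small scheduler for permission before executing. Here the schedule is simplified to a fixed total order over hypothetical event ids.

```c
#include <pthread.h>

/* A fixed total order of event ids stands in for a schedule; this
 * sketch deliberately ignores the weaker, partial-order constraints
 * discussed above. */
static const int schedule[] = {0, 1, 2};
static const int schedule_len = 3;
static int next = 0;   /* index of the next permitted event */

static pthread_mutex_t sm = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  sc = PTHREAD_COND_INITIALIZER;

/* Called by instrumented code before executing event `id`: blocks
 * until the schedule permits that event, then advances the turn. */
static void await_turn(int id) {
    pthread_mutex_lock(&sm);
    while (next < schedule_len && schedule[next] != id)
        pthread_cond_wait(&sc, &sm);
    if (next < schedule_len)
        next++;                        /* this event may now run */
    pthread_cond_broadcast(&sc);       /* wake threads waiting for their turn */
    pthread_mutex_unlock(&sm);
}

static void *worker(void *arg) {
    int id = *(int *)arg;
    await_turn(id);                    /* look up permission for this event */
    /* ... the actual event, e.g. a shared-memory access ... */
    return NULL;
}

int main(void) {
    pthread_t t[3];
    int ids[3] = {0, 1, 2};
    for (int i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```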
A drawback of enforcing scheduling constraints is a potential
overhead in the execution time. However, in several cases,
constrained executions turned out to be even faster than
unconstrained executions. Our experimental results show that
iteratively relaxing a schedule can significantly reduce this
overhead. Hence, it is possible to adjust the incurred execution-time
overhead and find a sweet spot with respect to the amount of effort
spent on creating schedules (i.e., the duration of verification).
Interestingly, we found cases in which a much earlier reduction of the
execution-time overhead is obtained by choosing favorable scheduling
constraints, which suggests that execution-time performance does not
depend simply on the number of scheduling constraints but, to a large
extent, also on their structure.