Preventing Atomicity Violations with Contracts
Software developers are expected to protect concurrent accesses to shared
regions of memory with some mutual exclusion primitive that ensures atomicity
for a sequence of program statements. This approach prevents data races but
may fail to provide all necessary correctness properties. The composition of
correlated atomic operations without further synchronization may cause
atomicity violations. Atomicity violations may be avoided by grouping the
correlated atomic regions in a single larger atomic scope. Concurrent programs
are particularly prone to atomicity violations when they use services provided
by third-party packages or modules, since the programmer may fail to identify
which services are correlated. In this paper we propose to use contracts for
concurrency, where the developer of a module writes a set of contract terms
that specify which methods are correlated and must be executed in the same
atomic scope. These contracts are then used to verify the correctness of the
main program with respect to the usage of the module(s). If a contract is well
defined and complete, and the main program respects it, then the program is
safe from atomicity violations with respect to that module. We also propose a
static-analysis-based methodology to verify contracts for concurrency, which we
applied to some real-world software packages. The bug we found in Tomcat 6.0
was immediately acknowledged and corrected by its development team.
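The kind of violation the contracts target fits in a few lines of Java. The sketch below is illustrative (the map and method names are not from the paper): each ConcurrentHashMap call is individually atomic, yet their composition is not, and a contract term pairing the two correlated methods would flag the unsafe client.

```java
import java.util.concurrent.ConcurrentHashMap;

public class ContractExample {
    // Each ConcurrentHashMap method is individually atomic, but the
    // check-then-act composition below is not: another thread may insert
    // the key between containsKey and put, so put can overwrite its value.
    static void unsafeInit(ConcurrentHashMap<String, Integer> m, String k) {
        if (!m.containsKey(k)) {   // correlated call 1
            m.put(k, 0);           // correlated call 2: atomicity-violation window
        }
    }

    // A contract term such as "containsKey and put must execute in the same
    // atomic scope" would flag unsafeInit; the fix groups both operations
    // into one atomic call.
    static void safeInit(ConcurrentHashMap<String, Integer> m, String k) {
        m.putIfAbsent(k, 0);       // single atomic check-then-act
    }

    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        safeInit(m, "hits");
        m.merge("hits", 1, Integer::sum); // atomic increment
        safeInit(m, "hits");              // must not reset the counter
        System.out.println(m.get("hits")); // prints 1
    }
}
```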
Maintaining the correctness of transactional memory programs
Dissertation submitted for the degree of Doctor in Informatics Engineering
This dissertation addresses the challenge of maintaining the correctness of transactional memory programs while improving their parallelism with small transactions and relaxed isolation levels.
The efficiency of transactional memory systems depends directly on the level of parallelism, which in turn depends on the conflict rate. A high conflict rate between memory transactions can be addressed by reducing the scope of transactions, but this approach may make the application prone to atomicity violations. Another way to address this issue is to ignore some of the conflicts by using a relaxed isolation level, such as snapshot isolation, at the cost of introducing write-skew serialization anomalies that break the consistency guarantees provided by a stronger property, such as opacity.
In order to tackle the correctness issues raised by atomicity violations and write-skew anomalies, we propose two static analysis techniques: one based on a novel static analysis algorithm that works on a dependency graph of program variables and detects atomicity violations; and a second one based on a shape analysis technique supported by separation logic augmented with heap path expressions, a novel representation based on sequences of heap dereferences, that certifies that a transactional memory program executing under snapshot isolation is free from write-skew anomalies.
The evaluation of the runtime execution of a transactional memory algorithm using snapshot
isolation requires a framework that allows an efficient implementation of a multi-version algorithm and, at the same time, enables its comparison with other existing transactional memory algorithms. In the Java programming language there was no framework satisfying both these requirements. Hence, we extended an existing software transactional memory framework that already supported efficient implementations of some transactional memory algorithms, to also
support the efficient implementation of multi-version algorithms. The key insight for this extension is the support for storing the transactional metadata adjacent to memory locations. We illustrate the benefits of our approach by analyzing its impact with both single- and multi-version transactional memory algorithms using several transactional workloads.
Fundação para a Ciência e Tecnologia - PhD research grant SFRH/BD/41765/2007, and research projects Synergy-VM (PTDC/EIA-EIA/113613/2009) and RepComp (PTDC/EIA-EIA/108963/2008).
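The write-skew anomaly mentioned above can be sketched deterministically. This is a minimal single-threaded simulation of two transactions sharing a snapshot and guarding the invariant x + y >= 0; the account names and amounts are invented for illustration, and no real STM is involved.

```java
public class WriteSkewDemo {
    // Simulates two transactions under snapshot isolation (SI).
    // Invariant: x + y >= 0. Each transaction validates the invariant
    // against its snapshot, then withdraws from a different variable.
    // Because their write sets {x} and {y} are disjoint, SI detects no
    // conflict and lets both commit, breaking the invariant.
    static int finalBalance(int x, int y, int withdrawal) {
        int snapX = x, snapY = y;                        // shared starting snapshot
        boolean t1Ok = snapX - withdrawal + snapY >= 0;  // T1 checks, will write x
        boolean t2Ok = snapX + snapY - withdrawal >= 0;  // T2 checks, will write y
        if (t1Ok) x -= withdrawal;                       // both commits succeed:
        if (t2Ok) y -= withdrawal;                       // disjoint writes, no conflict
        return x + y;
    }

    public static void main(String[] args) {
        // Each check passes in isolation (50 + 50 - 80 = 20 >= 0), yet the
        // combined effect leaves x + y = -60: a write-skew anomaly that a
        // serializable (or opaque) execution would have prevented.
        System.out.println(finalBalance(50, 50, 80)); // prints -60
    }
}
```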
Preventing atomicity violations with contracts
Concurrent programming is a difficult and error-prone task because the programmer
must reason about multiple threads of execution and their possible interleavings. A concurrent program must synchronize the concurrent accesses to shared memory regions,
but this is not enough to prevent all anomalies that can arise in a concurrent setting. The programmer can misidentify the scope of the regions of code that need to be atomic, resulting in atomicity violations and failing to ensure the correct behavior of the program.
Executing a sequence of atomic operations may lead to incorrect results when these operations are correlated. In this case, the programmer may be required to enforce the
sequential execution of those operations as a whole to avoid atomicity violations. This
situation is especially common when the developer makes use of services from third-party packages or modules.
This thesis proposes a methodology, based on the design by contract methodology,
to specify which sequences of operations must be executed atomically. We developed an
analysis that statically verifies that a client of a module respects its contract, allowing the programmer to identify the source of possible atomicity violations.
Fundação para a Ciência e Tecnologia - research project Synergy-VM (PTDC/EIA-EIA/113613/2009).
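One way a module author might state such a contract is as machine-readable metadata on the module's interface. The annotation below is a hypothetical sketch, not the thesis's actual notation: it records which call sequences are correlated and must run in one atomic scope, so a static analysis could read the term and check client call sites against it.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class ContractSketch {
    // Hypothetical contract annotation; the string names the module methods
    // that are correlated and must execute within a single atomic scope.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Contract { String value(); }

    // Example module: any open must be followed by reads/writes and a close,
    // all inside one atomic scope.
    @Contract("open (read | write)* close")
    interface FileModule {
        void open();
        int read();
        void write(int b);
        void close();
    }

    public static void main(String[] args) {
        // A static checker could retrieve the term via reflection and verify
        // that every client groups these calls in one atomic region.
        String term = FileModule.class.getAnnotation(Contract.class).value();
        System.out.println(term); // prints: open (read | write)* close
    }
}
```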
OSCAR. A Noise Injection Framework for Testing Concurrent Software
“Moore’s Law” is a well-known observation in computer science that describes the
yearly growth in transistor density on a processor die. Even though it has held true
for the last 57 years, thermal limits on how far a processor’s core frequency can be
raised have placed a physical ceiling on single-core performance scaling. The industry
has since shifted towards multicore architectures, which offer much better and more
scalable performance, while in turn forcing programmers to adopt the concurrent
programming paradigm when designing new software if they wish to exploit this added
performance. This paradigm comes with an unfortunate downside: a plethora of additional
errors can appear in programs, stemming directly from the (poor) use of concurrency
techniques.
Furthermore, concurrent programs are notoriously hard to design and to verify, and
researchers continuously develop new, more effective and efficient methods of doing so.
Noise injection, the theme of this dissertation, is one such method. It relies on the
“probe effect”: the observable shift in the behaviour of concurrent programs upon the
introduction of noise into their routines. The abandonment of ConTest, a popular
proprietary and closed-source noise injection framework for testing concurrent software
written in the Java programming language, has left a void in the availability of noise
injection frameworks for this language.
To mitigate this void, this dissertation proposes OSCAR — a novel open-source noise injection
framework for the Java programming language, relying on static bytecode instrumentation for
injecting noise. OSCAR provides a free and well-documented noise injection tool for research,
pedagogical, and industrial use. Additionally, we propose a novel taxonomy for categorizing new
and existing noise injection heuristics, together with a new method for generating and analysing
concurrent software traces, based on string comparison metrics.
After injecting noise into programs from the IBM Concurrent Benchmark with different heuristics,
we observed that OSCAR is highly effective in increasing the coverage of the interleaving space,
and that the different heuristics provide diverse trade-offs on the cost and benefit
(time/coverage) of the noise injection process.
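A minimal illustration of what injected noise does, assuming a hand-placed noise() call rather than OSCAR's bytecode instrumentation (the 30% probability and fixed seed are invented for this sketch and do not reproduce OSCAR's heuristics):

```java
import java.util.Random;

public class NoiseSketch {
    static final Random RNG = new Random(42); // fixed seed: repeatable noise decisions
    static int counter;                       // shared, intentionally unsynchronized

    // Minimal noise primitive: before a shared access, randomly yield to
    // perturb the scheduler and exercise different interleavings.
    static void noise() {
        if (RNG.nextInt(100) < 30) Thread.yield(); // 30% injection probability
    }

    static int run() {
        counter = 0;
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                noise();       // widen the window of the data race below
                counter++;     // racy read-modify-write
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return counter;        // 2000 only if no update was lost
    }

    public static void main(String[] args) {
        int c = run();
        // With noise, lost updates (c < 2000) surface more often in testing.
        System.out.println(c > 0 && c <= 2000); // prints true
    }
}
```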
Rigorous concurrency analysis of multithreaded programs
technical report
This paper explores the practicality of conducting program analysis for multithreaded software using constraint solving. By precisely defining the underlying memory consistency rules in addition to the intra-thread program semantics, our approach offers a unique advantage for program verification: it provides an accurate and exhaustive coverage of all thread interleavings for any given memory model. We demonstrate how this can be achieved by formalizing sequential consistency for a source language that supports control branches and a monitor-style mutual exclusion mechanism. We then discuss how to formulate programmer expectations as constraints and propose three concrete applications of this approach: execution validation, race detection, and atomicity analysis. Finally, we describe the implementation of a formal analysis tool using constraint logic programming, with promising initial results for reasoning about small but non-trivial concurrent programs.
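The exhaustive-coverage idea can be made concrete with a toy enumerator. The paper encodes interleavings as constraints; the sketch below instead explicitly enumerates all six sequentially consistent interleavings of two two-statement threads (the example program is invented) and collects the observable outcomes, including the one that witnesses a lost update.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class InterleaveCheck {
    // Threads A and B each run: t = x; x = t + 1;  Under sequential
    // consistency there are C(4,2) = 6 interleavings of the 4 statements.
    // Each 'A' or 'B' in the order string executes that thread's next step.
    static int run(String order) {
        int x = 0, ta = 0, tb = 0, pa = 0, pb = 0;
        for (char c : order.toCharArray()) {
            if (c == 'A') { if (pa++ == 0) ta = x; else x = ta + 1; }
            else          { if (pb++ == 0) tb = x; else x = tb + 1; }
        }
        return x; // final value of the shared variable
    }

    static Set<Integer> outcomes() {
        List<String> orders =
            List.of("AABB", "ABAB", "ABBA", "BAAB", "BABA", "BBAA");
        Set<Integer> out = new TreeSet<>();
        for (String s : orders) out.add(run(s));
        return out;
    }

    public static void main(String[] args) {
        // Outcome 2 is the intended result; outcome 1 is the lost update,
        // found only because every interleaving is covered.
        System.out.println(outcomes()); // prints [1, 2]
    }
}
```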
Strong Memory Consistency For Parallel Programming
Correctly synchronizing multithreaded programs is challenging, and errors can lead to program failures (e.g., atomicity violations). Existing memory consistency models rule out some possible failures, but are limited by depending on subtle programmer-defined locking code and by providing unintuitive semantics for incorrectly synchronized code. Stronger memory consistency models assist programmers by providing easier-to-understand semantics with regard to memory access interleavings in parallel code. This dissertation proposes a new strong memory consistency model based on ordering-free regions (OFRs), which are spans of dynamic instructions between consecutive ordering constructs (e.g., barriers). Atomicity over ordering-free
regions provides stronger atomicity than existing strong memory consistency models, with competitive performance. Ordering-free regions also simplify programmer reasoning by limiting the potential for atomicity violations to fewer points in the program’s execution. This dissertation explores both software-only and hardware-supported systems that provide OFR serializability.
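A small Java sketch of the region structure, using a CyclicBarrier as the ordering construct. This only illustrates how barriers partition each thread's execution into regions; plain Java does not enforce the OFR serializability that the dissertation's systems provide.

```java
import java.util.concurrent.CyclicBarrier;

public class OfrSketch {
    // An ordering-free region (OFR) is the span of instructions a thread
    // executes between consecutive ordering constructs. Here each thread
    // has two regions separated by one barrier: region 1 writes a shared
    // variable, region 2 reads both. The barrier's happens-before guarantee
    // makes both region-1 writes visible in both region-2 reads.
    static int a, b;
    static volatile int sum1, sum2;

    static void awaitQuietly(CyclicBarrier bar) {
        try { bar.await(); } catch (Exception e) { throw new RuntimeException(e); }
    }

    static int[] run() {
        a = 0; b = 0;
        CyclicBarrier bar = new CyclicBarrier(2);
        Thread t1 = new Thread(() -> { a = 1; awaitQuietly(bar); sum1 = a + b; });
        Thread t2 = new Thread(() -> { b = 1; awaitQuietly(bar); sum2 = a + b; });
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return new int[]{sum1, sum2};
    }

    public static void main(String[] args) {
        int[] s = run();
        System.out.println(s[0] + " " + s[1]); // prints 2 2: the barrier orders the regions
    }
}
```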
CONTEXT-AWARE DEBUGGING FOR CONCURRENT PROGRAMS
Concurrency faults are difficult to reproduce and localize because they usually occur under specific inputs and thread interleavings. Most existing fault localization techniques focus on sequential programs and fail to identify faulty memory access patterns across threads, which are usually the root causes of concurrency faults. Moreover, existing techniques for sequential programs cannot be adapted to identify faulty paths in concurrent programs. While concurrency fault localization techniques have been proposed to analyze passing and failing executions obtained from running a set of test cases to identify faulty access patterns, they primarily rely on statistical analysis. We present a novel approach to fault localization using feature selection techniques from machine learning. Our insight is that the concurrency access patterns obtained from a large volume of coverage data generally constitute high-dimensional data sets, yet existing statistical analysis techniques for fault localization are usually applied to low-dimensional data sets. Each additional failing or passing run can provide more diverse information, which can help localize faulty concurrency access patterns in code. The patterns with maximum feature-diversity information point to the most suspicious patterns. We then apply data mining techniques to identify the interleaving patterns that occur most frequently and to derive the possible faulty paths. We also evaluate the effectiveness of fault localization using test suites generated from different test adequacy criteria. We have evaluated Cadeco on 10 real-world multi-threaded Java applications. Results indicate that Cadeco outperforms state-of-the-art approaches for localizing concurrency faults.
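To make the passing/failing intuition concrete, here is a toy scoring of inter-thread access patterns using the classic Tarantula suspiciousness formula, a standard coverage-based baseline. This is not Cadeco's feature-selection method, and the patterns and run counts are invented for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Locale;
import java.util.Map;

public class Suspiciousness {
    // Tarantula-style score: how concentrated a pattern's occurrences are
    // in failing runs relative to passing runs (1.0 = only in failures).
    static double score(int fail, int totalFail, int pass, int totalPass) {
        double fr = (double) fail / totalFail;
        double pr = (double) pass / totalPass;
        return fr / (fr + pr);
    }

    public static void main(String[] args) {
        // pattern -> {occurrences in failing runs, occurrences in passing runs}
        Map<String, int[]> cov = new LinkedHashMap<>();
        cov.put("R1(x)-W2(x)-R1(x)", new int[]{9, 1}); // unserializable read-write-read
        cov.put("W1(y)-W1(y)-R2(y)", new int[]{5, 5}); // benign pattern
        int totalFail = 10, totalPass = 10;
        for (Map.Entry<String, int[]> e : cov.entrySet()) {
            System.out.printf(Locale.US, "%s %.2f%n", e.getKey(),
                score(e.getValue()[0], totalFail, e.getValue()[1], totalPass));
        }
        // The racy read-write-read pattern scores 0.90 vs 0.50, ranking it
        // as the most suspicious access pattern.
    }
}
```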