Using Formal Methods to Verify Transactional Abstract Concurrency Control
Concurrent application design and implementation is more important than ever in today's multi-core processor world. Transactional Memory (TM) systems provide mechanisms, such as open nesting and escape hatches, for incorporating non-transactional operations into transactions. Each has its own particular advantages and disadvantages. However, these techniques each need some extra information to 'glue' the non-transactional operation into a transactional context. At the most general level, non-transactional code must be decorated in such a way that the TM run-time can determine how those non-transactional operations commute with one another, and how to 'undo' the non-transactional operations in case the run-time needs to abort a software transaction. The TM run-time trusts that these programmer-provided annotations are correct. Therefore, if an implementor needs to employ one of these transactional 'escape hatches', it is crucially important that their concurrency control annotations be correct. However, reasoning about the commutativity of data structure operations is often challenging, and increasing the burden on the programmer with a proof requirement does not simplify the task of concurrent programming. There is a way to leverage the structure that these TM extensions require to greatly reduce the burden on the programmer. If the programmer could describe the abstract state of the data structure and then reason about it with as much machine assistance as possible, then there would be much less opportunity for error. Abstract state is preferable to a more concrete state because it permits the programmer to use different concrete implementations of the same abstract data type. Also, some TM extensions such as open nesting can handle concrete state conflicts without programmer intervention, making the abstract state the appropriate state for reasoning about commutativity.
A solution to the problem of specifying and verifying the concurrency properties of abstract data structures is the subject of this thesis. We describe a new language, ACCLAM, for describing the abstract state of a data structure and reasoning about its concurrency control properties. This thesis also describes a tool that can process ACCLAM descriptions into a machine-verifiable form (they are converted to a SAT problem). We also provide a more detailed overview of transactional memory and its more popular extensions, a detailed semantic description of ACCLAM, a set of example data structure models, and the results of processing those examples with the language processing tool.
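The commutativity reasoning described above can be illustrated with a small sketch. The Python below is hypothetical (ACCLAM itself compiles descriptions into a SAT problem; here a brute-force enumeration over a tiny abstract state space stands in for that check): it models the abstract state of a set ADT and tests whether two operations commute in every reachable state.

```python
from itertools import combinations

# Abstract state of a set ADT, modelled as a frozenset of elements.
def op_add(state, x):
    return state | {x}

def op_remove(state, x):
    return state - {x}

def commutes(op1, a1, op2, a2, states):
    """op1(a1) and op2(a2) commute if both application orders reach
    the same abstract state from every abstract state considered."""
    return all(op2(op1(s, a1), a2) == op1(op2(s, a2), a1) for s in states)

# Exhaustively enumerate abstract states over a tiny universe -- a
# brute-force stand-in for the SAT encoding the thesis describes.
universe = [1, 2]
states = [frozenset(c) for r in range(len(universe) + 1)
          for c in combinations(universe, r)]

print(commutes(op_add, 1, op_add, 2, states))     # True: adds of distinct keys commute
print(commutes(op_add, 1, op_remove, 1, states))  # False: add/remove of the same key conflict
```

Reasoning at this abstract level is what lets the run-time treat `add(1)` and `add(2)` as non-conflicting even when their concrete implementations touch the same memory.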
Debugging and Testing Concurrent Programs through Efficient Test-Case Generation
Debugging multi-threaded concurrent programs is more difficult than debugging sequential programs because errors are not always reproducible. Re-executing or instrumenting a concurrent program for tracing might change the execution timing and cause the program to take a different execution path. In other words, the exact timing that caused the error is unknown. In order to reproduce the error, one needs to execute the concurrent program with the same input values many times, changing the interleaving on each run, but it is not always feasible to test all interleavings. This dissertation proposes a debugging/testing system that generates all possible executions as test cases based on the limited information obtained from an execution trace, and then detects potential race conditions caused by different schedules and interrupt timings in a concurrent multi-threaded program. There are a number of studies on test-case reduction using partial order reduction, but redundancies remain for the purpose of checking race conditions. The objective is to efficiently reproduce concurrent errors, specifically race conditions, through three proposed methods. The first is to reduce the number of interleavings to be tested. This is achieved by reducing redundant test cases and eliminating infeasible ones. The originality of the proposed method is to exploit the nature of branch coverage and utilize data flows from the trace information to identify only those interleavings that affect branch outcomes, whereas existing methods try to identify all the interleavings that may affect shared variables. Since execution paths with the same branch outcomes have equivalent sequences of lock/unlock and read/write operations on shared variables, they can be grouped together in the same "race-equivalent" group. In order to reduce the work of reproducing race conditions, it is sufficient to check only one member of each group.
In this way, the proposed method can significantly reduce the number of interleavings to test while remaining capable of detecting the same race conditions. Furthermore, the proposed method extends the existing model of execution traces to identify and avoid generating infeasible interleavings arising from dependencies caused by lock/unlock and wait/notify mechanisms. Experimental results suggest that redundant interleavings can be identified and removed, leading to a significant reduction in test cases. We evaluated the proposed method against several concurrent Java programs. The experimental results for the open source program Apache Commons Pool show that the number of test cases is reduced from 23, based on the existing Thread-Pair-Interleaving method (TPAIR), to only 2 by the proposed method. Moreover, for concurrent programs that contain infinite loops, the proposed method generates only a small, finite number of test cases, while many existing methods generate an infinite number of test cases. The second is to reduce the memory space required for generating test cases. Redundant test cases were still generated by the existing reachability testing method even though there was no need to execute them. Here, we propose a new method that analyzes data dependency to generate only those test cases that might affect sequences of lock/unlock and read/write operations on shared variables. The experimental results for the Apache Commons Pool show that the size of the graph for creating the test cases is reduced from 990 nodes, based on the reachability testing method used in our previous work, to only 4 nodes by our new method. The third improvement is to reduce the effort involved in checking race conditions by utilizing previous test results. Existing work requires checking race conditions over the whole execution trace for every new test case.
The proposed method can identify only those parts of the execution trace in which the sequence of lock/unlock and read/write operations on shared variables might be affected by a new test case, so race conditions need be rechecked only for those affected parts. With the improvements introduced above, the proposed methods significantly reduce the effort of exhaustively checking all possible interleavings. The proposed methods provide programmers with information on whether there exist program errors caused by interleavings, the interleaving (path) on which the errors occurred, and accesses to shared variables with inconsistent locking.
電気通信大学201
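The race-equivalence idea can be sketched as follows. The Python below is purely illustrative (the event encoding and names are invented, not the dissertation's tool): it enumerates all interleavings of two small threads and groups them by their projection onto shared-variable operations, so only one representative per group needs to be tested.

```python
# Each thread is a list of events; only some touch shared state.
# Hypothetical event encoding: (thread, kind, target).
t1 = [("t1", "local", "a"), ("t1", "write", "x"), ("t1", "local", "b")]
t2 = [("t2", "read", "x"), ("t2", "local", "c")]

def interleavings(a, b):
    """All merges of a and b that preserve each thread's program order."""
    if not a:
        yield list(b); return
    if not b:
        yield list(a); return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def race_signature(schedule):
    """Project a schedule onto shared-memory and lock operations.
    Schedules with the same projection are race-equivalent, so it
    suffices to run one member of each group."""
    return tuple(e for e in schedule
                 if e[1] in {"read", "write", "lock", "unlock"})

groups = {}
for s in interleavings(t1, t2):
    groups.setdefault(race_signature(s), []).append(s)

total = sum(len(v) for v in groups.values())
print(total, "interleavings reduce to", len(groups), "race-equivalent groups")
# -> 10 interleavings reduce to 2 race-equivalent groups
```

Here the only shared interaction is the order of `write(x)` versus `read(x)`, so ten interleavings collapse to two groups, echoing the 23-to-2 reduction reported for Apache Commons Pool.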
Performance Optimization Strategies for Transactional Memory Applications
This thesis presents tools for Transactional Memory (TM) applications that cover multiple TM systems (software, hardware, and hybrid TM) and use information from all the different layers of the TM software stack. The thesis therefore addresses a number of challenges in extracting static information, information about run-time behavior, and expert-level knowledge in order to develop new methods and strategies for the optimization of TM applications.
The exploitation of parallelism on shared memory multiprocessors
PhD Thesis
With the arrival of many general-purpose shared memory multiple processor (multiprocessor) computers into the commercial arena during the mid-1980s, a rift has opened between the raw processing power offered by the emerging hardware and the relative inability of its operating software to deliver this power effectively to potential users. This rift stems from the fact that, currently, no computational model with the capability to elegantly express parallel activity is mature enough to be universally accepted and used as the basis for programming languages to exploit the parallelism that multiprocessors offer. In addition, there is a lack of software tools to assist programmers in designing and debugging parallel programs.
Although much research has been done in the field of programming languages, no undisputed candidate for the most appropriate language for programming shared memory multiprocessors has yet been found. This thesis examines why this state of affairs has arisen and proposes programming language constructs, together with a programming methodology and environment, to close the ever-widening hardware-to-software gap.
The novel programming constructs described in this thesis are intended for use in imperative languages, even though they make use of the synchronisation inherent in the dataflow model by using the semantics of single assignment when operating on shared data, so giving rise to the term shared values. As there are several distinct parallel programming paradigms, matching flavours of shared value are developed to permit the concise expression of these paradigms.
The Science and Engineering Research Council
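The single-assignment semantics behind shared values can be sketched in modern terms. The Python below is a minimal illustrative analogue built on threading primitives (the class name and API are assumptions, not the thesis's actual constructs): a cell is bound exactly once, and readers block until the value is available, giving the dataflow-style synchronisation the thesis describes.

```python
import threading

class SharedValue:
    """A single-assignment cell: a producer binds it exactly once,
    and readers block until the value is available. (Illustrative
    sketch of dataflow-style synchronisation, not the thesis's code.)"""

    def __init__(self):
        self._event = threading.Event()
        self._lock = threading.Lock()
        self._value = None

    def bind(self, value):
        with self._lock:                       # make the once-only check atomic
            if self._event.is_set():
                raise RuntimeError("shared value already bound")
            self._value = value
            self._event.set()                  # wake all blocked readers

    def read(self):
        self._event.wait()                     # block until a producer binds
        return self._value

# A consumer thread reads the shared value before it is bound; it
# simply blocks until the producer supplies it.
sv = SharedValue()
consumer_result = []
t = threading.Thread(target=lambda: consumer_result.append(sv.read() * 2))
t.start()
sv.bind(21)
t.join()
print(consumer_result[0])  # 42
```

Because the cell can never be reassigned, readers need no further locking once the value arrives, which is the property that makes single assignment attractive for shared data.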
Elixir: synthesis of parallel irregular algorithms
Algorithms in new application areas like machine learning and data analytics usually operate on unstructured sparse graphs. Writing efficient parallel code to implement these algorithms is very challenging for a number of reasons.
First, there may be many algorithms to solve a problem and each algorithm may have many implementations. Second, synchronization, which is necessary for correct parallel execution, introduces potential problems such as data-races and deadlocks. These issues interact in subtle ways, making the best solution dependent both on the parallel platform and on properties of the input graph. Consequently, implementing and selecting the best parallel solution can be a daunting task for non-experts, since we have few performance models for predicting the performance of parallel sparse graph programs on parallel hardware.
This dissertation presents a synthesis methodology and a system, Elixir, that addresses these problems by (i) allowing programmers to specify solutions at a high level of abstraction, and (ii) generating many parallel implementations automatically and using search to find the best one. An Elixir specification consists of a set of operators capturing the main algorithm logic and a schedule specifying how to efficiently apply the operators. Elixir employs sophisticated automated reasoning to merge these two components, and uses techniques based on automated planning to insert synchronization and synthesize efficient parallel code.
Experimental evaluation of our approach demonstrates that the performance of the Elixir-generated code is competitive with, and can even outperform, hand-optimized code written by expert programmers for many interesting graph benchmarks.
Computer Science
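The separation between operators and schedules can be illustrated with a toy sketch. The Python below is hypothetical (it is not Elixir's actual specification language): the operator encodes the algorithm logic, here edge relaxation for single-source shortest paths, while the schedule is a pluggable worklist policy deciding the order in which the operator is applied.

```python
import heapq

# A small weighted digraph: node -> [(successor, weight), ...]
graph = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}

def relax(dist, node):
    """Operator: apply relaxation at one node; return newly active nodes."""
    activated = []
    for succ, w in graph[node]:
        if dist[node] + w < dist[succ]:
            dist[succ] = dist[node] + w
            activated.append(succ)
    return activated

def run(schedule_push, schedule_pop, source=0):
    """Generic driver: the schedule, not the operator, fixes the order."""
    dist = {n: float("inf") for n in graph}
    dist[source] = 0
    worklist = []
    schedule_push(worklist, source, dist)
    while worklist:
        node = schedule_pop(worklist)
        for nxt in relax(dist, node):
            schedule_push(worklist, nxt, dist)
    return dist

# One schedule among many: priority by current distance, giving
# Dijkstra-style behaviour; a FIFO schedule would give Bellman-Ford-like
# behaviour with the same operator.
push = lambda wl, n, dist: heapq.heappush(wl, (dist[n], n))
pop = lambda wl: heapq.heappop(wl)[1]
print(run(push, pop))  # shortest distances from node 0
```

Searching over such schedules while keeping the operator fixed is, in miniature, the design space Elixir explores automatically.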
Validity contracts for software transactions
Software Transactional Memory is a promising approach to concurrent programming, freeing programmers from error-prone concurrency control decisions that are complicated and not composable. However, few such systems address the consistency of transactional objects.
In this thesis, I propose a contract-based transactional programming model toward more secure transactional software. In this general model, a validity contract specifies both requirements and effects for transactions. Validity contracts bring numerous benefits, including reasoning about and verifying transactional programs, detecting and resolving transactional conflicts, automating object revalidation, and easing program debugging.
I introduce an ownership-based framework, named AVID, derived from the general model, using object ownership as a mechanism for specifying and reasoning about validity contracts. I have specified a formal type system and implemented a prototype type checker to support static checking. I have also built a transactional library framework, AVID, based on the existing Java DSTM2 framework, for expressing transactions and validity contracts.
Experimental results on a multi-core system show that contracts add little overhead to the original STM. I find that contract-aware contention management yields significant speedups in some cases. The results suggest compiler-directed optimisation for tuning contract-based transactional programs. Future work will investigate applications of transaction contracts to various aspects of TM research, such as hardware support and open nesting.
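The general shape of a validity contract can be sketched as follows. The Python below is purely illustrative (the `requires`/`ensures` names, the decorator, and the naive snapshot rollback are assumptions, not AVID's actual API): a transaction states a requirement that must hold before it runs and an effect that must hold on commit, and a violated effect aborts the transaction instead of committing an invalid state.

```python
class ContractViolation(Exception):
    pass

class Account:
    def __init__(self, balance):
        self.balance = balance

def atomic(requires, ensures):
    """Wrap a transaction body with a validity contract. Hypothetical
    sketch: a real STM would log all writes, not one field."""
    def wrap(body):
        def run(obj, *args):
            if not requires(obj):
                raise ContractViolation("requirement failed")
            snapshot = obj.balance            # naive undo log
            body(obj, *args)
            if not ensures(obj):
                obj.balance = snapshot        # roll back: abort the transaction
                raise ContractViolation("effect failed")
            return obj
        return run
    return wrap

@atomic(requires=lambda a: a.balance >= 0, ensures=lambda a: a.balance >= 0)
def withdraw(account, amount):
    account.balance -= amount

acct = Account(100)
withdraw(acct, 30)
print(acct.balance)  # 70
try:
    withdraw(acct, 1000)                      # would drive the balance negative
except ContractViolation:
    pass
print(acct.balance)  # 70 -- the violating transaction was rolled back
```

Making the contract explicit is what allows a contract-aware contention manager to detect and resolve conflicts at the level of object validity rather than raw memory accesses.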
WALKING FOR OBJECT TRANSPORT: AN EXAMINATION OF THE COORDINATIVE ADAPTATIONS TO LOCOMOTOR, PERCEPTUAL, AND MANUAL TASK CONSTRAINTS
The goal of this dissertation was to understand how the intrinsic dynamics of gait adapt to support the performance of an ecologically relevant object transport task. A common object transport task is walking with a cup of water. Because the water can move relatively independently of the cup, the cup and water system is classified as a complex object. To model this task participants carried a cup with a wooden lid placed on top. On the lid there was a circular region with the same circumference as the cup and a ball. The objective of the task was to keep the ball inside the circular region. We explored two questions: 1) how do the intrinsic coordinative gait dynamics adapt to support object transport during walking? And 2) how do individuals adapt to manually control a complex object when asked to concurrently attend to visual information?
To address question 1, participants walked on a treadmill at six speeds (0.4 - 1.4 m/s) and performed three conditions: normal walking, walking with a cup only (Cup), and walking with the cup and ball (Cup-Ball). When performing the Cup-Ball condition, as gait speed increased, pelvis-thorax coordination was more in-phase compared to the normal walking and the Cup conditions. Arm-leg coordination was affected by the performance of the Cup-Ball condition. On the constrained side arm-leg coordination was 2:1 while a 1:1 relationship was maintained on the unconstrained side. A correlation between the amplitude of the unconstrained arm and manual task performance revealed a significant negative correlation as gait speed increased, indicating that individuals who reduced their arm swing performed better. To address question 2, participants walked on a treadmill at three gait speeds under four task conditions: normal walking, walking with the cup and ball system (Cup-Ball), walking while identifying visual stimuli (Visual), and a combined condition where participants walked with the cup and ball system while identifying visual stimuli (Cup-Ball-Vis). The addition of the visual task in study 2 resulted in the head being more extended relative to the trunk, with a larger range of motion, compared to the manual task only condition; participants optimized on the visual task at the expense of manual task performance. In both manual task conditions pelvis-thorax coordination was more in-phase as gait speed increased and more variable compared to the walking only condition. The latter result demonstrates the functionality of increased coordination variability during object transport tasks. The amplitude of the unconstrained arm decreased as the system became more constrained (i.e., going from walking only to Cup-Ball to Cup-Ball-Vis tasks).
Although the arm amplitude decreased, the unconstrained arm maintained a 1:1 arm-leg coordination while the constrained arm was in a 2:1 relationship for both manual task conditions. This result demonstrates that the unconstrained arm continues to move to counteract angular momentum imparted by the legs while the arm carrying the object is coupled to the step frequency, counteracting disturbances imposed by heel contacts.
The overall results from both studies demonstrate that the body’s natural walking dynamics adapt to support manual task performance. The segments not directly involved in the task continue to interact to maintain intrinsic gait dynamics. This dissertation makes significant contributions to the literature by demonstrating: 1) asymmetries in arm-leg coordination are exploited by the body to maintain manual task performance and intrinsic gait dynamics; 2) amplitude of the freely swinging arm is an important factor in task performance during object transport; and 3) increased variability at the level of the pelvis-thorax interaction plays a functional role in maintaining both manual and visual task performance. The significance of the findings here is that they demonstrate how task constraints alter intrinsic coordination dynamics during walking in order to support performance while at the same time maintaining gait stability.