
    Automatic Parallelization With Statistical Accuracy Bounds

    Traditional parallelizing compilers are designed to generate parallel programs that produce outputs identical to those of the original sequential program. The difficulty of performing the program analysis required to satisfy this goal and the restricted space of possible target parallel programs have both posed significant obstacles to the development of effective parallelizing compilers. The QuickStep compiler is instead designed to generate parallel programs that satisfy statistical accuracy guarantees. The freedom to generate parallel programs whose output may differ (within statistical accuracy bounds) from the output of the sequential program enables a dramatic simplification of the compiler and a significant expansion in the range of parallel programs that it can legally generate. QuickStep exploits this flexibility to take a fundamentally different approach from traditional parallelizing compilers. It applies a collection of transformations (loop parallelization, loop scheduling, synchronization introduction, and replication introduction) to generate a search space of parallel versions of the original sequential program. It then searches this space (prioritizing the parallelization of the most time-consuming loops in the application) to find a final parallelization that exhibits good parallel performance and satisfies the statistical accuracy guarantee. At each step in the search it performs a sequence of trial runs on representative inputs to examine the performance, accuracy, and memory accessing characteristics of the current generated parallel program. An analysis of these characteristics guides the steps the compiler takes as it explores the search space of parallel programs. Results from our benchmark set of applications show that QuickStep can automatically generate parallel programs with good performance and statistically accurate outputs. For two of the applications, the parallelization introduces noise into the output, but the noise remains within acceptable statistical bounds. The simplicity of the compilation strategy and the performance and statistical acceptability of the generated parallel programs demonstrate the advantages of the QuickStep approach.
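
    The search procedure described above can be pictured as a small driver loop. The Python sketch below is illustrative only: the callables apply_transform, run_trials, and distortion are hypothetical stand-ins for QuickStep's transformation, trial-execution, and accuracy-metric machinery, not its real interface.

```python
def search_parallelizations(program, loops_by_cost, transforms,
                            apply_transform, run_trials, distortion,
                            accuracy_bound):
    """Greedy search over candidate parallelizations (illustrative sketch)."""
    best = program
    best_time, _ = run_trials(best)           # (mean trial running time, outputs)
    for loop in loops_by_cost:                # most time-consuming loops first
        for transform in transforms:          # parallelize, schedule, sync, replicate
            candidate = apply_transform(best, loop, transform)
            trial_time, outputs = run_trials(candidate)
            # Keep the candidate only if its outputs stay within the statistical
            # accuracy bound and it runs faster than the best version so far.
            if distortion(outputs) <= accuracy_bound and trial_time < best_time:
                best, best_time = candidate, trial_time
    return best
```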

    Parallelizing Sequential Programs With Statistical Accuracy Tests

    We present QuickStep, a novel system for parallelizing sequential programs. QuickStep deploys a set of parallelization transformations that together induce a search space of candidate parallel programs. Given a sequential program, representative inputs, and an accuracy requirement, QuickStep uses performance measurements, profiling information, and statistical accuracy tests on the outputs of candidate parallel programs to guide its search for a parallelization that maximizes performance while preserving acceptable accuracy. When the search completes, QuickStep produces an interactive report that summarizes the applied parallelization transformations, performance, and accuracy results for the automatically generated candidate parallel programs. In our envisioned usage scenarios, the developer examines this report to evaluate the acceptability of the final parallelization and to obtain insight into how the original sequential program responds to different parallelization strategies. It is also possible for the developer (or even a user of the program who has no software development expertise whatsoever) to simply use the best parallelization out of the box without examining the report or further investigating the parallelization. Results from our benchmark set of applications show that QuickStep can automatically generate accurate and efficient parallel programs: the automatically generated parallel versions of five of our six benchmark applications run between 5.0 and 7.7 times faster on 8 cores than the original sequential versions. Moreover, a comparison with the Intel icc compiler highlights how QuickStep can effectively parallelize applications with features (such as the use of modern object-oriented programming constructs or desirable parallelizations with infrequent but acceptable data races) that place them inherently beyond the reach of standard approaches.
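
    As a concrete illustration of the kind of statistical accuracy test mentioned above, the sketch below compares the outputs of a candidate parallel program against the sequential outputs using a mean relative distortion metric. The metric and the default bound are assumptions for illustration, not QuickStep's exact test.

```python
from statistics import mean

def mean_relative_distortion(sequential_outputs, parallel_outputs):
    """Average relative difference between corresponding numeric outputs."""
    return mean(abs(p - s) / (abs(s) or 1.0)
                for s, p in zip(sequential_outputs, parallel_outputs))

def accuracy_acceptable(sequential_outputs, trial_outputs, bound=0.01):
    """Accept the parallelization only if every trial run stays within the bound."""
    return all(mean_relative_distortion(sequential_outputs, run) <= bound
               for run in trial_outputs)
```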

    Reasoning about Relaxed Programs

    A number of approximate program transformations have recently emerged that enable transformed programs to trade accuracy of their results for increased performance by dynamically and nondeterministically modifying variables that control program execution. We call such transformed programs relaxed programs: they have been extended with additional nondeterminism to relax their semantics and offer greater execution flexibility. We present programming language constructs for developing relaxed programs and proof rules for reasoning about properties of relaxed programs. Our proof rules enable programmers to directly specify and verify acceptability properties that characterize the desired correctness relationships between the values of variables in a program's original semantics (before transformation) and its relaxed semantics. Our proof rules also support the verification of safety properties (which characterize desirable properties involving values in individual executions). The rules are designed to support a reasoning approach in which the majority of the reasoning effort uses the original semantics. This effort is then reused to establish the desired properties of the program under the relaxed semantics. We have formalized the dynamic semantics of our target programming language and the proof rules in Coq, and verified that the proof rules are sound with respect to the dynamic semantics. Our Coq implementation enables developers to obtain fully machine checked verifications of their relaxed programs.
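
    To make the notion of a relaxed program concrete, the toy Python sketch below (not the paper's language or proof system) shows a summation whose sampling step is a nondeterministically chosen knob, together with an executable form of one possible acceptability property relating the relaxed result to the original one; the 10% tolerance is an assumption.

```python
import random

def exact_sum(values):
    return sum(values)                      # original semantics

def relaxed_sum(values, max_step=4):
    step = random.randint(1, max_step)      # the added nondeterminism (a "knob")
    return sum(values[::step]) * step       # sampled, rescaled approximation

def acceptable(values, tolerance=0.10):
    # One possible acceptability property: the relaxed result stays within a
    # relative tolerance of the result under the original semantics.
    original, relaxed = exact_sum(values), relaxed_sum(values)
    return abs(relaxed - original) <= tolerance * abs(original)
```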

    Verification of semantic commutativity conditions and inverse operations on linked data structures

    Thesis (S.M.) by Deokhwan Kim--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 57-61). We present a new technique for verifying commutativity conditions, which are logical formulas that characterize when operations commute. Because our technique reasons with the abstract state of verified linked data structure implementations, it can verify commuting operations that produce semantically equivalent (but not necessarily identical) data structure states in different execution orders. We have used this technique to verify sound and complete commutativity conditions for all pairs of operations on a collection of linked data structure implementations, including data structures that export a set interface (ListSet and HashSet) as well as data structures that export a map interface (AssociationList, HashTable, and ArrayList). This effort involved the specification and verification of 765 commutativity conditions. Many speculative parallel systems need to undo the effects of speculatively executed operations. Inverse operations, which undo these effects, are often more efficient than alternate approaches (such as saving and restoring data structure state). We present a new technique for verifying such inverse operations. We have specified and verified, for all of our linked data structure implementations, an inverse operation for every operation that changes the data structure state. Together, the commutativity conditions and inverse operations provide a key resource that language designers, developers of program analysis systems, and implementors of software systems can draw on to build languages, program analyses, and systems with strong correctness guarantees.
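
    For a flavor of what a commutativity condition looks like, the Python sketch below states simplified conditions for two operation pairs over an abstract set state s. These are illustrative examples in the spirit of the ListSet/HashSet conditions, not the formulas actually verified in the thesis.

```python
def add_remove_commute(s, x, y):
    # add(x) and remove(y) commute exactly when they touch different elements;
    # if x == y, the two execution orders leave different set states behind.
    return x != y

def add_contains_commute(s, x, y):
    # add(x) and contains(y) commute when the query's answer cannot be changed
    # by the insertion: either different elements, or y is already in the set.
    return x != y or y in s
```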

    Proving acceptability properties of relaxed nondeterministic approximate programs

    Approximate program transformations such as skipping tasks [29, 30], loop perforation [21, 22, 35], reduction sampling [38], multiple selectable implementations [3, 4, 16, 38], dynamic knobs [16], synchronization elimination [20, 32], approximate function memoization [11], and approximate data types [34] produce programs that can execute at a variety of points in an underlying performance versus accuracy tradeoff space. These transformed programs have the ability to trade accuracy of their results for increased performance by dynamically and nondeterministically modifying variables that control their execution. We call such transformed programs relaxed programs because they have been extended with additional nondeterminism to relax their semantics and enable greater flexibility in their execution. We present language constructs for developing and specifying relaxed programs. We also present proof rules for reasoning about properties [28] which the program must satisfy to be acceptable. Our proof rules work with two kinds of acceptability properties: acceptability properties [28], which characterize desired relationships between the values of variables in the original and relaxed programs, and unary acceptability properties, which involve values only from a single (original or relaxed) program. The proof rules support a staged reasoning approach in which the majority of the reasoning effort works with the original program. Exploiting the common structure that the original and relaxed programs share, relational reasoning transfers reasoning effort from the original program to prove properties of the relaxed program. We have formalized the dynamic semantics of our target programming language and the proof rules in Coq and verified that the proof rules are sound with respect to the dynamic semantics. Our Coq implementation enables developers to obtain fully machine-checked verifications of their relaxed programs. National Science Foundation (U.S.) (Grant numbers CCF-0811397, CCF-0905244, CCF-1036241, IIS-0835652); United States. Defense Advanced Research Projects Agency (Grant numbers FA8650-11-C-7192, FA8750-12-2-0110); United States. Dept. of Energy (Grant number DE-SC0005288).
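
    The distinction drawn above between acceptability properties (which relate the two semantics) and unary acceptability properties (which mention only one execution) can be summarized schematically. The LaTeX below uses illustrative notation, with subscripts marking a variable's value under each semantics and d an error bound; it is not the paper's exact syntax.

```latex
% Relational acceptability property (illustrative): the relaxed value of a
% result variable stays within a relative error bound d of its original value.
\[
  |\mathit{result}_{\mathrm{relaxed}} - \mathit{result}_{\mathrm{original}}|
  \;\le\; d \cdot |\mathit{result}_{\mathrm{original}}|
\]
% Unary acceptability property (illustrative): a bound that mentions only
% values from a single (here, relaxed) execution.
\[
  0 \;\le\; \mathit{result}_{\mathrm{relaxed}} \;\le\; \mathit{max\_result}
\]
```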

    Verification of Semantic Commutativity Conditions and Inverse Operations on Linked Data Structures

    Commuting operations play a critical role in many parallel computing systems. We present a new technique for verifying commutativity conditions, which are logical formulas that characterize when operations commute. Because our technique reasons with the abstract state of verified linked data structure implementations, it can verify commuting operations that produce semantically equivalent (but not identical) data structure states in different execution orders. We have used this technique to verify sound and complete commutativity conditions for all pairs of operations on a collection of linked data structure implementations, including data structures that export a set interface (ListSet and HashSet) as well as data structures that export a map interface (AssociationList, HashTable, and ArrayList). This effort involved the specification and verification of 765 commutativity conditions. Many speculative parallel systems need to undo the effects of speculatively executed operations. Inverse operations, which undo these effects, are often more efficient than alternate approaches (such as saving and restoring data structure state). We present a new technique for verifying such inverse operations. We have specified and verified, for all of our linked data structure implementations, an inverse operation for every operation that changes the data structure state. Together, the commutativity conditions and inverse operations provide a key resource that language designers and system developers can draw on to build parallel languages and systems with strong correctness guarantees.
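
    To illustrate what an inverse operation looks like for such data structures, the Python sketch below pairs a set's add(x) with an undo that consults whether the add actually inserted the element. This is a simplified illustration, not the verified implementations described above.

```python
def add(s, x):
    """Insert x into set s; return True iff x was newly inserted."""
    inserted = x not in s
    s.add(x)
    return inserted

def undo_add(s, x, inserted):
    # Inverse of a speculative add(x): remove x only if the add actually
    # inserted it; otherwise the data structure state was unchanged.
    if inserted:
        s.remove(x)
```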

    Apoptotic Cells Trigger Calcium Entry in Phagocytes by Inducing the Orai1-STIM1 Association

    Swift and continuous phagocytosis of apoptotic cells can be achieved by modulation of calcium flux in phagocytes. However, the molecular mechanism by which apoptotic cells modulate calcium flux in phagocytes is incompletely understood. Here, using biophysical, biochemical, pharmacological, and genetic approaches, we show that apoptotic cells induced the Orai1-STIM1 interaction, leading to store-operated calcium entry (SOCE) in phagocytes through the Mertk-phospholipase C (PLC) γ1-inositol 1,4,5-trisphosphate receptor (IP3R) axis. Apoptotic cells induced calcium release from the endoplasmic reticulum, which led to the Orai1-STIM1 association and, consequently, SOCE in phagocytes. This association was attenuated by masking phosphatidylserine. In addition, the depletion of Mertk, which indirectly senses phosphatidylserine on apoptotic cells, reduced the phosphorylation levels of PLCγ1 and IP3R, resulting in attenuation of the Orai1-STIM1 interaction and inefficient SOCE upon apoptotic cell stimulation. Taken together, our observations uncover the mechanism by which phagocytes engulfing apoptotic cells elevate their calcium levels.

    A Case Study: Groundwater Level Forecasting of the Gyorae Area in Actual Practice on Jeju Island Using Deep-Learning Technique

    As a significant portion of the available water resources in volcanic terrains such as Jeju Island is dependent on groundwater, reliable groundwater level forecasting is one of the important tasks for efficient water resource management. This study aims to propose deep-learning-based methods for groundwater level forecasting that can be utilized in actual management work and to assess their applicability. The study suggests practical forecasting methodologies through the Gyorae area of Jeju Island, where the groundwater level is highly volatile and unpredictable. To this end, the groundwater level data of the JH Gyorae-1 point and a total of 12 kinds of daily hydro-meteorological data from 2012 to 2021 were collected. Subsequently, five factors (i.e., mean wind speed, sun hours, evaporation, minimum temperature, and daily precipitation) were selected as hydro-meteorological inputs for groundwater level forecasting through cross-wavelet analysis between the collected hydro-meteorological data and groundwater level data. The study simulated the groundwater level of the JH Gyorae-1 point using the long short-term memory (LSTM) model, a representative deep-learning technique, with the selected data to show that the methodology is adequately applicable. In addition, for better utilization in actual practice, the study suggests and analyzes (i) a derivatives-based groundwater level learning model, in which the network is trained to forecast derivatives (gradients) of the groundwater level rather than the groundwater level time series itself, and (ii) an ensemble forecasting methodology in which groundwater level forecasting is performed repetitively at short time intervals.
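
    A minimal sketch of the derivatives-based learning idea, assuming a Keras LSTM: the network is trained on day-to-day changes of the groundwater level rather than the level itself, and forecast levels are reconstructed by accumulating predicted changes onto the last observed level. The window length, feature count, and layer sizes are illustrative assumptions, not the study's configuration.

```python
import numpy as np
import tensorflow as tf

WINDOW, N_FEATURES = 30, 6   # assumed: 5 hydro-meteorological inputs + past level change

def make_windows(features, level):
    """Build (X, y) pairs where the target is the next daily change in level."""
    d_level = np.diff(level)                       # target: day-to-day level change
    X = np.stack([features[i:i + WINDOW] for i in range(len(d_level) - WINDOW)])
    y = d_level[WINDOW:]
    return X, y

# A small LSTM that predicts the level change (the "derivative"), not the level.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

def reconstruct_levels(last_level, predicted_changes):
    """Recover level forecasts by accumulating predicted changes."""
    return last_level + np.cumsum(predicted_changes)
```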