Soft Error Vulnerability of Iterative Linear Algebra Methods
Devices become increasingly vulnerable to soft errors as their feature sizes shrink. Previously, soft errors primarily caused problems for space and high-atmospheric computing applications. Modern architectures now use feature sizes so small, at sufficiently low voltages, that soft errors are becoming significant even at terrestrial altitudes. The soft error vulnerability of iterative linear algebra methods, which many scientific applications use, is a critical aspect of the overall application vulnerability. These methods are often considered invulnerable to many soft errors because they converge from an imprecise solution to a precise one. However, we show that iterative methods can be vulnerable to soft errors, with a high rate of silent data corruptions. We quantify this vulnerability, with algorithms generating up to 8.5% erroneous results when subjected to a single bit-flip. Further, we show that detecting soft errors in an iterative method depends on its detailed convergence properties and requires more complex mechanisms than simply checking the residual. Finally, we explore inexpensive techniques to tolerate soft errors in these methods.
Formal Specification of the OpenMP Memory Model
OpenMP [1] is an important API for shared memory programming, combining shared memory's potential for performance with a simple programming interface. Unfortunately, OpenMP lacks a critical tool for demonstrating whether programs are correct: a formal memory model. Instead, the current official definition of the OpenMP memory model (the OpenMP 2.5 specification [1]) is given in informal prose. As a result, it is impossible to verify OpenMP applications formally, since the prose does not provide a formal consistency model that precisely describes how reads and writes on different threads interact. This paper focuses on the formal verification of OpenMP programs through a proposed formal memory model that is derived from the existing prose model [1]. Our formalization provides a two-step process to verify whether an observed OpenMP execution is conformant. In addition to this formalization, our contributions include a discussion of ambiguities in the current prose-based memory model description. Although our formal model may not capture the current informal memory model perfectly, in part due to these ambiguities, our model reflects our understanding of the informal model's intent. We conclude with several examples that may indicate areas of the OpenMP memory model that need further refinement, however it is specified. Our goal is to motivate the OpenMP community to eventually adopt those refinements, ideally through a formal model, in later OpenMP specifications.
CLOMP: Accurately Characterizing OpenMP Application Overheads
Despite its ease of use, OpenMP has failed to gain widespread use on large-scale systems, largely due to its failure to deliver sufficient performance. Our experience indicates that the cost of initiating OpenMP regions is simply too high for the desired OpenMP usage scenario of many applications. In this paper, we introduce CLOMP, a new benchmark to characterize this aspect of OpenMP implementations accurately. CLOMP complements the existing EPCC benchmark suite to provide simple, easy-to-understand measurements of OpenMP overheads in the context of application usage scenarios. Our results for several OpenMP implementations demonstrate that CLOMP identifies the amount of work required to compensate for the overheads observed with EPCC. Further, we show that CLOMP also captures limitations for OpenMP parallelization on NUMA systems.
Toward Enhancing OpenMP's Work-Sharing Directives
OpenMP provides a portable programming interface for shared memory parallel computers (SMPs). Although this interface has proven successful for small SMPs, it requires greater flexibility in light of the steadily growing size of individual SMPs and the recent advent of multithreaded chips. In this paper, we describe two application development experiences that exposed these expressivity problems in the current OpenMP specification. We then propose mechanisms to overcome these limitations, including thread subteams and thread topologies. Thus, we identify language features that improve OpenMP application performance on emerging and large-scale platforms while preserving ease of programming.
Bench-to-bedside review: targeting antioxidants to mitochondria in sepsis
Lessons learned at 208K: Towards Debugging Millions of Cores
Petascale systems will present several new challenges to performance and correctness tools. Such machines may contain millions of cores, requiring that tools use scalable data structures and analysis algorithms to collect and to process application data. In addition, at such scales, each tool itself will become a large parallel application: already, debugging the full Blue Gene/L (BG/L) installation at the Lawrence Livermore National Laboratory requires employing 1664 tool daemons. To reach such sizes and beyond, tools must use a scalable communication infrastructure and manage their own tool processes efficiently. Some system resources, such as the file system, may also become tool bottlenecks. In this paper, we present challenges to petascale tool development, using the Stack Trace Analysis Tool (STAT) as a case study. STAT is a lightweight tool that gathers and merges stack traces from a parallel application to identify process equivalence classes. We use results gathered at thousands of tasks on an InfiniBand cluster and at up to 208K processes on BG/L to identify current scalability issues as well as challenges that will be faced at the petascale. We then present implemented solutions to these challenges and show the resulting performance improvements. We also discuss future plans to meet the debugging demands of petascale machines.
Diaphragm Muscle Weakness in an Experimental Porcine Intensive Care Unit Model
In critically ill patients, mechanisms underlying diaphragm muscle remodeling and resultant dysfunction contributing to weaning failure remain unclear. Ventilator-induced modifications as well as sepsis and administration of pharmacological agents such as corticosteroids and neuromuscular blocking agents may be involved. Thus, the objective of the present study was to examine how sepsis, systemic corticosteroid treatment (CS) and neuromuscular blocking agent administration (NMBA) aggravate ventilator-related diaphragm cell and molecular dysfunction in the intensive care unit. Piglets were exposed to different combinations of mechanical ventilation and sedation, endotoxin-induced sepsis, CS and NMBA for five days and compared with sham-operated control animals. On day 5, diaphragm muscle fibre structure (myosin heavy chain isoform proportion, cross-sectional area and contractile protein content) did not differ from controls in any of the mechanically ventilated animals. However, a decrease in single fibre maximal force normalized to cross-sectional area (specific force) was observed in all experimental piglets. Therefore, exposure to mechanical ventilation and sedation for five days has a key negative impact on diaphragm contractile function despite a preservation of muscle structure. Post-translational modifications of contractile proteins are forwarded as one probable underlying mechanism. Unexpectedly, sepsis, CS or NMBA had no significant additive effects, suggesting that mechanical ventilation and sedation are the triggering factors leading to diaphragm weakness in the intensive care unit.
Acute reduction of serum 8-iso-PGF2-alpha and advanced oxidation protein products in vivo by a polyphenol-rich beverage; a pilot clinical study with phytochemical and in vitro antioxidant characterization
Background: Measuring the effects of the acute intake of natural products on human biomarker concentrations, such as those related to oxidation and inflammation, can be an advantageous strategy for early clinical research on an ingredient or product. Methods: 31 healthy subjects were randomized in a double-blinded, placebo-controlled, acute pilot study with post-hoc subgroup analysis on 20 of the subjects. The study examined the effects of a single dose of a polyphenol-rich beverage (PRB), commercially marketed as "SoZo®", on serum anti-inflammatory and antioxidant markers. In addition, phytochemical analyses of PRB and in vitro antioxidant capacity were also performed. Results: At 1 hour post-intake, serum values for 8-iso-PGF2-alpha and advanced oxidation protein products decreased significantly, by 40% and 39% respectively. Additionally, there was a trend toward decreased C-reactive protein and increased nitric oxide levels. Both placebo and PRB treatment resulted in statistically significant increases in hydroxyl radical antioxidant capacity (HORAC) compared to baseline; PRB showed a higher percent change (55-75% versus 23-74% in the placebo group), but the two groups did not differ significantly from each other. Conclusions: PRB produced statistically significant changes in several blood biomarkers related to antioxidant/anti-inflammatory effects. Future studies are justified to verify these results and to test for cumulative effects of repeated intakes of PRB. The study demonstrates the potential utility of acute biomarker measurements for evaluating antioxidant/anti-inflammatory effects of natural products.