39 research outputs found

    Measuring NUMA effects with the STREAM benchmark

    Modern high-end machines feature multiple processor packages, each of which contains multiple independent cores and integrated memory controllers connected directly to dedicated physical RAM. These packages are connected via a shared bus, creating a system with a heterogeneous memory hierarchy. Since this shared bus has less bandwidth than the sum of the links to memory, aggregate memory bandwidth is higher when parallel threads all access memory local to their processor package than when they access memory attached to a remote package. But the impact of this heterogeneous memory architecture is not easily understood from vendor benchmarks, and even where such measurements are available, they provide only best-case memory throughput. This work presents a series of modifications to the well-known STREAM benchmark to measure the effects of NUMA on both a 48-core AMD Opteron machine and a 32-core Intel Xeon machine.
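
    The paper's modifications are not reproduced here, but the measurement idea can be sketched. Below is a minimal STREAM-style triad in C with OpenMP, where first-touch initialization places each thread's pages on its local NUMA node; binding threads (e.g., via OMP_PROC_BIND/OMP_PLACES or numactl) then selects local versus remote access for comparison. The array size and structure are illustrative assumptions, not the paper's actual configuration.

    /* Sketch: STREAM-style triad with first-touch NUMA placement.
       Build: gcc -O3 -fopenmp stream_numa.c -o stream_numa
       Run:   OMP_PROC_BIND=close OMP_PLACES=cores ./stream_numa
       (rerun under numactl --cpunodebind/--membind to force remote access) */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void) {
        const long n = 1L << 25;           /* ~268 MB per array, exceeds caches */
        const double scalar = 3.0;
        double *a = malloc(n * sizeof *a);
        double *b = malloc(n * sizeof *b);
        double *c = malloc(n * sizeof *c);
        if (!a || !b || !c) return 1;

        /* First-touch: each thread initializes the chunk it will later use,
           so the OS places those pages on the thread's local NUMA node. */
        #pragma omp parallel for schedule(static)
        for (long i = 0; i < n; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

        double t0 = omp_get_wtime();
        #pragma omp parallel for schedule(static)  /* same mapping as init */
        for (long i = 0; i < n; i++)
            a[i] = b[i] + scalar * c[i];   /* the STREAM triad kernel */
        double t1 = omp_get_wtime();

        /* Triad touches 3 arrays of 8-byte doubles (ignoring write-allocate). */
        printf("triad: %.2f GB/s\n", 3.0 * 8.0 * (double)n / 1e9 / (t1 - t0));
        free(a); free(b); free(c);
        return 0;
    }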

    Retrofitting parallelism onto OCaml.

    OCaml is an industrial-strength, multi-paradigm programming language, widely used in industry and academia. It is also one of the few modern managed system programming languages to lack support for shared-memory parallel programming. This paper describes the design, full-fledged implementation, and evaluation of a mostly-concurrent garbage collector (GC) for the multicore extension of the OCaml programming language. Since we propose to add parallelism to a widely used programming language with millions of lines of existing code, we face the challenge of maintaining backwards compatibility, not just in terms of language features but also in the performance of single-threaded code running with the new GC. To this end, the paper presents a series of novel techniques and demonstrates that the new GC strikes a balance between performance and feature backwards compatibility for sequential programs while scaling admirably on modern multicore processors.

    The Cord Weekly (November 4, 1998)

    Cautiously Optimistic Program Analyses for Secure and Reliable Software

    Modern computer systems still have various security and reliability vulnerabilities. Well-known dynamic analyses can mitigate them using runtime monitors that serve as lifeguards, but the additional work of enforcing these security and safety properties incurs exorbitant performance costs, so such tools are rarely used in practice. Our work addresses this problem with a novel technique: Cautiously Optimistic Program Analysis (COPA). COPA is optimistic: it infers likely program invariants from dynamic observations and assumes them in its static reasoning to precisely identify and elide wasteful runtime monitors. The resulting system is fast, yet ensures soundness by recovering to a conservatively optimized analysis on the rare occasions when a likely invariant fails at runtime. COPA is also cautious: by carefully restricting optimizations to safe elisions, the recovery is greatly simplified and avoids unbounded rollbacks, enabling analysis of live production software.

    We demonstrate the effectiveness of Cautiously Optimistic Program Analyses in three areas. Information-Flow Tracking (IFT) can help prevent security breaches and information leaks, but it is rarely used in practice due to its high performance overhead (>500% for web/email servers); COPA dramatically reduces this cost by eliding wasteful IFT monitors, making IFT practical (9% overhead, a 4x speedup). Automatic garbage collection (GC) in managed languages (e.g., Java) simplifies programming while ensuring memory safety; however, there is no correct GC for weakly-typed languages (e.g., C/C++), and manual memory management is prone to errors that have been exploited in high-profile attacks. We develop the first sound GC for C/C++ and use COPA to optimize its performance (16% overhead). Sequential Consistency (SC) provides intuitive semantics for concurrent programs that simplify reasoning about their correctness, but ensuring SC behavior on commodity hardware remains expensive; we use COPA to enforce SC for Java efficiently at the language level, significantly reducing its cost (from 24% down to 5% on x86).

    COPA provides a way to realize strong software security, reliability, and semantic guarantees at practical cost. PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/170027/1/subarno_1.pd
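
    To make the COPA pattern concrete, here is a schematic C sketch of monitor elision with recovery; every name in it is a hypothetical illustration, not the thesis's implementation. A cheap per-buffer summary flag stands in for a likely invariant ("data from this source is untainted"): when it holds, per-byte IFT monitors are elided, and when it fails, the call simply recovers to the fully monitored analysis, so no rollback is ever required.

    #include <stdbool.h>
    #include <stddef.h>

    /* Conservative analysis: propagate a taint bit for every byte (slow). */
    static bool check_monitored(const bool *taint, size_t n) {
        bool tainted = false;
        for (size_t i = 0; i < n; i++)
            tainted |= taint[i];          /* full per-byte IFT monitoring */
        return tainted;
    }

    /* Optimistic entry point: a cheap guard tests the likely invariant. */
    bool check(const bool *taint, size_t n, bool summary_maybe_tainted) {
        if (!summary_maybe_tainted)
            return false;                 /* fast path: monitors elided */
        /* The likely invariant failed at runtime: recover to the
           conservative analysis. Only this call pays the full cost,
           and soundness is preserved without any rollback. */
        return check_monitored(taint, n);
    }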
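    The SC result similarly rests on eliding ordering enforcement where it is provably unnecessary. As background (my illustration, not the thesis's system), the C11 store-buffering sketch below shows why SC is expensive on x86: to forbid the outcome r0 == 0 && r1 == 0, compilers must fence each memory_order_seq_cst store (an MFENCE or XCHG), and that fencing is the kind of cost such an analysis can remove where invariants show the ordering cannot be observed.

    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static atomic_int x, y;
    static int r0, r1;

    static void *writer_x(void *arg) {
        (void)arg;
        atomic_store_explicit(&x, 1, memory_order_seq_cst); /* fenced on x86 */
        r0 = atomic_load_explicit(&y, memory_order_seq_cst);
        return NULL;
    }

    static void *writer_y(void *arg) {
        (void)arg;
        atomic_store_explicit(&y, 1, memory_order_seq_cst); /* fenced on x86 */
        r1 = atomic_load_explicit(&x, memory_order_seq_cst);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, writer_x, NULL);
        pthread_create(&t2, NULL, writer_y, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Under SC, r0 and r1 cannot both be 0; with relaxed ordering,
           x86 store buffering can produce exactly that outcome. */
        printf("r0=%d r1=%d\n", r0, r1);
        return 0;
    }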

    Model-based Evaluation: from Dependability Theory to Security

    How to quantify security is a classic question in the security community that has so far had no plausible answer. Unfortunately, current security evaluation models are often either quantitative but too specific (i.e., their applicability is limited), or comprehensive (i.e., system-level) but qualitative. The importance of quantifying security cannot be overstated, but doing so is difficult and complex for many reasons: the “physics” of the amount of security is ambiguous; the operational state is defined by two confronting parties; protecting and breaking systems is a cross-disciplinary endeavor; security is achieved through comparable strength across components yet can be broken at the weakest link; and the human factor is unavoidable, among others. Thus, security engineers face great challenges in defending the principles of information security and privacy.

    This thesis addresses model-based, system-level security quantification and argues that properly addressing the quantification problem first requires a paradigm shift in security modeling: the problem must be addressed at the abstraction level of what defines a computing system and its failure model before any system-level analysis can be established. Consequently, we present a candidate computing-system abstraction and failure model, then propose two failure-centric, model-based quantification approaches, each including a bounding system model, performance measures, and evaluation techniques.

    The first approach addresses the problem by considering the set of security controls. To bound and build the logical network of a security system, we extend our original work on the Information Security Maturity Model (ISMM) with Reliability Block Diagrams (RBDs), state vectors, and structure functions from reliability engineering. We then present two groups of evaluation methods. The first mainly addresses binary systems, by extending minimal path sets, minimal cut sets, and reliability analysis based on both random events and random variables; the second addresses multi-state security systems with multiple performance measures, by extending Multi-state Systems (MSSs) representation and the Universal Generating Function (UGF) method. (A small worked example of the structure-function view follows below.)

    The second approach addresses the quantification problem when both sets of a computing system, i.e., assets and controls, are considered. We adopt a graph-theoretic approach using Bayesian Networks (BNs) to build an asset-control graph as the candidate bounding system model, then demonstrate its application in a novel risk assessment method with various diagnosis and prediction inferences.

    This work is multidisciplinary, drawing on foundations from many fields, including security engineering; maturity models; dependability theory, particularly reliability engineering; graph theory, particularly BNs; and probability and stochastic models.
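
    As promised above, a small worked example of the structure-function view (my own illustration, not taken from the thesis): model a security system as control 1 in series with redundant controls 2 and 3, where each binary state indicates whether a control is effective. In LaTeX notation:

    \phi(\mathbf{x}) = x_1 \bigl( 1 - (1 - x_2)(1 - x_3) \bigr), \qquad x_i \in \{0, 1\}

    with minimal path sets \{1,2\} and \{1,3\}, and minimal cut sets \{1\} and \{2,3\}. With independent effectiveness probabilities p_i = \Pr[X_i = 1], the probability that the security function is delivered is

    R = \mathbb{E}\bigl[\phi(\mathbf{X})\bigr] = p_1 \bigl( 1 - (1 - p_2)(1 - p_3) \bigr).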