625 research outputs found

    Recent development and perspectives of machines for lattice QCD

    I highlight recent progress in cluster computer technology and assess the status and prospects of cluster computers for lattice QCD with respect to the development of QCDOC and apeNEXT. Taking the LatFor test case, I specify a 512-processor QCD cluster at better than $1/Mflops.
    Comment: 14 pages, 17 figures, Lattice2003 (plenary)
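    The $/Mflops figure is simply total system cost divided by sustained aggregate throughput. The sketch below illustrates the arithmetic with placeholder numbers; none of them come from the paper.

```python
# Hypothetical illustration of the $/Mflops price/performance metric:
# total system cost divided by sustained (not peak) aggregate Mflops.
# All figures here are assumed placeholders, not the paper's cluster.
nodes = 512
cost_per_node_usd = 1_600          # assumed node price, incl. network share
sustained_mflops_per_node = 1_800  # assumed sustained Mflops per node

total_cost = nodes * cost_per_node_usd
total_mflops = nodes * sustained_mflops_per_node
print(f"price/performance: {total_cost / total_mflops:.2f} $/Mflops")
# -> price/performance: 0.89 $/Mflops
```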

    The South African print media, 1994-2004 : an application and critique of comparative media systems theory

    Includes bibliographical references (leaves 226-237).
    Daniel C. Hallin and Paolo Mancini's Comparing Media Systems (2004) has been hailed as an important contribution to understanding the inter-relationship between media and political systems. The work was, however, based on a study of 18 stable, mature and highly developed democracies in Europe or North America. As an emerging democracy that has recently undergone dramatic change in both its political system and its media, South Africa poses particular challenges to Hallin and Mancini's Three Models paradigm. This thesis focuses on the South African print media and tests both the paradigm's theoretical underpinnings and its four principal dimensions of analysis: political parallelism, state intervention, development of a mass market, and journalistic professionalisation. A range of insights and a number of modifications are proposed. The thesis is based on interviews with South Africa's most senior media executives and editors, a comprehensive study of the relevant literature, and 15 years of personal experience as a political analyst, columnist and parliamentary correspondent covering South Africa's transition from apartheid to democracy. It sheds new light on the functioning and applicability of the Three Models comparative paradigm, as well as on the development and future trajectory of South African print media journalism.

    JIT-based cost models for adaptive parallelism

    Parallel programming is extremely challenging. Worse yet, parallel architectures evolve quickly, and parallel programs must often be refactored for each new architecture. It is highly desirable to provide performance portability, so that programs developed on one architecture deliver good performance on others. This thesis is part of the AJITPar project, which investigates a novel approach to performance portability: developing cost models that inform scheduling decisions with dynamic information about computational and communication costs on the target architecture.

    The main artifact of the AJITPar project is the Adaptive Skeleton Library (ASL), which provides a distributed-memory master-worker implementation of a set of algorithmic skeletons, i.e. programming patterns that abstract away the low-level intricacies of parallelism. After JIT warm-up, ASL applies a computational cost model to JIT trace information from the Pycket compiler, a tracing JIT implementation of the Racket language, to transform the skeletons. The execution time of an ASL task is primarily determined by computation and communication costs. The Pycket compiler is extended to enable runtime access to JIT traces, both the sequences of instructions and their execution frequencies; crucially for dynamic adaptation, these are obtained with minimal overhead.

    A low-cost, dynamic computation cost model for estimating the runtime of JIT-compiled Pycket programs, Γ, is developed and validated; this is believed to be the first such model. The design explores the challenges of estimating execution time from JIT trace instructions and presents three increasingly sophisticated cost models, each predicting execution time from the PyPy JIT instructions present in compiled traces. The final abstract cost model applies weightings to 5 different classes of trace instructions and proposes a method for aggregating the cost models of single traces into a cost model for an entire program. Execution times and generated traces are recorded for a suite of 41 benchmarks, and linear regression is used to determine the weightings of the abstract cost model from this data. The final cost model reveals that allocation operations contribute most to execution time, followed by guards and numeric operations.

    The suitability of Γ for predicting the effect of ASL program transformations is investigated. The real utility of Γ is not in absolute predictions of execution times for different programs, but in predicting the effects of applying program transformations to parallel programs. A linear relationship between the actual computational cost of a task and that predicted by Γ is demonstrated for five benchmarks on two architectures.
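    A minimal sketch of the abstract-cost-model idea follows: treat each compiled trace as a vector of instruction-class counts, fit per-class weightings by least-squares regression over measured benchmark runtimes, and aggregate per-trace costs scaled by execution frequency into a whole-program estimate. The class names, data layout, and function names are illustrative assumptions; the thesis's exact classification of PyPy trace instructions and its aggregation method are not reproduced here.

```python
import numpy as np

# Illustrative instruction classes: the abstract names allocation,
# guard and numeric operations among the five; the rest are assumed.
CLASSES = ["alloc", "guard", "numeric", "call", "other"]

def features(class_counts):
    """Vector of instruction-class counts for one compiled trace."""
    return np.array([class_counts.get(c, 0) for c in CLASSES], dtype=float)

def fit_weights(samples):
    """Fit per-class weightings by least squares.
    samples: list of (class_counts, measured_runtime) pairs, one per
    benchmark, mirroring the regression over the 41-benchmark suite."""
    X = np.stack([features(cc) for cc, _ in samples])
    y = np.array([t for _, t in samples])
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)
    return weights

def gamma(traces, weights):
    """Whole-program cost estimate: each trace's weighted class counts,
    scaled by how often that trace was executed, then summed."""
    return sum(freq * (features(cc) @ weights) for cc, freq in traces)
```

    In this sketch a hot trace contributes in proportion to its execution frequency, which matches the abstract's emphasis on recording both instruction sequences and execution counts.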
    A series of increasingly accurate, low-cost, dynamic cost models for estimating the communication costs of ASL programs, K, is developed and validated. Predicting the optimum task size in ASL relies not only on computational cost predictions, but also on predictions of the overhead of communicating tasks to worker nodes and results back to the master. The design and iterative development of a cost model that predicts the serialisation, deserialisation, and network send times of spawning a task in ASL is presented, with linear regression over measured communication timings used to determine the appropriate weighting parameters for each. K is shown to be valid for predicting other, arbitrary data structures by demonstrating an additive property of the model: a linear relationship between the combined predicted costs of the simple types in instances of aggregated data structures and the measured communication time, validated on five benchmarks on two platforms.

    Finally, a low-cost, dynamic cost model, T, that predicts a good ASL task size by combining information from the computation and communication cost models (Γ and K) is developed and validated. The key insight in the design of this model is to balance the communication cost on the master node against the computational and communication costs on the worker nodes. The predictive power of T is tested using six benchmarks; it is shown to predict the optimal task size more accurately, reducing total program runtimes compared with the default ASL prototype.
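    The following sketch illustrates one way such a balance could pick a task size: model per-task communication as a fixed overhead plus a per-byte term (the regression-fitted K), and choose the smallest task size at which the master's per-task communication work is amortised across the workers. The balancing condition and all names are assumptions modelled on the abstract's description, not the thesis's actual formula.

```python
def kappa(n_items, item_bytes, w_fixed, w_byte):
    """Assumed per-task communication cost: a fixed per-message overhead
    plus a per-byte term covering serialisation, network send and
    deserialisation, with regression-fitted weights."""
    return w_fixed + w_byte * n_items * item_bytes

def pick_task_size(candidates, gamma_per_item, item_bytes,
                   w_fixed, w_byte, n_workers):
    """Smallest candidate task size at which the master's per-task
    communication work is amortised over the workers' task time,
    so the master can keep every worker fed."""
    for size in sorted(candidates):
        comm = kappa(size, item_bytes, w_fixed, w_byte)
        work = gamma_per_item * size + comm   # worker: compute + comm
        if n_workers * comm <= work:          # master not the bottleneck
            return size
    return max(candidates)
```

    For example, with a per-item compute cost of 5e-4 s, 64-byte items, a 1 ms fixed message overhead, and 8 workers, the condition selects a task size of a few tens of items; smaller tasks would leave the master saturated by per-message overhead.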