
    Iso-energy-efficiency: An approach to power-constrained parallel computation

    Future large-scale high-performance supercomputer systems require high energy efficiency to achieve exaflop computational power and beyond. Despite the need to understand energy efficiency in high-performance systems, there are few techniques to evaluate energy efficiency at scale. In this paper, we propose a system-level iso-energy-efficiency model to analyze, evaluate and predict the energy-performance of data-intensive parallel applications with various execution patterns running on large-scale power-aware clusters. Our analytical model can help users explore the effects of machine- and application-dependent characteristics on system energy efficiency and isolate efficient ways to scale system parameters (e.g., processor count, CPU power/frequency, workload size and network bandwidth) to balance energy use and performance. We derive our iso-energy-efficiency model and apply it to the NAS Parallel Benchmarks on two power-aware clusters. Our results indicate that the model accurately predicts total system energy consumption within 5% error on average for parallel applications with various execution and communication patterns. We demonstrate effective use of the model for various application contexts and in scalability decision-making.
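    The scaling trade-off such a model captures can be illustrated with a toy analytical model (entirely illustrative; the paper's actual iso-energy-efficiency model is derived from machine- and application-dependent parameters not reproduced here). Adding processors shrinks compute time but raises communication cost and total power draw, so a balanced metric such as the energy-delay product has an interior optimum:

```python
# Toy model (not the paper's): how processor count trades off
# energy against performance for a fixed workload.

def run_time(work, p, comm_cost):
    # Computation shrinks with p; communication grows with p.
    return work / p + comm_cost * p

def energy(work, p, comm_cost, power_per_proc=1.5):
    # Total energy = power drawn by p processors x run time.
    return p * power_per_proc * run_time(work, p, comm_cost)

def energy_delay_product(work, p, comm_cost):
    # A common metric for balancing energy use against performance.
    return energy(work, p, comm_cost) * run_time(work, p, comm_cost)

work, comm_cost = 1e6, 1.0
best_p = min(range(1, 2049),
             key=lambda p: energy_delay_product(work, p, comm_cost))
print(best_p)  # minimised near p = sqrt(work / 3), about 577 here
```

For this toy model the minimiser can be found analytically (the energy-delay product is proportional to (work + p^2)^2 / p), which is a useful sanity check on the sweep.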

    Grosch's law: a statistical illusion?

    This paper discusses Grosch's law, a central law on economies of scale in computer hardware pricing. Its history and the various validation efforts are examined in detail. It is shown how the last set of validations, carried out during the eighties, may rest on a statistical misinterpretation, although this effect may have been present in all validation attempts, including the earliest ones. Simulation experiments reveal that constant returns to scale, combined with decreasing computer prices, may give the illusion of Grosch's law when regression models are fitted to computer prices spanning many years. The paper also shows how the appropriate definition of computer capacity, and in particular Kleinrock's power definition, plays a central role in economies of scale for computer prices.
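    The simulation argument can be sketched concretely (the setup below is my own illustration, not the paper's exact experiment). Within every year capacity scales linearly with price (constant returns to scale), but capacity per dollar grows year over year and the most expensive machines enter the sample in later years; pooling all years into one log-log regression then yields a slope near 2 -- the appearance of Grosch's law (capacity ~ price^2) where none exists:

```python
import math
import random

random.seed(42)
log_price, log_capacity = [], []
for year in range(30):
    capacity_per_dollar = 1.3 ** year    # technology improves each year
    max_price = 10.0 * 1.15 ** year      # high-end market grows over time
    for _ in range(50):
        price = random.uniform(1.0, max_price)
        capacity = capacity_per_dollar * price   # constant returns to scale!
        log_price.append(math.log(price))
        log_capacity.append(math.log(capacity))

# Ordinary least-squares slope of log(capacity) on log(price),
# pooled across all 30 years.
n = len(log_price)
mx = sum(log_price) / n
my = sum(log_capacity) / n
slope = sum((x - mx) * (y - my) for x, y in zip(log_price, log_capacity)) \
        / sum((x - mx) ** 2 for x in log_price)
print(round(slope, 2))  # well above 1, despite constant returns within each year
```

The inflated slope comes entirely from the correlation between price and year: expensive machines are disproportionately recent, and recent machines deliver more capacity per dollar.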

    An agent-based visualisation system

    This thesis explores the concepts of visual supercomputing, where complex distributed systems are used for interactive visualisation of large datasets. Such complex systems inherently trigger management and optimisation problems; in recent years the concepts of autonomic computing have arisen to address those issues. Distributed visualisation systems are a challenging area in which to apply autonomic computing ideas, as such systems are both latency- and compute-sensitive, while most autonomic computing implementations usually concentrate on one or the other but not both concurrently. A major contribution of this thesis is a case study demonstrating the application of autonomic computing concepts to a computation-intensive, real-time distributed visualisation system. The first part of the thesis proposes the realisation of a layered multi-agent system to enable autonomic visualisation. The implementation of a generic multi-agent system providing reflective features is described. This architecture is then used to create a flexible distributed graphics pipeline, oriented toward real-time visualisation of volume datasets. A performance evaluation of the pipeline is presented. The second part of the thesis explores the reflective nature of the system and presents high-level architectures based on software agents, or visualisation strategies, that take advantage of the flexibility of the system to provide generic features. Autonomic capabilities are presented, including fault recovery and automatic resource configuration. Performance evaluation, simulation and prediction of the system are presented, exploring different use cases and optimisation scenarios. A performance exploration tool, Delphe, is described, which uses real-time data from the system to let users explore its performance.
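    The autonomic capabilities described follow the classic monitor-analyse-plan-execute loop of autonomic computing. A minimal sketch of one such iteration for a distributed rendering pipeline (node names, thresholds and the migration policy are hypothetical illustrations, not the thesis's agent system):

```python
def autonomic_step(frame_times_ms, budget_ms=33.0):
    """One monitor-analyse-plan-execute iteration over per-node frame times."""
    # Monitor + analyse: find nodes missing the interactive frame budget,
    # and nodes with plenty of headroom.
    overloaded = [n for n, t in frame_times_ms.items() if t > budget_ms]
    underloaded = [n for n, t in frame_times_ms.items() if t < budget_ms / 2]
    # Plan: pair each overloaded node with a spare node, if any.
    plan = list(zip(overloaded, underloaded))
    # Execute: a real system would migrate pipeline stages between nodes;
    # here we simply return the migration plan.
    return plan

print(autonomic_step({"node-a": 45.0, "node-b": 12.0, "node-c": 30.0}))
# [('node-a', 'node-b')]
```

The point of the loop is that no human intervenes: monitoring data drives reconfiguration continuously, which is what makes the system autonomic rather than merely distributed.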

    Using program behaviour to exploit heterogeneous multi-core processors

    Multi-core CPU architectures have become prevalent in recent years. A number of multi-core CPUs consist not only of multiple processing cores, but of multiple different types of processing cores, each with different capabilities and specialisations. These heterogeneous multi-core architectures (HMAs) can deliver exceptional performance; however, they are notoriously difficult to program effectively. This dissertation investigates the feasibility of ameliorating many of the difficulties encountered in application development on HMA processors by employing a behaviour-aware runtime system. This runtime system provides applications with the illusion of executing on a homogeneous architecture, by presenting a homogeneous virtual machine interface. The runtime system uses knowledge of a program's execution behaviour, gained through explicit code annotations, static analysis or runtime monitoring, to inform its resource allocation and scheduling decisions, such that the application makes best use of the HMA's heterogeneous processing cores. The goal of this runtime system is to enable non-specialist application developers to write applications that can exploit an HMA, without requiring in-depth knowledge of the HMA's design. This dissertation describes the development of a Java runtime system, called Hera-JVM, aimed at investigating this premise. Hera-JVM supports the execution of unmodified Java applications on both processing core types of the heterogeneous IBM Cell processor. An application's threads of execution can be transparently migrated between the Cell's different core types by Hera-JVM, without requiring the application's involvement. A number of real-world Java benchmarks are executed across both of the Cell's core types, to evaluate the efficacy of abstracting a heterogeneous architecture behind a homogeneous virtual machine.
    By characterising the performance of each of the Cell processor's core types under different program behaviours, a set of influential program behaviour characteristics is uncovered. A set of code annotations is presented, which enables program code to be tagged with these behaviour characteristics, allowing a runtime system to track a program's behaviour throughout its execution. This information is fed into a cost function, which Hera-JVM uses to estimate automatically whether the executing program's threads would benefit from being migrated to a different core type, given their current behaviour characteristics. The cost function's use of history, hysteresis and trend tracking is explored as a means of increasing its stability and limiting detrimental thread migrations. The effectiveness of a number of different migration strategies is also investigated under real-world Java benchmarks, with the most effective found to be a strategy that can target specific code, such that a thread is migrated whenever it executes that code. This dissertation also investigates the use of runtime monitoring to enable a runtime system to infer a program's behaviour characteristics automatically, without the need for explicit code annotations. A lightweight runtime behaviour monitoring system is developed, and its effectiveness at choosing the most appropriate core type on which to execute a set of real-world Java benchmarks is examined. Combining explicit behaviour characteristic annotations with those characteristics monitored at runtime is also explored. Finally, an initial investigation is performed into the use of behaviour characteristics to improve application performance on a different type of heterogeneous architecture, specifically a non-uniform memory access (NUMA) architecture. Thread teams are proposed as a method of automatically clustering communicating threads onto the same NUMA node, thereby reducing data access overheads.
    Evaluation of this approach shows that it is effective at improving application performance, provided the application's threads can be partitioned across the available NUMA nodes of a system. The findings of this work demonstrate that a runtime system with a homogeneous virtual machine interface can reduce the challenge of application development for HMA processors, whilst still being able to exploit such a processor by taking program behaviour into account.
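    The cost-function-with-hysteresis idea can be sketched as follows (the cost table, behaviour characteristics and threshold are hypothetical illustrations, not Hera-JVM's actual model). Each core type is assigned a relative cost per unit of each observed behaviour, and a thread migrates only when the other core type is predicted to be cheaper by a margin, which suppresses thrashing between core types:

```python
# Hypothetical relative cost per unit of each behaviour on two core types
# (e.g. a general-purpose core vs. a SIMD-oriented core, as on the Cell).
COST = {
    "general": {"branchy": 1.0, "vector": 3.0, "memory": 1.0},
    "simd":    {"branchy": 4.0, "vector": 0.5, "memory": 2.0},
}

def estimated_cost(core, behaviour):
    # Weighted sum of the thread's behaviour counters.
    return sum(COST[core][k] * v for k, v in behaviour.items())

def should_migrate(current_core, behaviour, hysteresis=0.2):
    """Migrate only if the other core type is cheaper by more than the
    hysteresis margin, to limit detrimental back-and-forth migrations."""
    other = "simd" if current_core == "general" else "general"
    cur = estimated_cost(current_core, behaviour)
    alt = estimated_cost(other, behaviour)
    return alt < cur * (1.0 - hysteresis)

# A vector-heavy thread currently on the general-purpose core migrates:
print(should_migrate("general", {"branchy": 1.0, "vector": 10.0, "memory": 2.0}))
# True
```

History and trend tracking, as explored in the dissertation, would additionally smooth the behaviour counters over time before feeding them into such a function.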

    Parallel solution of power system linear equations

    At the heart of many power system computations lies the solution of a large sparse set of linear equations. These equations arise from the modelling of the network and are the cause of a computational bottleneck in power system analysis applications. Efficient sequential techniques have been developed to solve these equations, but the solution is still too slow for applications such as real-time dynamic simulation and on-line security analysis. Parallel computing techniques have been explored in the attempt to find faster solutions, but the methods developed to date have not efficiently exploited the full power of parallel processing. This thesis considers the solution of the linear network equations encountered in power system computations. Based on the insight provided by the elimination tree, a novel matrix structure is proposed that allows the exploitation of parallelism which exists within the cutset of a typical parallel solution. Using this matrix structure it is possible to reduce the size of the sequential part of the problem and to increase the speed and efficiency of a typical LU-based parallel solution. A method for transforming the admittance matrix into the required form is presented, along with network partitioning and load balancing techniques. Sequential solution techniques are considered and existing parallel methods are surveyed to determine their strengths and weaknesses. Combining the benefits of existing solutions with the new matrix structure allows an improved LU-based parallel solution to be derived. A simulation of the improved LU solution is used to show the improvements in performance over a standard LU-based solution that result from the adoption of the new techniques. The results of a multiprocessor implementation of the method are presented, and the new method is shown to perform better than existing methods on distributed memory multiprocessors.
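    The cutset idea can be illustrated on a dense toy system in bordered block-diagonal form (a sketch under assumed dimensions, not the thesis's sparse power-network matrices). The diagonal blocks can be factorised independently, in parallel; only the small cutset (border) system is solved sequentially, which is exactly the sequential part the thesis seeks to shrink:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bordered block-diagonal system: two independent blocks A1, A2 plus a
# small cutset block D, coupled through borders B1, B2 and C1, C2.
n, m = 4, 2
A1 = rng.random((n, n)) + n * np.eye(n)   # diagonally dominant -> nonsingular
A2 = rng.random((n, n)) + n * np.eye(n)
B1, B2 = rng.random((n, m)), rng.random((n, m))
C1, C2 = rng.random((m, n)), rng.random((m, n))
D = rng.random((m, m)) + n * np.eye(m)
b1, b2, bc = rng.random(n), rng.random(n), rng.random(m)

# Step 1 (parallelisable, one task per block): solve the block systems.
X1 = np.linalg.solve(A1, np.c_[B1, b1])   # A1^{-1} [B1 | b1]
X2 = np.linalg.solve(A2, np.c_[B2, b2])
# Step 2 (sequential, but only m x m): Schur complement on the cutset.
S = D - C1 @ X1[:, :m] - C2 @ X2[:, :m]
xc = np.linalg.solve(S, bc - C1 @ X1[:, m] - C2 @ X2[:, m])
# Step 3 (parallelisable again): back-substitute into each block.
x1 = X1[:, m] - X1[:, :m] @ xc
x2 = X2[:, m] - X2[:, :m] @ xc

# Check against a monolithic solve of the assembled system.
A = np.block([[A1, np.zeros((n, n)), B1],
              [np.zeros((n, n)), A2, B2],
              [C1, C2, D]])
x = np.linalg.solve(A, np.concatenate([b1, b2, bc]))
print(np.allclose(np.concatenate([x1, x2, xc]), x))  # True
```

With many blocks, steps 1 and 3 scale with the number of processors while only the cutset solve remains serial, so reducing the cutset size directly improves parallel efficiency.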

    Medical image processing using GPUs (Processamento de imagens médicas usando GPU)

    Master's in Computer and Telematics Engineering. The CapView application uses a classification algorithm based on SVMs (Support Vector Machines) for automatic topographic segmentation of gastrointestinal tract videos obtained through capsule endoscopy. This work explores the use of graphics processors (GPUs) to parallelize the segmentation algorithm. After an optimization phase of the sequential version, the performance of two approaches was compared: (1) development of the host-side code only, supported by specialized GPU libraries, and (2) development of all the code, including that executed on the GPU. Both approaches produced substantial gains, with speedups between 1.4 and 7 in tests with several individual GPU models. On a cluster of 4 GPUs of the most capable model, speedups between 26.2 and 27.2 over the optimized sequential version were achieved in all cases tested. The methods developed were integrated into the CapView application, which is used routinely in hospital environments.
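    The data parallelism that makes an SVM classifier a good GPU fit can be sketched as follows (illustrative only; CapView's actual features and kernel parameters are not reproduced here). The decision value f(x) = sum_i alpha_i y_i K(x_i, x) + b is independent per frame, so a whole batch of video frames reduces to one batched kernel evaluation -- the same structure a GPU exploits:

```python
import numpy as np

rng = np.random.default_rng(1)
sv = rng.random((100, 8))          # support vectors (hypothetical features)
coef = rng.standard_normal(100)    # alpha_i * y_i
b, gamma = 0.1, 0.5                # bias and RBF kernel width
frames = rng.random((1000, 8))     # one feature vector per video frame

def decide_loop(x):
    # Reference: score one frame at a time (the "sequential" version).
    k = np.exp(-gamma * np.sum((sv - x) ** 2, axis=1))
    return coef @ k + b

def decide_batched(X):
    # Batched RBF kernel: one (n_frames x n_sv) matrix, no Python loop.
    sq = ((X[:, None, :] - sv[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq) @ coef + b

batched = decide_batched(frames)
looped = np.array([decide_loop(x) for x in frames])
print(np.allclose(batched, looped))  # True
```

On a GPU the batched formulation maps naturally onto one thread per frame-kernel entry, which is where the reported speedups come from.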

    A hierarchical optimization engine for nanoelectronic systems using emerging device and interconnect technologies

    A fast and efficient hierarchical optimization engine was developed to benchmark and optimize various emerging device and interconnect technologies and system-level innovations at the early design stage. As the semiconductor industry approaches sub-20nm technology nodes, both devices and interconnects are facing severe physical challenges. Many novel device and interconnect concepts and system integration techniques have been proposed over the past decade to reinforce or even replace conventional Si CMOS technology and Cu interconnects. To efficiently benchmark and optimize these emerging technologies, a validated system-level design methodology is developed based on compact models from all hierarchies, starting from the bottom material level, through the device and interconnect levels, up to the top system-level models. Multiple design parameters across all hierarchies are co-optimized simultaneously to maximize the overall chip throughput, instead of just the intrinsic delay or energy dissipation of the device or interconnect itself. This optimization is performed under various constraints such as power dissipation, maximum temperature, die area, power delivery noise, and yield. For the device benchmarking, novel graphene PN junction devices and InAs nanowire FETs are investigated for both high-performance and low-power applications. For the interconnect benchmarking, a novel local interconnect structure and a hybrid Al-Cu interconnect architecture are proposed, and emerging multi-layer graphene interconnects are also investigated and compared with conventional Cu interconnects. For the system-level analyses, the benefits of systems implemented with 3D integration and heterogeneous integration are analyzed. In addition, the impacts of power delivery noise and of process variation in both devices and interconnects on the overall chip throughput are quantified.
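    The co-optimization idea can be sketched with a deliberately simplified model (the constants and scaling laws below are illustrative placeholders, not the thesis's calibrated compact models). Core count and frequency are swept jointly, and the objective is chip throughput under a power cap rather than the delay or energy of any single device:

```python
def throughput(cores, freq_ghz):
    # Hypothetical: throughput scales with cores and frequency, with
    # diminishing returns from interconnect/memory contention.
    return cores * freq_ghz / (1.0 + 0.02 * cores)

def power_w(cores, freq_ghz, static_w=0.5, dyn_coeff=1.2):
    # Hypothetical: dynamic power ~ f^3 per core, plus static leakage.
    return cores * (dyn_coeff * freq_ghz ** 3 + static_w)

POWER_CAP_W = 150.0

# Joint sweep over 1..128 cores and 0.5..4.0 GHz in 0.1 GHz steps,
# keeping only design points that respect the power constraint.
best = max(
    ((c, f / 10.0) for c in range(1, 129) for f in range(5, 41)
     if power_w(c, f / 10.0) <= POWER_CAP_W),
    key=lambda cf: throughput(*cf),
)
print(best)
```

The instructive outcome is that the throughput-optimal point is neither the fastest frequency nor the largest core count: the power cap pushes the optimum to many moderately clocked cores, which is the kind of system-level conclusion a per-device delay metric cannot reach.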

    The application of parallel processing techniques to computationally intensive biomedical imaging studies

    The landscape of modern computing is changing. While Moore's law currently holds and the number of transistors that can be produced on a given area of a chip is still growing exponentially, the practice of improving performance by increasing the clock frequency of a single processor is reaching its limit. Instead, the focus has shifted to applying multiple processors to a single problem, a methodology known as parallel processing. Parallel processing has the potential to overcome many of the shortcomings of serial processing, but it also presents a number of unique challenges. This dissertation explores the potential benefits of parallel processing by examining the implementation of a near-field coded aperture simulator on a parallel cluster and contrasting its implementation and performance with previously written simulators for serial processors. The platform used is a cluster of Sony PlayStation 3s featuring the IBM-developed Cell Broadband Engine Architecture. It was found that the PS3s were capable of producing performance gains of around forty times that of an equivalently priced conventional processor, with the capability of easily scaling the system by adding or removing nodes as required. However, this comes at the cost of a greatly increased burden on the developer. Apart from the core application, a great deal of code must be written to handle communication and synchronization between nodes, a task which can at times be very complex. In addition, a number of tools available for serial processors, such as highly efficient compilers, advanced development environments and many standardized libraries, cannot be applied in a parallel environment. The main conclusion drawn from this research is that while the potential gains of parallel processing are enormous, making attainable solutions to problems that were previously too costly, the costs of development are prohibitive. 
    Still, parallel processing is the natural next step in modern computing, and it is only a matter of time before its idiosyncrasies are solved.
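    The scaling behaviour behind such speedup figures is captured by Amdahl's law: the serial fraction of the work bounds the achievable speedup no matter how many nodes are added. A quick worked example (the 95% parallel fraction is chosen purely for illustration):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    # Amdahl's law: time = serial part + parallel part / n.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Even with 95% of the work parallelised, speedup saturates below 20x:
for n in (6, 60, 600):
    print(n, round(amdahl_speedup(0.95, n), 1))
# 6 4.8
# 60 15.2
# 600 19.4
```

This is why the communication and synchronization code mentioned above matters so much: anything it adds to the serial fraction directly lowers the ceiling on the whole cluster's speedup.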

    Performance of a parallel multigrid algorithm applied to the Laplace equation (Desempenho de um algoritmo multigrid paralelo aplicado à equação de Laplace)

    Abstract: Multigrid methods are among the most efficient methods employed for solving systems of equations. Although numerically efficient, solving systems with a very large number of unknowns can still demand considerable CPU time, since processing time is normally proportional to the number of unknowns. One possible remedy is to parallelise these methods by partitioning the domain into smaller subdomains (fewer unknowns). In this work, the linear two-dimensional heat conduction problem governed by the Laplace equation with Dirichlet boundary conditions was solved numerically. The Finite Difference Method (FDM) with a second-order central differencing scheme (CDS) was used to discretise the mathematical model. The smoothers (solvers) used were the red-black Gauss-Seidel and weighted Jacobi methods. The solution employed the geometric multigrid method with the correction scheme (CS), full-weighting restriction, prolongation by bilinear interpolation, and the maximum number of grid levels for the various cases studied. The multigrid method was parallelised by applying a methodology, proposed in this work, to each of its algorithmic components: the solver, the restriction process, the prolongation process and the residual calculation. The results can be considered positive: besides a significant reduction in CPU time, the CPU time continued to decrease as the number of processors increased.
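    The components named above (smoother, restriction, prolongation, residual) can be seen in a 1D analogue of the method (an illustrative sketch; the work itself treats the 2D problem and uses red-black Gauss-Seidel as well). Below is a geometric multigrid V-cycle with the correction scheme, a weighted-Jacobi smoother, full-weighting restriction and linear-interpolation prolongation for -u'' = f with Dirichlet boundaries:

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
    # Weighted Jacobi for the central-difference discretisation of -u'' = f.
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2.0 * u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    # Full weighting, fine grid -> coarse grid.
    return np.concatenate(
        ([0.0], 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2], [0.0]))

def prolong(e):
    # Linear interpolation, coarse grid -> fine grid.
    fine = np.zeros(2 * (len(e) - 1) + 1)
    fine[::2] = e
    fine[1::2] = 0.5 * (e[:-1] + e[1:])
    return fine

def v_cycle(u, f, h):
    if len(u) <= 3:                    # coarsest grid: solve directly
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    u = smooth(u, f, h)                # pre-smoothing
    r2h = restrict(residual(u, f, h))  # restrict the residual
    e2h = v_cycle(np.zeros_like(r2h), r2h, 2.0 * h)  # coarse-grid correction (CS)
    u += prolong(e2h)                  # correct the fine-grid solution
    return smooth(u, f, h)             # post-smoothing

n = 129                                # 2^7 + 1 points
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)     # -u'' = f  =>  u = sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(f"max error: {err:.2e}")
```

In the parallel version described above, each of these components (smoothing, restriction, prolongation, residual) operates on a partitioned domain, with halo exchanges between subdomains at each level.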

    Industrial Compositional Streamline Simulation for Efficient and Accurate Prediction of Gas Injection and WAG Processes
