
    Improving Performance of Transactional Applications through Adaptive Transactional Memory

    With the rise of chip multiprocessors (CMPs), parallel programming is necessary to exploit their computational power. Traditionally, lock-based mechanisms have been used to synchronize shared variables in parallel programs. However, given the complexity associated with locks, writing a correct parallel program is a significant burden for programmers. As an alternative, Transactional Memory (TM) is gaining momentum as a parallel programming model for multi-core processors. TM provides programmers with an atomic construct (the transaction) that guarantees atomicity of accesses to shared variables, while synchronization is handled by the underlying system. Transactional memory comes in two variants: Software Transactional Memory (STM) and Hardware Transactional Memory (HTM). Both have advantages and disadvantages that either enhance or penalize performance in transactional applications. This thesis focuses on implementing an adaptive system that exploits both STM and HTM at transaction granularity, with the goal of achieving performance gains by combining the benefits of both TM systems. A synchronization technique is developed to seamlessly switch between HTM and STM based on the characteristics of a transaction. We use a decision tree to predict the optimal system for each transaction in a given application. The decision tree is a form of supervised machine learning that classifies transactions based on parameters such as transaction size and transaction write ratio. In evaluations using the STAMP, NAS, and DiscoPoP benchmark suites, the proposed adaptive system improves the speed of transactional applications by 20.82% on average.
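
    As a rough illustration of the transaction-granularity dispatch this abstract describes, the C++ sketch below chooses HTM or STM per transaction with a tiny hand-written decision tree. The feature thresholds, the run_stm_path fallback, and the use of Intel TSX intrinsics for the HTM attempt are assumptions made for the example, not the thesis's actual classifier or runtime.

```cpp
#include <immintrin.h>   // Intel TSX RTM intrinsics: _xbegin, _xend (compile with -mrtm)
#include <cstddef>

// Features describing one dynamic transaction (as in the thesis: size, write ratio, ...).
struct TxFeatures {
    std::size_t size;        // number of shared-memory accesses
    double      write_ratio; // writes / (reads + writes)
};

enum class TmSystem { HTM, STM };

// A hand-unrolled decision tree; the split points are made-up placeholders standing in
// for thresholds learned offline from profiled transaction runs.
static TmSystem predict_system(const TxFeatures& f) {
    if (f.size > 256)        return TmSystem::STM;  // large tx likely overflows HTM capacity
    if (f.write_ratio > 0.5) return TmSystem::STM;  // write-heavy tx abort often in HTM
    return TmSystem::HTM;
}

// Hypothetical STM fallback; a real system would call into an STM library here.
void run_stm_path(void (*body)(void*), void* arg);

// Execute one transaction on the system chosen by the decision tree.
void run_adaptive(const TxFeatures& f, void (*body)(void*), void* arg) {
    if (predict_system(f) == TmSystem::HTM) {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            body(arg);       // runs speculatively in hardware
            _xend();
            return;
        }
        // HTM attempt aborted (conflict, capacity, ...): fall through to STM.
    }
    run_stm_path(body, arg);
}
```

    A production system would typically retry the HTM path a few times before falling back, and the tree would be the one trained offline on profiled transaction features.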

    Techniques to improve concurrency in hardware transactional memory

    Transactional Memory (TM) aims to make shared-memory parallel programming easier by abstracting away the complexity of managing shared data. The programmer defines sections of code, called transactions, which the TM system guarantees will execute atomically and in isolation from the rest of the system. The programmer is not required to implement such behaviour, as happens in traditional mutual exclusion techniques like locks; that responsibility is delegated to the underlying TM system. In addition, transactions can exploit parallelism that would not be available with mutual exclusion techniques. This is achieved by allowing optimistic execution, assuming no other transaction operates concurrently on the same data. If that assumption holds, the transaction commits its updates to shared memory at the end of its execution; otherwise, a conflict occurs and the TM system may abort one of the conflicting transactions to guarantee correctness, with the aborted transaction rolling back its local updates and being re-executed. Hardware and software implementations of TM have been studied in detail. However, large-scale adoption of software-only approaches has long been hindered by severe performance limitations. In this thesis, we focus on identifying and solving hardware transactional memory (HTM) issues in order to improve concurrency and scalability. Two key dimensions determine the HTM design space: conflict detection and speculative version management. The former determines how conflicts between concurrent transactions are detected and how they are resolved. The latter defines where transactional updates are stored and how the system deals with two versions of the same logical data. This thesis proposes a flexible mechanism that allows efficient storage of and access to two versions of the same logical data, improving overall system performance and energy efficiency. Additionally, we explore two solutions to reduce system contention, the circumstances in which transactions abort due to data dependencies, in order to improve the concurrency of HTM systems. The first mechanism provides a suitable design for applying prefetching to speed up transaction execution, shortening the window of time in which such transactions can experience contention. The second is an accurate abort prediction mechanism able to identify, before a transaction's execution, potential conflicts with running transactions. This mechanism uses the past behaviour of transactions and locality in memory references to infer predictions, adapting to variations in workload characteristics. We demonstrate that this mechanism manages contention efficiently in both single-application and multi-application scenarios. Finally, this thesis analyses the initial real-world HTM protocols that recently appeared in market products. These protocols have been designed to be simple and easy to incorporate into existing chip multiprocessors. However, this simplicity comes at the cost of severe performance degradation due to transient and persistent livelock conditions, potentially preventing forward progress. We show that existing techniques are unable to mitigate this degradation effectively. To deal with this issue, we propose a set of techniques that retain the simplicity of the protocol while providing improved performance and forward-progress guarantees in a wide variety of transactional workloads.
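
    The abort-prediction idea, inferring from past behaviour whether a transaction about to start will conflict with running ones, can be sketched in software as follows. The table size, the 2-bit confidence counters, and the per-transaction record of conflicting cache blocks are illustrative choices, not the hardware mechanism proposed in the thesis.

```cpp
#include <array>
#include <cstdint>
#include <unordered_set>

// Per-static-transaction predictor state, indexed by a hash of the transaction's start PC.
struct PredictorEntry {
    uint8_t confidence = 0;                    // saturating counter of past aborts (0..3)
    std::unordered_set<uint64_t> hot_blocks;   // cache blocks involved in recent conflicts
};

class AbortPredictor {
    static constexpr std::size_t kEntries = 1024;
    std::array<PredictorEntry, kEntries> table_{};

    PredictorEntry& entry(uint64_t tx_pc) { return table_[tx_pc % kEntries]; }

public:
    // Consulted before a transaction starts: predict "will abort" only when the
    // transaction aborted recently AND a running transaction touches one of its hot blocks.
    bool predict_abort(uint64_t tx_pc,
                       const std::unordered_set<uint64_t>& blocks_in_flight) {
        const PredictorEntry& e = entry(tx_pc);
        if (e.confidence < 2) return false;
        for (uint64_t b : e.hot_blocks)
            if (blocks_in_flight.count(b)) return true;
        return false;
    }

    // Training: called when a transaction commits or aborts.
    void update(uint64_t tx_pc, bool aborted, uint64_t conflicting_block = 0) {
        PredictorEntry& e = entry(tx_pc);
        if (aborted) {
            if (e.confidence < 3) ++e.confidence;
            e.hot_blocks.insert(conflicting_block);
        } else if (e.confidence > 0) {
            --e.confidence;
        }
    }
};
```

    A scheduler consulting such a predictor could stall or serialize a transaction predicted to abort instead of launching it speculatively.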

    Optimization of Software Transactional Memory through Linear Regression and Decision Tree

    Software Transactional Memory (STM) is a promising paradigm that facilitates programming for shared-memory multiprocessors. In STM programs, synchronization of accesses to shared memory locations is fully handled by the STM library and does not require any intervention by programmers. While STM eases parallel programming, it incurs run-time overhead that increases the execution time of certain applications. In this thesis, we focus on the overhead of STM and propose optimization techniques to enhance the speed of STM applications. In particular, we focus on the sizes of the transaction, read-set, and write-set and show that the execution time of applications changes significantly as these parameters vary. Optimizing these parameters manually is a time-consuming process and requires significant labor. We exploit Linear Regression (LR) and propose an optimization technique that decides on these parameters automatically. We further enhance this technique by using a decision tree, which improves the accuracy of predictions by selecting the appropriate LR model for a given transaction. We evaluate our optimization techniques using a set of benchmarks from the STAMP, NAS, and DiscoPoP benchmark suites. Our experimental results reveal that LR and the decision tree together are able to improve the performance of STM programs by up to 54.8%.
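
    The pairing of a decision tree with per-leaf linear-regression models can be summarized by the sketch below, which predicts a transaction's execution time from its size, read-set, and write-set. The split point and coefficient values are invented placeholders standing in for models fitted offline on benchmark profiles.

```cpp
#include <array>

// Transaction parameters the thesis identifies as performance-critical.
struct TxParams {
    double tx_size;        // number of operations in the transaction
    double read_set_size;  // distinct locations read
    double write_set_size; // distinct locations written
};

// One linear-regression model: predicted time = intercept + w . features.
struct LinearModel {
    double intercept;
    std::array<double, 3> w;

    double predict(const TxParams& p) const {
        return intercept + w[0] * p.tx_size
                         + w[1] * p.read_set_size
                         + w[2] * p.write_set_size;
    }
};

// A two-leaf decision tree choosing which LR model applies to a transaction;
// the split on write_set_size and both coefficient sets are illustrative only.
double predict_exec_time(const TxParams& p) {
    static const LinearModel read_mostly { 1.2, {0.04, 0.08, 0.00} };
    static const LinearModel write_heavy { 3.5, {0.06, 0.05, 0.30} };
    const LinearModel& m = (p.write_set_size > 16.0) ? write_heavy : read_mostly;
    return m.predict(p);  // units depend on how the models were trained
}
```

    An optimizer can then evaluate candidate parameter settings through this predictor and keep the one with the lowest predicted time.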

    Dynamic Prediction based Scheduling for TM

    Transactional memory (TM) provides an intuitive and simple way of writing parallel programs. TMs execute parallel programs speculatively and deliver better performance than conventional lock-based parallel programs. However, in certain scenarios, when an application lacks scope for parallelism, TMs are outperformed by conventional fine-grained locking. TM schedulers, which serialize transactions that face contention, have shown promise in improving the performance of TMs in such scenarios. In this thesis, we develop a Dynamic Prediction based Scheduler (DPS) that exploits novel prediction techniques based on temporal locality and locality of access across repeated transactions. DPS predicts the access sets of future transactions from the access patterns of the past transactions of the individual threads. We also propose a novel heuristic, called serialization affinity, which tends to serialize transactions with a probability proportional to the current amount of contention. Using information about the currently executing transactions, the current amount of contention, and the predicted access sets, DPS dynamically serializes transactions to minimize conflicts. We implement DPS in two state-of-the-art STMs, SwissTM and TinySTM. Our results show that in scenarios where the number of threads is higher than the number of cores, DPS improves the performance of these STMs by up to 55% and 3000%, respectively. On the other hand, the overhead of the prediction techniques in DPS causes a performance degradation of just 5-8% in some cases when the number of threads is less than the number of cores.
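
    A minimal sketch of the scheduling decision described here, assuming each thread's predicted access set is already available from its past transactions: serialize when the predicted set overlaps a running transaction's set, and otherwise apply the serialization-affinity rule probabilistically. The overlap test and the contention metric below are simplified stand-ins for the mechanisms in the thesis.

```cpp
#include <cstdint>
#include <random>
#include <unordered_set>
#include <vector>

using AccessSet = std::unordered_set<uint64_t>;  // addresses a transaction is predicted to touch

static bool overlaps(const AccessSet& a, const AccessSet& b) {
    for (uint64_t x : a)
        if (b.count(x)) return true;
    return false;
}

// Decide whether a thread's next transaction should run speculatively or be
// serialized behind the currently running transactions.
bool should_serialize(const AccessSet& predicted,
                      const std::vector<AccessSet>& running,
                      double contention /* e.g. recent abort rate in [0,1] */) {
    // If the predicted set conflicts with a running transaction, serialize outright.
    for (const AccessSet& r : running)
        if (overlaps(predicted, r)) return true;

    // Serialization affinity: even without a predicted conflict, serialize with a
    // probability proportional to the current amount of contention.
    static thread_local std::mt19937 rng{std::random_device{}()};
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    return coin(rng) < contention;
}
```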

    Performance Optimization Strategies for Transactional Memory Applications

    This thesis presents tools for Transactional Memory (TM) applications that cover multiple TM systems (software, hardware, and hybrid TM) and use information from all layers of the TM software stack. To this end, the thesis addresses a number of challenges in extracting static information, information about run-time behavior, and expert-level knowledge, and uses them to develop new methods and strategies for the optimization of TM applications.

    A speculative execution approach to provide semantically aware contention management for concurrent systems

    Most modern platforms offer ample potential for parallel execution of concurrent programs, yet concurrency control is required to exploit parallelism while maintaining program correctness. Pessimistic concurrency control, featuring blocking synchronization and mutual exclusion, has given way to transactional memory, which allows the composition of concurrent code in a manner more intuitive for the application programmer. An important component in any transactional memory technique, however, is the policy for resolving conflicts on shared data, commonly referred to as the contention management policy. In this thesis, a Universal Construction is described which provides contention management for software transactional memory. The technique differs from existing approaches in that multiple execution paths are explored speculatively and in parallel. By resolving conflicts through state space exploration, we demonstrate that both concurrency conflicts and semantic conflicts can be resolved, promoting multi-threaded program progress. We define a model of computation called Many Systems, which frames the execution of concurrent threads as a state space management problem. An implementation is then presented based on concepts from the model, and we extend the implementation to incorporate nested transactions. Results are provided which compare the performance of our approach with an established contention management policy, under varying degrees of concurrent and semantic conflict. Finally, we provide performance results from a number of search strategies when nested transactions are introduced.
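
    As a toy illustration of resolving a conflict by exploring alternative execution paths rather than aborting one of the parties, the sketch below applies two conflicting operations in both possible orders on private copies of the state and commits the first outcome that satisfies a semantic predicate. The State type, operations, and predicate are invented for illustration and are far simpler than the Many Systems model developed in the thesis.

```cpp
#include <functional>
#include <optional>
#include <vector>

// A trivially copyable piece of shared state used only for illustration.
struct State { int balance = 100; };

using Op    = std::function<void(State&)>;        // an operation submitted by one thread
using Valid = std::function<bool(const State&)>;  // semantic correctness predicate

// Explore both serialization orders of two conflicting operations on copies of the
// state and commit the first order whose outcome is semantically valid.
std::optional<State> resolve_by_exploration(const State& current,
                                            const Op& a, const Op& b,
                                            const Valid& ok) {
    const std::vector<std::vector<const Op*>> orders = {{&a, &b}, {&b, &a}};
    for (const auto& order : orders) {
        State candidate = current;               // speculate on a private copy
        for (const Op* op : order) (*op)(candidate);
        if (ok(candidate)) return candidate;     // commit this execution path
    }
    return std::nullopt;                         // no explored path is acceptable
}
```

    In general the explored orders yield different outcomes, and the predicate (the "semantic" part) selects an acceptable one instead of forcing a roll-back.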

    Thread speculation using hardware transactional memory architectures

    Advisors: Guido Costa Souza de Araújo and José Nelson Amaral. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação.
    Thread-Level Speculation (TLS) is a hardware/software technique that enables the execution of multiple loop iterations in parallel, even in the presence of some loop-carried dependences. TLS requires hardware mechanisms to support conflict detection, speculative storage, in-order commit of transactions, and transaction roll-back. Prior research has investigated approaches to implement TLS, either on dedicated hardware or purely in software, and has attempted to predict the performance of future TLS hardware implementations. Nevertheless, there is no off-the-shelf processor that provides direct support for TLS. Speculative execution is, however, supported in the form of Hardware Transactional Memory (HTM), available in recent processors such as the Intel Core and the IBM POWER8. HTM implements three key features required by TLS: conflict detection, speculative storage, and transaction roll-back. Before applying TLS to a hot loop, it is necessary to determine whether the loop is amenable to speculation. A loop may be amenable if the probability of loop-carried dependences at runtime is low; to measure this probability, loop dependence profiling is used. This work presents a novel dynamic loop-carried dependence checker integrated as a new extension to OpenMP, the parallel for check construct, which can be used to help programmers identify the existence of loop-carried dependences in parallel for constructs. This work also presents a detailed analysis of the application of HTM support for loop parallelization with TLS and describes a careful evaluation of the implementation of TLS on the HTM extensions available in such machines. As a result, it provides evidence to support several important claims about the performance of TLS over HTM on the Intel Core and IBM POWER8 architectures. Experimental results reveal that by implementing TLS on top of HTM, speed-ups of up to 3.8× can be obtained for some loops. Finally, this work describes a novel speculation technique for the optimization, and simultaneous execution, of multiple alternative traces of hot code regions. This technique, called Speculative Trace Optimization (STO), enumerates, optimizes, and speculatively executes traces of hot loops. It requires hardware support that can be provided in a similar fashion to that available in HTM systems. This work discusses the features necessary to support STO, namely multi-versioning, lazy conflict resolution, eager conflict detection, and transaction synchronization. A review of existing HTM architectures (Intel TSX, IBM BG/Q, and IBM POWER8) shows that none of them has all the features required to implement STO. However, this work demonstrates that STO can be implemented on top of existing HTM architectures through the addition of privatization and wait/resume code.
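
    The core TLS-over-HTM pattern, running loop iterations as hardware transactions that commit in original loop order, can be sketched with Intel TSX intrinsics roughly as below. The commit ticket, retry policy, and non-speculative fallback are conventional simplifications and assumptions for the example, not the code generated for the parallel for check extension.

```cpp
#include <immintrin.h>   // _xbegin, _xend, _xabort (compile with -mrtm, TSX-capable CPU)
#include <atomic>
#include <cstddef>

std::atomic<std::size_t> next_to_commit{0};  // loop iteration whose turn it is to commit

// Run iteration `i` of a loop body speculatively; `body` performs the iteration's
// shared-memory updates (hypothetical callable, supplied by the parallelized loop).
template <typename Body>
void run_iteration_tls(std::size_t i, Body body, int max_retries = 8) {
    for (int attempt = 0; attempt < max_retries; ++attempt) {
        // Oldest iteration: all earlier iterations have committed, so it can run
        // non-speculatively without violating loop-carried dependences.
        if (next_to_commit.load(std::memory_order_acquire) == i) {
            body();
            next_to_commit.store(i + 1, std::memory_order_release);
            return;
        }
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            body();                                        // speculative execution
            if (next_to_commit.load(std::memory_order_relaxed) == i) {
                _xend();                                   // in-order commit
                next_to_commit.store(i + 1, std::memory_order_release);
                return;
            }
            _xabort(0x01);                                 // not our turn yet: discard and retry
        }
        // aborted (conflict, capacity, or ordering): loop and try again
    }
    // Give up on speculation: wait for our turn and run non-speculatively.
    while (next_to_commit.load(std::memory_order_acquire) != i) { /* spin */ }
    body();
    next_to_commit.store(i + 1, std::memory_order_release);
}
```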

    Measurement, Modeling, and Characterization for Power-Aware Computing

    Society’s increasing dependence on information technology has resulted in the deployment of vast compute resources. The energy costs of operating these resources, coupled with environmental concerns, have made power-aware computing one of the primary challenges for the IT sector. Making energy-efficient computing a rule rather than an exception requires that researchers and system designers use the right set of techniques and tools. These involve measuring, modeling, and characterizing the energy consumption of computers at varying degrees of granularity. In this thesis, we present techniques to measure the power consumption of computer systems at various levels. We compare them for accuracy and sensitivity and discuss their effectiveness. We test Intel’s hardware power model for estimation accuracy and show that it is fairly accurate for estimating energy consumption when sampled at a temporal granularity of more than tens of milliseconds. We present a methodology to estimate per-core processor power consumption using performance-counter and temperature-based power modeling and validate it across multiple platforms. We show that our model exhibits negligible computation overhead and that the median estimation errors range from 0.3% to 10.1% for applications from the SPEC2006, SPEC-OMP, and NAS benchmarks. We test the usefulness of the model in a meta-scheduler to enforce a power constraint on a system. Finally, we perform a detailed performance and energy characterization of Intel’s Restricted Transactional Memory (RTM). We use the TinySTM software transactional memory (STM) system to benchmark RTM’s performance against competing STM alternatives. We use microbenchmarks and the STAMP benchmark suite to compare RTM and STM performance and energy behavior. We quantify the RTM hardware limitations that affect its success rate. We show that RTM performs better than TinySTM when the working set fits inside the cache and that RTM is better at handling high-contention workloads.
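
    The per-core power-modeling approach, a linear model over performance-counter rates plus a temperature term, reduces to a sketch like the following. The chosen events, the form of the temperature term, and the coefficients are placeholders for what would be fitted against measured power on each platform.

```cpp
#include <array>
#include <cstddef>

// Per-core samples over a fixed interval (e.g. derived from perf counters and coretemp).
struct CoreSample {
    double instructions_per_sec;
    double llc_misses_per_sec;
    double core_temp_celsius;
};

// Linear per-core power model: P = c0 + c1*IPS + c2*LLCmiss + c3*T.
// Coefficients are obtained by fitting against measured power; values are per platform.
struct CorePowerModel {
    std::array<double, 4> c;  // {c0, c1, c2, c3}

    double estimate_watts(const CoreSample& s) const {
        return c[0]
             + c[1] * s.instructions_per_sec
             + c[2] * s.llc_misses_per_sec
             + c[3] * s.core_temp_celsius;
    }
};

// Whole-chip estimate: sum the per-core estimates (static/uncore power folded into c0).
template <std::size_t N>
double estimate_chip_watts(const CorePowerModel& m, const std::array<CoreSample, N>& cores) {
    double total = 0.0;
    for (const CoreSample& s : cores) total += m.estimate_watts(s);
    return total;
}
```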

    New hardware support for transactional memory and parallel debugging in multicore processors

    This thesis contributes to the area of hardware support for parallel programming by introducing new hardware elements into multicore processors, with the aim of improving performance and optimizing new tools, abstractions, and applications related to parallel programming, such as transactional memory and data race detectors. Specifically, we configure a hardware transactional memory system with signatures as part of the hardware support, and we develop a new hardware filter for reducing the signature size. We also develop the first hardware asymmetric data race detector (which is also able to tolerate such races), likewise based on hardware signatures. Finally, we propose a new hardware signature module that solves some of the problems related to the lack of flexibility in hardware signatures that we encountered in the previous tools.
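
    Hardware signatures of the kind used here for conflict and data race detection are conceptually Bloom filters over accessed addresses. The software model below is illustrative only: the bit width, the two hash functions, and the 64-byte block assumption are choices made for the example, not the thesis's hardware design.

```cpp
#include <bitset>
#include <cstdint>

// A software model of a hardware signature: a Bloom filter summarizing the set of
// cache-block addresses read or written by a transaction (or a monitored code region).
class Signature {
    static constexpr std::size_t kBits = 1024;   // illustrative signature size
    std::bitset<kBits> bits_;

    // Two cheap hash functions over the cache-block address (illustrative).
    static std::size_t h1(uint64_t block) { return (block * 0x9E3779B97F4A7C15ull) % kBits; }
    static std::size_t h2(uint64_t block) { return (block ^ (block >> 17)) % kBits; }

public:
    void insert(uint64_t addr) {
        uint64_t block = addr >> 6;              // assume 64-byte cache blocks
        bits_.set(h1(block));
        bits_.set(h2(block));
    }

    // May report addresses never inserted (false positives) but never misses one that
    // was inserted: conservative, as conflict detection requires.
    bool may_contain(uint64_t addr) const {
        uint64_t block = addr >> 6;
        return bits_.test(h1(block)) && bits_.test(h2(block));
    }

    // Conflict check between two signatures: any overlapping set bit is treated as a
    // potential conflict (again conservative, never missing a true overlap).
    bool intersects(const Signature& other) const {
        return (bits_ & other.bits_).any();
    }

    void clear() { bits_.reset(); }
};
```

    The false-positive rate grows with the number of inserted blocks, which is why reducing effective signature size without losing precision, as the proposed filter does, matters.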

    Measurement, Modeling, and Characterization for Energy-Efficient Computing

    The ever-increasing ecological footprint of the Information Technology (IT) sector, coupled with the adverse effects of high power consumption on electronic circuits, has increased the significance of energy-efficient computing in the last decade. Making energy-efficient computing a norm rather than an exception requires that system designers and programmers understand the energy implications of their design and implementation choices. This necessitates a detailed view of a system’s energy expenditure and/or power consumption. We explore this aspect of energy-efficient computing in this thesis through power measurement, power modeling, and energy characterization. First, we present a quantitative comparison of power measurement data collected for computer systems using four techniques: a power meter at the wall outlet, current transducers at the ATX power rails, the CPU voltage regulator’s current monitor, and Intel’s proprietary RAPL (Running Average Power Limit) interface. We compare them for accuracy, sensitivity, and accessibility. Second, we present two different methodologies to model processor power consumption. The first model estimates power consumption at the granularity of individual cores using per-core performance events and temperature sensors. We validate the methodology on six different platforms and show that our model estimates power consumption with consistently high accuracy across all platforms. To understand energy expenditure trends across different frequencies and different degrees of parallelism, we need to model power at a much finer granularity. The second power model addresses this issue by estimating static and dynamic power consumption for individual cores and the uncore. We validate this model on Intel’s Haswell platform for single-threaded and multi-threaded benchmarks. We use this power model to characterize the energy efficiency of frequency scaling on the Haswell microarchitecture and use the insights to implement a low-overhead DVFS scheduler. We also characterize the energy efficiency of thread scaling using the power model and demonstrate how different communication parameters and microarchitectural traits affect an application’s energy as it scales. Finally, we perform a detailed performance and energy characterization of Intel’s Restricted Transactional Memory (RTM). We use the TinySTM software transactional memory (STM) system to benchmark RTM’s performance against competing STM alternatives. We use microbenchmarks and the STAMP benchmark suite to compare RTM and STM performance and energy behavior. We quantify the RTM hardware limitations and identify conditions required for RTM to outperform STM.
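
    The conventional way RTM is exercised against an STM baseline is with an RTM-first, lock-fallback critical section like the sketch below; the retry count and the lock-subscription fallback are standard practice rather than specifics from this thesis, and the code assumes a TSX-capable CPU and compilation with -mrtm.

```cpp
#include <immintrin.h>   // _xbegin, _xend, _xabort
#include <atomic>

// A simple global fallback spinlock whose state can be read inside a transaction.
std::atomic<bool> fallback_locked{false};

static void fallback_acquire() {
    bool expected;
    do {
        while (fallback_locked.load(std::memory_order_relaxed)) { /* spin */ }
        expected = false;
    } while (!fallback_locked.compare_exchange_weak(expected, true,
                                                    std::memory_order_acquire));
}

static void fallback_release() {
    fallback_locked.store(false, std::memory_order_release);
}

// Run `critical` as an RTM transaction, falling back to the lock after repeated aborts.
template <typename Fn>
void rtm_execute(Fn critical, int max_retries = 4) {
    for (int attempt = 0; attempt < max_retries; ++attempt) {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            // Lock subscription: reading the flag puts it in the read set, so a thread
            // that later acquires the fallback lock will abort this transaction.
            if (fallback_locked.load(std::memory_order_relaxed)) _xabort(0xFF);
            critical();
            _xend();
            return;
        }
        // status encodes the abort cause (conflict, capacity, explicit, ...); retry.
    }
    fallback_acquire();       // give up on speculation: run non-speculatively
    critical();
    fallback_release();
}
```

    Varying the retry count and measuring how often execution ends up on the fallback path is one simple way to expose the capacity and contention limitations quantified in the abstract.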