4 research outputs found

    Computational Methods in Science and Engineering: Proceedings of the Workshop SimLabs@KIT, November 29-30, 2010, Karlsruhe, Germany

    In this proceedings volume we provide a compilation of articles covering applications from different research fields, ranging from capacity up to capability computing. Besides classical computing aspects such as parallelization, the focus of these proceedings is on multi-scale approaches and methods for tackling algorithm and data complexity. Practical aspects regarding the use of the HPC infrastructure and the tools and software available at the SCC are also presented.

    Optimization of Energy- and Power-Driven Architecture Exploration for Multicore and Heterogeneous Systems-on-Chip

    The contribution of this work builds on top of established virtual prototype platforms to improve both SoC design quality and productivity. Initially, an automatic system-level power estimation framework was developed to address the critical issue of early power estimation in SoC design. The estimation framework models the static and dynamic power consumption of the hardware components. These models are created from the normalized values of the basic SoC design components, obtained through a one-time power simulation of the RTL hardware models. The framework allows dynamic reconfiguration of the technology node for the power estimation models, and its instantaneous power reporting aids the detection of possible hotspots early in the design process. Even with this additional data, the steadily growing design space of complex heterogeneous SoCs makes finding the right parameter configuration a challenging and laborious task for a system-level designer. This work addresses this bottleneck by optimizing the design space exploration (DSE) process for MPSoC design. An automatic DSE framework for virtual platforms (VPs) was developed which is flexible and allows selecting an optimal parameter configuration without pre-existing knowledge. To reduce exploration time, the framework is equipped with several multi-objective optimization techniques based on simulated annealing and a genetic algorithm. Lastly, to aid HW/SW partitioning at the system level, a flexible and automated workflow (SW2TLM) is presented. It allows the designer to explore various possible partitioning scenarios without going into the depths of hardware architecture complexity and software integration. The framework generates system-level hardware accelerators from the corresponding functionality encoded in the software code and integrates them into the VP. The power consumption and speedup of the acceleration are reported to the designer, which further increases the quality and productivity of the development process towards the final architecture. The presented tools are evaluated using a state-of-the-art VP for a range of single- and multi-core applications. Considering the energy-delay product, exploration time was reduced by approximately 62% in the worst case while maintaining 90% parameter accuracy compared with previous techniques. SW2TLM further increases exploration versatility by combining modern high-level synthesis with system-level architectural exploration.

    This work builds on the established concept of virtual prototype (VP) platforms to improve the quality and productivity of the design process. First, an automatic system-level framework was developed to enable power estimation for SoC designs at a much earlier stage of development. To this end, the static and dynamic power contributions of the individual hardware elements are expressed by an abstract model. The framework allows dynamic adaptation of the technology node as well as the integration of new power models for third-party components. The continuous capture of power characteristics and their graphical presentation in the user interface additionally support the early identification of possible hotspots. Given this additional data, combined with the steadily growing design space of complex SoCs, identifying the right parameter configuration is a time-consuming task. The presented concepts allow a higher degree of automation of the exploration process: multi-objective optimization techniques based on simulated annealing and genetic algorithms make it possible to identify suitable configurations without prior knowledge or experience. Finally, a flexible and automated workflow was developed to support HW/SW partitioning at the system level. It allows the designer to examine various possible partitioning scenarios without delving into the complexity of the hardware architecture and software integration. The framework generates abstract accelerator models from the corresponding software functions and integrates them seamlessly into the executable VP. Detailed data on the power consumption, speedup, and communication overhead of a partitioning are captured and made available to the designer, further increasing the quality and productivity of the development process. The presented tools are evaluated with a modern VP for a range of software applications. Considering the energy-delay product, a reduction in exploration time of more than 62% at 90% parameter accuracy was observed. Building on this, the automated examination of different HW/SW partitionings facilitates the development of heterogeneous architectures by combining modern HLS with system-level architecture exploration.
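    To make the component-level power estimation idea concrete, the following minimal Python sketch models each hardware block with normalized static and dynamic power values calibrated once (e.g., from an RTL power simulation) and scaled at runtime by an activity counter and a technology-node factor. The component names, numeric values, and scaling table are illustrative assumptions, not the framework's actual interface.

```python
# A minimal sketch, assuming per-component power values calibrated once and
# scaled by runtime activity and a technology-node factor. All names and
# numbers below are illustrative assumptions.

TECH_SCALE = {"45nm": 1.00, "28nm": 0.55, "16nm": 0.30}  # assumed node scaling factors

class ComponentPowerModel:
    def __init__(self, name, static_mw, dyn_mw_per_access):
        self.name = name
        self.static_mw = static_mw                  # normalized leakage power
        self.dyn_mw_per_access = dyn_mw_per_access  # normalized switching cost
        self.accesses = 0                           # activity counter, updated by the simulation

    def power(self, tech="28nm", window_cycles=10_000):
        scale = TECH_SCALE[tech]
        dynamic = self.dyn_mw_per_access * (self.accesses / window_cycles)
        return scale * (self.static_mw + dynamic)

components = [
    ComponentPowerModel("cpu0", static_mw=12.0, dyn_mw_per_access=0.004),
    ComponentPowerModel("l2_cache", static_mw=6.5, dyn_mw_per_access=0.002),
    ComponentPowerModel("dma", static_mw=1.2, dyn_mw_per_access=0.001),
]

def report(tech="28nm"):
    """Instantaneous per-component power; persistently high values hint at hotspots."""
    for c in components:
        print(f"{c.name:10s} {c.power(tech):7.2f} mW @ {tech}")

report("16nm")  # switching the node reuses the same calibrated component values
```

    Reporting such per-component values continuously during simulation is what enables early hotspot detection: a block whose reported power stays well above its peers is a hotspot candidate long before RTL sign-off.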
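    The exploration step can be illustrated in the same spirit. The sketch below runs a simulated-annealing search over a small SoC parameter space and minimizes the energy-delay product; the parameter space, the evaluate() stub, and the cooling schedule are placeholder assumptions, whereas the thesis's DSE framework evaluates candidate configurations on the actual virtual platform and also provides genetic-algorithm variants.

```python
# A sketch of annealing-based design space exploration minimizing the
# energy-delay product (EDP). PARAM_SPACE, evaluate(), and the cooling
# schedule are placeholder assumptions.

import math
import random

PARAM_SPACE = {
    "cores":    [1, 2, 4, 8],
    "l2_kb":    [128, 256, 512, 1024],
    "freq_mhz": [400, 600, 800, 1000],
}

def evaluate(cfg):
    """Stand-in for a VP simulation; returns (energy_mJ, runtime_ms)."""
    energy = cfg["cores"] * cfg["freq_mhz"] * 0.01 + cfg["l2_kb"] * 0.002
    runtime = 1000.0 / (cfg["cores"] * cfg["freq_mhz"]) + 0.05
    return energy, runtime

def edp(cfg):
    energy, runtime = evaluate(cfg)
    return energy * runtime

def neighbour(cfg):
    """Mutate one randomly chosen parameter of the configuration."""
    new = dict(cfg)
    key = random.choice(list(PARAM_SPACE))
    new[key] = random.choice(PARAM_SPACE[key])
    return new

def anneal(steps=500, t0=5.0, cooling=0.99):
    current = {k: random.choice(v) for k, v in PARAM_SPACE.items()}
    best, temp = current, t0
    for _ in range(steps):
        cand = neighbour(current)
        delta = edp(cand) - edp(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = cand
            if edp(current) < edp(best):
                best = current
        temp *= cooling
    return best, edp(best)

print(anneal())
```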

    Distributed learning and inference in deep models

    In recent years, the size of deep learning problems has increased significantly, both in terms of the number of available training samples and in terms of the number of parameters and the complexity of the model. In this thesis, we considered the challenges encountered in the training and inference of large deep models, especially on nodes with limited computational power and capacity. We studied two classes of related problems: 1) distributed training of deep models, and 2) compression and restructuring of deep models for efficient distributed and parallel execution to reduce inference times. In particular, we considered the communication bottleneck in distributed training and inference of deep models. Data compression is a viable tool to mitigate this bottleneck in distributed deep learning; however, existing methods suffer from drawbacks such as increased variance of the stochastic gradients (SG), slower convergence rates, or bias added to the SG. We addressed these challenges from three different perspectives: 1) information theory and the CEO problem, 2) indirect SG compression via matrix factorization, and 3) quantized compressive sampling. We showed, both theoretically and via simulations, that our proposed methods achieve a smaller MSE than other unbiased compression methods at lower communication bit rates, resulting in superior convergence rates. Next, we considered federated learning over wireless multiple access channels (MAC). Efficient communication requires the communication algorithm to satisfy the constraints imposed by the nodes in the network and by the communication medium. To satisfy these constraints and take advantage of the over-the-air computation inherent in a MAC, we proposed a framework based on random linear coding and developed efficient power management and channel usage techniques to manage the trade-off between power consumption and communication bit rate. In the second part of this thesis, we considered the distributed parallel implementation of an already-trained deep model on multiple workers. Since latency due to synchronization and data transfer among workers adversely affects the performance of the parallel implementation, it is desirable to minimize the interdependency among the parallel sub-models on the workers. To achieve this goal, we developed and analyzed RePurpose, an efficient algorithm that rearranges the neurons of a neural network and partitions them (without changing the general topology of the network) such that the interdependency among sub-models is minimized under the computation and communication constraints of the workers.
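    As a generic illustration of the communication-compression idea discussed above, the sketch below implements QSGD-style unbiased stochastic quantization of a gradient vector: coarse quantization cuts the number of transmitted bits, while random dithering keeps the decoded gradient unbiased, which is what preserves SGD convergence. This is only a common baseline scheme with assumed parameters; it is not the CEO-problem, matrix-factorization, or quantized-compressive-sampling method developed in the thesis.

```python
# Baseline unbiased stochastic gradient quantization (QSGD-style dithering).
# Assumed illustration only; not the thesis's compression schemes.

import numpy as np

def encode(grad, levels=16):
    """Stochastically round |g| / ||g|| onto a uniform grid; unbiased by construction."""
    norm = np.linalg.norm(grad)
    if norm == 0:
        return norm, np.zeros_like(grad, dtype=np.int8), np.ones_like(grad, dtype=np.int8)
    scaled = np.abs(grad) / norm * levels
    lower = np.floor(scaled)
    q = lower + (np.random.rand(*grad.shape) < (scaled - lower))  # random dithering
    return norm, q.astype(np.int8), np.sign(grad).astype(np.int8)

def decode(norm, q, sign, levels=16):
    """Reconstruct the gradient estimate from the low-bit representation."""
    return norm * sign * (q.astype(np.float32) / levels)

# Unbiasedness check: the average of many decoded samples approaches the true gradient.
g = np.random.randn(1000).astype(np.float32)
est = np.mean([decode(*encode(g)) for _ in range(2000)], axis=0)
print(float(np.max(np.abs(est - g))))  # small, and shrinking with more samples
```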
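    The objective behind RePurpose-style restructuring can also be sketched with a toy heuristic: assign each output neuron of a layer to the worker that already holds most of its incoming weight mass, subject to a balance constraint, so that fewer nonzero weights cross worker boundaries during parallel inference. The greedy rule, the balance constraint, and the toy sparse layer below are simplifying assumptions; the actual RePurpose algorithm solves the rearrangement more carefully.

```python
# Toy greedy neuron placement to reduce cross-worker traffic in one layer.
# Simplified illustration of the partitioning objective, not the RePurpose algorithm.

import numpy as np

def assign_outputs(W, in_assign, n_workers):
    """W: (out, in) weights; in_assign[j] = worker holding input neuron j."""
    out_assign = np.empty(W.shape[0], dtype=int)
    capacity = int(np.ceil(W.shape[0] / n_workers))
    load = np.zeros(n_workers, dtype=int)
    for i in range(W.shape[0]):
        # weight mass each worker already holds among this neuron's inputs
        mass = np.array([np.abs(W[i, in_assign == w]).sum() for w in range(n_workers)])
        mass[load >= capacity] = -np.inf          # keep the partition balanced
        out_assign[i] = int(np.argmax(mass))
        load[out_assign[i]] += 1
    return out_assign

def cross_traffic(W, in_assign, out_assign):
    """Count nonzero weights whose input and output neurons sit on different workers."""
    cross = in_assign[None, :] != out_assign[:, None]
    return int(np.count_nonzero(W * cross))

W = np.random.randn(64, 64) * (np.random.rand(64, 64) < 0.1)  # sparse toy layer
in_assign = np.repeat(np.arange(4), 16)                       # inputs split over 4 workers
out_assign = assign_outputs(W, in_assign, 4)
print(cross_traffic(W, in_assign, out_assign))
```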