208 research outputs found

    Tracing and profiling machine learning dataflow applications on GPU

    In this paper, we propose a profiling and tracing method for dataflow applications with GPU acceleration. Dataflow models can be represented by graphs and are widely used in many domains such as signal processing or machine learning. Within the graph, the data flows along the edges, and the nodes correspond to the computing units that process the data. To accelerate the execution, co-processing units such as GPUs are often used for compute-intensive nodes. The work in this paper aims at providing useful information about the execution of the dataflow graph on the available hardware, in order to understand and possibly improve its performance. The collected traces include low-level information about the CPU from the Linux kernel (system calls), as well as mid-level and high-level information about intermediate libraries such as CUDA, HIP or HSA, and about the dataflow model, respectively. This is followed by post-mortem analysis and visualization steps to enhance the trace and show useful information to the user. To demonstrate the effectiveness of the method, it was evaluated on TensorFlow, a well-known machine learning library that uses a dataflow computational graph to represent its algorithms. We present a few examples of machine learning applications that can be optimized with the help of the information provided by our proposed method. For example, we reduce the execution time of a face recognition application by a factor of 5. We suggest a better placement of the computation nodes on the available hardware components for a distributed application. Finally, we also enhance the memory management of an application to speed up its execution.
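
    The paper's own tracing stack (LTTng kernel events plus CUDA/HIP/HSA interception) is not reproduced here; as a point of reference, the sketch below only shows the kind of per-node timing data TensorFlow itself can emit for its dataflow graph through the TensorFlow 1.x session API. The toy matmul graph and the output file name are illustrative placeholders.

```python
# Minimal sketch (TensorFlow 1.x graph API): record per-node execution timings
# for one step of a dataflow graph and export them as a Chrome trace.  This
# covers only the high-level (dataflow) layer discussed in the paper; the
# kernel-level (LTTng) and CUDA/HIP/HSA-level events are collected separately.
import tensorflow.compat.v1 as tf
from tensorflow.python.client import timeline

tf.disable_eager_execution()

a = tf.random_normal([1024, 1024])
b = tf.random_normal([1024, 1024])
c = tf.matmul(a, b)  # a compute-intensive node, eligible for GPU placement

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

with tf.Session() as sess:
    sess.run(c, options=run_options, run_metadata=run_metadata)

# step_stats records start time, duration and device placement of every node.
chrome_trace = timeline.Timeline(run_metadata.step_stats).generate_chrome_trace_format()
with open("dataflow_step_trace.json", "w") as f:
    f.write(chrome_trace)  # open in chrome://tracing or Perfetto
```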

    Tracing and profiling of dataflow machine learning applications using a graphics processor

    Computing power requirements are currently growing, while hardware improvements are starting to slow down. Processor performance, and in particular clock frequency, is stagnating for physical reasons such as lithography limits and heat dissipation. To overcome these limits, parallel computing appears to be a promising solution, using heterogeneous architectures. These combine several computing units of possibly different types, which allows them to offer highly parallel operation. Even so, using all of the hardware efficiently remains difficult, and programming these architectures at the software level is a challenge. Consequently, different models have emerged, notably dataflow approaches, whose characteristics are well suited to this kind of parallel context: they make it easier to program the different computing units and to benefit as much as possible from the available hardware. When looking for optimal performance, it is essential to have tools to diagnose potential problems. A few solutions have already proven effective for a more traditional, sequential programming model, with or without a graphics processor; examples include tools such as LTTng and Ftrace, aimed at analyzing the central processor. For graphics processors, the proprietary, closed-source tools offered by the vendors are generally the most complete and the ones preferred by programmers. This is nonetheless a limitation, since these solutions are not general and remain tied to the hardware of a single vendor. Moreover, they offer limited flexibility, with fixed, predefined visualizations and analyses that can neither be modified nor extended according to a user's needs. Finally, no existing tool specifically targets dataflow models.

    ABSTRACT: Recently, increasing computing capabilities have been required in various areas such as scientific computing, video games, graphical rendering and artificial intelligence. These domains usually involve processing large amounts of data as fast as possible. Unfortunately, hardware improvements have recently slowed down. The CPU clock speed, for example, is no longer increasing much, possibly nearing technological limits; physical constraints such as heat dissipation and fine etching are the main reasons. Consequently, parallel processing on heterogeneous architectures has become popular. In this context, the traditional processors get support from other computing units such as graphics processors. To program these, the dataflow model offers several advantages: it is inherently parallel and thus well adapted. Guaranteeing optimal performance is another main concern, and tracing and profiling the central and graphics processing units are two useful techniques for this. Several tools exist, such as LTTng and Ftrace, that can trace the operating system and focus on the central processor. In addition, proprietary tools offered by hardware vendors can help analyze and monitor the graphics processor. However, these tools are specific to one type of hardware and lack flexibility. Moreover, none of them specifically targets dataflow applications executed on a heterogeneous platform.

    Modernizing Parallel Code with Pattern Analysis


    tf-Darshan: Understanding Fine-grained I/O Performance in Machine Learning Workloads

    Machine Learning applications on HPC systems have been gaining popularity in recent years. The upcoming large-scale systems will offer tremendous parallelism for training through GPUs. However, another demanding aspect of Machine Learning is I/O, and this can potentially be a performance bottleneck. TensorFlow, one of the most popular Deep-Learning platforms, now offers a new profiler interface and allows instrumentation of TensorFlow operations. However, the current profiler only enables analysis at the TensorFlow platform level and does not provide system-level information. In this paper, we extend the TensorFlow Profiler and introduce tf-Darshan, both a profiler and a tracer, that performs instrumentation through Darshan. We use the same Darshan shared instrumentation library and implement a runtime attachment without using a system preload. We can extract Darshan profiling data structures during TensorFlow execution to enable analysis through the TensorFlow profiler. We visualize the performance results through TensorBoard, the web-based TensorFlow visualization tool. At the same time, we do not alter Darshan's existing implementation. We illustrate tf-Darshan by performing two case studies on ImageNet image and malware classification. We show that by guiding optimization using data from tf-Darshan, we increase POSIX I/O bandwidth by up to 19% by selecting data for staging on fast-tier storage. We also show that Darshan has the potential to be used as a runtime library for profiling and providing information for future optimization. Comment: Accepted for publication at the 2020 International Conference on Cluster Computing (CLUSTER 2020).
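
    tf-Darshan's own hooks are not shown here; the sketch below is only the stock TensorFlow 2 profiler interface that the paper extends, with TensorBoard as the viewer, to illustrate where the Darshan I/O counters would surface. The MNIST model, batch size and log directory are illustrative placeholders.

```python
# Minimal sketch of the standard TensorFlow 2 profiler interface (the vanilla
# API, not tf-Darshan itself).  The profile is written to a log directory and
# inspected in TensorBoard's Profile tab; tf-Darshan additionally exposes
# Darshan's POSIX I/O statistics alongside these traces.
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

tf.profiler.experimental.start("logs/profile")      # begin trace collection
model.fit(x_train, y_train, epochs=1, batch_size=256)
tf.profiler.experimental.stop()                     # flush trace for TensorBoard

# Inspect with:  tensorboard --logdir logs/profile   (Profile tab)
```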

    Scaling a convolutional neural network based Flower counting application in a distributed GPU cluster

    Taking advantage of modern data acquisition techniques, the researchers of P2IRC at the University of Saskatchewan developed an application to monitor the status of flower growth during different phases of the blooming period and to predict the yield of canola crops. Although the application could predict a nearly accurate number of flowers in a few scenarios, its inability to handle challenging situations, such as misinterpreting sun reflections or roadside dust as flowers, motivated the researchers to find an alternative approach to counting flowers. Besides better accuracy, the goals for the new application are faster inference of the flower count and scalability in distributed environments. With these goals in mind, this thesis develops and evaluates a convolutional neural network (CNN) based flower counting application, taking inspiration from two previous works in which CNNs were used for counting heads in dense crowds and for predicting the number of bacterial cells in medical imagery. The application addresses the performance and accuracy goals mentioned above. Two challenges of using a neural network are that (a) the training needs a large volume of data to converge to a low error and (b) the training is computationally expensive and takes a long time to complete. To address the first challenge, experiments were run both with "ground truth" estimated using a modified version of the previous flower counter and with ground truth from manual annotation. To address the long training time, two distributed versions of the proposed application were created, based on two different distributed architectures: Parameter Server and Ring-AllReduce. Moreover, a detailed explanation of the proposed CNN's architecture, along with its memory footprint and GPU utilization, is organized as an in-depth case study to help trace the model's memory consumption during training. Across different sets of experiments, the new flower counter is observed to be more accurate than its previous version, and both distributed implementations successfully reduce the total completion time, scaling nearly linearly as more workers are added to the training. The Ring-AllReduce version performed slightly better than the Parameter Server version, but the differences were not substantial.
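
    As a rough illustration of the all-reduce style of data parallelism compared in the thesis (this is not the thesis code, and the tiny CNN merely stands in for the flower-counting network), the sketch below uses TensorFlow's tf.distribute API; MultiWorkerMirroredStrategy and ParameterServerStrategy would be the multi-node counterparts of the Ring-AllReduce and Parameter Server setups.

```python
# Sketch of all-reduce-style data-parallel training in TensorFlow.
# MirroredStrategy all-reduces gradients across the local GPUs; the data and
# the regression head predicting a flower count are placeholders.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()          # one replica per visible GPU
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():                               # variables are mirrored
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1),                    # predicted flower count
    ])
    model.compile(optimizer="adam", loss="mse")

# Placeholder data; per-replica gradients are all-reduced into one update.
images = tf.random.uniform([512, 64, 64, 3])
counts = tf.random.uniform([512, 1], maxval=100.0)
model.fit(images, counts, batch_size=64, epochs=1)
```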

    A Fortran Kernel Generation Framework for Scientific Legacy Code

    Quality assurance procedures are very important in software development. The complexity of modules and structure in software impedes testing and further development. For complex and poorly designed scientific software, module developers and software testers need to put in a lot of extra effort to monitor the impact of unrelated modules and to test the whole system's constraints. In addition, widely used benchmarks cannot give programmers an accurate, program-specific evaluation of system performance, whereas generated kernels can provide considerable insight for performance tuning. Therefore, in order to greatly improve the productivity of various scientific software engineering tasks such as performance tuning, debugging, and verification of simulation results, we developed an automatic compute kernel extraction prototype platform for complex legacy scientific code. In addition, since scientific research and experiments require long-running simulations and large data transfers, we apply message-passing-based parallelization and I/O optimization to substantially improve the performance of the kernel extractor framework, and we use profiling tools to guide the parallel distribution. Abnormal event detection is another important aspect of scientific research; when dealing with huge observational datasets combined with simulation results, it becomes not only essential but also extremely difficult. In this dissertation, in order to detect both high-frequency and low-frequency events, we reconfigured the framework with an in-situ data transfer infrastructure. By combining signal-processing preprocessing (decimation) with a machine learning detection model trained on the streamed data, our framework significantly decreases the amount of data that must be transferred for concurrent analysis between distributed CPU/GPU nodes. Finally, the dissertation presents the implementation of the framework and a demonstration case study on the ACME Land Model (ALM). The generated compute kernels, at much lower cost, can be used for performance tuning experiments and quality assurance, including debugging legacy code, verifying simulation results through single-point and multi-point variable tracking, collaborating with compiler vendors, and generating custom benchmark tests.
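
    The dissertation's in-situ pipeline is not available here; the sketch below only illustrates the decimation step it describes, using SciPy on a made-up simulated stream, to show how downsampling reduces the data that must be shipped to the analysis nodes.

```python
# Illustrative sketch only (not the dissertation's framework): decimating a
# simulated output stream before shipping it for event detection.  The signal
# shape, sampling rate and decimation factor are invented.
import numpy as np
from scipy.signal import decimate

fs = 1000                                   # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                # one minute of simulation output
stream = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.randn(t.size)

q = 10                                      # decimation factor
reduced = decimate(stream, q, zero_phase=True)   # low-pass filter + downsample

print(f"samples to transfer: {stream.size} -> {reduced.size} "
      f"({stream.size / reduced.size:.0f}x less)")
# 'reduced' is what would be sent to the CPU/GPU analysis nodes and fed to the
# detection model; high-frequency events need a smaller q or raw windows.
```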

    Throughput Prediction of Asynchronous SGD in TensorFlow

    Modern machine learning frameworks can train neural networks using multiple nodes in parallel, each computing parameter updates with stochastic gradient descent (SGD) and sharing them asynchronously through a central parameter server. Due to communication overhead and bottlenecks, the total throughput of SGD updates in a cluster scales sublinearly, saturating as the number of nodes increases. In this paper, we present a method for predicting training throughput from profiling traces collected on a single-node configuration. Our approach models the interaction of multiple nodes and the scheduling of concurrent transmissions between the parameter server and each node. By accounting for the dependencies between received parts and pending computations, we predict overlaps between computation and communication and generate synthetic execution traces for configurations with multiple nodes. We validate our approach on TensorFlow training jobs for popular image classification neural networks, on AWS and on our in-house cluster, using nodes equipped with GPUs or only with CPUs. We also investigate the effects of the data transmission policies used in TensorFlow and the accuracy of our approach when combined with optimizations of the transmission schedule.
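
    The paper's trace-driven predictor is not reproduced here; the toy model below, with invented compute and transfer times, only illustrates why throughput saturates when all updates share a central parameter server, which is the effect the paper predicts from single-node traces.

```python
# Toy analytic sketch (not the paper's model): each worker needs t_compute
# seconds per update, but all updates share the parameter server's link, which
# can serve at most 1 / t_comm updates per second.  All numbers are invented.
def predicted_throughput(n_workers, t_compute=0.50, t_comm=0.08):
    """Updates/second for n workers, assuming serialized PS transfers."""
    per_worker = 1.0 / (t_compute + t_comm)       # a single worker's ceiling
    server_cap = 1.0 / t_comm                     # parameter-server ceiling
    return min(n_workers * per_worker, server_cap)

for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:2d} workers -> {predicted_throughput(n):5.1f} updates/s")
# Throughput grows almost linearly at first, then flattens at 1/t_comm = 12.5
# updates/s: the sublinear, saturating scaling described in the abstract.
```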

    Computation scheduling in neural network inference on embedded hardware

    This thesis examines state-of-the-art approaches to object detection with convolutional neural networks, as used in autonomous driving. Running these networks efficiently on embedded systems requires an understanding of the network structure and of how a particular framework carries out its computation. The main goal of the thesis is to compare several available machine learning frameworks and to describe the undocumented internal architecture of the TensorFlow framework, so that future researchers can modify the executed code paths to improve the scheduling of the individual processes. To allow future modifications to be evaluated on the target platform, the NVIDIA Jetson Tegra X2, the thesis introduces a simple benchmark and describes how to read the power consumption and thermal profile of the chips on the board.
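
    The thesis benchmark itself is not shown; the sketch below reads board temperatures through the standard Linux thermal sysfs interface, which is how such values are typically exposed on a Jetson TX2. The INA3221 power-rail path in the trailing comment is an assumption and varies between L4T releases.

```python
# Minimal sketch (not the thesis benchmark): sampling on-board temperatures
# via the standard Linux thermal sysfs interface, as one would on a Jetson TX2.
from pathlib import Path

def read_thermal_zones():
    """Return {zone name: temperature in deg C} from /sys/class/thermal."""
    readings = {}
    for zone in sorted(Path("/sys/class/thermal").glob("thermal_zone*")):
        name = (zone / "type").read_text().strip()
        millideg = int((zone / "temp").read_text().strip())
        readings[name] = millideg / 1000.0
    return readings

if __name__ == "__main__":
    for name, temp in read_thermal_zones().items():
        print(f"{name:20s} {temp:6.1f} C")
    # Power rails (assumed path, TX2/L4T dependent), values in milliwatts:
    #   /sys/bus/i2c/drivers/ina3221x/0-0041/iio:device0/in_power0_input
```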