15 research outputs found

    Experiences with Resource Provisioning for Scientific Workflows Using Corral


    Parallel Shortest Path Algorithms for Solving Large-Scale Instances

    We present an experimental study of parallel algorithms for solving the single-source shortest path problem with non-negative edge weights (NSSP) on large-scale graphs. We implement Meyer and Sanders' Δ-stepping algorithm and report performance results on the Cray MTA-2, a multithreaded parallel architecture. The MTA-2 is a high-end shared-memory system offering two unique features that aid the efficient implementation of irregular parallel graph algorithms: the ability to exploit fine-grained parallelism, and low-overhead synchronization primitives. Our implementation exhibits remarkable parallel speedup compared with a competitive sequential algorithm for low-diameter sparse graphs. For instance, Δ-stepping on a directed scale-free graph of 100 million vertices and 1 billion edges takes less than ten seconds on 40 processors of the MTA-2, with a relative speedup of close to 30. To our knowledge, these are the first performance results for parallel NSSP on realistic graph instances on the order of billions of vertices and edges.
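    Δ-stepping partitions tentative distances into buckets of width Δ and repeatedly relaxes light edges (weight ≤ Δ) within the current bucket before handling heavy edges, which is what exposes parallelism inside each bucket. The following is a minimal sequential Python sketch of that bucket structure, given only for illustration: it is not the multithreaded MTA-2 implementation reported above, and the graph representation and choice of Δ are assumptions.

```python
import math

def delta_stepping(graph, source, delta):
    """Sequential sketch of Delta-stepping SSSP (Meyer & Sanders).

    graph: dict vertex -> list of (neighbor, non-negative weight).
    Returns a dict of shortest-path distances from source.
    """
    dist = {v: math.inf for v in graph}
    buckets = {}                              # bucket index -> set of vertices

    def relax(v, d):
        """Move v into the bucket of its improved tentative distance."""
        if d < dist.get(v, math.inf):
            old = dist.get(v, math.inf)
            if old < math.inf:
                idx = int(old // delta)
                if idx in buckets:
                    buckets[idx].discard(v)
                    if not buckets[idx]:
                        del buckets[idx]
            dist[v] = d
            buckets.setdefault(int(d // delta), set()).add(v)

    relax(source, 0.0)
    while buckets:
        i = min(buckets)                      # next non-empty bucket
        settled = set()
        # Relax light edges (weight <= delta) until bucket i stops refilling.
        while i in buckets:
            frontier = buckets.pop(i)
            settled |= frontier
            for u in frontier:
                for v, w in graph.get(u, ()):
                    if w <= delta:
                        relax(v, dist[u] + w)
        # Relax heavy edges (weight > delta) once per settled vertex.
        for u in settled:
            for v, w in graph.get(u, ()):
                if w > delta:
                    relax(v, dist[u] + w)
    return dist
```

    A common heuristic is to pick Δ close to the average edge weight; in a parallel version, the relaxations of each frontier are processed concurrently.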

    Performance Improvement of Multithreaded Java Applications Execution on Multiprocessor Systems

    The design of the Java language, which includes important aspects such as its portability and architecture neutrality, its multithreading facilities, its familiarity (due to its resemblance to C/C++), its robustness, its security capabilities and its distributed nature, makes it a potentially interesting language for parallel environments such as high-performance computing (HPC) environments, where applications can benefit from Java's multithreading support to perform parallel calculations, and e-business environments, where multithreaded Java application servers (i.e. those following the J2EE specification) can exploit Java's multithreading facilities to handle a large number of requests concurrently.

    However, the use of Java for parallel programming has to face a number of problems that can easily offset the gain obtained from parallel execution. The first problem is the large overhead incurred by the threading support in the JVM when threads are used to execute fine-grained work, when a large number of threads are created to support the execution of the application, or when threads interact closely through synchronization mechanisms. The second problem is the performance degradation that occurs when these multithreaded applications are executed on multiprogrammed parallel systems. The main cause of these problems is the lack of communication between the execution environment and the applications, which can lead the applications to make an uncoordinated use of the available resources.

    This thesis contributes the definition of an environment to analyze and understand the behavior of multithreaded Java applications. The main contribution of this environment is that the information from all levels involved in the execution (application, application server, JVM and operating system) is correlated. This is very important for understanding how this kind of application behaves when executed in environments that include servers and virtual machines, because the origin of performance problems can reside in any of these levels or in their interaction.

    In addition, and based on the understanding gathered with the proposed analysis environment, this thesis contributes scheduling mechanisms and policies oriented towards the efficient execution of multithreaded Java applications on multiprocessor systems, considering the interactions and coordination between the scheduling mechanisms and policies at the different levels involved in the execution. The basic idea consists of allowing cooperation between the applications and the execution environment in resource management, by establishing a bi-directional communication path between the applications and the underlying system. On one side, the applications request from the execution environment the amount of resources they need. On the other side, the execution environment can be queried at any time by the applications to learn their current resource assignments. This thesis proposes that applications use the information provided by the execution environment to adapt their behavior to the amount of resources allocated to them (self-adaptive applications). This adaptation is accomplished in HPC environments through the malleability of the applications, and in e-business environments with an overload control approach that performs admission control based on SSL connection differentiation, to prevent throughput degradation and maintain Quality of Service (QoS).

    The evaluation results demonstrate that dynamically providing resources to self-adaptive applications on demand improves the performance of multithreaded Java applications both in HPC environments and in e-business environments. While having self-adaptive applications avoids performance degradation, the dynamic provision of resources allows the requirements of the applications to be met on demand, adapting to their changing resource needs. In this way, better resource utilization is achieved, because resources not used by one application can be distributed among the other applications.
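    To make the self-adaptive idea concrete, here is a small Python sketch of an application that asks a resource manager for processors and periodically re-queries its allocation to resize its worker pool. The RuntimeEnvironment class and the request_processors / allocated_processors names are hypothetical stand-ins invented for illustration; they are not the thesis's actual interface, and the allocation policy shown is arbitrary.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

class RuntimeEnvironment:
    """Hypothetical stand-in for the execution environment's resource manager."""
    def __init__(self, total_cpus):
        self._lock = threading.Lock()
        self._total = total_cpus
        self._granted = {}

    def request_processors(self, app_id, wanted):
        # Grant at most an equal share of the machine (illustrative policy only).
        with self._lock:
            share = max(1, self._total // max(1, len(self._granted) + 1))
            self._granted[app_id] = min(wanted, share)
            return self._granted[app_id]

    def allocated_processors(self, app_id):
        with self._lock:
            return self._granted.get(app_id, 0)

class SelfAdaptiveApp:
    """Application that resizes its worker pool to match its current allocation."""
    def __init__(self, app_id, runtime, wanted):
        self.app_id, self.runtime = app_id, runtime
        self.workers = runtime.request_processors(app_id, wanted)

    def run(self, tasks):
        while tasks:
            # Re-query the allocation and adapt the degree of parallelism.
            self.workers = max(1, self.runtime.allocated_processors(self.app_id))
            batch, tasks = tasks[:self.workers], tasks[self.workers:]
            with ThreadPoolExecutor(max_workers=self.workers) as pool:
                list(pool.map(lambda t: t(), batch))

if __name__ == "__main__":
    rt = RuntimeEnvironment(total_cpus=8)
    app = SelfAdaptiveApp("app-1", rt, wanted=8)
    app.run([lambda: time.sleep(0.01) for _ in range(32)])
```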

    Design and evaluation of a Thread-Level Speculation runtime library

    In the coming years it is more than likely that machines with hundreds or even thousands of processors will be commonplace. To take advantage of these machines, and given the difficulty of parallel programming, it would be desirable to have compilation or runtime systems that extract as much parallelism as possible from existing applications. Accordingly, many parallelization techniques have been proposed in recent years. However, most of them focus on simple codes, that is, codes without dependences between their instructions. Speculative parallelization emerges as a solution for complex codes, making it possible to execute any kind of code, with or without dependences. This technique optimistically assumes that the parallel execution of any kind of code will not lead to errors and therefore needs a mechanism that detects any collision. To this end, it relies on a monitor that constantly checks that the execution is not erroneous, ensuring that the results obtained in parallel match those of a sequential execution. If the execution turns out to be erroneous, the offending threads are stopped and restarted to ensure that the execution follows sequential semantics. Our contributions in this field include (1) a new, easy-to-use speculative execution runtime library; (2) new proposals that significantly reduce the number of accesses required by speculative operations, together with guidelines for reducing the memory used; (3) proposals to improve scheduling methods, focused on the dynamic management of the blocks of iterations used in speculative executions; (4) a hybrid solution that uses transactional memory to implement the critical sections of a speculative parallelization library; and (5) an analysis of speculative techniques on one of the most advanced devices of the moment, the Intel Xeon Phi coprocessor. As we have been able to verify, speculative parallelization is an active research field. Our results show that this technique achieves performance improvements in a large number of applications. We therefore hope that this work will contribute to facilitating the use of speculative solutions in commercial compilers and/or shared-memory parallel programming models. Departamento de Informática (Arquitectura y Tecnología de Computadores, Ciencias de la Computación e Inteligencia Artificial, Lenguajes y Sistemas Informáticos).
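    To illustrate the general idea described above (optimistic parallel execution of loop chunks plus a monitor that detects dependence violations and re-executes the offending chunks), here is a heavily simplified, lazy-commit Python sketch. The chunking scheme, the read/write tracking and all names are illustrative assumptions, not the runtime library designed in the thesis, and CPython's GIL means no real speedup here; the point is only the detect-and-squash logic.

```python
from concurrent.futures import ThreadPoolExecutor

def speculative_for(loop_body, num_iters, shared, chunk=4, workers=4):
    """Very simplified thread-level speculation sketch (lazy commit).

    loop_body(i, read, write): one iteration; it must access `shared` only
    through read(index) and write(index, value) so accesses can be tracked.
    """
    chunks = [range(s, min(s + chunk, num_iters))
              for s in range(0, num_iters, chunk)]

    def run_chunk(iters, state):
        reads, writes = set(), {}
        def read(idx):
            reads.add(idx)
            return writes.get(idx, state[idx])      # forward the chunk's own writes
        def write(idx, val):
            writes[idx] = val
        for i in iters:
            loop_body(i, read, write)
        return reads, writes

    # 1. Optimistically execute all chunks in parallel against a snapshot.
    snapshot = list(shared)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda it: run_chunk(it, snapshot), chunks))

    # 2. Commit chunks in original order; re-execute (squash) any chunk whose
    #    reads overlap locations written by an earlier, already committed chunk.
    committed_writes = set()
    for iters, (reads, writes) in zip(chunks, results):
        if reads & committed_writes:                 # dependence violation detected
            reads, writes = run_chunk(iters, shared) # re-run on committed state
        for idx, val in writes.items():
            shared[idx] = val
        committed_writes |= writes.keys()
    return shared
```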

    GPUの性能分析モデリングと通信レイテンシの隠蔽に基づく性能の最適化方法に関する研究 (A Study on Performance Optimization Methods Based on GPU Performance Analysis Modeling and Communication Latency Hiding)

    Degree type: course-based doctorate (課程博士). University of Tokyo (東京大学).

    Heterogeneity-awareness in multithreaded multicore processors

    During the last decades, computer architecture has experienced a series of revolutionary changes. The increasing transistor count on a single chip has led to some of the main milestones in the field, from the release of the first superscalar (1965) to state-of-the-art multithreaded multicore architectures such as the Intel Core i7 (2009).

    Moore's Law has continued for almost half a century and is not expected to stop for at least another decade, and perhaps much longer. Moore observed a trend in process technology advances: the number of transistors that can be placed inexpensively on an integrated circuit has increased exponentially, doubling approximately every two years. Nevertheless, having more transistors available does not always translate directly into more performance.

    The complexity of state-of-the-art software has reached heights unthinkable in earlier ages, both in the amount of computation and in the complexity involved. Analyzing this complexity in depth reveals that software is composed of smaller execution phases that, while maintaining a certain spatial/temporal locality, exhibit inherently heterogeneous behavior. That is, during execution the hardware runs very different portions of software, with huge differences in behavior and hardware requirements. This heterogeneity in the behavior of software is not specific to the latest video game; it is inherent to software programming itself, since the very beginning of algorithmics.

    In this PhD dissertation we analyze in depth the heterogeneity inherent in software behavior. We identify the main issues and sources of this heterogeneity, which prevent most state-of-the-art processor designs from reaching their maximum potential. This heterogeneity makes most current processors, commonly called general-purpose processors, overdesigned: they have many more hardware resources than are really needed to execute the software running on them. This would not be a major problem if we were not concerned about the additional power consumed by software computation.

    The final goal of this PhD dissertation is to assign each portion of software exactly the amount of hardware resources it really needs to fully exploit its potential, without consuming more energy than strictly necessary; that is, to obtain complexity-effective executions using the inherent heterogeneity of software behavior as the steering indicator. We start by analyzing in depth the heterogeneous behavior of software run on general-purpose processors, and then map it onto heterogeneously distributed hardware that explicitly exploits heterogeneous hardware requirements. Only by being heterogeneity-aware in software, and by appropriately matching this software heterogeneity to hardware heterogeneity, can we effectively obtain better processor designs.

    The dissertation comprises four main contributions that cover both multithreaded single-core (hdSMT) and multicore (TCA Algorithm, hTCA Framework and MFLUSH) scenarios, each explained in detail in its corresponding chapter of the dissertation. Overall, these contributions cover a significant range of the design space of heterogeneity-aware processors. Within this design space, we focus on the state-of-the-art trend in processor design: multithreaded multicore (CMP+SMT) processors.

    We place special emphasis on the MPsim simulation tool, specifically designed and developed for this PhD dissertation. This tool has already gone beyond this dissertation, becoming a reference tool for an important group of researchers spread across the Computer Architecture Department (DAC) at the Polytechnic University of Catalonia (UPC), the Barcelona Supercomputing Center (BSC) and the University of Las Palmas de Gran Canaria (ULPGC).

    Scalable Algorithms for the Analysis of Massive Networks

    Network analysis aims to unveil non-trivial insights from networked data by studying the relationship patterns between the entities of a network. A popular example of such an insight is quantifying the importance of an entity with respect to the others according to some criteria. Another is finding the most suitable matching partner for each participant of a network, given the pairwise preferences of the participants to be matched with each other, a problem known as Maximum Weighted Matching (MWM).

    Since the notion of importance is tied to the application under consideration, numerous centrality measures have been introduced. Many of these measures, however, were conceived at a time when computing power was very limited and networks were much smaller than today's, so scalability to large datasets was not a concern. Today, massive networks with millions of edges are ubiquitous, and a complete exact computation of traditional centrality measures is often too time-consuming. This issue is amplified if the objective is to find the group of k vertices that is most central as a group. Scalable algorithms to identify highly central (groups of) vertices on massive graphs are thus of pivotal importance for large-scale network analysis. In addition to their size, today's networks often evolve over time, which poses the challenge of efficiently updating results after a change occurs. Hence, efficient dynamic algorithms are essential for modern network analysis pipelines.

    In this work, we propose scalable algorithms for identifying important vertices in a network, and for efficiently updating them in evolving networks. In real-world graphs with hundreds of millions of edges, most of our algorithms require seconds to a few minutes to perform these tasks. Further, we extend a state-of-the-art algorithm for MWM to dynamic graphs. Experiments show that our dynamic MWM algorithm handles updates in graphs with billions of edges in milliseconds.
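    As context for the MWM problem mentioned above, the following is a small Python sketch of the classic greedy 1/2-approximation for maximum weighted matching (sort the edges by weight and take an edge whenever both endpoints are still free). It only illustrates the problem itself; it is not the dynamic algorithm developed and evaluated in this work.

```python
def greedy_matching(edges):
    """Greedy 1/2-approximation for maximum weighted matching.

    edges: iterable of (u, v, weight) tuples of an undirected graph.
    Returns the list of matched (u, v, weight) edges.
    """
    matched = set()      # vertices already used by the matching
    matching = []
    # Consider edges from heaviest to lightest.
    for u, v, w in sorted(edges, key=lambda e: e[2], reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching

# Example: a path a-b-c-d with weights 3, 5, 3.
# Greedy picks {(b, c)} with weight 5; the optimum {(a, b), (c, d)} has weight 6.
print(greedy_matching([("a", "b", 3), ("b", "c", 5), ("c", "d", 3)]))
```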