10 research outputs found

    Adaptive Partitioning for Large-Scale Dynamic Graphs

    In recent years, large-scale graph processing has gained increasing attention, with most recent systems placing particular emphasis on latency. One technique to improve runtime performance in a distributed graph processing system is to reduce network communication. The most notable way to achieve this is to partition the graph so that the number of edges connecting vertices assigned to different machines is minimized while the load remains balanced. However, real-world graphs are highly dynamic, with vertices and edges constantly added and removed. Carefully updating the partitioning to reflect these changes is necessary to avoid introducing a large number of cut edges, which would gradually degrade computation performance. In this paper we show that performance degradation in dynamic graph processing systems can be avoided by continuously adapting the graph partitions as the graph changes. We present a novel, highly scalable adaptive partitioning strategy, and describe a number of refinements that make it work under the constraints of a large-scale distributed system. The partitioning strategy is based on iterative vertex migrations that rely only on local information. We have implemented the technique in a graph processing system, and we show through three real-world scenarios how adapting the graph partitioning reduces execution time by over 50% compared to commonly used hash partitioning.
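
    The strategy described above, iterative vertex migrations driven only by local information and starting from a hash partitioning, can be illustrated with a small sketch. The code below is only an illustration of that general idea under assumed rules (a simple per-partition capacity cap and a "move to the partition holding most of your neighbours" heuristic); it is not the paper's actual algorithm.

        from collections import Counter

        def migrate_round(adjacency, partition, capacity):
            """One migration round: each vertex may move to the partition holding
            most of its neighbours, provided that partition has spare capacity."""
            load = Counter(partition.values())
            moved = 0
            for v, neighbours in adjacency.items():
                counts = Counter(partition[u] for u in neighbours)
                if not counts:
                    continue
                target = counts.most_common(1)[0][0]
                current = partition[v]
                if target != current and load[target] < capacity:
                    partition[v] = target      # purely local decision
                    load[current] -= 1
                    load[target] += 1
                    moved += 1
            return moved

        # Toy graph: two triangles joined by a single edge.
        adjacency = {
            0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
            3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
        }
        partition = {v: v % 2 for v in adjacency}   # start from hash partitioning
        while migrate_round(adjacency, partition, capacity=4):
            pass
        print(partition)   # each triangle ends up co-located on one partition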

    Concurrent query analytics on distributed graph systems

    Large-scale graph problems, such as shortest-path finding or social-media graph evaluations, are an important area of computer science. In recent years, influential graph systems such as PowerGraph or PowerLyra have led to a paradigm shift in distributed graph processing towards processing multiple parallel queries rather than a single global graph algorithm. Queries usually exhibit locality in graphs, i.e. they involve only a subset of the graph's vertices. Suitable partitioning and query-synchronization approaches can minimize communication overhead and query latency by exploiting this locality. In addition, partitioning algorithms must be dynamic, because the number and locality of queries can change over time. Existing graph processing systems are not optimized to exploit query locality or to adapt the graph partitioning at runtime. In this thesis we present Q-Graph, an open-source, multi-tenant graph analytics system with dynamic graph repartitioning. Q-Graph's query-aware partitioning algorithm Q-Cut performs adaptive graph partitioning at runtime. Compared to static partitioning strategies, Q-Cut can exploit runtime knowledge about query locality and workload to improve the partitioning dynamically. Furthermore, a case study with an implementation for the shortest-path problem and point-search queries is presented. We present evaluations showing the performance of Q-Graph and the effectiveness of Q-Cut. Measurements show that Q-Cut improves query processing performance by up to 60% and automatically adapts the partitioning to changing query workload and locality, outperforming partitioning methods that use domain knowledge.
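
    As a rough illustration of query-aware repartitioning (moving a vertex to the worker whose queries touch it most often), consider the sketch below. The access log, the migration threshold, and all names are assumptions made here for illustration only; they do not reproduce Q-Cut's actual mechanism.

        from collections import Counter, defaultdict

        def query_aware_repartition(access_log, partition, min_gain=2):
            """access_log: iterable of (worker_id, vertex_id) pairs observed at runtime.
            A vertex migrates to another worker only if that worker touched it at
            least `min_gain` times more often than its current home worker."""
            touches = defaultdict(Counter)            # vertex -> Counter(worker -> hits)
            for worker, vertex in access_log:
                touches[vertex][worker] += 1
            for vertex, counts in touches.items():
                best_worker, best_hits = counts.most_common(1)[0]
                home_hits = counts[partition[vertex]]
                if best_worker != partition[vertex] and best_hits - home_hits >= min_gain:
                    partition[vertex] = best_worker   # co-locate with the hot queries
            return partition

        partition = {"a": 0, "b": 0, "c": 1}
        access_log = [(1, "a"), (1, "a"), (1, "a"), (0, "a"), (0, "b"), (1, "c")]
        print(query_aware_repartition(access_log, partition))
        # {'a': 1, 'b': 0, 'c': 1}: 'a' migrates to worker 1, which queries it most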

    The effect of mesh partitioning quality on the performance of a scientific application in an HPC environment

    The need for fast and reliable methods to solve large linear systems of equations is growing rapidly. Because this is a challenging problem, several techniques have been developed to solve such systems accurately and efficiently. Geometric multigrid methods are used to solve these problems, as they accelerate convergence to a solution. With these methods, it is possible to use a coarser grid as input, reducing the problem domain and thus the computational cost. The focus of this thesis is the development of new algorithms to generate a sequence of coarser grids from the original grid. By treating this as a minimization problem, one can attempt to optimize the overall grid quality by choosing how to merge elements. In order to evaluate our algorithms, we define how to quantify the overall grid quality and then analyse the grids they produce. We also use the multilevel grid construction paradigm, which is known to be well suited to similar problems. Such construction can be done in parallel, adding only a small overhead without sacrificing the quality produced by our multilevel constructor. Hence, we can achieve a high level of concurrency.
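
    One common coarsening heuristic in the multilevel paradigm is heavy-edge matching: repeatedly merge an element with the neighbour it is most strongly connected to. The sketch below shows that standard heuristic only as a point of reference; it is not the quality metric or the merging algorithm developed in the thesis.

        def heavy_edge_coarsen(edges):
            """edges: dict mapping (u, v) element pairs to a connection weight
            (e.g. shared face area). Returns a map from fine elements to coarse
            elements, merging each element with its heaviest unmatched neighbour."""
            matched = set()
            coarse_of = {}
            next_id = 0
            # Visit candidate merges from heaviest to lightest.
            for (u, v), _w in sorted(edges.items(), key=lambda kv: -kv[1]):
                if u in matched or v in matched:
                    continue
                coarse_of[u] = coarse_of[v] = next_id   # merge u and v
                matched.update((u, v))
                next_id += 1
            # Unmatched elements are carried over unchanged.
            for u, v in edges:
                for x in (u, v):
                    if x not in coarse_of:
                        coarse_of[x] = next_id
                        next_id += 1
            return coarse_of

        edges = {(0, 1): 3.0, (1, 2): 1.0, (2, 3): 2.5, (3, 4): 0.5}
        print(heavy_edge_coarsen(edges))
        # {0: 0, 1: 0, 2: 1, 3: 1, 4: 2}: five fine elements become three coarse ones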

    Artificial Intelligence Technology

    This open access book aims to give readers a basic outline of today's research and technology developments in artificial intelligence (AI), help them gain a general understanding of this trend, and familiarize them with the current research hotspots, as well as some of the fundamental and widely accepted theories and methodologies in AI research and application. The book is written in comprehensible and plain language, featuring clearly explained theories and concepts together with extensive analysis and examples. Some traditional findings are skipped in the narration in favour of a relatively comprehensive introduction to the evolution of artificial intelligence technology. The book provides a detailed elaboration of the basic concepts of AI and machine learning, as well as other relevant topics, including deep learning, deep learning frameworks, the Huawei MindSpore AI development framework, the Huawei Atlas computing platform, the Huawei AI open platform for smart terminals, and the Huawei CLOUD Enterprise Intelligence application platform. As the world's leading provider of ICT (information and communication technology) infrastructure and smart terminals, Huawei offers products ranging from digital data communication, cyber security, wireless technology, data storage, cloud computing, and smart computing to artificial intelligence.

    Adaptive Graph Partitioning Wireless Protocol

    We propose a new wireless protocol, called the Adaptive Graph Partitioning Wireless Protocol, that dynamically partitions nodes among the available logical channels so as to improve performance by balancing the intra-logical-channel traffic across the available logical channels and minimising the inter-logical-channel traffic. We have simulated the performance of a simplified model of the system, at an instant in time, based on the following parameters: nodal load, number of channels, node buffer size, central-entity buffer size, channel locality factor, and packet transmission probability. The simulations show the benefits of graph partitioning: when the channel loads are balanced and the channel locality factor is high, performance is significantly improved. We have also developed an analysis of this simplified model of the system, which serves as an upper-bound calculation for packet loss in the system.
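
    The core idea, assigning nodes to logical channels so that heavily communicating nodes share a channel while channel populations stay balanced, can be sketched as a greedy graph-partitioning step. The traffic matrix, the per-channel node cap, and the greedy rule below are illustrative assumptions, not the protocol's actual mechanism.

        def assign_channels(traffic, num_channels, cap):
            """traffic: dict mapping (node_a, node_b) pairs to the offered load
            between them. Each node joins the channel carrying most of its
            traffic, subject to a per-channel node cap for load balance."""
            nodes = sorted({n for pair in traffic for n in pair})
            channel_of = {}
            members = {c: [] for c in range(num_channels)}

            def traffic_to(node, channel):
                # Load between `node` and nodes already assigned to `channel`.
                return sum(load for (a, b), load in traffic.items()
                           if (a == node and b in members[channel])
                           or (b == node and a in members[channel]))

            for node in nodes:
                candidates = [c for c in range(num_channels) if len(members[c]) < cap]
                best = max(candidates, key=lambda c: traffic_to(node, c))
                channel_of[node] = best
                members[best].append(node)
            return channel_of

        traffic = {("A", "B"): 5, ("B", "C"): 4, ("C", "D"): 1, ("D", "E"): 6, ("E", "F"): 3}
        print(assign_channels(traffic, num_channels=2, cap=3))
        # Heavily communicating nodes (A, B, C) and (D, E, F) each share a channel,
        # leaving only the light C-D traffic between channels.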

    Inter-Cell Interference-Aware Radio Resource Management for Femtocell Networks.

    The widespread data demand in emerging wireless cellular technologies necessitates the evolution of traditional network deployments to accommodate ever-increasing coverage and capacity requirements. In emerging wireless systems, a hierarchical multi-level network consisting of a mixture of outdoor small cells (relays) and indoor small cells (femtocells) deployed underneath the traditional macro-cell architecture can be seen as a key deployment strategy to meet these growing capacity demands. In such networks, femtocell technology has attracted much attention as a key “player” in addressing coverage and capacity issues, mainly in home and enterprise environments. However, a major challenge in such indoor networks originates from the inter-cell interference between femtocells (commonly known as co-tier interference), assuming that the femtocells share the same spectrum. The main objectives of this thesis are to investigate inter-cell interference in femtocell networks and to propose efficient multi-cell scheduling mechanisms that mitigate inter-cell interference in dense femtocell environments while maintaining spectral efficiency at an acceptable level across the cells. We begin by investigating the impact of co-tier interference in femtocells, highlighting the necessity of interference mitigation mechanisms for arbitrary deployments of femtocells. In this direction, a novel low-complexity graph-coloring-based interference coordination mechanism is proposed, applied on top of intra-cell radio resource management. We additionally propose two locally centralized multi-cell scheduling frameworks that build on adaptive graph partitioning and weighted capacity maximization. In particular, in the latter case we decompose the problem based on the Exact Generalized Travelling Salesman Problem as a closely matching graph-based formulation. Extensive evaluation by simulation shows a significant improvement over state-of-the-art multi-cell scheduling benchmarks in terms of outage probability as well as user and cell throughput, and thus the proposed algorithms are promising candidates for multi-cell scheduling in next-generation small cell networks.
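
    The graph-coloring view of interference coordination can be illustrated with a minimal greedy coloring, where strongly interfering femtocells become adjacent vertices and each color maps to an orthogonal sub-band. The sketch below only conveys this general idea and does not reproduce the low-complexity mechanism proposed in the thesis; the cell names and interference graph are made up for illustration.

        def color_interference_graph(interferes):
            """interferes: dict mapping each femtocell to the set of femtocells
            it interferes with. Returns a color (sub-band index) per cell such
            that no two interfering cells share a color."""
            color = {}
            for cell in sorted(interferes):              # deterministic visit order
                used = {color[n] for n in interferes[cell] if n in color}
                c = 0
                while c in used:                         # smallest free color
                    c += 1
                color[cell] = c
            return color

        interferes = {
            "F1": {"F2", "F3"},
            "F2": {"F1", "F3"},
            "F3": {"F1", "F2", "F4"},
            "F4": {"F3"},
        }
        print(color_interference_graph(interferes))
        # {'F1': 0, 'F2': 1, 'F3': 2, 'F4': 0}: interfering neighbours never share
        # a sub-band, while F4 reuses F1's sub-band since they do not interfere.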