    Nonlinear solid mechanics analysis using the parallel selective element-free Galerkin method

    A variety of meshless methods have been developed over the last fifteen years with the intention of solving practical engineering problems, but their use has remained limited to small academic problems because of their high computational cost compared to the standard finite element method (FEM). The main objective of this thesis is the development of an efficient and accurate algorithm, based on meshless methods, for the solution of problems involving both material and geometrical nonlinearities, which are of practical importance in many engineering applications, including geomechanics, metal forming and biomechanics. One of the most commonly used meshless methods, the element-free Galerkin method (EFGM), is used in this research, with maximum entropy (max-ent) shape functions used instead of the standard moving least squares shape functions; this allows direct imposition of the essential boundary conditions. Initially, the theoretical background and corresponding computer implementations of the EFGM are described for linear and nonlinear problems. The Prandtl-Reuss constitutive model is used to model elasto-plasticity, both updated and total Lagrangian formulations are used to model finite deformation, and the consistent (algorithmic) tangent is used to achieve the quadratic rate of asymptotic convergence of the global Newton-Raphson algorithm. An adaptive strategy is developed for the EFGM for two- and three-dimensional nonlinear problems, based on the Chung & Belytschko error estimation procedure, which was originally proposed for linear elastic problems. A new FE-EFGM coupling procedure based on max-ent shape functions is proposed for linear and geometrically nonlinear problems, in which there is no need for interface elements between the FE and EFG regions or for any other special treatment, as required in most previous research. The proposed coupling procedure is then extended to adaptive FE-EFGM coupling for two- and three-dimensional linear and nonlinear problems, in which the Zienkiewicz & Zhu error estimation procedure with superconvergent patch recovery of strains and stresses is used in the FE region of the problem domain, while the Chung & Belytschko error estimation procedure is used in the EFG region. Parallel algorithms based on a distributed-memory parallel computer architecture are also developed for the different numerical techniques proposed in this thesis. In the parallel program, the Message Passing Interface (MPI) library is used for inter-processor communication, and the open-source software packages METIS and MUMPS are used for automatic domain decomposition and for the solution of the final system of linear equations, respectively. Separate numerical examples are presented for each algorithm to demonstrate its correct implementation and performance, and results are compared with the corresponding analytical or reference results.
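    The max-ent shape functions used here in place of moving least squares are what permit direct imposition of essential boundary conditions, since they satisfy a weak Kronecker-delta property on the boundary. As a rough sketch of the underlying idea (not the thesis implementation), the snippet below evaluates local maximum-entropy shape functions at a point by Newton iteration on the dual optimality conditions; the function name, the locality parameter beta and the node layout are illustrative assumptions.

```python
import numpy as np

def maxent_shape_functions(x, nodes, beta, tol=1e-12, max_iter=50):
    """Return max-ent shape functions N_a(x): sum(N) = 1, sum(N * (x_a - x)) = 0."""
    dx = nodes - x                      # shifted node coordinates, shape (n, dim)
    lam = np.zeros(len(x))              # Lagrange multipliers of the linear constraint
    for _ in range(max_iter):
        w = np.exp(-beta * np.sum(dx**2, axis=1) + dx @ lam)
        N = w / w.sum()                 # partition of unity by construction
        r = N @ dx                      # residual of the reproducing condition
        if np.linalg.norm(r) < tol:
            break
        # Hessian of the log-partition function = covariance of dx under N
        J = (dx * N[:, None]).T @ dx - np.outer(r, r)
        lam -= np.linalg.solve(J, r)    # Newton step on the dual problem
    return N

# Usage on a small 2D patch of nodes; the evaluation point lies inside the hull
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
N = maxent_shape_functions(np.array([0.4, 0.3]), nodes, beta=4.0)
print(N, N.sum())                       # partition of unity holds to round-off
```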

    A Novel Placement Algorithm for the Controllers Of the Virtual Networks (COVN) in SD-WAN with Multiple VNs

    The escalation of communication demands and the emergence of new telecommunication concepts, such as the 5G cellular system and smart cities, require the consolidation of a flexible and manageable backbone network. These requirements motivated the design of a new placement algorithm for the Controllers of the Virtual Networks (COVN), because SDN and network virtualisation techniques (NFV and NV) are integrated to produce multiple virtual networks running on a single SD-WAN infrastructure, which serves as the new backbone. One of the significant challenges of SD-WAN is determining the number and locations of its controllers so as to optimise network latency and reliability. This problem has been investigated and addressed by several controller placement algorithms, but their focus is only on physical controllers. The advent of the sliced SD-WAN introduces a new challenge, which requires the SD-WAN controllers (physical controllers/hosted servers) to run multiple controller instances (virtual controllers), with every virtual network managed by its own virtual controllers. This calls for an algorithm that determines the number and positions of the physical and virtual controllers of the multiple virtual SD-WANs. According to the literature review, and to the best of the author's knowledge, this problem had been neither examined nor solved. To address this issue, a novel COVN placement algorithm was designed to compute the placement of the physical controllers and then calculate the controller placement of every virtual SD-WAN independently, taking into consideration the controller placements of the other virtual SD-WANs. Unlike all previous placement algorithms, COVN placement does not partition the SD-WAN when placing the physical controllers; instead, it identifies the nodes with optimal reliability and latency to all switches of the network. It then partitions every VN separately to create its independent controller placement. COVN placement optimises reliability and latency according to the desired weights; it also maintains load balancing and optimal resource utilisation, and it supports recovery from controller failure. This novel algorithm is evaluated intensively using the purpose-built COVN simulator and the Mininet emulator. The results indicate that COVN placement achieves the required optimisations mentioned above. The implementations also show that COVN placement can compute the controller placement for a large network (754 switches) in a very small computation time (49.53 s). In addition, COVN placement is compared with the POCO algorithm: COVN placement provides about 30.76% better reliability at the cost of about 1.38% higher latency. Further, it surpasses POCO by constructing clusters balanced according to switch loads and by offering a more efficient placement for recovering from controller failure.
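    As a rough illustration of the physical-controller step described above (ranking nodes over the whole network rather than partitioning it), the sketch below scores every candidate node by a weighted combination of average latency and an average path-availability proxy, then selects the top k. This is not the COVN code: the reliability proxy, the weights, the per-link failure probability and the topology generator are all illustrative assumptions.

```python
import networkx as nx

def rank_physical_controllers(g, k, w_latency=0.5, w_reliability=0.5,
                              link_fail_prob=0.01):
    """Pick k controller nodes by a weighted latency/reliability score."""
    latency = dict(nx.shortest_path_length(g, weight="delay"))
    scores = {}
    for c in g.nodes:
        avg_delay = sum(latency[c].values()) / len(g)
        # Proxy: a path of h hops survives with probability (1 - p)^h
        hops = nx.shortest_path_length(g, source=c)
        avg_avail = sum((1 - link_fail_prob) ** h for h in hops.values()) / len(g)
        # Lower is better: weighted latency penalty minus weighted availability
        scores[c] = w_latency * avg_delay - w_reliability * avg_avail
    return sorted(scores, key=scores.get)[:k]

# Stand-in for an SD-WAN topology with uniform link delays
g = nx.random_internet_as_graph(100)
nx.set_edge_attributes(g, 1.0, "delay")
print(rank_physical_controllers(g, k=3))
```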

    Dynamic load balancing of parallel road traffic simulation

    The objective of this research was to investigate, develop and evaluate dynamic load-balancing strategies for the parallel execution of microscopic road traffic simulations. Urban road traffic simulation presents an irregular and dynamically varying computational load to a parallel processor system. This dynamic nature leads to uneven load distribution during simulation, even for a system that starts with an even load distribution. Load balancing is a potential way of achieving improved performance by reallocating work from heavily loaded processors to lightly loaded ones, leading to a reduction in overall computation time. In dynamic load balancing, workloads are adjusted continually or periodically throughout the computation. In this thesis, load-balancing strategies were evaluated and several load-balancing policies developed. A load index and a profitability determination algorithm were developed and used to enhance two load-balancing algorithms: one with local communication and distributed load evaluation between neighbouring partitions (the diffusion algorithm), and one with both local and global communication and centralised decision making (the MaS algorithm). The enhanced algorithms were implemented and integrated with a research parallel traffic simulator, and the performance of that simulator, optimised with the two modified dynamic load-balancing strategies, was studied.
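    A minimal sketch of the diffusion-style strategy described above, assuming a simple vehicle-count load index and an illustrative profitability threshold (a toy serial model, not the thesis implementation, where each partition would run on its own processor and exchange load via messages):

```python
def diffusion_step(load, neighbours, alpha=0.25, threshold=0.05):
    """One synchronous sweep of first-order diffusion over partition loads."""
    avg = sum(load.values()) / len(load)
    # Profitability check: skip balancing when imbalance is small, since
    # migrating work itself costs communication time
    if max(abs(l - avg) for l in load.values()) < threshold * avg:
        return load
    new_load = dict(load)
    # Each undirected edge between partitions is processed exactly once
    edges = {(min(p, q), max(p, q))
             for p, nbrs in neighbours.items() for q in nbrs}
    for p, q in edges:
        flow = alpha * (load[p] - load[q])   # move work along the load gradient
        new_load[p] -= flow
        new_load[q] += flow
    return new_load

load = {0: 100.0, 1: 20.0, 2: 20.0, 3: 20.0}    # vehicles per partition
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # line of four partitions
for _ in range(20):
    load = diffusion_step(load, nbrs)
print(load)   # loads approach the average of 40
```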

    High-Quality Hypergraph Partitioning

    This dissertation focuses on computing high-quality solutions for the NP-hard balanced hypergraph partitioning problem: given a hypergraph and an integer k, partition its vertex set into k disjoint blocks of bounded size, while minimizing an objective function over the hyperedges. Here, we consider the two most commonly used objectives: the cut-net metric and the connectivity metric. Since the problem is computationally intractable, heuristics are used in practice, the most prominent being the three-phase multi-level paradigm: during coarsening, the hypergraph is successively contracted to obtain a hierarchy of smaller instances; after applying an initial partitioning algorithm to the smallest hypergraph, the contraction is undone and, at each level, refinement algorithms try to improve the current solution. With this work, we give a brief overview of the field and present several algorithmic improvements to the multi-level paradigm. Instead of using a logarithmic number of levels like traditional algorithms, we present two coarsening algorithms that create a hierarchy of (nearly) n levels, where n is the number of vertices. This makes consecutive levels as similar as possible and provides many opportunities for refinement algorithms to improve the partition. This approach is made feasible in practice by tailoring all algorithms and data structures to the n-level paradigm, and by developing lazy-evaluation techniques, caching mechanisms and early-stopping criteria to speed up the partitioning process. Furthermore, we propose a sparsification algorithm based on locality-sensitive hashing that improves the running time for hypergraphs with large hyperedges, and show that incorporating global information about the community structure into the coarsening process improves quality. Moreover, we present a portfolio-based initial partitioning approach and propose three refinement algorithms. Two are based on the Fiduccia-Mattheyses (FM) heuristic but perform a highly localized search at each level; one is designed for two-way partitioning, while the other is the first FM-style algorithm that can be efficiently employed in the multi-level setting to directly improve k-way partitions. The third algorithm uses max-flow computations on pairs of blocks to refine k-way partitions. Finally, we present the first memetic multi-level hypergraph partitioning algorithm for an extensive exploration of the global solution space. All contributions are made available through our open-source framework KaHyPar. In a comprehensive experimental study, we compare KaHyPar with hMETIS, PaToH, Mondriaan, Zoltan-AlgD, and HYPE on a wide range of hypergraphs from several application areas. Our results indicate that KaHyPar, even without the memetic component, computes better solutions than all competing algorithms for both the cut-net and the connectivity metric, while being faster than Zoltan-AlgD and as fast as hMETIS. Moreover, KaHyPar compares favorably with the current best graph partitioning system KaFFPa, both in terms of solution quality and running time.
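    For concreteness, the two objectives can be stated in a few lines. The sketch below (not KaHyPar code; unit net weights are an assumption) computes the cut-net and connectivity metrics of a k-way partition given as a vertex-to-block map: the cut-net metric counts nets spanning more than one block, while the connectivity metric charges each net lambda - 1, where lambda is the number of blocks it touches.

```python
def cut_net(hyperedges, block_of):
    """Number of nets whose vertices span more than one block (unit weights)."""
    return sum(1 for net in hyperedges
               if len({block_of[v] for v in net}) > 1)

def connectivity(hyperedges, block_of):
    """Sum over nets of (lambda - 1), where lambda counts the blocks a net touches."""
    return sum(len({block_of[v] for v in net}) - 1 for net in hyperedges)

# Hypergraph with 6 vertices, 3 nets, and a 2-way partition
nets = [(0, 1, 2), (2, 3, 4), (4, 5)]
block_of = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(cut_net(nets, block_of))       # 1: only net (2, 3, 4) is cut
print(connectivity(nets, block_of))  # 1: the cut net connects 2 blocks
```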