20 research outputs found

    Hybridization of Bayesian networks and evolutionary algorithms for multi-objective optimization in an integrated product design and project management context

    Get PDF
    A better integration of preliminary product design and project management processes in the early steps of system design is now a key industrial issue. The aim is therefore to help firms evolve from the classical sequential approach (first product design, then project design and management) towards new integrated approaches. In this paper, a model for integrated product/project optimization is first proposed that takes into account simultaneously the decisions of product and project managers. The resulting model, however, has considerable underlying complexity, and a multi-objective optimization technique is required to provide managers with appropriate scenarios in a reasonable amount of time. The proposed approach is based on an original evolutionary algorithm called the evolutionary algorithm oriented by knowledge (EAOK). This algorithm is based on the interaction between an adapted evolutionary algorithm (EA) and a model of knowledge (MoK) used to give relevant orientations during the search process. The evolutionary operators of the EA are modified to take these orientations into account. The MoK is based on the Bayesian network (BN) formalism and is built both from expert knowledge and from individuals generated by the EA. A learning process updates the probabilities of the BN from a set of selected individuals, and at each cycle of the EA these probabilities are used to bias the new evolutionary operators. This method not only ensures faster and more effective optimization but also provides the decision maker with a graphic, interactive model of knowledge linked to the studied project. An experimental platform has been developed to evaluate the algorithm, and an extensive test campaign compares different strategies and demonstrates the benefits of this novel approach over a classical EA.
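The interplay the abstract describes, an EA whose operators are biased by a probabilistic model learned from selected individuals, can be sketched in a few lines. A minimal illustration, with a univariate probability model standing in for the paper's Bayesian-network MoK; the names (`eaok_sketch`, `learn_model`) and all parameter values are illustrative, not from the paper:

```python
import random

def learn_model(selected, n_bits):
    """Estimate per-bit probabilities from the selected individuals --
    a univariate stand-in for the Bayesian-network model of knowledge."""
    return [sum(ind[i] for ind in selected) / len(selected) for i in range(n_bits)]

def biased_mutation(ind, model, rng, rate=0.2):
    """Mutate each bit with probability `rate`, drawing the new value from
    the learned model rather than uniformly (the 'oriented' operator)."""
    return [(1 if rng.random() < p else 0) if rng.random() < rate else bit
            for bit, p in zip(ind, model)]

def eaok_sketch(fitness, n_bits=20, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        model = learn_model(pop[:pop_size // 2], n_bits)  # update the model
        pop = pop[:2] + [biased_mutation(rng.choice(pop[:pop_size // 2]),
                                         model, rng)
                         for _ in range(pop_size - 2)]    # elitism + offspring
    return max(pop, key=fitness)

best = eaok_sketch(sum)  # onemax toy fitness: number of one-bits
```

Unlike a pure estimation-of-distribution algorithm, the model here only biases mutation; the population and elitism are retained, mirroring the paper's idea of orienting, rather than replacing, the evolutionary operators.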

    Structural topology optimisation based on the Boundary Element and Level Set methods

    Get PDF
    The research work presented in this thesis is related to the development of structural optimisation algorithms based on the boundary element and level set methods for two- and three-dimensional linear elastic problems. In the initial implementation, a stress-based evolutionary structural optimisation (ESO) approach has been used to add and remove material simultaneously for the solution of two-dimensional optimisation problems. The level set method (LSM) is used to provide an implicit description of the structural geometry, which is also capable of automatically handling topological changes, i.e. holes merging with each other or with the boundary. Classical level-set-based optimisation methods depend on initial designs with pre-existing holes. The proposed method, however, automatically introduces internal cavities using a stress-based hole insertion criterion, and thereby eliminates the need for initial designs with pre-existing holes. A detailed study has also been carried out to investigate the relationship between stress-based and topological-derivative-based hole insertion criteria within a boundary element method (BEM) and LSM framework. The evolving structural geometry (i.e. the zero level set contours) is represented by non-uniform rational B-splines (NURBS), providing a smooth geometry throughout the optimisation process and completely eliminating jagged edges. The BEM and LSM are further combined with a shape sensitivity approach for the solution of minimum compliance problems in two dimensions. The proposed sensitivity-based method is capable of automatically inserting holes during the optimisation process using a topological derivative approach. In order to investigate the associated advantages and disadvantages of the evolutionary and sensitivity-based optimisation methods, a comparative study has also been carried out. There are two advantages associated with the use of the LSM in three-dimensional topology optimisation. Firstly, the LSM may readily be applied to three-dimensional space, and it is shown how this can be linked to a 3D BEM solver. Secondly, holes appear automatically through the intersection of two surfaces moving towards each other; the use of the LSM therefore eliminates the need for an additional hole insertion mechanism, as shape and topology optimisation can be performed at the same time. A complete algorithm is proposed and tested for BEM- and LSM-based topology optimisation in three dimensions. Optimal geometries compare well against those in the literature for a range of benchmark examples.
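The implicit geometry handling described above can be illustrated with signed distance functions: the structure occupies the region where the level set function is non-positive, and merging regions requires no explicit topological surgery. A minimal sketch (not from the thesis; the two-circle example is illustrative):

```python
import math

def circle_sdf(x, y, cx, cy, r):
    """Signed distance to a circle: negative inside, zero on the boundary."""
    return math.hypot(x - cx, y - cy) - r

def phi(x, y, r):
    """Implicit union of two growing circles centred at (-1, 0) and (1, 0).
    The structural boundary is the zero level set of phi; taking the
    pointwise minimum merges the two regions automatically."""
    return min(circle_sdf(x, y, -1.0, 0.0, r),
               circle_sdf(x, y, 1.0, 0.0, r))

# At r = 0.5 the midpoint (0, 0) lies outside both circles (phi > 0);
# by r = 1.2 the two zero level sets have merged and the midpoint is inside.
```

In the actual optimisation the level set function is advected by a velocity field derived from stress or sensitivity information, but the topological bookkeeping is exactly this cheap: unions and merges fall out of pointwise min operations.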

    Recent Advances in Graph Partitioning

    Full text link
    We survey recent trends in practical algorithms for balanced graph partitioning, together with applications and future research directions.

    Better Process Mapping and Sparse Quadratic Assignment

    Get PDF
    Communication and topology aware process mapping is a powerful approach to reducing communication time in parallel applications with known communication patterns on large, distributed memory systems. We address the problem as a quadratic assignment problem (QAP), and present algorithms to construct initial mappings of processes to processors as well as fast local search algorithms to improve the mappings further. By exploiting assumptions that typically hold for applications and modern supercomputer systems, such as sparse communication patterns and hierarchically organized communication systems, we arrive at significantly more powerful algorithms for these special QAPs. Our multilevel construction algorithms employ recently developed, perfectly balanced graph partitioning techniques and extensively exploit the given communication system hierarchy. We present improvements to the local search algorithm of Brandfass et al. (2013), decreasing its running time by reducing the time needed to perform swaps in the assignment and by carefully constraining local search neighborhoods. Experiments indicate that our algorithms not only dramatically speed up local search, but, owing to the multilevel approach, also find much better solutions in practice.
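The QAP formulation and the swap-based local search can be sketched concretely. A minimal baseline (illustrative, not the paper's optimised implementation): the paper's contribution is precisely to avoid re-evaluating the full cost on every swap and to restrict the swap neighbourhood, whereas this sketch recomputes from scratch.

```python
def qap_cost(comm, dist, mapping):
    """Total cost of a process-to-processor mapping: communication volume
    between each process pair times the distance between their processors."""
    n = len(mapping)
    return sum(comm[i][j] * dist[mapping[i]][mapping[j]]
               for i in range(n) for j in range(n))

def swap_local_search(comm, dist, mapping):
    """Repeatedly apply improving pairwise swaps until a local optimum."""
    n = len(mapping)
    best = qap_cost(comm, dist, mapping)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                mapping[i], mapping[j] = mapping[j], mapping[i]
                cost = qap_cost(comm, dist, mapping)
                if cost < best:
                    best, improved = cost, True
                else:
                    mapping[i], mapping[j] = mapping[j], mapping[i]  # undo
    return mapping, best
```

For example, with two heavily communicating processes initially placed on distant processors, a single swap moves them onto adjacent processors and roughly halves the cost.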

    Aggregative Coarsening for Multilevel Hypergraph Partitioning

    Get PDF
    Algorithms for many hypergraph problems, including partitioning, utilize multilevel frameworks to achieve a good trade-off between performance and the quality of results. In this paper we introduce two novel aggregative coarsening schemes and incorporate them within the state-of-the-art hypergraph partitioner Zoltan. Our coarsening schemes are inspired by the algebraic multigrid and stable matching approaches. We demonstrate the effectiveness of the developed schemes as part of a multilevel hypergraph partitioning framework on a wide range of problems.
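The core of any multilevel coarsening scheme is an aggregation step: vertices are matched (here greedily, by connection strength) and each matched group becomes one coarse vertex. A minimal sketch on a plain weighted graph, standing in for the hypergraph case; it is not Zoltan's actual algorithm:

```python
def match_and_aggregate(n, edges):
    """One coarsening level: greedily match vertices along heaviest edges
    first, then aggregate each matched pair into a single coarse vertex.
    `edges` is a list of (u, v, weight) triples; returns the coarse id of
    every fine vertex."""
    matched = [False] * n
    rep = list(range(n))                # representative of each vertex
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if not matched[u] and not matched[v]:
            matched[u] = matched[v] = True
            rep[v] = u                  # v is absorbed into u
    ids = {}                            # compact coarse vertex ids
    return [ids.setdefault(rep[x], len(ids)) for x in range(n)]
```

Applied recursively, this halves the problem size per level; the stable-matching variant in the paper replaces the greedy pass with preference-based matching, and the algebraic-multigrid variant allows fractional membership in several aggregates.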

    A new diffusion-based multilevel algorithm for computing graph partitions of very high quality

    Full text link

    Community detection in graphs

    Full text link
    The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a role similar to that of, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We attempt a thorough exposition of the topic: from the definition of the main elements of the problem to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists; from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks. (Review article; final version published in Physics Reports.)
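The most common score for "many edges inside clusters, few between" is Newman-Girvan modularity, which is central to the methods the survey covers. A minimal sketch with an adjacency-matrix input (illustrative, not from the article):

```python
def modularity(adj, community):
    """Newman-Girvan modularity:
    Q = (1/2m) * sum_ij (A_ij - k_i * k_j / 2m) * delta(c_i, c_j),
    i.e. observed within-community edges minus the expectation under a
    degree-preserving random rewiring."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    two_m = sum(deg)  # twice the number of edges
    return sum(adj[i][j] - deg[i] * deg[j] / two_m
               for i in range(n) for j in range(n)
               if community[i] == community[j]) / two_m
```

For two triangles joined by a single bridge edge, splitting at the bridge gives Q = 5/14 (about 0.357), while the trivial one-community partition scores 0, which is why modularity maximization recovers the intuitive split.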

    Verification of system properties of polynomial systems using discrete-time approximations and set-based analysis

    Get PDF
    Magdeburg, Univ., Faculty of Electrical Engineering and Information Technology, Diss., 2015, by Philipp Rumschinski

    Heuristic Approaches to Solve the Frequency Assignment Problem

    Get PDF
    The frequency assignment problem is a computationally hard problem with many applications, including the mobile telephone industry and tactical communications. The problem may be modelled mathematically as a T-colouring problem for an undirected weighted graph: it is required to assign to each vertex a value from a given set such that, for each edge, the difference in absolute value between the values at the corresponding vertices is greater than or equal to the weight of the edge. This problem was solved using novel and existing metaheuristic algorithms, and their relative successes were compared. Early work in this thesis used greedy, steepest descent and backtracking algorithms as a means of investigating the factors which influence the performance of an algorithm (selection of frequency, ordering of variables, provision of an incremental objective function). Later, simulated annealing, tabu search and divide and conquer techniques were used and the results compared. A novel divide and conquer technique incorporating metaheuristics is described, and results using test data based on real problems are presented. The divide and conquer technique (with either tabu search or simulated annealing) was found to improve significantly upon the corresponding metaheuristic when implemented alone and acting on non-trivial scenarios. The results were significant and consistent. The divide and conquer (with simulated annealing) algorithm in particular was shown to be robust and efficient in its solution of the frequency assignment problems presented. The results presented in this thesis consistently outperform those obtained by the Defence Evaluation and Research Agency, Malvern. In addition, this method lends itself to parallelisation, since the problem is broken into smaller independent parts. The divide and conquer algorithm does not exploit knowledge of the constraint network and should be applicable to a number of different problem domains. Algorithms capable of solving the frequency assignment problem most effectively will become valuable as demand for the electromagnetic spectrum continues to grow.
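The T-colouring constraint |f(u) - f(v)| >= w(u, v) can be made concrete with the kind of greedy construction the thesis uses as a baseline before applying metaheuristics. A minimal sketch; the function name and the example instance are illustrative:

```python
def greedy_assignment(n, constraints, domain):
    """Assign each vertex the lowest frequency in `domain` satisfying
    |f(u) - f(v)| >= w for every already-assigned neighbour v.
    `constraints` is a list of (u, v, w) triples on an undirected graph."""
    assign = {}
    for u in range(n):
        for f in domain:
            if all(abs(f - assign[v]) >= w
                   for a, b, w in constraints
                   for v in ([b] if a == u else [a] if b == u else [])
                   if v in assign):
                assign[u] = f
                break
    return assign

# Triangle where every pair of transmitters needs a separation of 2:
# the greedy order yields frequencies 0, 2, 4.
plan = greedy_assignment(3, [(0, 1, 2), (1, 2, 2), (0, 2, 2)], range(11))
```

The metaheuristics in the thesis improve on exactly this: a greedy pass depends heavily on vertex ordering and frequency selection, while simulated annealing or tabu search can escape the resulting local optima, and the divide and conquer scheme applies them to independent subproblems.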