
    Large-scale multi-objective influence maximisation with network downscaling

    Finding the most influential nodes in a network is a computationally hard problem with applications in many kinds of network-based problems. While several methods have been proposed for tackling the influence maximisation (IM) problem, their runtime typically scales poorly as the network size increases. Here, we propose an original method, based on network downscaling, that allows a multi-objective evolutionary algorithm (MOEA) to solve the IM problem on a reduced-scale network while preserving the relevant properties of the original network. The downscaled solution is then upscaled to the original network using a mechanism based on centrality metrics such as PageRank. Our results on eight large networks (including two with ~50k nodes) demonstrate the effectiveness of the proposed method, with a more than 10-fold runtime gain compared to the time needed on the original network, and up to an 82% time reduction compared to CELF.
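    The downscale-solve-upscale pipeline can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the degree-based downscaling, the greedy stand-in for the MOEA, the budget of seeds, and the use of networkx PageRank for upscaling are all placeholders.

    ```python
    # Minimal sketch of the downscale-solve-upscale idea (assumptions:
    # networkx graphs, a stand-in solver instead of the MOEA, PageRank
    # as the centrality used for upscaling).
    import networkx as nx

    def downscale(G, ratio=0.1):
        """Keep the top `ratio` fraction of nodes by degree (placeholder
        for a property-preserving downscaling method)."""
        k = max(1, int(ratio * G.number_of_nodes()))
        top = sorted(G.degree, key=lambda d: d[1], reverse=True)[:k]
        return G.subgraph(n for n, _ in top).copy()

    def solve_im_small(G_small, budget):
        """Stand-in for the MOEA: greedily pick high-degree seeds."""
        ranked = sorted(G_small.degree, key=lambda d: d[1], reverse=True)
        return [n for n, _ in ranked[:budget]]

    def upscale(G, small_seeds, budget):
        """Map each downscaled seed to its highest-PageRank neighbour
        (or itself) in the original network, then top up by PageRank."""
        pr = nx.pagerank(G)
        seeds = [max(list(G.neighbors(s)) + [s], key=pr.get) for s in small_seeds]
        seeds = list(dict.fromkeys(seeds))  # deduplicate, keep order
        for n in sorted(pr, key=pr.get, reverse=True):
            if len(seeds) >= budget:
                break
            if n not in seeds:
                seeds.append(n)
        return seeds[:budget]

    G = nx.barabasi_albert_graph(5000, 3)
    seeds = upscale(G, solve_im_small(downscale(G), budget=10), budget=10)
    print(seeds)
    ```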

    On green routing and scheduling problem

    The vehicle routing and scheduling problem has been studied with much interest over the last four decades. In this paper, some of the existing literature dealing with routing and scheduling problems with environmental issues is reviewed, and a description is provided of the problems that have been investigated and how they are treated using combinatorial optimization tools.

    Analysis of reliable deployment of TDOA local positioning architectures

    Local Positioning Systems (LPS) have been an attractive research topic over the last few years. LPS are ad-hoc deployments of wireless sensor networks, particularly suited to adapting to the characteristics of harsh environments. Among LPS, those based on temporal measurements stand out for their trade-off among accuracy, robustness and cost. However, regardless of the LPS architecture considered, an optimization of the sensor distribution is required to achieve competitive results. Recent studies have shown that, even under optimized node distributions, time-based LPS accumulate the largest error bounds due to synchronization errors. Consequently, asynchronous architectures such as Asynchronous Time Difference of Arrival (A-TDOA) have recently been proposed. However, the A-TDOA architecture concentrates the time measurements in a single clock of a coordinator sensor, making the architecture less versatile. In this paper, we present an optimization methodology for overcoming the drawbacks of the A-TDOA architecture in nominal and failure conditions with regard to synchronous TDOA. Results show that this optimization strategy reduces the uncertainty in the target location by 79% and 89.5%, and enhances the convergence properties by 86% and 33%, of the A-TDOA architecture with respect to the synchronous TDOA architecture in two different application scenarios. In addition, maximum convergence points are more easily found with A-TDOA in both configurations, confirming the benefits of this architecture in high-demand LPS applications.
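    For context, the measurement model shared by TDOA-style architectures can be illustrated with a small localization example. This is a generic sketch of least-squares positioning from range differences, not the paper's sensor-distribution optimization; the sensor layout, noise level, and use of scipy are assumptions.

    ```python
    # Minimal sketch of TDOA localization: given sensor positions and
    # range differences (c * TDOA) relative to a reference sensor,
    # estimate the target by nonlinear least squares. Illustrative only;
    # the paper's contribution (optimizing the sensor distribution) is
    # not reproduced here.
    import numpy as np
    from scipy.optimize import least_squares

    sensors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
    target = np.array([18.0, 27.0])

    # Simulated range differences w.r.t. sensor 0, with small noise.
    d = np.linalg.norm(sensors - target, axis=1)
    rng = np.random.default_rng(0)
    tdoa = (d[1:] - d[0]) + rng.normal(0.0, 0.05, size=3)

    def residuals(p):
        r = np.linalg.norm(sensors - p, axis=1)
        return (r[1:] - r[0]) - tdoa

    est = least_squares(residuals, x0=np.array([25.0, 25.0])).x
    print(est)  # close to (18, 27)
    ```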

    High-Quality Hypergraph Partitioning

    This dissertation focuses on computing high-quality solutions for the NP-hard balanced hypergraph partitioning problem: Given a hypergraph and an integer k, partition its vertex set into k disjoint blocks of bounded size, while minimizing an objective function over the hyperedges. Here, we consider the two most commonly used objectives: the cut-net metric and the connectivity metric. Since the problem is computationally intractable, heuristics are used in practice - the most prominent being the three-phase multi-level paradigm: During coarsening, the hypergraph is successively contracted to obtain a hierarchy of smaller instances. After applying an initial partitioning algorithm to the smallest hypergraph, contraction is undone and, at each level, refinement algorithms try to improve the current solution. With this work, we give a brief overview of the field and present several algorithmic improvements to the multi-level paradigm. Instead of using a logarithmic number of levels like traditional algorithms, we present two coarsening algorithms that create a hierarchy of (nearly) n levels, where n is the number of vertices. This makes consecutive levels as similar as possible and provides many opportunities for refinement algorithms to improve the partition. This approach is made feasible in practice by tailoring all algorithms and data structures to the n-level paradigm, and by developing lazy-evaluation techniques, caching mechanisms and early stopping criteria to speed up the partitioning process. Furthermore, we propose a sparsification algorithm based on locality-sensitive hashing that improves the running time for hypergraphs with large hyperedges, and show that incorporating global information about the community structure into the coarsening process improves quality. Moreover, we present a portfolio-based initial partitioning approach, and propose three refinement algorithms. Two are based on the Fiduccia-Mattheyses (FM) heuristic, but perform a highly localized search at each level. While one is designed for two-way partitioning, the other is the first FM-style algorithm that can be efficiently employed in the multi-level setting to directly improve k-way partitions. The third algorithm uses max-flow computations on pairs of blocks to refine k-way partitions. Finally, we present the first memetic multi-level hypergraph partitioning algorithm for an extensive exploration of the global solution space. All contributions are made available through our open-source framework KaHyPar. In a comprehensive experimental study, we compare KaHyPar with hMETIS, PaToH, Mondriaan, Zoltan-AlgD, and HYPE on a wide range of hypergraphs from several application areas. Our results indicate that KaHyPar, already without the memetic component, computes better solutions than all competing algorithms for both the cut-net and the connectivity metric, while being faster than Zoltan-AlgD and equally fast as hMETIS. Moreover, KaHyPar compares favorably with the current best graph partitioning system KaFFPa - both in terms of solution quality and running time.
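    The two objectives named above are simple to state concretely. Below is a minimal sketch of both metrics for a toy hypergraph, assuming unit net weights and a plain list-of-hyperedges representation; it illustrates the definitions only and is not KaHyPar's implementation.

    ```python
    # Cut-net and connectivity objectives for a hypergraph given as a
    # list of hyperedges plus a vertex-to-block assignment (unit weights).
    def cut_net(hyperedges, block):
        """Number of nets spanning more than one block."""
        return sum(1 for e in hyperedges if len({block[v] for v in e}) > 1)

    def connectivity(hyperedges, block):
        """Sum over nets of (lambda(e) - 1), where lambda(e) is the
        number of blocks the net connects."""
        return sum(len({block[v] for v in e}) - 1 for e in hyperedges)

    hyperedges = [(0, 1, 2), (2, 3), (1, 3, 4, 5), (4, 5)]
    block = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
    print(cut_net(hyperedges, block))       # 2 nets are cut
    print(connectivity(hyperedges, block))  # 2 = sum of (lambda(e) - 1)
    ```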

    Optimized Hidden Markov Model based on Constrained Particle Swarm Optimization

    As one of the tools of Bayesian analysis, the Hidden Markov Model (HMM) has been used in extensive applications. Most HMMs are trained with the Baum-Welch algorithm (BWHMM) to estimate the model parameters, which makes it difficult to find globally optimal solutions. This paper proposes a Hidden Markov Model optimized with a Particle Swarm Optimization (PSO) algorithm, called PSOHMM. To satisfy the statistical constraints of the HMM, the paper develops re-normalization and re-mapping mechanisms that enforce these constraints. The experiments show that PSOHMM finds better solutions than BWHMM and converges faster.
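    The re-normalization idea can be sketched as follows: a PSO particle encodes raw, unconstrained HMM parameters, which are projected back onto valid stochastic matrices before evaluation. This is a minimal sketch of one plausible projection; the paper's exact re-normalization and re-mapping mechanisms may differ in detail.

    ```python
    # Project a particle's raw parameter matrix onto row-stochastic form
    # (non-negative entries, each row summing to 1), restoring the HMM
    # constraints on the transition matrix A, emission matrix B, and pi.
    import numpy as np

    def renormalize(M, eps=1e-12):
        """Clip to non-negative values, then normalize each row; rows
        that clip to all zeros are replaced by a uniform distribution."""
        M = np.clip(M, 0.0, None)
        rows = M.sum(axis=1, keepdims=True)
        return np.where(rows > 0, M / np.maximum(rows, eps), 1.0 / M.shape[1])

    rng = np.random.default_rng(1)
    A_raw = rng.normal(size=(3, 3))   # particle's raw transition params
    A = renormalize(A_raw)
    print(A.sum(axis=1))              # each row sums to 1
    ```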