21,139 research outputs found

    Discrete Particle Swarm Optimization for the minimum labelling Steiner tree problem

    Particle Swarm Optimization is an evolutionary method inspired by the social behaviour of individuals inside swarms in nature. Solutions of the problem are modelled as members of the swarm that fly through the solution space. The evolution is driven by the continuous movement of the particles that constitute the swarm, subject to the effect of inertia and the attraction of the members that lead the swarm. This work focuses on a recent Discrete Particle Swarm Optimization method for combinatorial optimization, called Jumping Particle Swarm Optimization. Its effectiveness is illustrated on the minimum labelling Steiner tree problem: given an undirected labelled connected graph, the aim is to find a spanning tree covering a given subset of nodes, whose edges have the smallest number of distinct labels.
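    To make the jumping mechanism concrete, the following is a minimal Python sketch, not the authors' implementation: a particle's position is encoded as a set of edge labels, a "jump" copies labels from an attractor (the particle's own best, the swarm leader, or a random solution), and a repair step restores feasibility. The edge-list representation and the connectivity helper connects_basic_nodes are illustrative assumptions.

```python
# Hedged sketch of a jumping move in discrete PSO for the minimum labelling
# Steiner tree problem: positions are sets of edge labels.
import random

def jump(position, attractor, jump_rate=0.3):
    """Copy some of the attractor's labels into the current position."""
    new_position = set(position)
    for label in attractor:
        if label not in new_position and random.random() < jump_rate:
            new_position.add(label)
    return new_position

def repair(position, edges, basic_nodes, connects_basic_nodes):
    """Add labels until the chosen labels connect the basic nodes, then drop
    redundant labels (fewer distinct labels is better). `edges` is a list of
    (u, v, label) triples; `connects_basic_nodes` is an assumed helper that
    tests connectivity of the basic nodes in the label-induced subgraph."""
    all_labels = {label for _, _, label in edges}
    position = set(position)
    while not connects_basic_nodes(position, edges, basic_nodes):
        position.add(random.choice(sorted(all_labels - position)))
    for label in sorted(position):
        if connects_basic_nodes(position - {label}, edges, basic_nodes):
            position -= {label}
    return position
```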

    Approximating acyclicity parameters of sparse hypergraphs

    The notions of hypertree width and generalized hypertree width were introduced by Gottlob, Leone, and Scarcello in order to extend the concept of hypergraph acyclicity. These notions were further generalized by Grohe and Marx, who introduced the fractional hypertree width of a hypergraph. All these width parameters on hypergraphs are useful for extending the tractability of many problems in database theory and artificial intelligence. In this paper, we study the approximability of (generalized, fractional) hypertree width of sparse hypergraphs, where the criterion of sparsity reflects the sparsity of their incidence graphs. Our first step is to prove that the (generalized, fractional) hypertree width of a hypergraph H is constant-factor sandwiched by the treewidth of its incidence graph, when the incidence graph belongs to some apex-minor-free graph class. This determines the combinatorial borderline above which the notion of (generalized, fractional) hypertree width becomes essentially more general than treewidth, thereby justifying its functionality as a hypergraph acyclicity measure. While for more general sparse families of hypergraphs the treewidth of incidence graphs and all hypertree width parameters may differ arbitrarily, there are sparse families where a constant-factor approximation algorithm is possible. In particular, we give a constant-factor approximation polynomial-time algorithm for (generalized, fractional) hypertree width on hypergraphs whose incidence graphs belong to some H-minor-free graph class.
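    Since the sandwich result above works through the incidence graph, here is a small illustrative Python sketch (an assumption, not the paper's approximation algorithm) that builds the bipartite incidence graph of a hypergraph and runs a standard NetworkX treewidth heuristic on it as an upper-bound surrogate for the sandwiching quantity.

```python
# Incidence graph of a hypergraph: one node per vertex, one node per hyperedge,
# with an edge whenever the vertex belongs to the hyperedge.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

def incidence_graph(hyperedges):
    G = nx.Graph()
    for i, edge in enumerate(hyperedges):
        for v in edge:
            G.add_edge(("v", v), ("e", i))  # bipartite: vertex side / edge side
    return G

# Toy hypergraph with three hyperedges over the vertices 1..5.
H = [{1, 2, 3}, {3, 4}, {1, 4, 5}]
width, _decomposition = treewidth_min_degree(incidence_graph(H))
print("heuristic treewidth of the incidence graph:", width)
```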

    How Many Pairwise Preferences Do We Need to Rank A Graph Consistently?

    We consider the problem of optimal recovery of the true ranking of $n$ items from a randomly chosen subset of their pairwise preferences. It is well known that without any further assumption, one requires a sample size of $\Omega(n^2)$ for the purpose. We analyze the problem with the additional structure of a relational graph $G([n], E)$ over the $n$ items, together with an assumption of \emph{locality}: neighboring items are similar in their rankings. Noting the preferential nature of the data, we choose to embed not the graph itself, but its \emph{strong product}, to capture the pairwise node relationships. Furthermore, unlike existing literature that uses Laplacian embedding for graph-based learning problems, we use a richer class of graph embeddings---\emph{orthonormal representations}---that includes the (normalized) Laplacian as a special case. Our proposed algorithm, {\it Pref-Rank}, predicts the underlying ranking using an SVM-based approach over the chosen embedding of the product graph, and is the first to provide \emph{statistical consistency} on two ranking losses, \emph{Kendall's tau} and \emph{Spearman's footrule}, with a required sample complexity of $O(n^2 \chi(\bar{G}))^{\frac{2}{3}}$ pairs, $\chi(\bar{G})$ being the \emph{chromatic number} of the complement graph $\bar{G}$. Clearly, our sample complexity is smaller for dense graphs, with $\chi(\bar{G})$ characterizing the degree of node connectivity, which is also intuitive due to the locality assumption, e.g. $O(n^{\frac{4}{3}})$ for a union of $k$-cliques, or $O(n^{\frac{5}{3}})$ for random and power-law graphs, etc.---a quantity much smaller than the fundamental limit of $\Omega(n^2)$ for large $n$. This, for the first time, relates ranking complexity to structural properties of the graph. We also report experimental evaluations on different synthetic and real datasets, where our algorithm is shown to outperform the state-of-the-art methods.
    Comment: In Thirty-Third AAAI Conference on Artificial Intelligence, 2019
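    The pipeline described in the abstract (embed the strong product of the item graph, then fit an SVM on observed preference pairs) can be sketched roughly as follows. This is not the authors' Pref-Rank code: it uses a normalized-Laplacian eigenmap, which the paper treats as a special case of orthonormal representations, and the toy graph, sample size, and helper names are assumptions.

```python
# Rough sketch: spectral embedding of the strong product graph + linear SVM
# trained on pairwise preference labels.
import numpy as np
import networkx as nx
from sklearn.svm import LinearSVC

G = nx.erdos_renyi_graph(20, 0.3, seed=0)     # toy item graph on n = 20 items
P = nx.strong_product(G, G)                   # ordered item pairs live here
index = {node: i for i, node in enumerate(P.nodes())}

# Normalized-Laplacian eigenmap of the product graph: one row per node pair.
L = nx.normalized_laplacian_matrix(P).toarray()
_, eigvecs = np.linalg.eigh(L)
embedding = eigvecs[:, 1:9]                   # skip the trivial eigenvector

def pair_features(i, j):
    """Feature vector for the ordered item pair (i, j)."""
    return embedding[index[(i, j)]]

# Simulated observations: +1 if item i is preferred to item j, else -1.
rng = np.random.default_rng(0)
score = rng.normal(size=G.number_of_nodes())  # hidden ground-truth utilities
pairs = [(i, j) for i in G for j in G if i != j]
observed = rng.choice(len(pairs), size=200, replace=False)
X = np.array([pair_features(*pairs[k]) for k in observed])
y = np.array([1 if score[pairs[k][0]] > score[pairs[k][1]] else -1
              for k in observed])

clf = LinearSVC().fit(X, y)                   # predicts preferences for unseen pairs
print("training accuracy:", clf.score(X, y))
```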