
    Forest matrices around the Laplacian matrix

    We study the matrices Q_k of in-forests of a weighted digraph G and their connections with the Laplacian matrix L of G. The (i,j) entry of Q_k is the total weight of spanning converging forests (in-forests) with k arcs such that i belongs to a tree rooted at j. The forest matrices Q_k can be calculated recursively and expressed as polynomials in the Laplacian matrix; they provide representations for the generalized inverses, the powers, and some eigenvectors of L. The normalized in-forest matrices are row stochastic; the normalized matrix of maximum in-forests is the eigenprojection of the Laplacian matrix, which provides an immediate proof of the Markov chain tree theorem. A source of these results is the fact that the matrices Q_k are the matrix coefficients in the polynomial expansion of adj(aI + L); thereby they are precisely Faddeev's matrices for -L. Keywords: Weighted digraph; Laplacian matrix; Spanning forest; Matrix-forest theorem; Leverrier-Faddeev method; Markov chain tree theorem; Eigenprojection; Generalized inverse; Singular M-matrix. Comment: 19 pages, presented at the Edinburgh (2001) Conference on Algebraic Graph Theory.
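    The closing observation suggests a direct computation: since the Q_k are Faddeev's matrices for -L, the Leverrier-Faddeev recursion Q_0 = I, sigma_k = tr(L Q_{k-1})/k, Q_k = sigma_k I - L Q_{k-1} produces them one by one. Below is a minimal NumPy sketch; the 3-node digraph and its arc weights are assumptions chosen for illustration.

    ```python
    import numpy as np

    def forest_matrices(L):
        """In-forest matrices Q_0, ..., Q_{n-1} of a weighted digraph with
        Laplacian L, via the Leverrier-Faddeev recursion applied to -L."""
        n = L.shape[0]
        Q = [np.eye(n)]                        # Q_0 = I: the empty forest
        for k in range(1, n):
            sigma = np.trace(L @ Q[-1]) / k    # total weight of k-arc in-forests
            Q.append(sigma * np.eye(n) - L @ Q[-1])
        return Q

    # Illustrative 3-node weighted digraph; arc weights are assumptions.
    W = np.array([[0.0, 2.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 3.0, 0.0]])
    L = np.diag(W.sum(axis=1)) - W             # Laplacian of the digraph

    Q_max = forest_matrices(L)[-1]             # matrix of maximum in-forests
    print(Q_max / Q_max.sum(axis=1, keepdims=True))  # row-stochastic eigenprojection
    ```

    On this strongly connected example the rows of the printed matrix coincide, giving the stationary distribution that the Markov chain tree theorem associates with maximum in-forests.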

    Critical Properties of the One-Dimensional Forest-Fire Model

    The one-dimensional forest-fire model including lightning is studied numerically and analytically. For the tree correlation function, a new correlation length with critical exponent \nu ~ 5/6 is found by simulations. A Hamiltonian formulation is introduced which enables one to study the stationary state close to the critical point using quantum-mechanical perturbation theory. This formulation is also used to investigate numerically the structure of the low-lying relaxation spectrum and the critical behaviour of the smallest complex gap. Finally, it is shown that critical correlation functions can be obtained from a simplified model involving only the total number of trees, although such simplified models are unable to reproduce the correct off-critical behaviour. Comment: 24 pages (plain TeX), 4 PostScript figures, uses psfig.sty.
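    To reproduce the numerics qualitatively, here is a minimal sketch of one-dimensional forest-fire dynamics with tree growth probability p and lightning probability f, where a struck cluster burns instantaneously. The lattice size and rate values are assumptions, and this toy omits the Hamiltonian machinery of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, p, f = 10_000, 0.05, 0.0005      # lattice size, growth/lightning rates (assumed)
    trees = np.zeros(N, dtype=bool)

    def step(trees):
        # Growth: every empty site sprouts a tree with probability p.
        trees |= (~trees) & (rng.random(N) < p)
        # Lightning: each tree is struck with probability f; a struck tree's
        # whole connected cluster burns down instantaneously.
        for s in np.flatnonzero(trees & (rng.random(N) < f)):
            if not trees[s]:                # already burned this sweep
                continue
            i = s
            while i >= 0 and trees[i]:      # burn leftwards
                trees[i] = False
                i -= 1
            i = s + 1
            while i < N and trees[i]:       # burn rightwards
                trees[i] = False
                i += 1
        return trees

    for _ in range(5_000):                   # relax towards the stationary state
        trees = step(trees)

    r = 10                                   # tree-tree correlation at distance r
    corr = np.mean(trees[:-r] & trees[r:]) - trees.mean() ** 2
    print(f"density={trees.mean():.3f}  C({r})={corr:.5f}")
    ```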

    Approximation Algorithms for Correlated Knapsacks and Non-Martingale Bandits

    In the stochastic knapsack problem, we are given a knapsack of size B, and a set of jobs whose sizes and rewards are drawn from a known probability distribution. However, we know the actual size and reward only when the job completes. How should we schedule jobs to maximize the expected total reward? We know O(1)-approximations when we assume that (i) rewards and sizes are independent random variables, and (ii) we cannot prematurely cancel jobs. What can we say when either or both of these assumptions are changed? The stochastic knapsack problem is of interest in its own right, but techniques developed for it are applicable to other stochastic packing problems. Indeed, ideas for this problem have been useful for budgeted learning problems, where one is given several arms which evolve in a specified stochastic fashion with each pull, and the goal is to pull the arms a total of B times to maximize the reward obtained. Much recent work on this problem focuses on the case when the evolution of the arms follows a martingale, i.e., when the expected reward from the future is the same as the reward at the current state. What can we say when the rewards do not form a martingale? In this paper, we give constant-factor approximation algorithms for the stochastic knapsack problem with correlations and/or cancellations, and also for budgeted learning problems where the martingale condition is not satisfied. Indeed, we can show that previously proposed LP relaxations have large integrality gaps. We propose new time-indexed LP relaxations, convert the fractional solutions into distributions over strategies, and then use the LP values and the time-ordering information from these strategies to devise a randomized adaptive scheduling algorithm. We hope our LP formulation and decomposition methods may provide a new way to address other correlated bandit problems with more general contexts.
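    The time-indexed LP and strategy decomposition are beyond a short snippet, but the problem itself is easy to simulate. The sketch below estimates, by Monte Carlo, the expected reward of a fixed job ordering with a crude cancellation threshold; the job distributions and the policy are assumptions for illustration, not the paper's algorithm.

    ```python
    import random

    # Toy correlated jobs: each job is a list of (probability, size, reward)
    # outcomes, so reward can depend on realized size (data are assumptions).
    JOBS = [
        [(0.5, 1, 1.0), (0.5, 4, 6.0)],
        [(0.7, 2, 3.0), (0.3, 5, 3.5)],
        [(0.9, 1, 0.5), (0.1, 8, 20.0)],
    ]
    B = 6                                    # knapsack budget

    def sample(job):
        r, acc = random.random(), 0.0
        for prob, size, reward in job:
            acc += prob
            if r <= acc:
                return size, reward
        return job[-1][1:]                   # guard against rounding

    def run_once(order, cancel_after):
        """Run jobs in a fixed order; cancel any job still unfinished after
        cancel_after units (a crude stand-in for adaptive cancellation)."""
        t, total = 0, 0.0
        for j in order:
            size, reward = sample(JOBS[j])
            run = min(size, cancel_after, B - t)   # finish, cancel, or hit budget
            t += run
            if run == size:
                total += reward              # reward only on completion
            if t >= B:
                break
        return total

    trials = 100_000
    est = sum(run_once([2, 0, 1], cancel_after=4) for _ in range(trials)) / trials
    print(f"estimated expected reward: {est:.3f}")
    ```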

    Protein docking refinement by convex underestimation in the low-dimensional subspace of encounter complexes

    We propose a novel stochastic global optimization algorithm with applications to the refinement stage of protein docking prediction methods. Our approach can process conformations sampled from multiple clusters, each roughly corresponding to a different binding energy funnel. These clusters are obtained using a density-based clustering method. In each cluster, we identify a smooth “permissive” subspace which avoids high-energy barriers and then underestimate the binding energy function using general convex polynomials in this subspace. We use the underestimator to bias sampling towards its global minimum. Sampling and subspace underestimation are repeated several times, and the conformations sampled at the last iteration form a refined ensemble. We report computational results on a comprehensive benchmark of 224 protein complexes, establishing that our refined ensemble significantly improves the quality of the conformations of the original set given to the algorithm. We also devise a method to enhance the ensemble from which near-native models are selected.
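    A minimal sketch of the underestimation step, assuming a separable convex quadratic fitted by linear programming; the paper uses general convex polynomials in the permissive subspace, so treat the sampled data, the diagonal form, and the SciPy formulation as illustrative assumptions rather than the exact procedure.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)

    # Assumed stand-in for energy evaluations on sampled conformations:
    # m points x_j in a d-dimensional subspace with energies E_j (noisy bowl).
    d, m = 3, 200
    X = rng.uniform(-2.0, 2.0, size=(m, d))
    E = np.sum((X - 0.5) ** 2, axis=1) + rng.random(m)

    # Fit q(x) = sum_i a_i x_i^2 + b_i x_i + c with a_i >= 0 and q(x_j) <= E_j,
    # maximizing sum_j q(x_j): the tightest separable convex underestimator.
    # LP variables: [a_1..a_d, b_1..b_d, c].
    A_ub = np.hstack([X ** 2, X, np.ones((m, 1))])   # q(x_j) <= E_j rows
    cvec = -A_ub.sum(axis=0)                         # minimize -sum_j q(x_j)
    bounds = [(0, None)] * d + [(None, None)] * (d + 1)
    res = linprog(cvec, A_ub=A_ub, b_ub=E, bounds=bounds, method="highs")

    a, b = res.x[:d], res.x[d:2 * d]
    x_star = np.where(a > 1e-9, -b / (2 * np.maximum(a, 1e-9)), 0.0)
    print("underestimator minimum (bias new sampling towards this):", x_star)
    ```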

    Network recovery after massive failures

    This paper addresses the problem of efficiently restoring sufficient resources in a communications network to support the demand of mission-critical services after a large-scale disruption. We give a formulation of the problem as an MILP and show that it is NP-hard. We propose a polynomial time heuristic, called Iterative Split and Prune (ISP), that decomposes the original problem recursively into smaller problems, until it determines the set of network components to be restored. We performed extensive simulations by varying the topologies, the demand intensity, the number of critical services, and the disruption model. Compared to several greedy approaches, ISP performs better in terms of the number of repaired components and does not result in any demand loss. It performs very close to the optimal when the demand is low with respect to the supply network capacities, thanks to the ability of the algorithm to maximize sharing of repaired resources.
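    To make the optimization problem concrete, here is a toy MILP of the flavor described above, written with PuLP: binary variables choose which broken edges to repair so that a single critical demand can be routed at minimum repair cost. The instance data are assumptions, and the model is far smaller than the paper's formulation.

    ```python
    import pulp

    # Toy instance (all data assumed): directed edges with capacity,
    # broken flag, and repair cost; one critical demand of 10 units s -> t.
    edges = {
        ("s", "a"): (10, False, 0),
        ("s", "b"): (3,  False, 0),
        ("a", "t"): (4,  True,  3),
        ("a", "b"): (10, True,  2),
        ("b", "t"): (10, True,  5),
    }
    nodes, demand = {"s", "a", "b", "t"}, 10

    prob = pulp.LpProblem("recovery", pulp.LpMinimize)
    y = {e: pulp.LpVariable(f"y_{e[0]}_{e[1]}", cat="Binary")
         for e, (_, broken, _) in edges.items() if broken}
    f = {e: pulp.LpVariable(f"f_{e[0]}_{e[1]}", lowBound=0) for e in edges}

    prob += pulp.lpSum(edges[e][2] * y[e] for e in y)          # total repair cost

    for e, (cap, broken, _) in edges.items():
        # Flow is capped by capacity; broken edges carry flow only if repaired.
        prob += f[e] <= cap * (y[e] if broken else 1)

    for v in nodes - {"s", "t"}:                               # flow conservation
        prob += (pulp.lpSum(f[e] for e in edges if e[1] == v)
                 == pulp.lpSum(f[e] for e in edges if e[0] == v))
    prob += pulp.lpSum(f[e] for e in edges if e[0] == "s") >= demand

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("repairs:", {e: int(y[e].value()) for e in y})
    ```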

    Distance-Dependent Kronecker Graphs for Modeling Social Networks

    This paper focuses on a generalization of stochastic Kronecker graphs, introducing a Kronecker-like operator and defining a family of generator matrices H dependent on distances between nodes in a specified graph embedding. We prove that any lattice-based network model with a sufficiently small distance-dependent connection probability will have a Poisson degree distribution, and we provide a general framework to prove searchability for such a network. Using this framework, we focus on a specific example of an expanding hypercube and discuss the similarities and differences of such a model with recently proposed network models based on a hidden metric space. We also prove that a greedy forwarding algorithm can find very short paths of length O((log log n)^2) on the hypercube with n nodes, demonstrating that distance-dependent Kronecker graphs can generate searchable network models.
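    For context, sampling a standard stochastic Kronecker graph (the model being generalized) takes a few lines; the initiator matrix and Kronecker power below are assumptions, and the paper's distance-dependent operator is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Standard stochastic Kronecker graph: edge probabilities are the k-th
    # Kronecker power of a small initiator matrix (values here are assumed).
    theta = np.array([[0.9, 0.5],
                      [0.5, 0.2]])
    k = 8                                  # graph has 2**k = 256 nodes

    P = theta
    for _ in range(k - 1):
        P = np.kron(P, theta)              # P[i, j] = Prob(edge i -> j)

    A = rng.random(P.shape) < P            # sample each edge independently
    print("nodes:", P.shape[0], "edges:", int(A.sum()))
    ```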

    On critical service recovery after massive network failures

    This paper addresses the problem of efficiently restoring sufficient resources in a communications network to support the demand of mission-critical services after a large-scale disruption. We give a formulation of the problem as a mixed integer linear program (MILP) and show that it is NP-hard. We propose a polynomial time heuristic, called Iterative Split and Prune (ISP), that decomposes the original problem recursively into smaller problems, until it determines the set of network components to be restored. ISP's decisions are guided by a new notion of demand-based centrality of nodes. We performed extensive simulations by varying the topologies, the demand intensity, the number of critical services, and the disruption model. Compared with several greedy approaches, ISP performs better in terms of the total cost of repaired components and does not result in any demand loss. It performs very close to the optimal when the demand is low with respect to the supply network capacities, thanks to the ability of the algorithm to maximize sharing of repaired resources.
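    The abstract does not define demand-based centrality, so the sketch below shows one plausible reading: score each node by the total demand whose shortest path crosses it. The definition, the networkx-based implementation, and the toy instance are all assumptions, not the paper's exact measure.

    ```python
    import networkx as nx
    from collections import defaultdict

    # Toy supply graph and critical demands (all values assumed).
    G = nx.Graph([("s1", "a"), ("a", "b"), ("b", "t1"), ("s2", "a"), ("b", "t2")])
    demands = [("s1", "t1", 5.0), ("s2", "t2", 3.0)]

    def demand_centrality(G, demands):
        """One plausible demand-based centrality: total demand whose
        shortest path passes through a node."""
        score = defaultdict(float)
        for s, t, d in demands:
            for v in nx.shortest_path(G, s, t):
                score[v] += d
        return dict(score)

    print(demand_centrality(G, demands))
    ```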