Chip-firing may be much faster than you think
A new bound (Theorem \ref{thm:main}) for the duration of the chip-firing game
with chips on a -vertex graph is obtained by a careful analysis of the
pseudo-inverse of the discrete Laplacian matrix of the graph. The new bound is
expressed in terms of the entries of this pseudo-inverse.
It is shown (Section 5) to be always better than the classic bound due to
Bj{\"o}rner, Lov\'{a}sz and Shor. In some cases the improvement is dramatic.
For instance, for strongly regular graphs the classic and the new bounds
reduce to and , respectively; for dense regular graphs the classic and the
new bounds reduce to and , respectively.
This is a snapshot of a work in progress, so further results in this vein are
in the works.
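The central object of the abstract above, the Moore-Penrose pseudo-inverse of the discrete Laplacian, can be sketched in a few lines of NumPy. The example graph (a 4-cycle) is our own choice for illustration, not taken from the paper:

```python
import numpy as np

# A minimal sketch: build the discrete (combinatorial) Laplacian of a small
# graph and take its Moore-Penrose pseudo-inverse, the matrix whose entries
# the new chip-firing bound is expressed in terms of.
# Example graph: the 4-cycle C4 (our assumption, not from the abstract).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A   # discrete Laplacian: degree matrix minus adjacency
L_plus = np.linalg.pinv(L)       # Moore-Penrose pseudo-inverse

# Sanity checks: the defining identity L L+ L = L holds, and for a connected
# graph the rows of L+ sum to zero (the all-ones vector spans the kernel).
assert np.allclose(L @ L_plus @ L, L)
assert np.allclose(L_plus.sum(axis=1), 0.0)
```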
Cascading Failures in Power Grids - Analysis and Algorithms
This paper focuses on cascading line failures in the transmission system of
the power grid. Recent large-scale power outages demonstrated the limitations
of percolation- and epidemic-based tools in modeling cascades. Hence, we
study cascades by using computational tools and a linearized power flow model.
We first obtain results regarding the Moore-Penrose pseudo-inverse of the power
grid admittance matrix. Based on these results, we study the impact of a single
line failure on the flows on other lines. We also illustrate via simulation the
impact of the distance and resistance distance on the flow increase following a
failure, and discuss the difference from the epidemic models. We then study the
cascade properties, considering metrics such as the distance between failures
and the fraction of demand (load) satisfied after the cascade (yield). We use
the pseudo-inverse of the admittance matrix to develop an efficient algorithm to
identify the cascading failure evolution, which can be a building block for
cascade mitigation. Finally, we show that finding the set of lines whose
removal has the most significant impact (under various metrics) is NP-Hard and
introduce a simple heuristic for the minimum yield problem. Overall, the
results demonstrate that using the resistance distance and the pseudo-inverse
of the admittance matrix provides important insights and can support the
development of efficient algorithms.
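The resistance distance the abstract refers to is computable directly from the pseudo-inverse of the admittance matrix (a weighted Laplacian). The following is a rough sketch on a toy 4-bus grid with unit-susceptance lines, which is our own assumption and not the paper's test system:

```python
import numpy as np

# Sketch (not the paper's algorithm): resistance distance between buses,
# read off the pseudo-inverse of an admittance matrix.
# Toy grid: a 4-cycle of buses 0-1-2-3 plus a chord 0-2, all unit susceptance.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
L = np.zeros((n, n))
for i, j in edges:                 # admittance matrix = weighted Laplacian
    L[i, i] += 1.0; L[j, j] += 1.0
    L[i, j] -= 1.0; L[j, i] -= 1.0
Lp = np.linalg.pinv(L)

def resistance_distance(i, j):
    """Effective resistance between buses i and j."""
    return Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j]

# Buses 0 and 2 are joined by three parallel paths (direct edge, 0-1-2,
# 0-3-2), so their resistance distance is 1 || 2 || 2 = 0.5.
print(round(resistance_distance(0, 2), 4))  # → 0.5
```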
M-Matrix Inverse problem for distance-regular graphs
We analyze when the Moore-Penrose inverse of the combinatorial Laplacian of a distance-regular graph is an M-matrix; that is, when it has non-positive off-diagonal elements or, equivalently, when the Moore-Penrose inverse of the combinatorial Laplacian of a distance-regular graph is also the combinatorial Laplacian of another network. When this occurs we say that the distance-regular graph has the M-property. We prove that
only distance-regular graphs with diameter up to three can have the M-property, and we give a characterization of the graphs
that satisfy the M-property in terms of their intersection array.
Moreover, we exhaustively analyze the strongly regular graphs having the M-property and give some families of distance-regular graphs with diameter three that satisfy the M-property.
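The M-property test described above is straightforward to sketch numerically: check whether the pseudo-inverse of the combinatorial Laplacian has non-positive off-diagonal entries. The choice of example graph (the complete graph K4, which is strongly regular) is ours, not the paper's:

```python
import numpy as np

# A minimal numerical check of the M-property: does the Moore-Penrose inverse
# of the combinatorial Laplacian have non-positive off-diagonal entries?
# Example: the complete graph K4 (our choice of illustration).
n = 4
A = np.ones((n, n)) - np.eye(n)          # adjacency matrix of K4
L = np.diag(A.sum(axis=1)) - A           # combinatorial Laplacian
Lp = np.linalg.pinv(L)

off_diag = Lp[~np.eye(n, dtype=bool)]    # all off-diagonal entries of L+
has_M_property = bool(np.all(off_diag <= 1e-12))
print(has_M_property)  # → True (for K_n, each off-diagonal entry is -1/n^2)
```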
Mean first passage time for distance-biregular graphs
Sketch-based Randomized Algorithms for Dynamic Graph Regression
A well-known problem in data science and machine learning is {\em linear
regression}, which has recently been extended to dynamic graphs. Existing exact
algorithms for updating the solution of the dynamic graph regression problem
require at least linear time in the size of the graph. However, this time
complexity might be intractable in practice.
paper, we utilize {\em subsampled randomized Hadamard transform} and
\textsf{CountSketch} to propose the first randomized algorithms. Suppose that
we are given an matrix embedding of the graph, where .
Let be the number of samples required for a guaranteed approximation error,
which is a sublinear function of . Our first algorithm reduces the time
complexity of pre-processing to .
Then, after an edge insertion or an edge deletion, it updates the approximate
solution in time. Our second algorithm reduces the time complexity of
pre-processing to , where is the number of nonzero elements of . Then, after
an edge insertion, an edge deletion, a node insertion, or a node deletion,
it updates the approximate solution in time, with
. Finally, we show that, under some assumptions, each algorithm
outperforms the other in a certain regime.
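CountSketch, one of the two sketching tools the abstract mentions, can be illustrated in a few lines: hash each row of a matrix to one of a small number of buckets with a random sign. The matrix sizes and sketch dimension below are arbitrary assumptions for illustration, and this is not the paper's algorithm:

```python
import numpy as np

# Illustrative CountSketch: compress an n x d matrix A to an s x d sketch SA
# by hashing rows to buckets with random signs, in one pass over the rows.
rng = np.random.default_rng(0)
n, d, s = 1000, 5, 200                   # sizes are arbitrary assumptions
A = rng.standard_normal((n, d))

buckets = rng.integers(0, s, size=n)     # hash h: [n] -> [s]
signs = rng.choice([-1.0, 1.0], size=n)  # random sign g: [n] -> {-1, +1}
SA = np.zeros((s, d))
for i in range(n):
    SA[buckets[i]] += signs[i] * A[i]

# The sketch approximately preserves the Gram matrix A^T A, which is the
# quantity least-squares solutions depend on.
err = np.linalg.norm(SA.T @ SA - A.T @ A) / np.linalg.norm(A.T @ A)
print("relative Gram error:", err)
```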