
    Spectral Bounds for the Connectivity of Regular Graphs with Given Order

    The second-largest eigenvalue and the second-smallest Laplacian eigenvalue of a graph are measures of its connectivity. These eigenvalues can be used to analyze the robustness, resilience, and synchronizability of networks, and are related to connectivity attributes such as the vertex- and edge-connectivity, isoperimetric number, and characteristic path length. In this paper, we present two upper bounds on the second-largest eigenvalue of regular graphs and multigraphs of a given order that guarantee a desired vertex- or edge-connectivity. The given bounds are in terms of the order and degree of the graphs, and hold with equality for infinite families of graphs. These results answer a question of Mohar.
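    The quantities the abstract refers to are easy to compute for a concrete graph. Below is a minimal sketch, not taken from the paper, that computes the second-largest adjacency eigenvalue and the second-smallest Laplacian eigenvalue (the algebraic connectivity) of the 3-regular Petersen graph with plain numpy; the edge list and variable names are illustrative assumptions.

    ```python
    # A minimal sketch, not the paper's method: computing the two spectral
    # quantities discussed above for a concrete regular graph, the 3-regular
    # Petersen graph, with plain numpy. The edge list is a standard labelling;
    # everything else here is illustrative.
    import numpy as np

    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),   # outer 5-cycle
             (5, 7), (7, 9), (9, 6), (6, 8), (8, 5),   # inner pentagram
             (0, 5), (1, 6), (2, 7), (3, 8), (4, 9)]   # spokes

    n = 10
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0

    degrees = A.sum(axis=1)        # all equal to 3 for a 3-regular graph
    L = np.diag(degrees) - A       # combinatorial Laplacian

    adj_eigs = np.sort(np.linalg.eigvalsh(A))[::-1]    # descending
    lap_eigs = np.sort(np.linalg.eigvalsh(L))          # ascending

    print("second-largest adjacency eigenvalue :", adj_eigs[1])   # ~1.0
    print("second-smallest Laplacian eigenvalue:", lap_eigs[1])   # ~2.0
    # For a d-regular graph, lambda_2(L) = d - lambda_2(A), so a bound on
    # either eigenvalue translates directly into a bound on the other.
    ```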

    Cavity Matchings, Label Compressions, and Unrooted Evolutionary Trees

    We present an algorithm for computing a maximum agreement subtree of two unrooted evolutionary trees. It takes O(n^{1.5} log n) time for trees with unbounded degrees, matching the best known time complexity for the rooted case. Our algorithm allows the input trees to be mixed trees, i.e., trees that may contain both directed and undirected edges. It adopts a recursive strategy exploiting a technique called label compression. The backbone of this technique is an algorithm that computes the maximum weight matchings over many subgraphs of a bipartite graph as fast as it takes to compute a single matching.
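    As a rough illustration of the single-matching building block mentioned above, the sketch below computes one maximum-weight matching in a tiny weighted bipartite graph with scipy's assignment solver; it does not reproduce the paper's label-compression or batched-subgraph technique, and the weight matrix is an illustrative assumption.

    ```python
    # A minimal sketch of the single-matching subroutine the abstract refers to:
    # a maximum-weight matching in a small weighted bipartite graph, computed
    # with scipy's Hungarian-algorithm implementation. The "many subgraphs for
    # the price of one matching" trick from the paper is not reproduced here.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # weights[i][j] = weight of matching left vertex i to right vertex j
    weights = np.array([[4, 1, 3],
                        [2, 0, 5],
                        [3, 2, 2]])

    # linear_sum_assignment minimises cost, so negate to maximise total weight.
    rows, cols = linear_sum_assignment(-weights)
    matching = list(zip(rows.tolist(), cols.tolist()))
    total = weights[rows, cols].sum()

    print("maximum-weight matching:", matching)   # [(0, 0), (1, 2), (2, 1)]
    print("total weight:", total)                 # 4 + 5 + 2 = 11
    ```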

    Edge-connectivity augmentation of a graph

    In this thesis we consider the edge-connectivity augmentation problem. In the first part of the thesis we present the cactus representation of a graph and give an algorithm for its construction. In the second part we consider the relation between the edge-connectivity of a graph and its cactus representation. Using this relation we give a lower bound on the least number of edges that must be added to increase the edge-connectivity of a graph by one, and we prove that this lower bound is always achievable. We then give an algorithm for edge-connectivity augmentation by one that applies properties of the cycle-type normal cactus representation. In the third part we present the general edge-splitting method used in Frank's algorithm for solving the edge-connectivity augmentation problem. We also prove Mader's theorem, which is needed to prove the finiteness of edge splitting in Frank's algorithm.
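    For intuition about the problem being solved (not the cactus-based or edge-splitting algorithms from the thesis), the brute-force sketch below searches for a smallest set of new edges that raises the edge-connectivity of a tiny graph by one, using networkx; the function and variable names are illustrative.

    ```python
    # A brute-force sketch of the augmentation-by-one problem (not the
    # cactus-based or edge-splitting algorithms from the thesis): find a
    # smallest set of new edges raising the edge-connectivity of a tiny
    # graph by one. Uses networkx; only feasible for very small graphs.
    from itertools import combinations
    import networkx as nx

    def augment_by_one(G):
        """Return a minimum set of non-edges whose addition raises the edge-connectivity by one."""
        target = nx.edge_connectivity(G) + 1
        non_edges = list(nx.non_edges(G))
        for k in range(len(non_edges) + 1):          # try 0, 1, 2, ... new edges
            for extra in combinations(non_edges, k):
                H = G.copy()
                H.add_edges_from(extra)
                if nx.edge_connectivity(H) >= target:
                    return list(extra)
        return []

    # A path on 5 vertices has edge-connectivity 1; closing it into a cycle
    # with a single new edge makes it 2-edge-connected.
    G = nx.path_graph(5)
    print("edge-connectivity before:", nx.edge_connectivity(G))  # 1
    print("edges to add:", augment_by_one(G))                    # [(0, 4)]
    ```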

    Suppressing microdata to prevent classification based inference

    The revolution of the Internet, together with the progress in computer technology, makes it easy for institutions to collect unprecedented amounts of personal data. This pervasive data collection, coupled with the increasing need to share it, has raised many concerns about privacy. The widespread use of data mining techniques, which enable institutions to extract previously unknown and strategically useful information from huge collections of data sets and thus gain competitive advantages, has also contributed to these fears. One method to ensure privacy during disclosure is to selectively hide or generalize the confidential information. However, with data mining techniques it is now possible for an adversary to predict hidden or generalized confidential information from the rest of the disclosed data set. We concentrate on one such threat, classification, a data mining technique widely used for prediction, and propose algorithms that modify a given microdata set either by inserting unknown values (i.e., deletion) or by generalizing the original values to prevent both probabilistic and decision tree classification based inference. To evaluate the proposed algorithms we experiment with real-life data sets. The results show that the proposed algorithms successfully suppress microdata and prevent both probabilistic and decision tree classification based inference. The hybrid versions of the algorithms, which aim to suppress a confidential data value against both classification models, block the inference channels with substantially fewer side effects. Similarly, the enhanced versions of the algorithms, which aim to suppress multiple confidential data values, reduce the side effects by nearly 50%.
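    As a toy illustration of the inference threat described above (not the thesis's suppression algorithms), the sketch below shows a decision tree recovering a hidden confidential attribute from published quasi-identifiers in a synthetic table, and how additionally suppressing the predictive quasi-identifier blocks that channel; all columns and data are purely illustrative assumptions.

    ```python
    # A toy illustration of classification-based inference (not the thesis's
    # suppression algorithms): an adversary trains a decision tree on the
    # disclosed records to predict hidden confidential values.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # Synthetic microdata: two quasi-identifiers and a confidential attribute
    # that, by construction, is fully determined by the first quasi-identifier.
    n = 200
    qi1 = rng.integers(0, 2, n)          # e.g. an age bracket
    qi2 = rng.integers(0, 5, n)          # e.g. a region code
    confidential = qi1.copy()
    X = np.column_stack([qi1, qi2])

    # The data holder hides the confidential value of the last 50 records.
    X_disclosed, y_disclosed = X[:150], confidential[:150]
    X_hidden, y_hidden = X[150:], confidential[150:]

    # Inference attack: learn the confidential attribute from the disclosed
    # records, then predict the hidden values from published quasi-identifiers.
    tree = DecisionTreeClassifier().fit(X_disclosed, y_disclosed)
    print("inference accuracy, qi1 published :",
          (tree.predict(X_hidden) == y_hidden).mean())   # close to 1.0

    # Blocking the channel: also suppress the predictive quasi-identifier
    # in the disclosed records (replace it with an "unknown" marker).
    X_suppressed = X_disclosed.copy()
    X_suppressed[:, 0] = -1
    tree2 = DecisionTreeClassifier().fit(X_suppressed, y_disclosed)
    print("inference accuracy, qi1 suppressed:",
          (tree2.predict(X_hidden) == y_hidden).mean())  # around 0.5
    ```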