
    The 4-Component Connectivity of Alternating Group Networks

    The $\ell$-component connectivity (or $\ell$-connectivity for short) of a graph $G$, denoted by $\kappa_\ell(G)$, is the minimum number of vertices whose removal from $G$ results in a disconnected graph with at least $\ell$ components or a graph with fewer than $\ell$ vertices. This generalization is a natural extension of the classical connectivity, which is defined in terms of a minimum vertex-cut. As an application, the $\ell$-connectivity can be used to assess the vulnerability of a graph corresponding to the underlying topology of an interconnection network, and is thus an important issue for the reliability and fault tolerance of the network. So far, results on $\ell$-connectivity are known only for particular classes of graphs and small $\ell$'s. In a previous work, we studied the $\ell$-connectivity of the $n$-dimensional alternating group networks $AN_n$ and obtained the result $\kappa_3(AN_n)=2n-3$ for $n\geqslant 4$. In this sequel, we continue that work and show that $\kappa_4(AN_n)=3n-6$ for $n\geqslant 4$.
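
    To make the definition operational, here is a minimal brute-force sketch in Python (the `networkx` package is assumed and the function name is illustrative; the search is exponential, so it is only usable on very small graphs):

    ```python
    from itertools import combinations
    import networkx as nx

    def component_connectivity(G, ell):
        """Brute-force kappa_ell(G): the smallest number of vertices whose
        removal leaves at least ell components or fewer than ell vertices."""
        n = G.number_of_nodes()
        for size in range(n + 1):
            for S in combinations(G.nodes, size):
                H = G.copy()
                H.remove_nodes_from(S)
                if (H.number_of_nodes() < ell
                        or nx.number_connected_components(H) >= ell):
                    return size

    # kappa_2 coincides with classical connectivity: for the 4-cycle it is 2.
    print(component_connectivity(nx.cycle_graph(4), 2))  # 2
    ```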

    The Component Connectivity of Alternating Group Graphs and Split-Stars

    For an integer $\ell\geqslant 2$, the $\ell$-component connectivity of a graph $G$, denoted by $\kappa_{\ell}(G)$, is the minimum number of vertices whose removal from $G$ results in a disconnected graph with at least $\ell$ components or a graph with fewer than $\ell$ vertices. This is a natural generalization of the classical connectivity of graphs, defined in terms of the minimum vertex-cut, and is a good measure of robustness for the graph corresponding to a network. So far, the exact values of $\ell$-connectivity are known only for a few classes of networks and small $\ell$'s. It has been pointed out in [Component connectivity of the hypercubes, Int. J. Comput. Math. 89 (2012) 137--145] that determining $\ell$-connectivity is still unsolved for most interconnection networks, such as alternating group graphs and star graphs. In this paper, by exploring the combinatorial properties and fault tolerance of the alternating group graphs $AG_n$ and a variation of the star graphs called split-stars $S_n^2$, we study their $\ell$-component connectivities. We obtain the following results: (i) $\kappa_3(AG_n)=4n-10$ and $\kappa_4(AG_n)=6n-16$ for $n\geqslant 4$, and $\kappa_5(AG_n)=8n-24$ for $n\geqslant 5$; (ii) $\kappa_3(S_n^2)=4n-8$, $\kappa_4(S_n^2)=6n-14$, and $\kappa_5(S_n^2)=8n-20$ for $n\geqslant 4$.
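
    For readers who have not met $AG_n$: it is commonly defined as the Cayley graph on the alternating group $A_n$ with generating set $\{(1\,2\,i),\,(1\,i\,2) : 3\leqslant i\leqslant n\}$. The sketch below builds a small instance under that assumed definition (the helper names are mine; `networkx` is assumed):

    ```python
    from itertools import combinations, permutations
    import networkx as nx

    def is_even(p):
        """A permutation is even iff its inversion count is even."""
        return sum(p[i] > p[j] for i, j in combinations(range(len(p)), 2)) % 2 == 0

    def three_cycle(n, i):
        """Permutation for the 3-cycle (1 2 i), written 0-indexed as (0 1 i-1)."""
        g = list(range(n))
        g[0], g[1], g[i - 1] = 1, i - 1, 0
        return tuple(g)

    def alternating_group_graph(n):
        gens = []
        for i in range(3, n + 1):
            g = three_cycle(n, i)
            gens.append(g)
            gens.append(tuple(sorted(range(n), key=g.__getitem__)))  # inverse (1 i 2)
        G = nx.Graph()
        for p in permutations(range(n)):
            if is_even(p):
                for g in gens:
                    q = tuple(p[g[k]] for k in range(n))  # right-multiply p by g
                    G.add_edge(p, q)
        return G

    AG4 = alternating_group_graph(4)
    print(AG4.number_of_nodes(), AG4.number_of_edges())  # 12 24; AG_n is 2(n-2)-regular
    ```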

    Relationship between Conditional Diagnosability and 2-extra Connectivity of Symmetric Graphs

    The conditional diagnosability and the 2-extra connectivity are two important parameters for measuring a multiprocessor system's ability to diagnose faulty processors and to tolerate faults. The conditional diagnosability $t_c(G)$ of $G$ is the maximum number $t$ for which $G$ is conditionally $t$-diagnosable under the comparison model, while the 2-extra connectivity $\kappa_2(G)$ of a graph $G$ is the minimum number $k$ for which there is a vertex-cut $F$ with $|F|=k$ such that every component of $G-F$ has at least $3$ vertices. A natural question is: what is the relationship between this maximization problem and this minimization problem? This paper partially answers the question by proving $t_c(G)=\kappa_2(G)$ for a regular graph $G$ under some acceptable conditions. As applications, the conditional diagnosability and the 2-extra connectivity are determined for some well-known classes of vertex-transitive graphs, including star graphs, $(n,k)$-star graphs, alternating group networks, $(n,k)$-arrangement graphs, alternating group graphs, Cayley graphs obtained from transposition generating trees, bubble-sort graphs, $k$-ary $n$-cube networks, and dual-cubes. Furthermore, many known results about these networks are obtained directly.
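
    As with the component connectivity above, the 2-extra connectivity is easy to state operationally. A hedged brute-force sketch (illustrative names, `networkx` assumed, exponential in the graph size):

    ```python
    from itertools import combinations
    import networkx as nx

    def two_extra_connectivity(G):
        """Brute-force kappa_2(G): the smallest vertex-cut F such that
        every component of G - F has at least 3 vertices."""
        nodes = list(G.nodes)
        for size in range(1, len(nodes)):
            for F in combinations(nodes, size):
                H = G.copy()
                H.remove_nodes_from(F)
                comps = list(nx.connected_components(H))
                if len(comps) >= 2 and all(len(c) >= 3 for c in comps):
                    return size
        return None  # G has no 2-extra vertex-cut

    # Removing two antipodal vertices of the 8-cycle leaves two paths on
    # 3 vertices each, so kappa_2(C_8) = 2.
    print(two_extra_connectivity(nx.cycle_graph(8)))  # 2
    ```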

    A Hierarchical Graphical Model for Big Inverse Covariance Estimation with an Application to fMRI

    Brain networks have attracted the interest of many neuroscientists. From functional MRI (fMRI) data, statistical tools have been developed to recover brain networks. However, the dimensionality of whole-brain fMRI, usually in the hundreds of thousands, challenges the applicability of these methods. We develop a hierarchical graphical model (HGM) to remedy this difficulty. This model introduces a hidden layer of networks based on sparse Gaussian graphical models, and the observed data are sampled from individual network nodes. In fMRI, the network layer models the underlying signals of different brain functional units and how these units directly interact with each other. The introduction of this hierarchical structure not only provides a formal and interpretable approach, but also enables efficient computation for inferring big networks with hundreds of thousands of nodes. Based on the conditional convexity of our formulation, we develop an alternating update algorithm to compute the HGM model parameters simultaneously. The effectiveness of this approach is demonstrated on simulated data and a real dataset from a stop/go fMRI experiment.
    Comment: An R package of the proposed method will be publicly available on CRAN. This paper was presented orally at Yale University on February 18, 2014, and at the Eastern North American Region Meeting of the International Biometric Society on March 18, 2014.
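
    The generative structure described above can be sketched in a few lines of NumPy (the paper's own implementation is in R; the sizes, names, and the simple observation noise here are illustrative assumptions): latent unit signals come from a sparse Gaussian graphical model, and each observed voxel is a noisy copy of its unit's signal.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, T, voxels_per_unit = 5, 200, 40        # units, time points, voxels per unit

    # Sparse precision matrix for the hidden network layer (a chain graph).
    Omega = np.eye(d) + 0.4 * (np.eye(d, k=1) + np.eye(d, k=-1))
    Sigma = np.linalg.inv(Omega)

    # Latent unit signals: T samples from N(0, Sigma).
    Z = rng.multivariate_normal(np.zeros(d), Sigma, size=T)    # (T, d)

    # Observed layer: each unit emits voxels_per_unit noisy voxel series.
    membership = np.repeat(np.arange(d), voxels_per_unit)      # voxel -> unit
    X = Z[:, membership] + 0.5 * rng.standard_normal((T, d * voxels_per_unit))

    # Averaging voxels within a unit recovers a cleaner estimate of Z, from
    # which the hidden network (Omega) could be estimated by graphical lasso.
    Z_hat = np.stack([X[:, membership == j].mean(axis=1) for j in range(d)], axis=1)
    print(np.corrcoef(Z_hat[:, 0], Z[:, 0])[0, 1])  # close to 1
    ```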

    Network Response Regression for Modeling Population of Networks with Covariates

    Multiple-subject network data have emerged rapidly in recent years, where a separate network is measured over a common set of nodes for each individual subject, along with subject covariate information. Most existing network analysis methods have focused primarily on modeling a single network and are not directly applicable to modeling multiple network samples with network-level covariates. In this article, we propose a new network response regression model, where the observed networks are treated as matrix-valued responses and the subject covariates as predictors. The new model characterizes the population-level connectivity pattern through a low-rank intercept matrix, and the parsimonious effects of subject covariates on the network through a sparse slope tensor. We formulate the parameter estimation as a non-convex optimization problem and develop an efficient alternating gradient descent algorithm. We establish a non-asymptotic error bound for the actual estimator from our algorithm. Built upon this error bound, we derive strong consistency for network community recovery, as well as edge selection consistency. We demonstrate the efficacy of our method through intensive simulations and two brain connectivity studies.
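
    One plausible reading of the model, in my own notation rather than necessarily the paper's, is the following worked equation:

    ```latex
    % Network response regression (notation illustrative):
    % Y_i is the observed n x n network of subject i, x_i in R^p its covariates.
    \[
      Y_i \;=\; \Theta \;+\; \mathcal{B} \times_3 x_i \;+\; E_i,
      \qquad i = 1, \dots, N,
    \]
    % where the low-rank intercept \Theta carries the population-level
    % connectivity pattern, \mathcal{B} \in \mathbb{R}^{n \times n \times p}
    % is a sparse slope tensor with mode-3 product
    % (\mathcal{B} \times_3 x_i)_{jk} = \sum_{l=1}^{p} \mathcal{B}_{jkl}\, x_{il},
    % and E_i is mean-zero noise. Estimation alternates gradient steps over
    % the factors of \Theta and the entries of \mathcal{B}.
    ```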

    Decoding the Encoding of Functional Brain Networks: an fMRI Classification Comparison of Non-negative Matrix Factorization (NMF), Independent Component Analysis (ICA), and Sparse Coding Algorithms

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet mathematical constraints such as sparse coding and positivity both provide alternative, biologically plausible frameworks for generating brain networks. Non-negative Matrix Factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms ($L_1$ regularized learning and K-SVD) would impose local specialization and discourage multitasking, so that the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity for encoding task-related brain networks are compared; the resulting brain networks under the different constraints are used as basis functions to encode the observed functional activity at a given time point. These encodings are decoded using machine learning to compare both the algorithms and their assumptions, using the time-series weights to predict whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. For classifying cognitive activity, the sparse coding algorithm of $L_1$ regularized learning consistently outperformed 4 variations of ICA across different numbers of networks and noise levels ($p<0.001$). The NMF algorithms, which suppressed negative BOLD signal, had the poorest accuracy. Within each algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy ($p<0.001$). The success of sparse coding algorithms may suggest that algorithms which enforce sparse coding, discourage multitasking, and promote local specialization capture the underlying source processes better than those which allow inexhaustible local processes, such as ICA.
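
    The comparison pipeline can be sketched with standard scikit-learn components (a hedged toy version on synthetic data; the study itself used 304 real scans and its own preprocessing): decompose the time-by-voxel matrix under each constraint, then classify time points from the resulting weights.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF, FastICA, DictionaryLearning
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    T, V, k = 120, 500, 8                    # time points, voxels, networks
    X = np.abs(rng.standard_normal((T, V)))  # stand-in for fMRI data (nonneg for NMF)
    y = rng.integers(0, 3, size=T)           # toy video / audio / rest labels

    decomposers = {
        "NMF": NMF(n_components=k, max_iter=500),
        "ICA": FastICA(n_components=k, max_iter=500),
        "SparseCoding": DictionaryLearning(n_components=k, alpha=1.0, max_iter=200),
    }
    for name, model in decomposers.items():
        W = model.fit_transform(X)           # (T, k) time-series weights
        acc = cross_val_score(LogisticRegression(max_iter=1000), W, y, cv=5).mean()
        print(f"{name}: {acc:.2f}")          # ~chance here, since labels are random
    ```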

    Estimating Differential Latent Variable Graphical Models with Applications to Brain Connectivity

    Differential graphical models are designed to represent the difference between the conditional dependence structures of two groups, and are thus of particular interest for scientific investigation. Motivated by modern applications, this manuscript considers an extended setting where each group is generated by a latent variable Gaussian graphical model. Due to the existence of latent factors, the differential network is decomposed into sparse and low-rank components, both of which are symmetric indefinite matrices. We estimate these two components simultaneously using a two-stage procedure: (i) an initialization stage, which computes a simple, consistent estimator, and (ii) a convergence stage, implemented using a projected alternating gradient descent algorithm applied to a nonconvex objective and initialized with the output of the first stage. We prove that, given the initialization, the estimator converges linearly with a nontrivial, minimax optimal statistical error. Experiments on synthetic and real data illustrate that the proposed nonconvex procedure outperforms existing methods.
    Comment: 60 pages
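
    The two projection steps in such a convergence stage can be sketched as follows (a minimal NumPy illustration of "project onto sparse" and "project onto low-rank" for symmetric indefinite matrices, not the paper's actual update rules or step sizes):

    ```python
    import numpy as np

    def hard_threshold(S, s):
        """Projection onto s-sparse symmetric matrices: keep the s entries
        of largest magnitude, then re-symmetrize."""
        flat = np.argsort(np.abs(S), axis=None)[-s:]
        out = np.zeros_like(S)
        idx = np.unravel_index(flat, S.shape)
        out[idx] = S[idx]
        return (out + out.T) / 2

    def rank_projection(L, r):
        """Projection onto symmetric matrices of rank at most r, keeping the
        r eigenvalues of largest magnitude (L may be indefinite)."""
        w, V = np.linalg.eigh(L)
        keep = np.argsort(np.abs(w))[-r:]
        return (V[:, keep] * w[keep]) @ V[:, keep].T

    # Conceptual iteration on Delta ~ S + L (step size eta, gradients gS, gL
    # from the smooth loss): S <- hard_threshold(S - eta * gS, s)
    #                        L <- rank_projection(L - eta * gL, r)
    ```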

    Scalable Spectral Algorithms for Community Detection in Directed Networks

    Community detection has been one of the central problems in network studies, and directed networks are particularly challenging due to the asymmetry of their links. In this paper, we find that incorporating the direction of links reveals new perspectives on communities with respect to the two different roles, source and terminal, that a node plays in each community. Intriguingly, such communities appear to be connected with a unique spectral property of the graph Laplacian of the adjacency matrix, and we exploit this connection by using regularized SVD methods. We propose harvesting algorithms, coupled with regularized SVDs, that are linearly scalable for the efficient identification of communities in huge directed networks. The proposed algorithm shows strong performance and scalability on benchmark networks in simulations and successfully recovers communities in real network applications.
    Comment: Single column, 40 pages, 6 figures and 7 tables
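
    A minimal version of the spectral step, in the spirit of regularized-SVD co-clustering (the regularization constant and names here are my assumptions, not the paper's exact algorithm), clusters rows and columns of the adjacency matrix separately, giving each node a "source" community and a "terminal" community:

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import svds
    from sklearn.cluster import KMeans

    def directed_spectral_communities(A, k, tau=None):
        """A: sparse n x n adjacency with A[i, j] = 1 for an edge i -> j.
        Returns a source label and a terminal label for each node."""
        dout = np.asarray(A.sum(axis=1)).ravel()
        din = np.asarray(A.sum(axis=0)).ravel()
        if tau is None:
            tau = dout.mean()                # regularization helps on sparse graphs
        Dl = 1.0 / np.sqrt(dout + tau)
        Dr = 1.0 / np.sqrt(din + tau)
        L = csr_matrix(A.multiply(Dl[:, None]).multiply(Dr[None, :]))
        U, s, Vt = svds(L, k=k)              # k leading singular pairs
        src = KMeans(n_clusters=k, n_init=10).fit_predict(U)     # sending roles
        dst = KMeans(n_clusters=k, n_init=10).fit_predict(Vt.T)  # receiving roles
        return src, dst
    ```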

    A kind of conditional connectivity of transposition networks generated by $k$-trees

    For a graph $G=(V,E)$, a subset $F\subset V(G)$ is called an $R_k$-vertex-cut of $G$ if $G-F$ is disconnected and each vertex $u\in V(G)-F$ has at least $k$ neighbors in $G-F$. The $R_k$-vertex-connectivity of $G$, denoted by $\kappa^k(G)$, is the cardinality of a minimum $R_k$-vertex-cut of $G$, which is a refined measure of the fault tolerance of the network $G$. In this paper, we study $\kappa^2$ for Cayley graphs generated by $k$-trees. Let $Sym(n)$ be the symmetric group on $\{1,2,\cdots,n\}$ and let $\mathcal{T}$ be a set of transpositions of $Sym(n)$. Let $G(\mathcal{T})$ be the graph on the $n$ vertices $\{1,2,\ldots,n\}$ such that there is an edge $ij$ in $G(\mathcal{T})$ if and only if the transposition $ij\in\mathcal{T}$. The graph $G(\mathcal{T})$ is called the transposition generating graph of $\mathcal{T}$. We denote by $Cay(Sym(n),\mathcal{T})$ the Cayley graph generated by $G(\mathcal{T})$, and write it as $T_kG_n$ if $G(\mathcal{T})$ is a $k$-tree. We determine $\kappa^2(T_kG_n)$ in this work. Trees are $1$-trees, and the complete graph on $n$ vertices is an $(n-1)$-tree; thus, in this sense, this work generalizes the corresponding results on Cayley graphs generated by transposition generating trees and on complete-transposition graphs.
    Comment: 11 pages, 2 figures
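
    As a quick operational check of the definition (a sketch with illustrative names; `networkx` assumed, and $F$ must be a proper subset of the vertices), the following predicate tests whether a vertex set $F$ is an $R_k$-vertex-cut:

    ```python
    import networkx as nx

    def is_rk_vertex_cut(G, F, k):
        """True iff G - F is disconnected and every remaining vertex
        keeps at least k neighbors in G - F."""
        H = G.copy()
        H.remove_nodes_from(F)
        if nx.is_connected(H):
            return False
        return all(H.degree(u) >= k for u in H.nodes)

    # In the 6-cycle, removing two antipodal vertices leaves two paths with
    # no isolated vertex, so {0, 3} is an R_1-vertex-cut.
    print(is_rk_vertex_cut(nx.cycle_graph(6), {0, 3}, 1))  # True
    ```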

    Incorporating Prior Information with Fused Sparse Group Lasso: Application to Prediction of Clinical Measures from Neuroimages

    Predicting clinical variables from whole-brain neuroimages is a high-dimensional problem that requires some type of feature selection or extraction. Penalized regression is a popular embedded feature selection method for high-dimensional data. For neuroimaging applications, spatial regularization using the $\ell_1$ or $\ell_2$ norm of the image gradient has shown good performance, yielding smooth solutions in spatially contiguous brain regions. Recently, however, enormous resources have been devoted to establishing structural and functional brain connectivity networks that can be used to define spatially distributed yet related groups of voxels. We propose using the fused sparse group lasso penalty to encourage structured, sparse, interpretable solutions by incorporating prior information about spatial and group structure among voxels. We present optimization steps for fused sparse group lasso penalized regression using the alternating direction method of multipliers (ADMM) algorithm. With simulation studies and an application to real fMRI data from the Autism Brain Imaging Data Exchange, we demonstrate conditions under which the fusion and group penalty terms together outperform either of them alone. Supplementary materials for this article are available online.
    Comment: 36 pages, 6 figures; expanded author's footnote; revised simulation study results (Figures 2 to 4, Table 2, conclusions unchanged); revised ABIDE application results (Table 3, Figure 6, conclusions unchanged)
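
    For reference, a generic form of the fused sparse group lasso objective (the notation and the weighting scheme are illustrative; the paper's exact formulation may differ):

    ```latex
    % Fused sparse group lasso (notation illustrative):
    \[
      \min_{\beta}\; \tfrac{1}{2}\lVert y - X\beta \rVert_2^2
      \;+\; \lambda_1 \lVert \beta \rVert_1
      \;+\; \lambda_2 \sum_{g \in \mathcal{G}} \lVert \beta_g \rVert_2
      \;+\; \lambda_3 \lVert D\beta \rVert_1,
    \]
    % where the \ell_1 term gives voxel-level sparsity, the group terms act on
    % connectivity-defined voxel groups g, and D is a finite-difference
    % operator on the voxel grid, so \lVert D\beta \rVert_1 fuses spatially
    % adjacent coefficients. ADMM splits the nonsmooth terms into separate
    % proximal updates.
    ```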