The 4-Component Connectivity of Alternating Group Networks
The $\ell$-component connectivity (or $\ell$-connectivity for short) of a
graph $G$, denoted by $\kappa_\ell(G)$, is the minimum number of vertices whose
removal from $G$ results in a disconnected graph with at least $\ell$
components or a graph with fewer than $\ell$ vertices. This generalization is a
natural extension of the classical connectivity defined in terms of the minimum
vertex-cut. As an application, the $\ell$-connectivity can be used to assess
the vulnerability of the graph corresponding to the underlying topology of an
interconnection network, and thus it is an important issue for the reliability
and fault tolerance of the network. So far, results on $\ell$-connectivity are
known only for particular classes of graphs and small $\ell$'s. In a previous
work, we studied the $\ell$-connectivity of $n$-dimensional alternating group
networks $AN_n$ and obtained the result for $\ell=3$. In this sequel, we
continue the work and determine $\kappa_4(AN_n)$.
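The definition above can be checked by brute force on a tiny graph. The sketch below (plain Python; the 5-cycle stands in purely for illustration, not for the alternating group networks themselves) tries vertex sets of increasing size until deleting one leaves at least $\ell$ components or fewer than $\ell$ vertices:

```python
from itertools import combinations

def components(vertices, edges):
    """Count connected components of the subgraph induced on `vertices`."""
    vertices = set(vertices)
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u in vertices and v in vertices:
            adj[u].add(v)
            adj[v].add(u)
    seen, count = set(), 0
    for v in vertices:
        if v not in seen:
            count += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u] - seen)
    return count

def component_connectivity(n, edges, ell):
    """kappa_ell: fewest vertex deletions leaving >= ell components
    or fewer than ell vertices (brute force; tiny graphs only)."""
    all_v = set(range(n))
    for size in range(n + 1):
        for cut in combinations(all_v, size):
            rest = all_v - set(cut)
            if len(rest) < ell or components(rest, edges) >= ell:
                return size
    return None

cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # the 5-cycle C_5
```

For $C_5$ this gives the classical connectivity $\kappa_2 = 2$, while $\kappa_3 = 3$: removing any two vertices leaves at most two components, but removing three leaves fewer than three vertices.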
The Component Connectivity of Alternating Group Graphs and Split-Stars
For an integer $\ell \geq 2$, the $\ell$-component connectivity of a
graph $G$, denoted by $\kappa_\ell(G)$, is the minimum number of vertices
whose removal from $G$ results in a disconnected graph with at least $\ell$
components or a graph with fewer than $\ell$ vertices. This is a natural
generalization of the classical connectivity of graphs defined in terms of the
minimum vertex-cut and is a good measure of robustness for the graph
corresponding to a network. So far, the exact values of $\ell$-connectivity are
known only for a few classes of networks and small $\ell$'s. It has been
pointed out in [Component connectivity of the hypercubes, Int. J. Comput. Math.
89 (2012) 137--145] that determining $\ell$-connectivity is still unsolved for
most interconnection networks, such as alternating group graphs and star
graphs. In this paper, by exploring the combinatorial properties and
fault-tolerance of the alternating group graphs $AG_n$ and a variation of the
star graphs called split-stars $S_n^2$, we study their $\ell$-component
connectivities and obtain exact formulas for $\kappa_\ell(AG_n)$ and
$\kappa_\ell(S_n^2)$ for several small values of $\ell$.
Relationship between Conditional Diagnosability and 2-extra Connectivity of Symmetric Graphs
The conditional diagnosability and the 2-extra connectivity are two important
parameters for measuring the ability to diagnose faulty processors and the
fault tolerance of a multiprocessor system. The conditional diagnosability
$t_c(G)$ of a graph $G$ is the maximum number $t$ for which $G$ is conditionally
$t$-diagnosable under the comparison model, while the 2-extra connectivity
$\kappa_2(G)$ of $G$ is the minimum cardinality of a vertex-cut $F$ such that
every component of $G-F$ has at least 3 vertices. A quite natural problem is:
what is the relationship between this maximum and this minimum? This paper
partially answers the question by proving $t_c(G)=\kappa_2(G)$ for a regular
graph $G$ satisfying some acceptable conditions. As applications, the
conditional diagnosability and the 2-extra connectivity are determined for some
well-known classes of vertex-transitive graphs, including star graphs,
$(n,k)$-star graphs, alternating group networks, $(n,k)$-arrangement graphs,
alternating group graphs, Cayley graphs obtained from transposition generating
trees, bubble-sort graphs, $k$-ary $n$-cube networks, and dual-cubes.
Furthermore, many known results about these networks follow directly.
A Hierarchical Graphical Model for Big Inverse Covariance Estimation with an Application to fMRI
Brain networks have attracted the interest of many neuroscientists. Statistical
tools have been developed to recover brain networks from functional MRI (fMRI)
data. However, the dimensionality of whole-brain fMRI, usually in the hundreds
of thousands, challenges the applicability of these methods. We develop a
hierarchical graphical model (HGM) to remedy this difficulty. This model
introduces a hidden layer of networks based on sparse Gaussian graphical
models, and the observed data are sampled from individual network nodes. In
fMRI, the network layer models the underlying signals of different brain
functional units and how these units directly interact with each other. The
introduction of this hierarchical structure not only provides a formal and
interpretable approach, but also enables efficient computation for inferring
big networks with hundreds of thousands of nodes. Based on the conditional
convexity of our formulation, we develop an alternating update algorithm to
compute the HGM model parameters simultaneously. The effectiveness of this
approach is demonstrated on simulated data and a real dataset from a stop/go
fMRI experiment.
Comment: An R package of the proposed method will be publicly available on
CRAN. This paper was presented orally at Yale University on February 18,
2014, and at the Eastern North American Region Meeting of the International
Biometric Society on March 18, 201
Network Response Regression for Modeling Population of Networks with Covariates
Multiple-subject network data have emerged rapidly in recent years, where a
separate network is measured over a common set of nodes for each individual
subject, along with subject covariate information. Most existing
network analysis methods have primarily focused on modeling a single network,
and are not directly applicable to modeling multiple network samples with
network-level covariates. In this article, we propose a new network response
regression model, where observed networks are treated as matrix-valued
responses and subject covariates as predictors. The new model characterizes the
population-level connectivity pattern through a low-rank intercept matrix, and
the parsimonious effects of subject covariates on the network through a sparse
slope tensor. We formulate the parameter estimation as a non-convex
optimization problem, and develop an efficient alternating gradient descent
algorithm. We establish a non-asymptotic error bound for the estimator actually
produced by our algorithm. Building on this error bound, we derive strong
consistency for network community recovery, as well as edge selection
consistency. We demonstrate the efficacy of our method through intensive
simulations and two brain connectivity studies.
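A minimal numpy sketch of this kind of model, under simplified assumptions (a symmetric low-rank intercept $UU^\top$, one scalar covariate per subject, and a soft-thresholding step standing in for the sparsity penalty; none of this reproduces the paper's exact estimator or tensor formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, r = 10, 50, 2                       # nodes, subjects, intercept rank

# Simulate: subject network = low-rank intercept + x_i * sparse slope + noise
U_true = rng.normal(size=(d, r))
Theta = U_true @ U_true.T                 # population connectivity (low rank)
B_true = np.zeros((d, d))
B_true[:3, :3] = 1.0                      # covariate affects a small clique
x = rng.normal(size=N)
Y = np.stack([Theta + xi * B_true + 0.1 * rng.normal(size=(d, d)) for xi in x])

def loss(U, B):
    fits = U @ U.T + x[:, None, None] * B
    return 0.5 * np.mean(np.sum((Y - fits) ** 2, axis=(1, 2)))

# Alternating gradient descent: a step on the intercept factor, a step on
# the slope, then a soft-threshold to keep the slope sparse
U = 0.1 * rng.normal(size=(d, r))
B = np.zeros((d, d))
baseline = loss(U, B)                     # loss at the (random) initializer
step, lam = 0.01, 0.01
for _ in range(300):
    R = U @ U.T + x[:, None, None] * B - Y            # residuals, (N, d, d)
    U -= step * np.mean(R + R.transpose(0, 2, 1), axis=0) @ U
    B -= step * np.mean(x[:, None, None] * R, axis=0)
    B = np.sign(B) * np.maximum(np.abs(B) - step * lam, 0.0)
```

The loss is nonconvex in $U$, but, as in the abstract, alternating gradient steps from a rough initializer steadily reduce it on this toy problem.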
Decoding the Encoding of Functional Brain Networks: an fMRI Classification Comparison of Non-negative Matrix Factorization (NMF), Independent Component Analysis (ICA), and Sparse Coding Algorithms
Brain networks in fMRI are typically identified using spatial independent
component analysis (ICA), yet mathematical constraints such as sparse coding
and positivity both provide alternate biologically-plausible frameworks for
generating brain networks. Non-negative Matrix Factorization (NMF) would
suppress negative BOLD signal by enforcing positivity. Spatial sparse coding
algorithms ($L_1$-regularized learning and K-SVD) would impose local
specialization and a discouragement of multitasking, where the total observed
activity in a single voxel originates from a restricted number of possible
brain networks.
The assumptions of independence, positivity, and sparsity to encode
task-related brain networks are compared; the resulting brain networks for
different constraints are used as basis functions to encode the observed
functional activity at a given time point. These encodings are decoded using
machine learning to compare both the algorithms and their assumptions, using
the time series weights to predict whether a subject is viewing a video,
listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects.
For classifying cognitive activity, the sparse coding algorithm of
$L_1$-regularized learning consistently outperformed 4 variations of ICA across
different numbers of networks and noise levels (p < 0.001). The NMF algorithms,
which suppressed negative BOLD signal, had the poorest accuracy. Within each
algorithm, encodings using sparser spatial networks (containing more
zero-valued voxels) had higher classification accuracy (p < 0.001). The success
of sparse coding algorithms may suggest that algorithms which enforce sparse
coding, discourage multitasking, and promote local specialization may capture
the underlying source processes better than those which allow inexhaustible
local processes, such as ICA.
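The positivity constraint that distinguishes NMF here can be illustrated with the classical Lee-Seung multiplicative updates on toy non-negative data (a sketch, not the paper's pipeline; the row factors play the role of the time-series weights that would feed a classifier):

```python
import numpy as np

rng = np.random.default_rng(1)
T, V, K = 40, 60, 4                            # time points, voxels, networks

X = np.abs(rng.normal(size=(T, V)))            # toy non-negative "fMRI" data

# Lee-Seung multiplicative updates for X ~ W @ H with W, H >= 0:
# positivity is preserved because each update multiplies by a ratio
# of non-negative quantities
W = np.abs(rng.normal(size=(T, K))) + 1e-3     # time-series weights
H = np.abs(rng.normal(size=(K, V))) + 1e-3     # spatial networks
eps = 1e-12
err0 = np.linalg.norm(X - W @ H)               # reconstruction error at init
for _ in range(100):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)
```

These updates are guaranteed not to increase the Frobenius reconstruction error, so the factorization improves monotonically while every entry of W and H stays non-negative.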
Estimating Differential Latent Variable Graphical Models with Applications to Brain Connectivity
Differential graphical models are designed to represent the difference
between the conditional dependence structures of two groups, thus are of
particular interest for scientific investigation. Motivated by modern
applications, this manuscript considers an extended setting where each group is
generated by a latent variable Gaussian graphical model. Due to the existence
of latent factors, the differential network is decomposed into sparse and
low-rank components, both of which are symmetric indefinite matrices. We
estimate these two components simultaneously using a two-stage procedure: (i)
an initialization stage, which computes a simple, consistent estimator, and
(ii) a convergence stage, implemented using a projected alternating gradient
descent algorithm applied to a nonconvex objective, initialized using the
output of the first stage. We prove that given the initialization, the
estimator converges linearly with a nontrivial, minimax optimal statistical
error. Experiments on synthetic and real data illustrate that the proposed
nonconvex procedure outperforms existing methods.
Comment: 60 pages
Scalable Spectral Algorithms for Community Detection in Directed Networks
Community detection has been one of the central problems in network studies,
and directed networks are particularly challenging due to the asymmetry of
their links. In this paper, we find that incorporating the direction of links
reveals new perspectives on communities with respect to two different roles,
source and terminal, that a node plays in each community. Intriguingly, such
communities appear to be connected with unique spectral properties of the graph
Laplacian of the adjacency matrix, and we exploit this connection by using
regularized SVD methods. We propose harvesting algorithms, coupled with
regularized SVDs, that are linearly scalable for efficient identification of
communities in huge directed networks. The proposed algorithm shows great
performance and scalability on benchmark networks in simulations and
successfully recovers communities in real network applications.
Comment: Single column, 40 pages, 6 figures and 7 tables
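The source/terminal idea can be illustrated with a plain (unregularized) SVD on a toy two-block directed network: the left singular vectors embed nodes by their outgoing links, the right singular vectors by their incoming links. Everything below is an illustrative sketch, not the paper's harvesting algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30                                     # nodes per community
p_in, p_out = 0.9, 0.05

# Directed two-block network: dense within blocks, sparse between them
P = np.block([[np.full((n, n), p_in),  np.full((n, n), p_out)],
              [np.full((n, n), p_out), np.full((n, n), p_in)]])
A = (rng.random((2 * n, 2 * n)) < P).astype(float)
np.fill_diagonal(A, 0)

# SVD of the adjacency matrix: left singular vectors embed nodes by their
# outgoing links (source role), right singular vectors by incoming links
U, s, Vt = np.linalg.svd(A)
source_label = (U[:, 1] > 0).astype(int)   # sign split on the 2nd left vector
terminal_label = (Vt[1] > 0).astype(int)   # sign split on the 2nd right vector
```

With this strong within-block density, the sign of the second singular vector pair recovers both role assignments up to a global label flip.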
A kind of conditional connectivity of transposition networks generated by 2-trees
For a graph $G=(V,E)$, a subset $F\subseteq V$ is called an
$R^k$-vertex-cut of $G$ if $G-F$ is disconnected and each vertex
$u\in V\setminus F$ has at least $k$ neighbors in $G-F$. The
$R^k$-vertex-connectivity of $G$, denoted by $\kappa^k(G)$, is the cardinality
of a minimum $R^k$-vertex-cut of $G$, which is a refined measure for the fault
tolerance of the network $G$. In this paper, we study $\kappa^2$ for Cayley
graphs generated by 2-trees. Let $Sym(n)$ be the symmetric group on
$\{1,2,\ldots,n\}$ and $\mathcal{T}$ be a set of transpositions of $Sym(n)$.
Let $G(\mathcal{T})$ be the graph on vertex set $\{1,2,\ldots,n\}$ such that
there is an edge $ij$ in $G(\mathcal{T})$ if and only if the transposition
$(ij)\in\mathcal{T}$. The graph $G(\mathcal{T})$ is called the transposition
generating graph of $\mathcal{T}$. We denote by $Cay(Sym(n),\mathcal{T})$ the
Cayley graph generated by $G(\mathcal{T})$, and write it as $\Gamma_n$ if
$G(\mathcal{T})$ is a 2-tree. We determine $\kappa^2(\Gamma_n)$ in this work.
Trees are exactly the 1-trees, and the complete graph on $n$ vertices is an
$(n-1)$-tree; thus, in this sense, this work is a generalization of such
results on Cayley graphs generated by transposition generating trees and on the
complete-transposition graphs.
Comment: 11 pages, 2 figures
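The construction of a Cayley graph from a transposition generating graph can be sketched directly. The example below uses the star $K_{1,3}$ as generating graph, which yields the well-known star graph on $Sym(4)$; the function names are mine, not from the paper:

```python
from itertools import permutations

def transposition(n, i, j):
    """The transposition (i j) as a tuple permutation of {0, ..., n-1}."""
    p = list(range(n))
    p[i], p[j] = p[j], p[i]
    return tuple(p)

def compose(p, q):
    """Composition (p o q)(x) = p(q(x))."""
    return tuple(p[q[x]] for x in range(len(p)))

def cayley_graph(n, gen_edges):
    """Cayley graph of Sym(n) whose generators are the transpositions
    (i j) for each edge ij of the transposition generating graph."""
    gens = [transposition(n, i, j) for i, j in gen_edges]
    verts = list(permutations(range(n)))
    # Transpositions are involutions, so {v, v o g} gives undirected edges
    edges = {frozenset((v, compose(v, g))) for v in verts for g in gens}
    return verts, edges

# Star K_{1,3} as generating graph: the star graph on Sym(4),
# with 4! = 24 vertices, each of degree 3
verts, edges = cayley_graph(4, [(0, 1), (0, 2), (0, 3)])
```

Swapping in the edge set of a 2-tree on $\{0,\ldots,n-1\}$ produces the graphs $\Gamma_n$ the abstract studies, at the cost of $n!$ vertices.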
Incorporating Prior Information with Fused Sparse Group Lasso: Application to Prediction of Clinical Measures from Neuroimages
Predicting clinical variables from whole-brain neuroimages is a high
dimensional problem that requires some type of feature selection or extraction.
Penalized regression is a popular embedded feature selection method for high
dimensional data. For neuroimaging applications, spatial regularization using
the $\ell_1$ or $\ell_2$ norm of the image gradient has shown good performance,
yielding smooth solutions in spatially contiguous brain regions. However,
recently enormous resources have been devoted to establishing structural and
functional brain connectivity networks that can be used to define spatially
distributed yet related groups of voxels. We propose using the fused sparse
group lasso penalty to encourage structured, sparse, interpretable solutions by
incorporating prior information about spatial and group structure among voxels.
We present optimization steps for fused sparse group lasso penalized regression
using the alternating direction method of multipliers algorithm. With
simulation studies and in application to real fMRI data from the Autism Brain
Imaging Data Exchange, we demonstrate conditions under which fusion and group
penalty terms together outperform either of them alone. Supplementary materials
for this article are available online.
Comment: 36 pages, 6 figures; expanded author's footnote; revised simulation
study results (Figures 2 to 4, Table 2, conclusions unchanged); revised ABIDE
application results (Table 3, Figure 6, conclusions unchanged)
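A sketch of the composite penalty itself and the soft-thresholding prox that typically appears inside ADMM subproblems (the group structure, fusion matrix $D$, and parameter names below are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def fsgl_penalty(beta, groups, D, lam1, lam2, lam3):
    """Fused sparse group lasso penalty (sketch): elementwise sparsity,
    group sparsity over voxel groups, and fusion across linked voxels."""
    sparse = lam1 * np.sum(np.abs(beta))
    group = lam2 * sum(np.linalg.norm(beta[g]) for g in groups)
    fused = lam3 * np.sum(np.abs(D @ beta))
    return sparse + group + fused

def soft_threshold(z, t):
    """Prox of t * ||.||_1 -- the elementwise update an ADMM step would use."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# 4 voxels, two groups, fusion between neighbor pairs (0,1) and (2,3)
beta = np.array([1.0, 1.0, 0.0, 2.0])
groups = [np.array([0, 1]), np.array([2, 3])]
D = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, -1.0]])
```

The fusion term charges nothing for the equal pair (0, 1) but penalizes the unequal pair (2, 3), which is how the penalty encourages spatially or network-wise contiguous solutions.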