Hierarchy of Gene Expression Data is Predictive of Future Breast Cancer Outcome
We calculate measures of hierarchy in gene and tissue networks of breast
cancer patients. We find that the likelihood of metastasis in the future is
correlated with increased values of network hierarchy for expression networks
of cancer-associated genes, due to correlated expression of cancer-specific
pathways. Conversely, future metastasis and quick relapse times are negatively
correlated with values of network hierarchy in the expression network of all
genes, due to dedifferentiation of gene pathways and circuits. These results
suggest that hierarchy of gene expression may be useful as an additional
biomarker for breast cancer prognosis.
Comment: 14 pages, 5 figures
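The abstract does not say which hierarchy measure the authors compute. Purely as an illustration, one widely used measure of network hierarchy, global reaching centrality, can be evaluated with `networkx`; the gene names and regulatory edges below are hypothetical stand-ins, not data from the paper:

```python
import networkx as nx

# Hypothetical directed regulatory network among five genes.
G = nx.DiGraph()
G.add_edges_from([
    ("TP53", "MDM2"), ("TP53", "CDKN1A"),
    ("MYC", "CDKN1A"), ("TP53", "BAX"),
])

# Global reaching centrality (Mones et al.): near 0 for a flat,
# cycle-like network, approaching 1 for a star/tree-like hierarchy.
grc = nx.global_reaching_centrality(G)
print(round(grc, 4))
```

A hierarchy score like this, computed per patient on an expression-derived network, is the kind of scalar that could serve as the additional prognostic biomarker the abstract proposes.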
Network hierarchy evolution and system vulnerability in power grids
(c) 2016 IEEE. This paper examines the seldom-addressed network hierarchy
property and its relationship to vulnerability analysis for power transmission
grids from a complex-systems point of view. We analyze and compare the
evolution of network hierarchy for the dynamic vulnerability evaluation of
four real power transmission grids. Several meaningful results suggest that
the vulnerability of power grids can be assessed by means of a network
hierarchy evolution analysis. First, network hierarchy evolution may be used
as a novel measure of the robustness of power grids. Second, an antipyramidal
structure appears in the most robust network when cascading failures are
quantified by the proposed hierarchy metric. Furthermore, the analysis results
are validated against empirical reliability data. We show that the proposed
hierarchy evolution analysis methodology can be used to assess the
vulnerability of power grids, or of other networks, from a complex-systems
point of view.
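The paper's hierarchy metric is not reproduced in the abstract. As a generic illustration of the kind of vulnerability analysis it describes, the sketch below removes nodes from a toy grid in highest-degree order and tracks the largest surviving connected component; the topology is an invented stand-in for a transmission network:

```python
import networkx as nx

# Toy "grid": a ring of substations with one radial branch.
G = nx.Graph()
G.add_edges_from([
    (0, 1), (1, 2), (2, 3), (3, 0),   # ring
    (1, 4), (4, 5), (5, 6),           # radial branch
])

sizes = []
H = G.copy()
while H.number_of_nodes() > 0:
    # Targeted attack: remove the current highest-degree node,
    # then record the size of the largest connected component.
    hub = max(H.degree, key=lambda kv: kv[1])[0]
    H.remove_node(hub)
    comps = list(nx.connected_components(H))
    sizes.append(max((len(c) for c in comps), default=0))

print(sizes)
```

How quickly this size curve collapses is a simple proxy for vulnerability; the paper's contribution is to track a hierarchy metric, rather than component size, along such a failure sequence.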
A hierarchy of topological tensor network states
We present a hierarchy of quantum many-body states among which many examples
of topological order can be identified by construction. We define these states
in terms of a general, basis-independent framework of tensor networks based on
the algebraic setting of finite-dimensional Hopf C*-algebras. At the top of the
hierarchy we identify ground states of new topological lattice models extending
Kitaev's quantum double models [26]. For these states we exhibit the mechanism
responsible for their non-zero topological entanglement entropy by constructing
a renormalization group flow. Furthermore it is shown that those states of the
hierarchy associated with Kitaev's original quantum double models are related
to each other by the condensation of topological charges. We conjecture that
charge condensation is the physical mechanism underlying the hierarchy in
general.
Comment: 61 pages
CAROLINE LEVINE. Forms: Whole, Rhythm, Hierarchy, Network
Review of English Studies 66 (2015), 1001-
Hierarchy and Polysynchrony in an adaptive network
We describe a simple adaptive network of coupled chaotic maps. The network
reaches a stationary state (frozen topology) for all values of the coupling
parameter, although the dynamics of the maps at the nodes of the network can be
non-trivial. The structure of the network shows interesting hierarchical
properties and in certain parameter regions the dynamics is polysynchronous:
nodes can be divided into differently synchronized classes but, in contrast to
cluster synchronization, nodes in the same class need not be connected to each
other. These complicated synchrony patterns have been conjectured to play roles
in systems biology and circuits. The adaptive system we study describes ways
whereby this behaviour can evolve from undifferentiated nodes.
Comment: 13 pages, 17 figures
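As background only, a minimal coupled-map network of the kind the abstract starts from can be simulated as follows. This sketch is non-adaptive (fixed ring topology) and the map, coupling scheme, and parameters are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(x, r=4.0):
    # Fully chaotic logistic map at r = 4; maps [0, 1] into [0, 1].
    return r * x * (1.0 - x)

# Ring of N maps; eps blends each node with the mean of its two neighbours.
N, eps, steps = 10, 0.3, 200
x = rng.random(N)
for _ in range(steps):
    fx = logistic(x)
    neighbour_mean = 0.5 * (np.roll(fx, 1) + np.roll(fx, -1))
    x = (1.0 - eps) * fx + eps * neighbour_mean

print(x.round(3))
```

An adaptive version of the kind studied in the paper would additionally rewire edges in response to the nodes' dynamics; here the convex combination simply keeps all states inside [0, 1].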
From neural PCA to deep unsupervised learning
A network supporting deep unsupervised learning is presented. The network is
an autoencoder with lateral shortcut connections from the encoder to decoder at
each level of the hierarchy. The lateral shortcut connections allow the higher
levels of the hierarchy to focus on abstract invariant features. While standard
autoencoders are analogous to latent variable models with a single layer of
stochastic variables, the proposed network is analogous to hierarchical latent
variable models. Learning combines the denoising autoencoder and denoising
source separation frameworks. Each layer of the network contributes to the cost
function a term which measures the distance of the representations produced by
the encoder and the decoder. Since training signals originate from all levels
of the network, all layers can learn efficiently even in deep networks. The
speedup offered by cost terms from higher levels of the hierarchy and the
ability to learn invariant features are demonstrated in experiments.Comment: A revised version of an article that has been accepted for
publication in Advances in Independent Component Analysis and Learning
Machines (2015), edited by Ella Bingham, Samuel Kaski, Jorma Laaksonen and
Jouko Lampinen
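The per-layer cost terms described above can be sketched numerically. The toy below is a two-level linear encoder/decoder with lateral shortcuts, with random weights and an arbitrary 0.5 mixing factor; it illustrates the shape of the cost function only, not the paper's actual network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-level linear encoder (random weights as stand-ins).
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(4, 8))
V2, V1 = rng.normal(size=(8, 4)), rng.normal(size=(16, 8))

x = rng.normal(size=16)
h1 = W1 @ x          # encoder level 1
h2 = W2 @ h1         # encoder level 2 (top of the hierarchy)

# Decoder with lateral shortcuts: each reconstruction mixes the
# top-down signal with the same-level encoder activation.
d2 = h2
d1 = V2 @ d2 + 0.5 * h1   # lateral shortcut from encoder level 1
d0 = V1 @ d1 + 0.5 * x    # lateral shortcut from the input

# Cost = sum over levels of the squared distance between encoder-side
# and decoder-side representations, as the abstract describes.
cost = np.sum((h1 - d1) ** 2) + np.sum((x - d0) ** 2)
print(float(cost))
```

Because every level contributes its own term, gradients reach each layer directly rather than only through the top of the hierarchy, which is the training-signal argument made in the abstract.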
Deep Virtual Networks for Memory Efficient Inference of Multiple Tasks
Deep networks consume a large amount of memory by their nature. A natural
question arises: can we reduce that memory requirement while maintaining
performance? In particular, in this work we address the problem of
memory-efficient learning for multiple tasks. To this end, we propose a novel
network
architecture producing multiple networks of different configurations, termed
deep virtual networks (DVNs), for different tasks. Each DVN is specialized for
a single task and structured hierarchically. The hierarchical structure, which
contains multiple levels of hierarchy corresponding to different numbers of
parameters, enables multiple inference for different memory budgets. The
building block of a deep virtual network is based on a disjoint collection of
parameters of a network, which we call a unit. The lowest level of hierarchy in
a deep virtual network is a unit, and higher levels of hierarchy contain lower
levels' units and other additional units. Given a budget on the number of
parameters, a different level of a deep virtual network can be chosen to
perform the task. A unit can be shared by different DVNs, allowing multiple
DVNs in a single network. In addition, shared units provide assistance to the
target task with additional knowledge learned from other tasks. This
cooperative configuration of DVNs makes it possible to handle different tasks
in a memory-aware manner. Our experiments show that the proposed method
outperforms existing approaches for multiple tasks. Notably, ours is more
efficient than others as it allows memory-aware inference for all tasks.
Comment: CVPR 2019
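The nesting of units across hierarchy levels described above can be illustrated with a toy parameter store. The unit sizes, the level compositions, and the helper names below are invented for illustration, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(42)

# A "unit" is a disjoint block of parameters; units can be shared by DVNs.
units = {name: rng.normal(size=1000) for name in ["u0", "u1", "u2", "u3"]}

# Each hierarchy level uses the lower levels' units plus additional ones,
# so deeper levels cost more memory.
dvn_task_a = [["u0"], ["u0", "u1"], ["u0", "u1", "u2"]]
dvn_task_b = [["u0"], ["u0", "u3"]]  # "u0" is shared with task A

def params_at_level(dvn, level):
    # Concatenate the parameter blocks of all units active at this level.
    return np.concatenate([units[u] for u in dvn[level]])

def level_for_budget(dvn, budget):
    # Pick the deepest level whose parameter count fits the budget.
    fitting = [i for i in range(len(dvn))
               if params_at_level(dvn, i).size <= budget]
    return max(fitting) if fitting else None

print(level_for_budget(dvn_task_a, 2500))  # levels cost 1000/2000/3000 params
```

Choosing the level at inference time is what makes the scheme memory-aware: the same stored units serve every budget, and shared units like `u0` serve several tasks at once.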
