351 research outputs found
Power of individuals -- Controlling centrality of temporal networks
Temporal networks are networks in which nodes and interactions may appear
and disappear at various time scales. Given the ubiquity of temporal networks
in the economy, nature, and society, it is urgent and significant to
focus on the structural controllability of temporal networks, which is
still a largely untouched topic. We develop graphical tools to study the structural
controllability of temporal networks, identifying the intrinsic mechanism
by which individuals can control a dynamic, large-scale temporal
network. Classifying the temporal trees of a temporal network into different types,
we give analytical upper and lower bounds on the controlling centrality,
which are verified by numerical simulations on both artificial and empirical
temporal networks. We find that the scale-free distribution of nodes'
controlling centrality is virtually independent of the time scale and the type of
dataset, indicating the inherent heterogeneity and robustness of the controlling
centrality of temporal networks.
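As a rough, hypothetical illustration of the kind of quantity involved (not the paper's controlling-centrality computation), the Python sketch below computes the set of nodes a single individual can reach along time-respecting paths in a contact sequence; the contact data and function name are made up for the example.

```python
# A minimal sketch (not the paper's algorithm): time-respecting
# reachability from one node, a rough proxy for how much of a temporal
# network a single individual can influence. Contacts are (u, v, t)
# triples; names and data are illustrative.
def reachable_set(contacts, source):
    """Return nodes reachable from `source` via time-respecting paths."""
    # Process contacts in time order so each hop uses a contact no
    # earlier than the one that brought us to its endpoint.
    contacts = sorted(contacts, key=lambda c: c[2])
    arrival = {source: float("-inf")}  # earliest arrival time per node
    for u, v, t in contacts:
        for a, b in ((u, v), (v, u)):  # treat contacts as undirected
            if a in arrival and arrival[a] <= t and t < arrival.get(b, float("inf")):
                arrival[b] = t
    return set(arrival)

contacts = [(0, 1, 1), (1, 2, 2), (2, 3, 3), (3, 0, 4)]
print(reachable_set(contacts, 0))  # {0, 1, 2, 3}
```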
A Capsule-unified Framework of Deep Neural Networks for Graphical Programming
Recently, the growth of deep learning has produced a large number of deep
neural networks. How to describe these networks in a unified way is becoming an
important issue. We first formalize neural networks in a mathematical
definition, give their directed graph representations, and prove a generation
theorem about the induced networks of connected directed acyclic graphs. Then,
using the concept of capsules to extend neural networks, we set up a
capsule-unified framework for deep learning, including a mathematical
definition of capsules, an induced model for capsule networks and a universal
backpropagation algorithm for training them. Finally, we discuss potential
applications of the framework to graphical programming with standard graphical
symbols for capsules, neurons, and connections.
Comment: 20 pages, 26 figures. arXiv admin note: text overlap with arXiv:1805.0355
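A minimal sketch of the directed-graph view of a network, not the paper's formalism: layers as nodes of a directed graph, with a Kahn-style topological sort that doubles as the acyclicity check on which induced networks of connected directed acyclic graphs depend. The layer names are illustrative.

```python
# Sketch: a network described as a directed graph of named layers.
# Topological sorting both orders the layers for a forward pass and
# verifies the graph is acyclic (raises otherwise).
from collections import deque

edges = {              # layer -> layers it feeds (illustrative names)
    "input":   ["conv1"],
    "conv1":   ["conv2", "capsule"],
    "conv2":   ["capsule"],
    "capsule": ["output"],
    "output":  [],
}

def topological_order(graph):
    indegree = {n: 0 for n in graph}
    for outs in graph.values():
        for m in outs:
            indegree[m] += 1
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in graph[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    if len(order) != len(graph):
        raise ValueError("graph has a cycle; not a valid network DAG")
    return order

print(topological_order(edges))
```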
A generalized super integrable hierarchy of Dirac type
In this letter, a new generalized matrix spectral problem of Dirac type
associated with a super Lie algebra is proposed, and its
corresponding super integrable hierarchy is constructed.
Comment: 7 pages
Theory of Cognitive Relativity: A Promising Paradigm for True AI
The rise of deep learning has brought artificial intelligence (AI) to the
forefront. The ultimate goal of AI is to realize machines with human-like mind and
consciousness, but existing achievements mainly simulate intelligent behavior
on computer platforms. These achievements all belong to weak AI rather than
strong AI. How to achieve strong AI is not yet known in the field of
intelligence science. Currently, this field is calling for a new paradigm,
such as the Theory of Cognitive Relativity (TCR). The TCR aims to summarize a
simple and elegant set of first principles about the nature of intelligence,
including at least the Principle of World's Relativity and the Principle of
Symbol's Relativity. The Principle of World's Relativity states that the
subjective world an intelligent agent can observe is strongly constrained by
the way it perceives the objective world. The Principle of Symbol's Relativity
states that an intelligent agent can use any physical symbol system to express
what it observes in its subjective world. The two principles are derived from
scientific facts and life experience. Thought experiments show that they are
important for understanding high-level intelligence and necessary for establishing a
scientific theory of mind and consciousness. Rather than brain-like
intelligence, the TCR advocates a promising change of direction toward
realizing true AI, i.e. artificial general intelligence or artificial
consciousness, possibly quite different from that of humans and animals. Furthermore, a
TCR creed is presented and extended to reveal the secrets of
consciousness and to guide the realization of conscious machines. In the sense that
true AI could be implemented in diverse, brain-unlike ways, the TCR, in
combination with some additional first principles, would probably drive an
intelligence revolution.
Comment: 38 pages (double spaced), 8 figures
Well-Conditioned Fractional Collocation Methods Using Fractional Birkhoff Interpolation Basis
The purpose of this paper is twofold. Firstly, we provide explicit and
compact formulas for computing both Caputo and (modified) Riemann-Liouville
(RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order
at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case,
it suffices to compute the F-PSDM of order $\mu \in (0,1)$ to compute that of any
order $k + \mu$ with integer $k \ge 0$, while in the modified RL case, it is only
necessary to evaluate a fractional integral matrix of order $\mu \in (0,1)$.
Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems
leading to new interpolation polynomial basis functions with remarkable
properties: (i) the matrix generated from the new basis yields the exact
inverse of F-PSDM at "interior" JGL points; (ii) the matrix of the highest
fractional derivative in a collocation scheme under the new basis is diagonal;
and (iii) the resulting linear system is well-conditioned in the Caputo case,
while in the modified RL case, the eigenvalues of the coefficient matrix are
highly concentrated. In both cases, the linear systems of the collocation
schemes using the new basis can be solved by an iterative solver within a few
iterations. Notably, the inverse can be computed in a very stable manner, so
this offers optimal preconditioners for usual fractional collocation methods
for fractional differential equations (FDEs). It is also noteworthy that the
choice of certain special JGL points with parameters related to the order of
the equations can ease the implementation. We highlight that the use of
Bateman's fractional integral formulas and of fast transforms between Jacobi
polynomials with different parameters is essential for our algorithm
development.
Comment: 30 pages, 10 figures and 1 table
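For orientation, these are the standard definitions being discretized: the Caputo derivative differentiates inside the fractional integral, while the Riemann-Liouville derivative differentiates the fractional integral itself. The forms below are the textbook ones for order $\mu \in (0,1)$ on the reference interval, not expressions taken from the paper.

```latex
% Standard Caputo and Riemann-Liouville fractional derivatives of
% order \mu in (0,1); textbook definitions, not the paper's notation.
\[
  {}^{C}\!D^{\mu} u(x) = \frac{1}{\Gamma(1-\mu)}
      \int_{-1}^{x} \frac{u'(t)}{(x-t)^{\mu}}\, dt,
  \qquad
  {}^{RL}\!D^{\mu} u(x) = \frac{1}{\Gamma(1-\mu)} \frac{d}{dx}
      \int_{-1}^{x} \frac{u(t)}{(x-t)^{\mu}}\, dt .
\]
```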
A concatenating framework of shortcut convolutional neural networks
It is well accepted that convolutional neural networks play an important role
in learning excellent features for image classification and recognition.
However, traditionally they only allow connections between adjacent layers, which
limits the integration of multi-scale information. To further improve their performance,
we present a concatenating framework of shortcut convolutional neural networks.
This framework concatenates multi-scale features via shortcut connections to
the fully-connected layer, which feeds directly into the output layer. We conduct a
large number of experiments to investigate the performance of shortcut
convolutional neural networks on many benchmark visual datasets for different
tasks. The datasets include AR, FERET, FaceScrub, CelebA for gender
classification, CUReT for texture classification, MNIST for digit recognition,
and CIFAR-10 for object recognition. Experimental results show that the
shortcut convolutional neural networks can achieve better results than the
traditional ones on these tasks, with greater stability across different settings of
pooling schemes, activation functions, optimizers, initializations, kernel
numbers, and kernel sizes.
Comment: 17 pages, 5 figures, 15 tables
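A minimal PyTorch sketch of the concatenating idea, under assumptions about the architecture (the paper's exact configuration may differ): each convolutional stage's output is pooled to a common size, flattened, and concatenated into one multi-scale feature vector that feeds the fully-connected layer before the output layer. Channel counts and sizes are illustrative.

```python
import torch
import torch.nn as nn

class ShortcutCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                                    nn.ReLU(), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1),
                                    nn.ReLU(), nn.MaxPool2d(2))
        # Shortcut connections: pool every stage to a fixed 4x4 map so
        # multi-scale features can be concatenated together.
        self.squash = nn.AdaptiveAvgPool2d(4)
        self.fc = nn.Linear((16 + 32) * 4 * 4, num_classes)

    def forward(self, x):
        f1 = self.stage1(x)   # early, fine-scale features
        f2 = self.stage2(f1)  # later, coarse-scale features
        multi = torch.cat([self.squash(f1).flatten(1),
                           self.squash(f2).flatten(1)], dim=1)
        return self.fc(multi)

logits = ShortcutCNN()(torch.randn(2, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 10])
```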
A Unified Framework of Deep Neural Networks by Capsules
With the growth of deep learning, how to describe deep neural networks
in a unified way is becoming an important issue. We first formalize neural networks
mathematically with their directed graph representations, and prove a
generation theorem about the induced networks of connected directed acyclic
graphs. Then, we set up a unified framework for deep learning with capsule
networks. This capsule framework could simplify the description of existing
deep neural networks, provide a theoretical basis for graphical design and
programming techniques for deep learning models, and thus be of great
significance to the advancement of deep learning.
Comment: 9 pages, 12 figures
OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks
Channel pruning can significantly accelerate and compress deep neural
networks. Many channel pruning works utilize structured sparsity regularization
to zero out all the weights in some channels and automatically obtain a
structure-sparse network during training. However, these methods apply
structured sparsity regularization to each layer separately, omitting the
correlations between consecutive layers. In this paper, we first
combine an out-channel in the current layer and the corresponding in-channel in
the next layer into a regularization group, namely the out-in-channel. Our proposed
Out-In-Channel Sparsity Regularization (OICSR) considers correlations between
successive layers to further retain predictive power of the compact network.
Training with OICSR thoroughly transfers discriminative features into a
fraction of out-in-channels. Correspondingly, OICSR measures channel importance
based on statistics computed from two consecutive layers rather than an individual layer.
Finally, a global greedy pruning algorithm is designed to remove redundant
out-in-channels in an iterative way. Our method is comprehensively evaluated
with various CNN architectures including CifarNet, AlexNet, ResNet, DenseNet
and PreActSeNet on CIFAR-10, CIFAR-100 and ImageNet-1K datasets. Notably, on
ImageNet-1K, we reduce 37.2% FLOPs on ResNet-50 while outperforming the
original model by 0.22% top-1 accuracy.
Comment: Accepted to CVPR 2019; the pruned ResNet-50 model has been released at: https://github.com/dsfour/OICS
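A minimal sketch of the out-in-channel grouping, not the released OICSR code: channel c of layer l is grouped with the c-th input slice of layer l+1, and a group-Lasso-style regularizer sums the L2 norms of these joint groups, which can also serve as the cross-layer channel-importance statistics.

```python
import torch

def out_in_channel_norms(w_cur, w_next):
    """w_cur: (C_out, C_in, k, k) weights of layer l;
    w_next: (C_next, C_out, k, k) weights of layer l+1.
    Returns one L2 norm per out-in-channel group."""
    out_part = w_cur.flatten(1)                    # (C_out, -1)
    in_part = w_next.transpose(0, 1).flatten(1)    # (C_out, -1)
    group = torch.cat([out_part, in_part], dim=1)  # joint group per channel
    return group.norm(dim=1)

w1 = torch.randn(16, 3, 3, 3)
w2 = torch.randn(32, 16, 3, 3)
norms = out_in_channel_norms(w1, w2)
regularizer = norms.sum()       # group-sparsity term added to the loss
prune_order = norms.argsort()   # least important channels first
print(norms.shape, regularizer.item() > 0)
```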
Estimation of genomic characteristics by analyzing k-mer frequency in de novo genome projects
Background: With the fast development of next generation sequencing
technologies, increasing numbers of genomes are being de novo sequenced and
assembled. However, most remain in fragmented and incomplete draft status, and
thus it is often difficult to know the accurate genome size and repeat content.
Furthermore, many genomes are highly repetitive or heterozygous, posing
problems for current assemblers that utilize short reads. Therefore, it is
necessary to develop efficient assembly-independent methods for accurate
estimation of these genomic characteristics. Results: Here we present a
framework for modeling the distribution of k-mer frequency from sequencing data
and estimating the genomic characteristics such as genome size, repeat
structure and heterozygous rate. By introducing novel techniques of k-mer
individuals, float precision estimation, and proper treatment of sequencing
error and coverage bias, the estimation accuracy of our method is significantly
improved over existing methods. We also studied how the various genomic and
sequencing characteristics affect the estimation accuracy using simulated
sequencing data, and discussed the limitations of applying our method to real
sequencing data. Conclusion: Based on this research, we show that k-mer
frequency analysis can be used as a general and assembly-independent method for
estimating genomic characteristics, which can improve our understanding of a
species' genome, help design the sequencing strategy of genome projects, and
guide the development of assembly algorithms. The programs developed in this
research are written in C/C++ and freely accessible on GitHub
(https://github.com/fanagislab/GCE) or the BGI FTP site
(ftp://ftp.genomics.org.cn/pub/gce).
Comment: 47 pages in total, including main text and supplement; 7 main-text
figures, 3 tables, 6 supplemental figures, 5 supplemental tables
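A minimal sketch of the basic identity behind k-mer based estimation, not the GCE implementation: after excluding the low-depth error peak, the total k-mer count divided by the peak (modal) k-mer depth approximates the genome size. The cutoff and toy histogram are illustrative.

```python
def estimate_genome_size(hist, error_cutoff=3):
    """hist: dict mapping depth -> count of distinct k-mers at that depth."""
    # Sequencing-error k-mers pile up at depth 1-2; drop them below the cutoff.
    usable = {d: c for d, c in hist.items() if d >= error_cutoff}
    total_kmers = sum(d * c for d, c in usable.items())
    peak_depth = max(usable, key=usable.get)  # mode of the depth distribution
    return total_kmers / peak_depth

# Toy histogram: an error spike at low depth and a main peak near 20x.
hist = {1: 5_000_000, 2: 400_000, 18: 90_000, 19: 150_000,
        20: 200_000, 21: 140_000, 22: 80_000}
print(f"estimated genome size: {estimate_genome_size(hist):,.0f} bp")
```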
Entropy Guided Spectrum Based Bug Localization Using Statistical Language Model
Locating bugs is challenging but is one of the most important activities in
the software development and maintenance phases, because there are no fixed rules
for identifying all types of bugs. Existing automatic bug localization tools use
various heuristics based on test coverage, pre-determined buggy patterns, or
textual similarity with bug reports, to rank suspicious program elements.
However, since these techniques rely on information from a single source, they
often suffer when the respective source of information is inadequate. For
instance, popular spectrum-based bug localization may not work well with a
poorly written test suite. In this paper, we propose a new approach, EnSpec,
that guides spectrum-based bug localization using code entropy, a metric
derived from a statistical language model that represents the naturalness of
code. Our intuition is that since buggy code tends to be high-entropy, spectrum-based
bug localization augmented with code entropy is more robust in discriminating buggy
lines from non-buggy lines. We realized our idea in a prototype and performed an
extensive evaluation on two popular publicly available benchmarks. Our results
demonstrate that EnSpec outperforms a state-of-the-art spectrum-based bug
localization technique.
Comment: 13 pages
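A minimal sketch of the combination idea (the paper's exact scoring may differ): a classic spectrum-based score such as Ochiai, blended with a per-line entropy from a language model, so that among equally covered lines the more "unnatural" one ranks higher. The blending weight is an assumption for illustration, not a value from the paper.

```python
import math

def ochiai(failed_cover, passed_cover, total_failed):
    """Classic spectrum-based suspiciousness for one program element."""
    denom = math.sqrt(total_failed * (failed_cover + passed_cover))
    return failed_cover / denom if denom else 0.0

def enspec_score(failed_cover, passed_cover, total_failed, entropy,
                 weight=0.5):
    """Weighted blend; `weight` is an illustrative knob, not from the paper."""
    return ((1 - weight) * ochiai(failed_cover, passed_cover, total_failed)
            + weight * entropy)

# Two lines with identical coverage; the higher-entropy one ranks first.
lines = [("line 10", 3, 1, 4, 0.2), ("line 42", 3, 1, 4, 0.9)]
ranked = sorted(lines, key=lambda l: -enspec_score(*l[1:]))
print([name for name, *_ in ranked])  # ['line 42', 'line 10']
```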