Signatures of small-world and scale-free properties in large computer programs
A large computer program is typically divided into many hundreds or even
thousands of smaller units, whose logical connections define a network in a
natural way. This network reflects the internal structure of the program, and
defines the "information flow" within the program. We show that (1) due to
its growth in time, this network displays a scale-free feature in that the
distribution of the number of links at a node obeys a power law,
and (2) as a result of performance optimization of the program, the network has
a small-world structure. We believe that these features are generic for large
computer programs. Our work extends the previous studies on growing networks,
which have mostly been for physical networks, to the domain of computer
software.
Comment: 4 pages, 1 figure, to appear in Phys. Rev.
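The growth mechanism behind such scale-free degree distributions can be illustrated with a minimal preferential-attachment sketch. This is a generic illustration in pure Python, not the authors' model; the parameters are illustrative:

```python
import random
from collections import Counter

def grow_network(n, m=2, seed=0):
    """Preferential-attachment growth: each new node links to m
    distinct existing nodes chosen proportionally to their degree."""
    rng = random.Random(seed)
    # start from a small clique of m + 1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    targets = [v for e in edges for v in e]  # degree-weighted pool
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])  # keep pool degree-weighted
    return edges

edges = grow_network(500)
degree = Counter(v for e in edges for v in e)
hist = Counter(degree.values())  # P(k): how many nodes have degree k
```

Plotting `hist` on log-log axes exposes the heavy-tailed (power-law-like) degree distribution the abstract refers to: a few highly connected "hub" units and many sparsely connected ones.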
Numerical Investigation of Graph Spectra and Information Interpretability of Eigenvalues
We undertake an extensive numerical investigation of the graph spectra of
thousands of regular graphs, a set of random Erdős–Rényi graphs, the two most
popular types of complex networks, and an evolving genetic network, using
novel conceptual and experimental tools. Our objective in so doing is to
contribute to an understanding of the meaning of the eigenvalues of a graph
relative to its topological and information-theoretic properties. We introduce
a technique for identifying the most informative eigenvalues of evolving
networks by comparing the behavior of their graph spectra to their algorithmic complexity.
We suggest that these techniques can be extended to further investigate the
behavior of evolving biological networks. In the extended version of this paper
we apply these techniques to seven tissue-specific regulatory networks as a
static example, and to the network of a naïve pluripotent immune cell in the
process of differentiating towards a Th17 cell as an evolving example, finding
the most and least informative eigenvalues at every stage.
Comment: Forthcoming in 3rd International Work-Conference on Bioinformatics
and Biomedical Engineering (IWBBIO), Lecture Notes in Bioinformatics, 201
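The central object here, the graph spectrum, is simply the multiset of eigenvalues of the graph's adjacency matrix. A minimal sketch with NumPy (illustrative only, not the authors' tooling), checked against the known spectrum of the 4-cycle:

```python
import numpy as np

def adjacency_spectrum(edges, n):
    """Eigenvalues of the symmetric adjacency matrix, in
    descending order, for an undirected graph on n nodes."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.sort(np.linalg.eigvalsh(A))[::-1]

# 4-cycle C4: spectrum is known to be {2, 0, 0, -2}
spec = adjacency_spectrum([(0, 1), (1, 2), (2, 3), (3, 0)], 4)
```

Tracking how such a spectrum shifts as edges are added or removed is one way to follow an evolving network numerically, as the abstract describes.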
Nuclear Theory and Science of the Facility for Rare Isotope Beams
The Facility for Rare Isotope Beams (FRIB) will be a world-leading laboratory
for the study of nuclear structure, reactions and astrophysics. Experiments
with intense beams of rare isotopes produced at FRIB will guide us toward a
comprehensive description of nuclei, elucidate the origin of the elements in
the cosmos, help provide an understanding of matter in neutron stars, and
establish the scientific foundation for innovative applications of nuclear
science to society. FRIB will be essential for gaining access to key regions of
the nuclear chart, where the measured nuclear properties will challenge
established concepts, and highlight shortcomings and needed modifications to
current theory. Conversely, nuclear theory will play a critical role in
providing the intellectual framework for the science at FRIB, and will provide
invaluable guidance to FRIB's experimental programs. This article overviews the
broad scope of the FRIB theory effort, which reaches beyond the traditional
fields of nuclear structure and reactions, and nuclear astrophysics, to explore
exciting interdisciplinary boundaries with other areas.
\keywords{Nuclear Structure and Reactions. Nuclear
Astrophysics. Fundamental Interactions. High Performance
Computing. Rare Isotopes. Radioactive Beams.}
Comment: 20 pages, 7 figures
kLog: A Language for Logical and Relational Learning with Kernels
We introduce kLog, a novel approach to statistical relational learning.
Unlike standard approaches, kLog does not represent a probability distribution
directly. It is rather a language to perform kernel-based learning on
expressive logical and relational representations. kLog allows users to specify
learning problems declaratively. It builds on simple but powerful concepts:
learning from interpretations, entity/relationship data modeling, logic
programming, and deductive databases. Access by the kernel to the rich
representation is mediated by a technique we call graphicalization: the
relational representation is first transformed into a graph, in particular
a grounded entity/relationship diagram. Subsequently, a choice of graph kernel
defines the feature space. kLog supports mixed numerical and symbolic data, as
well as background knowledge in the form of Prolog or Datalog programs as in
inductive logic programming systems. The kLog framework can be applied to
tackle the same range of tasks that has made statistical relational learning so
popular, including classification, regression, multitask learning, and
collective classification. We also report on empirical comparisons, showing
that kLog can be either more accurate, or much faster at the same level of
accuracy, than Tilde and Alchemy. kLog is GPLv3-licensed and is available at
http://klog.dinfo.unifi.it along with tutorials.
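The graphicalization step can be sketched in miniature: entity tuples become entity vertices, and each relationship tuple becomes a relationship vertex linked to the entities it mentions, yielding a grounded entity/relationship graph. This is a toy illustration of the idea with hypothetical data, not kLog's actual implementation (which is Prolog-based):

```python
# Hypothetical interpretation: three atom entities, two bond relations
entities = [("atom", "a1"), ("atom", "a2"), ("atom", "a3")]
relations = [("bond", ("a1", "a2")), ("bond", ("a2", "a3"))]

vertices, graph_edges = [], []
for kind, ident in entities:
    vertices.append((ident, kind))          # one vertex per entity
for i, (kind, args) in enumerate(relations):
    rid = f"{kind}_{i}"                     # ground the relation tuple
    vertices.append((rid, kind))            # one vertex per relation
    for a in args:
        graph_edges.append((rid, a))        # link relation to its entities
```

A graph kernel applied to the resulting labeled graph then implicitly defines the feature space, which is what lets kernel-based learners consume the relational representation.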
An investigation of vegetation and other Earth resource/feature parameters using LANDSAT and other remote sensing data. 1: LANDSAT. 2: Remote sensing of volcanic emissions
There are no author-identified significant results in this report.
Mal-Netminer: Malware Classification Approach based on Social Network Analysis of System Call Graph
As the security landscape evolves, with thousands of species of malicious
code seen every day, antivirus vendors strive to detect and
classify malware families for efficient and effective responses against malware
campaigns. To enrich this effort, and by capitalizing on ideas from the social
network analysis domain, we build a tool that can help classify malware
families using features derived from the graph structure of their system calls.
To achieve that, we first construct a system call graph that consists of system
calls found in the execution of the individual malware families. To explore
distinguishing features of various malware species, we study social network
properties as applied to the call graph, including the degree distribution,
degree centrality, average distance, clustering coefficient, network density,
and component ratio. We utilize features derived from those properties to build
a classifier for malware families. Our experimental results show that
influence-based graph metrics such as the degree centrality are effective for
classifying malware, whereas general structural metrics are less effective.
Our experiments demonstrate that the proposed system performs well in
detecting and classifying malware families within each malware class, with
accuracy greater than 96%.
Comment: Mathematical Problems in Engineering, Vol 201
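The social-network properties listed above are standard and easy to compute on an undirected graph. A pure-Python sketch of three of them (degree centrality, density, mean clustering coefficient) on a toy call graph; the node names are hypothetical, not taken from the paper's dataset:

```python
from itertools import combinations

def metrics(adj):
    """Degree centrality, density, and mean local clustering for an
    undirected graph given as {node: set(neighbors)}."""
    n = len(adj)
    cent = {v: len(nb) / (n - 1) for v, nb in adj.items()}
    m = sum(len(nb) for nb in adj.values()) // 2   # each edge counted twice
    density = 2 * m / (n * (n - 1))
    def local(v):
        nb, k = adj[v], len(adj[v])
        if k < 2:
            return 0.0
        links = sum(1 for a, b in combinations(nb, 2) if b in adj[a])
        return 2 * links / (k * (k - 1))
    clustering = sum(local(v) for v in adj) / n
    return cent, density, clustering

# toy "system-call graph": an open-read-write triangle plus a leaf call
adj = {
    "open":  {"read", "write", "close"},
    "read":  {"open", "write"},
    "write": {"open", "read"},
    "close": {"open"},
}
cent, density, clust = metrics(adj)
```

Feature vectors built from such per-graph metrics, one vector per malware family, are what a downstream classifier would consume.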
Software systems through complex networks science: Review, analysis and applications
Complex software systems are among the most sophisticated human-made systems,
yet little is known about the actual structure of 'good' software. Here we
study different software systems developed in Java from the perspective of
network science. The study reveals that network theory can provide a prominent
set of techniques for the exploratory analysis of large complex software
systems. We further identify several applications in software engineering, and
propose different network-based quality indicators that address software
design, efficiency, reusability, vulnerability, controllability and others. We
also highlight various interesting findings, e.g., that software systems are
highly vulnerable to processes like bug propagation but are not easily
controllable.
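One quantity underlying several such indicators, the characteristic (average shortest) path length, can be computed directly on a dependency graph with breadth-first search. A sketch over a hypothetical class-dependency graph (the node names are illustrative, not from the study):

```python
from collections import deque

def avg_shortest_path(adj):
    """Mean shortest-path length over all connected ordered pairs,
    via BFS from every node; adj is {node: set(neighbors)}."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs

# hypothetical class-dependency graph: a hub class plus a short chain
adj = {
    "Core": {"A", "B", "C"},
    "A": {"Core"}, "B": {"Core"}, "C": {"Core", "D"}, "D": {"C"},
}
L = avg_shortest_path(adj)
```

A short path length combined with high clustering relative to a random graph of the same size is the usual small-world signature that such software-network analyses test for.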