Dimensions, Structures and Security of Networks
One of the main issues in modern network science is the phenomenon of
cascading failures triggered by attacks on a small number of nodes. Here we
define the dimension of a network to be the maximal number of functions or
features of the nodes of the network. We show that there exist linear networks
which are provably secure, where a network is linear if it has dimension one;
that high network dimension is the mechanism behind overlapping communities;
that overlapping communities are obstacles to network security; and that there
exists an algorithm which reduces high-dimensional networks to low-dimensional
ones while simultaneously preserving all the network properties and
significantly amplifying the security of the networks. Our results show that
dimension is a fundamental measure of networks, that there exist linear
networks which are provably secure, that high-dimensional networks are
insecure, and that the security of networks can be amplified by reducing their
dimensions.
Dynamic Load Balancing Strategies for Graph Applications on GPUs
Acceleration of graph applications on GPUs has found large interest due to
the ubiquitous use of graph processing in various domains. The inherent
\textit{irregularity} in graph applications leads to several challenges for
parallelization. A key challenge, which we address in this paper, is that of
load-imbalance. If the work-assignment to threads uses node-based graph
partitioning, it can result in skewed task-distribution, leading to poor
load-balance. In contrast, if the work-assignment uses edge-based graph
partitioning, the load-balancing is better, but the memory requirement is
relatively higher. This makes it unsuitable for large graphs. In this work, we
propose three techniques for improved load-balancing of graph applications on
GPUs. Each technique brings unique advantages, and a user may choose a
specific technique based on the requirements at hand. Using Breadth First
Search and Single Source Shortest Paths as our processing kernels, we
illustrate the effectiveness of each of the proposed techniques in comparison
to the existing node-based and edge-based mechanisms.
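The node-based versus edge-based trade-off can be sketched with a toy work-assignment model. The graph, thread count, and round-robin assignment below are illustrative assumptions, not the paper's actual GPU implementation:

```python
def node_based_loads(adj, num_threads):
    """Node-based partitioning: whole nodes are assigned round-robin,
    so a thread's load is the total degree of its nodes."""
    loads = [0] * num_threads
    for i, neighbors in enumerate(adj):
        loads[i % num_threads] += len(neighbors)
    return loads

def edge_based_loads(adj, num_threads):
    """Edge-based partitioning: individual edges are assigned
    round-robin, so per-thread loads differ by at most one."""
    total_edges = sum(len(neighbors) for neighbors in adj)
    base, extra = divmod(total_edges, num_threads)
    return [base + (1 if t < extra else 0) for t in range(num_threads)]

def imbalance(loads):
    """Max load divided by mean load; 1.0 means perfect balance."""
    return max(loads) / (sum(loads) / len(loads))

# A skewed graph: node 0 is a hub adjacent to all 12 other nodes.
adj = [list(range(1, 13))] + [[0] for _ in range(12)]
node_loads = node_based_loads(adj, 4)   # [15, 3, 3, 3]
edge_loads = edge_based_loads(adj, 4)   # [6, 6, 6, 6]
```

On this skewed graph the node-based assignment has imbalance 2.5 while the edge-based one is perfectly balanced, which is the effect the abstract describes; the price of edge-based partitioning, its higher memory footprint, is not modeled here.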
A hybrid machine learning model to study UV-Vis spectra of gold nanospheres
Here, we have employed Principal Component Analysis (PCA) and Linear
Discriminant Analysis (LDA) to analyze Mie-calculated UV-Vis spectra of gold
nanospheres (GNS). The eigenspectra of PCA exhibit Fano-type resonances. 3D
vector-field spectra reveal the homoclinic orbit of the Lorenz attractor.
Quantum confinement effects are observed in the 3D representation of LDA.
Standing-wave patterns resulting from oscillations of ion-acoustic phonon and
electron waves are illustrated through the eigenspectra of LDA. Such
capabilities of gold nanoparticles (GNPs) have attracted considerable
attention for high-energy-density physics applications. Furthermore, accurate
prediction of GNP sizes using machine learning could provide rapid analysis
without the need for expensive analysis techniques. Two hybrid algorithms,
each consisting of unsupervised PCA followed by a different supervised
artificial neural network (ANN), have been used to estimate the diameters of
GNPs. The PCA-based ANNs were found to estimate the diameters with high
accuracy.
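As a rough illustration of such a hybrid pipeline, here is a minimal sketch that runs PCA (via SVD) on synthetic spectra and then fits a model on the principal-component scores. The spectra are fabricated, and plain least squares stands in for the supervised ANN; this is an assumption-laden sketch, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)
diameters = rng.uniform(10, 100, size=200)            # synthetic sizes (nm)
wavelengths = np.linspace(400, 800, 50)
base = np.exp(-((wavelengths - 520) / 60.0) ** 2)     # fixed spectral shape
shift = np.exp(-((wavelengths - 650) / 80.0) ** 2)    # size-dependent part
X = base[None, :] + np.outer(diameters, shift) * 0.01 # fabricated spectra
X += rng.normal(scale=1e-3, size=X.shape)             # measurement noise

# Unsupervised stage: PCA via SVD on mean-centered spectra.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T                # project onto top 5 components

# Supervised stage: linear least squares on the PCA scores
# (a stand-in for the ANN regressor used in the abstract).
A = np.column_stack([scores, np.ones(len(scores))])
coef, *_ = np.linalg.lstsq(A, diameters, rcond=None)
pred = A @ coef
```

Because the fabricated spectra depend linearly on diameter, the first components capture the size-dependent direction and the regression recovers the diameters to within a fraction of a nanometer; real Mie spectra are nonlinear in size, which is why a nonlinear ANN head is used in the paper.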
Network-based statistical comparison of citation topology of bibliographic databases
Modern bibliographic databases provide the basis for scientific research and
its evaluation. While their content and structure differ substantially, there
exist only informal notions on their reliability. Here we compare the
topological consistency of citation networks extracted from six popular
bibliographic databases including Web of Science, CiteSeer and arXiv.org. The
networks are assessed through a rich set of local and global graph statistics.
We first reveal statistically significant inconsistencies between some of the
databases with respect to individual statistics. For example, the introduced
field bow-tie decomposition of DBLP Computer Science Bibliography substantially
differs from the rest due to the coverage of the database, while the citation
information within arXiv.org is the most exhaustive. Finally, we compare the
databases over multiple graph statistics using the critical difference diagram.
The citation topology of DBLP Computer Science Bibliography is the least
consistent with the rest, while, not surprisingly, Web of Science is
significantly more reliable from the perspective of consistency. This work can
serve either as a reference for scholars in bibliometrics and scientometrics
or as a scientific evaluation guideline for governments and research agencies.
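The bow-tie decomposition mentioned above (largest strongly connected core, the IN set that reaches it, and the OUT set it reaches) can be sketched on a toy directed graph. The graph and the brute-force reachability approach are illustrative only; real citation networks require proper SCC algorithms:

```python
def reachable(adj, start):
    """All nodes reachable from `start` via directed edges (BFS/DFS)."""
    seen, frontier = {start}, [start]
    while frontier:
        u = frontier.pop()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return seen

def bow_tie(adj, nodes):
    """Return (core, IN, OUT) of the bow-tie decomposition."""
    rev = {u: [] for u in nodes}
    for u, vs in adj.items():
        for v in vs:
            rev[v].append(u)
    fwd = {u: reachable(adj, u) for u in nodes}
    bwd = {u: reachable(rev, u) for u in nodes}
    # The SCC of u is the set of nodes that both reach u and are reached by u.
    sccs = {u: frozenset(fwd[u] & bwd[u]) for u in nodes}
    core = max(set(sccs.values()), key=len)
    inset = {u for u in nodes if u not in core and core & fwd[u]}
    outset = {u for u in nodes if u not in core and core & bwd[u]}
    return core, inset, outset

# Toy citation graph: 1->2->3->1 forms the core, 0 cites into it, 4 is cited by it.
adj = {0: [1], 1: [2], 2: [3], 3: [1, 4]}
core, inset, outset = bow_tie(adj, {0, 1, 2, 3, 4})
```

Here the decomposition yields core {1, 2, 3}, IN {0}, and OUT {4}; comparing the relative sizes of these components across databases is one of the statistics the study uses.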
Designing high-speed, low-power full adder cells based on carbon nanotube technology
This article presents novel high-speed, low-power full adder cells based on
the carbon nanotube field-effect transistor (CNFET). Four full adder cells are
proposed. The first (named CN9P4G) and second (CN9P8GBUFF) utilize 13 and 17
CNFETs, respectively. The third design, named CN10PFS, uses only 10
transistors and is full-swing. Finally, CN8P10G uses 18 transistors and is
divided into two modules, so that the Sum and Cout signals are produced in
parallel. All inputs are used directly, without inversion. The designs also
exploit a special feature of CNFETs, namely that the threshold voltage can be
controlled by adjusting the diameters of the nanotubes, to achieve the best
performance and correct voltage levels. All simulations were performed using
Synopsys HSPICE, and the proposed designs are compared to other classical and
modern CMOS- and CNFET-based full adder cells in terms of delay, power
consumption, and power-delay product.
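Independent of the transistor technology, the logic these cells implement can be stated behaviorally: Sum = A xor B xor Cin and Cout = majority(A, B, Cin). The two outputs depend only on the inputs, which is what allows the two modules of a design like CN8P10G to produce them in parallel. A sanity-check sketch (a behavioral model, not the CNFET circuit):

```python
from itertools import product

def full_adder(a, b, cin):
    """One-bit full adder: Sum is the 3-input XOR, Cout the majority.
    The two expressions share no intermediate result, so they can be
    evaluated by independent modules in parallel."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

# Exhaustive check of the truth table against binary addition.
for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, cin)
    assert 2 * cout + s == a + b + cin
```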
Semi-Automatic RECIST Labeling on CT Scans with Cascaded Convolutional Neural Networks
Response evaluation criteria in solid tumors (RECIST) is the standard
measurement for tumor extent to evaluate treatment responses in cancer
patients. As such, RECIST annotations must be accurate. However, RECIST
annotations manually labeled by radiologists require professional knowledge and
are time-consuming, subjective, and prone to inconsistency among different
observers. To alleviate these problems, we propose a cascaded convolutional
neural network based method to semi-automatically label RECIST annotations and
drastically reduce annotation time. The proposed method consists of two stages:
lesion region normalization and RECIST estimation. We employ the spatial
transformer network (STN) for lesion region normalization, where a localization
network is designed to predict the lesion region and the transformation
parameters with a multi-task learning strategy. For RECIST estimation, we adapt
the stacked hourglass network (SHN), introducing a relationship constraint loss
to improve the estimation precision. STN and SHN can both be learned in an
end-to-end fashion. We train our system on the DeepLesion dataset, obtaining a
consensus model trained on RECIST annotations performed by multiple
radiologists over a multi-year period. Importantly, when judged against the
inter-reader variability of two additional radiologist raters, our system
performs more stably and with less variability, suggesting that RECIST
annotations can be reliably obtained with reduced labor and time.
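For context, a RECIST annotation reduces a lesion to its longest in-plane diameter. A minimal sketch of that measurement from a binary lesion mask follows; it illustrates what the annotations encode and is not the paper's STN/SHN pipeline (the brute-force pairwise search is an assumption for clarity, not efficiency):

```python
import numpy as np

def longest_diameter(mask):
    """Longest distance between any two lesion pixels in a 2D binary
    mask (brute-force over all pixel pairs)."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([ys, xs]).astype(float)
    # Squared distances between all pairs of lesion pixels.
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    return float(np.sqrt(d2.max()))

mask = np.zeros((32, 32), dtype=bool)
mask[10, 5:16] = True   # a thin horizontal lesion spanning 11 pixels
```

For this synthetic lesion the longest diameter is the distance between pixels (10, 5) and (10, 15), i.e. 10 pixel units; in practice the measurement is taken in millimeters using the scan's pixel spacing.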
Justifying the small-world phenomenon via random recursive trees
We present a new technique for proving logarithmic upper bounds for diameters
of evolving random graph models, which is based on defining a coupling between
random graphs and variants of random recursive trees. The advantage of the
technique is three-fold: it is quite simple and provides short proofs, it is
applicable to a broad variety of models including those incorporating
preferential attachment, and it provides bounds with small constants. We
illustrate this by proving, for the first time, logarithmic upper bounds for
the diameters of the following well known models: the forest fire model, the
copying model, the PageRank-based selection model, the Aiello-Chung-Lu models,
the generalized linear preference model, directed scale-free graphs, the
Cooper-Frieze model, and random unordered increasing k-trees. Our results
shed light on why the small-world phenomenon is observed in so many real-world
graphs.
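The coupling object itself is easy to simulate: in a random recursive tree, each new node attaches to a uniformly random earlier node, and the height grows logarithmically in the number of nodes. A minimal sketch (the tree size and seed are arbitrary choices):

```python
import math
import random

def random_recursive_tree_depths(n, seed=0):
    """Grow a random recursive tree on n nodes and return each node's
    depth; node t attaches to a uniformly random node among 0..t-1."""
    rng = random.Random(seed)
    depth = [0] * n                 # node 0 is the root at depth 0
    for t in range(1, n):
        parent = rng.randrange(t)   # uniform over existing nodes
        depth[t] = depth[parent] + 1
    return depth

n = 2000
depths = random_recursive_tree_depths(n)
height = max(depths)
```

With high probability the height is about e * ln(n) (roughly 21 for n = 2000), far below n; it is this logarithmic height that the coupling technique transfers to the richer models listed above.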
Construction of Four Completely Independent Spanning Trees on Augmented Cubes
Let T1, T2, ..., Tk be spanning trees in a graph G. If for every pair of
vertices {u, v} of G the paths between u and v in the trees Ti (1 <= i <= k)
share no common edges and no common vertices except u and v themselves, then
T1, T2, ..., Tk are called completely independent spanning trees in G. The
n-dimensional augmented cube, denoted AQn, is a variation of the hypercube
that possesses several embedding properties the hypercube and its other
variations do not. For AQn (n > 5), we give a construction of four completely
independent spanning trees, two of which have diameter 2n - 5 and two of which
have diameter 2n - 3.
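The augmented cube admits a standard recursive construction: AQn consists of two copies of AQ(n-1), with each vertex 0u joined both to 1u (the usual hypercube edge) and to 1u', where u' is the bitwise complement of u. A sketch of this construction, sanity-checked against the known (2n-1)-regularity of AQn (the code is illustrative, not the paper's tree construction):

```python
def augmented_cube(n):
    """Return (vertices, edges) of AQn, vertices as n-bit strings."""
    if n == 1:
        return {"0", "1"}, {frozenset({"0", "1"})}
    vs, es = augmented_cube(n - 1)
    flip = str.maketrans("01", "10")   # bitwise complement of a string
    vertices = {b + u for b in "01" for u in vs}
    edges = set()
    for e in es:                       # copy edges into each half
        u, v = tuple(e)
        edges.add(frozenset({"0" + u, "0" + v}))
        edges.add(frozenset({"1" + u, "1" + v}))
    for u in vs:                       # hypercube and complement edges
        edges.add(frozenset({"0" + u, "1" + u}))
        edges.add(frozenset({"0" + u, "1" + u.translate(flip)}))
    return vertices, edges

def degrees(vertices, edges):
    deg = {v: 0 for v in vertices}
    for e in edges:
        for v in e:
            deg[v] += 1
    return deg

vs3, es3 = augmented_cube(3)   # 8 vertices, each of degree 2*3 - 1 = 5
```

The (2n-1)-regularity (and hence connectivity 2n-1) is what makes four completely independent spanning trees plausible in AQn, since k such trees require connectivity at least k.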
Topological Features of Online Social Networks
The importance of modeling and analyzing Social Networks is a consequence of
the success of Online Social Networks in recent years. Several network models
have been proposed, reflecting the different characteristics of Social
Networks. Some fit better when modeling specific phenomena, such as the growth
and evolution of Social Networks; others are more appropriate for capturing
the topological characteristics of the networks. Because these networks
exhibit unique and diverse properties and features, in this work we describe
and exploit several models in order to capture the structure of popular Online
Social Networks, such as Arxiv, Facebook, Wikipedia and YouTube. Our
experiments aim at verifying the structural characteristics of these networks,
in order to understand which model best depicts their structure, and at
analyzing the inner community structure, to illustrate how members of these
Online Social Networks interact and group together into smaller communities.
Real-time 3D Shape Instantiation for Partially-deployed Stent Segment from a Single 2D Fluoroscopic Image in Robot-assisted Fenestrated Endovascular Aortic Repair
In robot-assisted Fenestrated Endovascular Aortic Repair (FEVAR), accurate
alignment of stent graft fenestrations or scallops with aortic branches is
essential for establishing complete blood flow perfusion. Current navigation is
largely based on 2D fluoroscopic images, which lacks 3D anatomical information,
thus causing longer operation time as well as high risks of radiation exposure.
Previously, 3D shape instantiation frameworks for real-time 3D shape
reconstruction of fully-deployed or fully-compressed stent graft from a single
2D fluoroscopic image have been proposed for 3D navigation in robot-assisted
FEVAR. However, these methods could not instantiate partially-deployed stent
segments, as the 3D marker references are unknown. In this paper, an adapted
Graph Convolutional Network (GCN) is proposed to predict 3D marker references
from the 3D fully-deployed markers. As the original GCN was designed for
classification, we remove its coarsening layers and replace the softmax at the
network output with a linear mapping for the regression task. The derived 3D
and 2D marker references are used to instantiate the partially-deployed stent
segment shape with the existing 3D shape instantiation framework. Validations
were performed on three commonly used stent grafts and five patient-specific
3D-printed aortic aneurysm phantoms. Comparable performance was achieved, with
average mesh distance errors of 1-3 mm and average angular errors around 7
degrees.
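A minimal numpy sketch of the adaptation described above: graph-convolutional propagation with a symmetrically normalized adjacency, no coarsening or pooling layers, and a linear output in place of softmax, so the network emits continuous 3D coordinates. The marker graph, layer sizes, and random weights are illustrative assumptions, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
num_markers, feat_dim, hidden = 5, 3, 16

# Adjacency of a small marker graph (a 5-cycle), with self-loops added
# and normalized as D^{-1/2} (A + I) D^{-1/2}.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(num_markers)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

W1 = rng.normal(size=(feat_dim, hidden))
W2 = rng.normal(size=(hidden, 3))     # linear head: 3D coordinates

X = rng.normal(size=(num_markers, feat_dim))   # input marker features
H = np.maximum(A_norm @ X @ W1, 0.0)           # GCN layer + ReLU
Y = A_norm @ H @ W2                            # linear mapping, no softmax
```

With the softmax removed, the per-marker outputs are unconstrained real 3-vectors suitable for regressing marker positions, rather than probability distributions over classes.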