Benchmark Analysis of Representative Deep Neural Network Architectures
This work presents an in-depth analysis of the majority of state-of-the-art
deep neural networks (DNNs) proposed for image recognition. For
each DNN multiple performance indices are observed, such as recognition
accuracy, model complexity, computational complexity, memory usage, and
inference time. The behavior of such performance indices and some combinations
of them are analyzed and discussed. To measure these indices, we run the
DNNs on two different computer architectures: a workstation equipped
with an NVIDIA Titan X Pascal and an embedded system based on an NVIDIA Jetson
TX1 board. This experimentation allows a direct comparison between DNNs running
on machines with very different computational capacity. This study helps
researchers gain a complete view of the solutions explored so
far and of which research directions are worth exploring in the future, and helps
practitioners select the DNN architecture(s) that best fit the resource
constraints of practical deployments and applications. To complete this work,
all the DNNs, as well as the software used for the analysis, are available
online. Comment: Will appear in IEEE Access
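Inference time, one of the indices above, is typically measured by timing repeated forward passes after a warm-up phase. A framework-agnostic sketch (the `model` here is a placeholder callable, not one of the benchmarked DNNs):

```python
import statistics
import time

def measure_inference_time(model, batch, warmup=10, runs=50):
    """Time repeated calls to model(batch) after a warm-up phase.

    Returns (median_seconds, stdev_seconds). On a GPU you would also
    need to synchronize the device before reading the clock.
    """
    for _ in range(warmup):        # warm-up: caches, JIT, clock scaling
        model(batch)
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        model(batch)
        timings.append(time.perf_counter() - start)
    return statistics.median(timings), statistics.stdev(timings)
```

The median is preferred over the mean because occasional scheduler or thermal hiccups skew the right tail of the timing distribution.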
Automatic calcium scoring in low-dose chest CT using deep neural networks with dilated convolutions
Heavy smokers undergoing screening with low-dose chest CT are affected by
cardiovascular disease as much as by lung cancer. Low-dose chest CT scans
acquired in screening enable quantification of atherosclerotic calcifications
and thus enable identification of subjects at increased cardiovascular risk.
This paper presents a method for automatic detection of coronary artery,
thoracic aorta and cardiac valve calcifications in low-dose chest CT using two
consecutive convolutional neural networks. The first network identifies and
labels potential calcifications according to their anatomical location and the
second network identifies true calcifications among the detected candidates.
This method was trained and evaluated on a set of 1744 CT scans from the
National Lung Screening Trial. To determine whether any reconstruction or only
images reconstructed with soft tissue filters can be used for calcification
detection, we evaluated the method on soft and medium/sharp filter
reconstructions separately. On soft filter reconstructions, the method achieved
F1 scores of 0.89, 0.89, 0.67, and 0.55 for coronary artery, thoracic aorta,
aortic valve and mitral valve calcifications, respectively. On sharp filter
reconstructions, the F1 scores were 0.84, 0.81, 0.64, and 0.66, respectively.
Linearly weighted kappa coefficients for risk category assignment based on per
subject coronary artery calcium were 0.91 and 0.90 for soft and sharp filter
reconstructions, respectively. These results demonstrate that the presented
method enables reliable automatic cardiovascular risk assessment in all
low-dose chest CT scans acquired for lung cancer screening.
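The F1 scores reported above are the harmonic mean of per-lesion precision and recall. As a reminder of the metric (the counts below are illustrative, not the paper's data):

```python
def f1_score(tp, fp, fn):
    """F1 from detection counts: true positives, false positives, false negatives."""
    precision = tp / (tp + fp)   # fraction of detections that are real
    recall = tp / (tp + fn)      # fraction of real lesions that are found
    return 2 * precision * recall / (precision + recall)
```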
Artificial Immune Systems - Models, algorithms and applications
Copyright © 2010 Academic Research Publishing Agency. This article has been made available through the Brunel Open Access Publishing Fund.
Artificial Immune Systems (AIS) are computational paradigms that belong to the computational intelligence family and are inspired by the biological immune system. During the past decade, they have attracted a lot of interest from researchers aiming to develop immune-based models and techniques to solve complex computational or engineering problems. This work presents a survey of existing AIS models and algorithms, with a focus on the last five years.
Finding Near-Optimal Independent Sets at Scale
The independent set problem is NP-hard and particularly difficult to solve in
large sparse graphs. In this work, we develop an advanced evolutionary
algorithm, which incorporates kernelization techniques to compute large
independent sets in huge sparse networks. A recent exact algorithm has shown
that large networks can be solved exactly by employing a branch-and-reduce
technique that recursively kernelizes the graph and performs branching.
However, one major drawback of this algorithm is that, for huge graphs,
branching can still take exponential time. To avoid this problem, we
recursively choose vertices that are likely to be in a large independent set
(using an evolutionary approach), then further kernelize the graph. We show
that identifying and removing vertices likely to be in large independent sets
opens up the reduction space, which not only drastically speeds up the
computation of large independent sets, but also enables us to compute
high-quality independent sets on much larger instances than previously
reported in the literature. Comment: 17 pages, 1 figure, 8 tables. arXiv admin
note: text overlap with arXiv:1502.0168
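The kernelization idea can be illustrated with the two simplest classic reduction rules: a degree-0 vertex always belongs to some maximum independent set, and for a degree-1 vertex it is always optimal to take it and delete its neighbour. This is only an illustrative sketch; the paper's branch-and-reduce kernel uses a much richer rule set.

```python
def _remove(adj, w):
    """Delete vertex w and all edges incident to it."""
    for x in adj.pop(w, set()):
        if x in adj:
            adj[x].discard(w)

def kernelize(adj):
    """Exhaustively apply degree-0 and degree-1 reductions.

    adj: dict mapping vertex -> set of neighbours (undirected graph).
    Returns (forced, residual): `forced` is a set of vertices that belong
    to some maximum independent set; `residual` is the reduced graph that
    would be handed to the search/branching phase.
    """
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    forced = set()
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:                      # already deleted this pass
                continue
            if not adj[v]:                        # degree 0: always safe to take
                forced.add(v)
                _remove(adj, v)
                changed = True
            elif len(adj[v]) == 1:                # degree 1: taking v is optimal
                (u,) = adj[v]
                forced.add(v)
                _remove(adj, v)
                _remove(adj, u)
                changed = True
    return forced, adj
```

On a path graph 0-1-2-3 these two rules alone already solve the instance: the reductions force {0, 2} into the solution and leave an empty residual graph, so no branching is needed at all.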