Prediction of progression in idiopathic pulmonary fibrosis using CT scans at baseline: A quantum particle swarm optimization-random forest approach
Idiopathic pulmonary fibrosis (IPF) is a fatal lung disease characterized by an unpredictable, progressive decline in lung function. The natural history of IPF is unknown, and the prediction of disease progression at the time of diagnosis is notoriously difficult. High-resolution computed tomography (HRCT) has been used for the diagnosis of IPF, but not generally for monitoring purposes. The objective of this work is to develop a novel predictive model for the radiological progression pattern at the voxel-wise level using only baseline HRCT scans. There are two main challenges: (a) obtaining a data set of features for regions of interest (ROIs) on baseline HRCT scans together with their follow-up status; and (b) simultaneously selecting important features from a high-dimensional space and optimizing the prediction performance. We resolved the first challenge by implementing a study design and having an expert radiologist contour ROIs on baseline scans according to their progression status at follow-up visits. For the second challenge, we integrated feature selection with prediction by developing a wrapper-method algorithm that combines quantum particle swarm optimization, to select a small number of features, with random forest, to classify early patterns of progression. We applied the proposed algorithm to anonymized HRCT images from 50 IPF subjects from a multi-center clinical trial. It yields a parsimonious model with 81.8% sensitivity, 82.2% specificity, and an overall accuracy of 82.1% at the ROI level. These results are superior to those of other popular feature selection and classification methods: our method achieves higher accuracy in predicting progression and more balanced sensitivity and specificity with a smaller number of selected features. Our work is the first to show that it is possible to use only baseline HRCT scans to predict progressive ROIs at 6-month to 1-year follow-ups using artificial intelligence.
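The wrapper idea described in the abstract, a swarm-based search over feature subsets scored by a classifier, can be sketched in a few lines. The sketch below is illustrative only: it uses plain binary PSO (the paper's quantum-behaved variant has a different update rule) and a nearest-centroid classifier as a lightweight stand-in for the random forest; the synthetic data, penalty weight, and all parameter values are assumptions, not the authors' settings.

```python
import math
import random

random.seed(0)

# Toy data: only features 0 and 1 are informative; the rest are noise.
def make_data(n=200, d=8):
    X, y = [], []
    for _ in range(n):
        row = [random.gauss(0, 1) for _ in range(d)]
        X.append(row)
        y.append(1 if row[0] + row[1] > 0 else 0)
    return X, y

def centroid_accuracy(X, y, mask):
    # Nearest-centroid classifier restricted to the selected features.
    idx = [j for j, m in enumerate(mask) if m]
    if not idx:
        return 0.0
    cents = {}
    for c in (0, 1):
        rows = [X[i] for i in range(len(X)) if y[i] == c]
        cents[c] = [sum(r[j] for r in rows) / len(rows) for j in idx]
    correct = 0
    for i, row in enumerate(X):
        pred = min((0, 1), key=lambda c: sum(
            (row[j] - cents[c][k]) ** 2 for k, j in enumerate(idx)))
        correct += (pred == y[i])
    return correct / len(X)

def fitness(X, y, mask, penalty=0.01):
    # Reward accuracy, penalize subset size to keep the model parsimonious.
    return centroid_accuracy(X, y, mask) - penalty * sum(mask)

def binary_pso(X, y, d, n_particles=12, iters=30):
    pos = [[random.random() < 0.5 for _ in range(d)] for _ in range(n_particles)]
    vel = [[0.0] * d for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(X, y, p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(d):
                vel[i][j] = (0.7 * vel[i][j]
                             + 1.5 * random.random() * (pbest[i][j] - pos[i][j])
                             + 1.5 * random.random() * (gbest[j] - pos[i][j]))
                # Sigmoid transfer turns the real velocity into a bit-flip prob.
                pos[i][j] = random.random() < 1 / (1 + math.exp(-vel[i][j]))
            f = fitness(X, y, pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

X, y = make_data()
mask, best_f = binary_pso(X, y, d=8)
```

The key design point the abstract highlights, selection and prediction in one loop, is visible here: the swarm never scores a feature in isolation, only whole subsets through the classifier's accuracy.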
Fast, non-Monte-Carlo estimation of transient performance variation due to device mismatch
This paper describes an efficient way of simulating the effects of random device mismatch on circuit transient characteristics, such as variations in delay or in frequency. The proposed method models DC random offsets as equivalent AC pseudo-noises and leverages the fast, linear periodically time-varying (LPTV) noise analysis available in RF circuit simulators. The method can therefore be considered an extension of DC match analysis, and it offers a large speed-up over traditional Monte-Carlo analysis. Although the assumed linear perturbation model is valid only for small variations, it enables easy ways to estimate correlations among variations and to identify the design parameters most sensitive to mismatch, all at no additional simulation cost. Three benchmarks, measuring the variations in the input offset voltage of a clocked comparator, the delay of a logic path, and the frequency of an oscillator, demonstrate a speed improvement of about 100-1000x over a 1000-point Monte-Carlo method.
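The contrast the abstract draws, brute-force Monte-Carlo versus a linear (small-signal) perturbation model, can be illustrated on a toy quantity. The sketch below propagates random R and C mismatch through the delay of a single RC stage both ways; the component values and mismatch sigmas are made-up illustrative numbers, and the actual method operates on LPTV noise analysis inside an RF simulator, not on a closed-form delay expression.

```python
import math
import random
import statistics

random.seed(1)

R0, C0 = 1e3, 1e-12            # nominal resistance (ohm) and capacitance (F)
sigma_r, sigma_c = 0.02, 0.03  # relative 1-sigma mismatch of R and C

def delay(r, c):
    # 50% threshold delay of one RC stage: t = R * C * ln 2.
    return r * c * math.log(2)

# Monte-Carlo: sample mismatched instances and measure the delay spread.
samples = [delay(R0 * (1 + random.gauss(0, sigma_r)),
                 C0 * (1 + random.gauss(0, sigma_c)))
           for _ in range(20000)]
mc_sigma = statistics.stdev(samples)

# Linear perturbation model: propagate each independent offset through the
# first-order sensitivities (dt/dR = C ln2, dt/dC = R ln2) and combine in RSS.
t0 = delay(R0, C0)
lin_sigma = t0 * math.sqrt(sigma_r ** 2 + sigma_c ** 2)
```

The linear estimate needs a single nominal evaluation plus sensitivities, while the Monte-Carlo estimate needs thousands of samples to converge; that cost gap is the source of the 100-1000x speed-up the paper reports.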
Building Gene Expression Profile Classifiers with a Simple and Efficient Rejection Option in R
Background: The collection of gene expression profiles from DNA microarrays and their analysis with pattern recognition algorithms is a powerful technology applied to several biological problems. Common pattern recognition systems classify samples by assigning them to a set of known classes. However, in a clinical diagnostics setup, novel and unknown classes (new pathologies) may appear, and one must be able to reject those samples that do not fit the trained model. The problem of implementing a rejection option in a multi-class classifier has not been widely addressed in the statistical literature. Gene expression profiles represent a critical case study, since they suffer from the curse of dimensionality, which negatively affects the reliability of both traditional rejection models and more recent approaches such as one-class classifiers. Results: This paper presents a set of empirical decision rules that can be used to implement a rejection option in a set of multi-class classifiers widely used for the analysis of gene expression profiles. In particular, we focus on the classifiers implemented in the R Language and Environment for Statistical Computing (R for short in the remainder of this paper). The main contribution of the proposed rules is their simplicity, which enables easy integration with available data analysis environments. Since tuning the parameters involved in the definition of a rejection model is often a complex and delicate task, we exploit an evolutionary strategy to automate this process, allowing the final user to maximize the rejection accuracy with minimum manual intervention. Conclusions: This paper shows how simple decision rules can be used to facilitate the use of complex machine learning algorithms in real experimental setups. The proposed approach is almost completely automated and is therefore a good candidate for integration into data analysis flows in labs where the machine learning expertise required to tune traditional classifiers might not be available.
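In the spirit of the empirical decision rules the abstract describes (the actual rules and their R integration are in the paper and not reproduced here), a rejection option can be bolted onto any classifier that emits per-class confidence scores: reject when the winning class is weak, or when the top two classes are too close to call. The `threshold` and `margin` values below are illustrative placeholders; the paper tunes such parameters automatically with an evolutionary strategy.

```python
def classify_with_rejection(scores, threshold=0.7, margin=0.2):
    # scores: dict mapping class label -> posterior-like confidence,
    # as produced by any trained multi-class classifier.
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    (best, p1), (_, p2) = ranked[0], ranked[1]
    # Rule 1: the winner must be confident enough on its own.
    # Rule 2: the winner must beat the runner-up by a clear margin.
    if p1 < threshold or (p1 - p2) < margin:
        return "reject"
    return best
```

For example, a confident profile such as `{"ALL": 0.90, "AML": 0.05, "MLL": 0.05}` (hypothetical class labels) is accepted, while an ambiguous one such as `{"ALL": 0.45, "AML": 0.40, "MLL": 0.15}` is rejected rather than force-assigned to a known class.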
Locality in Network Optimization
In probability theory and statistics, notions of correlation among random variables, decay of correlation, and the bias-variance trade-off are fundamental. In this work we introduce analogous notions in optimization and show their usefulness in a concrete setting. We propose a general notion of correlation among variables in optimization procedures that is based on the sensitivity of optimal points to (possibly finite) perturbations. We present a canonical instance in network optimization (the min-cost network flow problem) that exhibits locality, i.e., a setting where the correlation decays as a function of the graph-theoretical distance in the network. In the case of warm-start reoptimization, we develop a general approach to localize a given optimization routine in order to exploit locality. We show that the localization mechanism introduces a bias in the original algorithm, and that the resulting bias-variance trade-off can be exploited to minimize the computational complexity required to reach a prescribed level of accuracy. We provide numerical evidence to support our claims.
Prospects and Limitations of Algorithmic Cooling
Heat-bath algorithmic cooling (AC) of spins is a theoretically powerful effective-cooling approach that (ideally) cools spins with low polarization exponentially better than cooling by reversible entropy manipulations alone. Here we investigate the limitations and prospects of AC. For non-ideal and semioptimal AC, we study the impact of the finite relaxation times of reset and computation spins on the achievable effective cooling. We derive, via simulations, the attainable cooling levels for given ratios of relaxation times using two semioptimal practicable algorithms. We expect this analysis to be valuable for the planning of future experiments. For ideal and optimal AC, we make use of lower bounds on the number of required reset steps, based on entropy considerations, to present important consequences of using AC as a tool for improving the signal-to-noise ratio in liquid-state magnetic resonance spectroscopy. We discuss the potential use of AC for noninvasive clinical diagnosis and drug monitoring, where it may have a significantly lower specific absorption rate (SAR) than currently used methods.
Comment: 12 pages, 5 figures
Use of composite rotations to correct systematic errors in NMR quantum computation
We implement an ensemble quantum counting algorithm on three NMR spectrometers with 1H resonance frequencies of 500, 600 and 750 MHz. At higher frequencies, the results deviate markedly from naive theoretical predictions. These systematic errors can be attributed almost entirely to off-resonance effects, which can be substantially corrected using fully compensating composite rotation pulse sequences originally developed by Tycko. We also derive an analytic expression for generating such sequences with arbitrary rotation angles.
Comment: 8 pages RevTex including 7 PostScript figures (18 subfigures)
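The off-resonance effect blamed for the systematic errors is easy to reproduce numerically: a pulse applied at a frequency offset rotates the spin about a tilted, lengthened effective field instead of a pure x-axis. The sketch below computes the gate fidelity of a naive pi pulse as a function of the fractional detuning f (offset divided by RF field strength), showing why errors grow at higher spectrometer frequencies, where chemical-shift offsets are larger. It illustrates the error mechanism only; the compensating composite sequences of Tycko used in the paper are not reproduced here.

```python
import math

def rot(theta, nx, ny, nz):
    # SU(2) rotation by angle theta about unit axis (nx, ny, nz):
    # R = cos(theta/2) I - i sin(theta/2) (nx*X + ny*Y + nz*Z).
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c - 1j * s * nz, -1j * s * nx - s * ny],
            [-1j * s * nx + s * ny, c + 1j * s * nz]]

def offres_pulse(theta, f):
    # Nominal flip angle theta applied at fractional detuning f: the
    # effective rotation axis tilts toward z and the angle grows by sqrt(1+f^2).
    n = math.sqrt(1 + f * f)
    return rot(theta * n, 1 / n, 0.0, f / n)

def gate_fidelity(U, V):
    # |tr(U^dagger V)| / 2 for 2x2 unitaries.
    t = sum(U[i][j].conjugate() * V[i][j] for i in range(2) for j in range(2))
    return abs(t) / 2

ideal_pi = rot(math.pi, 1.0, 0.0, 0.0)

def pulse_fidelity(f):
    return gate_fidelity(ideal_pi, offres_pulse(math.pi, f))
```

Evaluating `pulse_fidelity` at increasing detunings shows a monotonic loss of fidelity for the uncompensated pulse, which is the error a fully compensating composite sequence is designed to cancel.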