Deep Learning How to Fit an Intravoxel Incoherent Motion Model to Diffusion-Weighted MRI
Purpose: This prospective clinical study assesses the feasibility of training
a deep neural network (DNN) for intravoxel incoherent motion (IVIM) model
fitting to diffusion-weighted magnetic resonance imaging (DW-MRI) data and
evaluates its performance. Methods: In May 2011, ten male volunteers (age
range: 29 to 53 years, mean: 37 years) underwent DW-MRI of the upper abdomen on
1.5T and 3.0T magnetic resonance scanners. Regions of interest in the left and
right liver lobe, pancreas, spleen, renal cortex, and renal medulla were
delineated independently by two readers. DNNs were trained for IVIM model
fitting using these data; results were compared to least-squares and Bayesian
approaches to IVIM fitting. Intraclass Correlation Coefficients (ICC) were used
to assess consistency of measurements between readers. Intersubject variability
was evaluated using Coefficients of Variation (CV). The fitting error was
calculated based on simulated data and the average fitting time of each method
was recorded. Results: DNNs were trained successfully for IVIM parameter
estimation. This approach was associated with high consistency between the two
readers (ICCs between 50% and 97%), low intersubject variability of estimated
parameter values (CVs between 9.2 and 28.4), and the lowest error when compared
with least-squares and Bayesian approaches. Fitting by DNNs was several orders
of magnitude quicker than the other methods but the networks may need to be
re-trained for different acquisition protocols or imaged anatomical regions.
Conclusion: DNNs are recommended for accurate and robust IVIM model fitting to
DW-MRI data. Suitable software is available at (1)
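The fitting problem described above rests on the bi-exponential IVIM signal model, S(b)/S0 = f·exp(−b·D*) + (1 − f)·exp(−b·D). The sketch below is a hedged, minimal illustration of the idea: a tiny one-hidden-layer network (a stand-in for the paper's DNN, not its actual architecture) is trained on simulated noisy signals to regress the IVIM parameters. The b-values, parameter ranges, noise level, network size, and learning rate are all illustrative assumptions, not the study's protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative b-values (s/mm^2); the study's acquisition protocol may differ.
b_values = np.array([0, 10, 20, 60, 150, 300, 500, 1000], dtype=float)

# Simulate noisy IVIM signals with known ground-truth parameters
# S(b)/S0 = f * exp(-b * D_star) + (1 - f) * exp(-b * D)
n = 2000
f      = rng.uniform(0.05, 0.4, n)        # perfusion fraction
D      = rng.uniform(0.5e-3, 2.5e-3, n)   # diffusion coefficient (mm^2/s)
D_star = rng.uniform(5e-3, 50e-3, n)      # pseudo-diffusion coefficient
X = (f[:, None] * np.exp(-b_values * D_star[:, None])
     + (1 - f[:, None]) * np.exp(-b_values * D[:, None]))
X += rng.normal(0, 0.02, X.shape)         # Rician noise approximated as Gaussian
Y = np.stack([f, D * 1e3, D_star * 1e2], axis=1)  # rescale targets to similar magnitude

# One-hidden-layer MLP trained by plain full-batch gradient descent
W1 = rng.normal(0, 0.5, (X.shape[1], 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 3));          b2 = np.zeros(3)
lr = 0.02
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)              # hidden activations
    P = H @ W2 + b2                       # predicted (f, D, D*) (rescaled)
    G = (P - Y) / n                       # gradient of 0.5 * mean squared error
    W2 -= lr * H.T @ G; b2 -= lr * G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)        # backprop through tanh
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)

# Once trained, fitting new voxels is a single forward pass -- the source of
# the large speed advantage over per-voxel least-squares or Bayesian fitting.
pred = np.tanh(X @ W1 + b1) @ W2 + b2
rmse = np.sqrt(((pred - Y) ** 2).mean(axis=0))
```

The abstract's caveat applies equally to this sketch: the network is tied to the b-values it was trained on, so a different acquisition protocol would require retraining on signals simulated for that protocol.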
NARX-based nonlinear system identification using orthogonal least squares basis hunting
An orthogonal least squares technique for basis hunting (OLS-BH) is proposed to construct sparse radial basis function (RBF) models for NARX-type nonlinear systems. Unlike most existing RBF or kernel modelling methods, which place the RBF or kernel centers at the training input data points and use a fixed common variance for all regressors, the proposed OLS-BH technique tunes the RBF center and diagonal covariance matrix of each regressor by minimizing the training mean square error. An efficient optimization method is adopted for this basis hunting, selecting regressors in an orthogonal forward selection procedure. Experimental results obtained with the OLS-BH technique demonstrate that it offers a state-of-the-art method for constructing parsimonious RBF models with excellent generalization performance.
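The orthogonal forward selection at the core of this approach can be sketched as follows. This is a simplified illustration, not the paper's method: instead of continuously tuning each regressor's center and diagonal covariance, it greedily selects from a fixed grid of candidate Gaussian RBFs, orthogonalizing each candidate against the already-selected basis (classical OLS forward selection). The 1-D toy target stands in for the lagged input/output regressors of a real NARX model; pool sizes and widths are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D target (a real NARX model would regress on lagged y/u values).
x = np.linspace(-3, 3, 200)
y = np.sinc(x) + 0.05 * rng.normal(size=200)

# Candidate pool: a grid of centers and widths, replacing the paper's
# continuous per-regressor tuning of center and covariance.
centers = np.linspace(-3, 3, 25)
widths = [0.3, 0.6, 1.2]
candidates = np.column_stack([
    np.exp(-(x - c) ** 2 / (2 * s ** 2)) for c in centers for s in widths
])

# Orthogonal forward selection: at each step pick the candidate whose
# component orthogonal to the chosen basis removes the most residual energy.
selected, Q = [], []
residual = y.copy()
for _ in range(8):
    best, best_gain, best_q = None, 0.0, None
    for j in range(candidates.shape[1]):
        if j in selected:
            continue
        q = candidates[:, j].copy()
        for qk in Q:                       # Gram-Schmidt against chosen basis
            q -= (qk @ q) * qk
        nq = np.linalg.norm(q)
        if nq < 1e-8:                      # numerically dependent candidate
            continue
        q /= nq
        gain = (q @ residual) ** 2         # error reduction if q is added
        if gain > best_gain:
            best, best_gain, best_q = j, gain, q
    selected.append(best)
    Q.append(best_q)
    residual -= (best_q @ residual) * best_q

mse = np.mean(residual ** 2)               # training MSE of the sparse model
```

Because each new regressor is orthogonalized against the chosen basis, its contribution to error reduction can be scored independently, which is what makes the greedy forward procedure cheap.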
Surrogate modeling approximation using a mixture of experts based on EM joint estimation
An automatic method to combine several local surrogate models is presented. The method is intended to build accurate and smooth approximations of discontinuous functions for use in structural optimization problems. It relies strongly on the Expectation-Maximization (EM) algorithm for Gaussian mixture models (GMM). For regression, the inputs are clustered together with their output values through parameter estimation of the joint distribution. A local expert (linear, quadratic, artificial neural network, or moving least squares) is then built on each cluster. Finally, the local experts are combined using the Gaussian mixture model parameters found by the EM algorithm to obtain a global model. The method is tested on both mathematical test cases and an engineering optimization problem from aeronautics and is found to improve the accuracy of the approximation.
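The pipeline above can be sketched end to end on a toy problem. This is a hedged, minimal version: EM clusters the joint (input, output) samples, one weighted linear expert is fitted per cluster (the paper also allows quadratic, neural-network, or moving-least-squares experts), and the experts are blended with a gate derived from the mixture's marginal in the input. The 1-D discontinuous function, the component count, and the seed are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Discontinuous toy function: two linear regimes with a jump at x = 0
x = rng.uniform(-1, 1, 400)
y = np.where(x < 0, 1.0 + 0.5 * x, -1.0 + 2.0 * x) + 0.05 * rng.normal(size=400)
Z = np.column_stack([x, y])                # EM clusters the JOINT (input, output)

# --- EM for a 2-component Gaussian mixture on the joint distribution ---
K = 2
mu = Z[rng.choice(len(Z), K, replace=False)]
cov = np.array([np.cov(Z.T) for _ in range(K)])
pi = np.full(K, 1 / K)

def gauss(Z, m, C):
    d = Z - m
    inv = np.linalg.inv(C)
    return np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, inv, d)) / \
        (2 * np.pi * np.sqrt(np.linalg.det(C)))

for _ in range(50):
    # E-step: responsibility of each component for each joint sample
    R = np.column_stack([pi[k] * gauss(Z, mu[k], cov[k]) for k in range(K)])
    R /= R.sum(1, keepdims=True)
    # M-step: update weights, means, covariances
    Nk = R.sum(0)
    pi = Nk / len(Z)
    mu = (R.T @ Z) / Nk[:, None]
    for k in range(K):
        d = Z - mu[k]
        cov[k] = (R[:, k, None] * d).T @ d / Nk[k] + 1e-6 * np.eye(2)

# --- one local linear expert per cluster (responsibility-weighted fit) ---
A = np.column_stack([x, np.ones_like(x)])
experts = []
for k in range(K):
    w = np.sqrt(R[:, k])
    experts.append(np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0])

# --- gate on the mixture's marginal in x, then blend the experts ---
def predict(xq):
    Aq = np.column_stack([xq, np.ones_like(xq)])
    g = np.column_stack([pi[k] * np.exp(-(xq - mu[k, 0]) ** 2 / (2 * cov[k, 0, 0]))
                         / np.sqrt(cov[k, 0, 0]) for k in range(K)])
    g /= g.sum(1, keepdims=True)           # soft gating weights
    return (g * (Aq @ np.array(experts).T)).sum(1)

mse = np.mean((predict(x) - y) ** 2)
```

Clustering the joint distribution, rather than the inputs alone, is what lets the mixture separate the two regimes on either side of the discontinuity even though they overlap in input space.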
A recurrent neural network for classification of unevenly sampled variable stars
Astronomical surveys of celestial sources produce streams of noisy time
series measuring flux versus time ("light curves"). Unlike in many other
physical domains, however, large (and source-specific) temporal gaps in data
arise naturally due to intranight cadence choices as well as diurnal and
seasonal constraints. With nightly observations of millions of variable stars
and transients from upcoming surveys, efficient and accurate discovery and
classification techniques on noisy, irregularly sampled data must be employed
with minimal human-in-the-loop involvement. Machine learning for inference
tasks on such data traditionally requires the laborious hand-coding of
domain-specific numerical summaries of raw data ("features"). Here we present a
novel unsupervised autoencoding recurrent neural network (RNN) that makes
explicit use of sampling times and known heteroskedastic noise properties. When
trained on optical variable star catalogs, this network produces supervised
classification models that rival other best-in-class approaches. We find that
autoencoded features learned on one time-domain survey perform nearly as well
when applied to another survey. These networks can continue to learn from new
unlabeled observations and may be used in other unsupervised tasks such as
forecasting and anomaly detection.
Comment: 23 pages, 14 figures. The published version is at Nature Astronomy
(https://www.nature.com/articles/s41550-017-0321-z). Source code for models,
experiments, and figures at
https://github.com/bnaul/IrregularTimeSeriesAutoencoderPaper (Zenodo Code
DOI: 10.5281/zenodo.1045560).
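The architecture described above can be sketched, in heavily simplified form, as a single forward pass: an encoder RNN consumes (Δt, flux, σ) triples, so irregular sampling times and heteroskedastic errors enter the network explicitly, and a decoder conditioned on the resulting embedding and on the target times reconstructs the flux. The sketch below uses random, untrained weights and plain tanh cells rather than the paper's gated recurrent units; all sizes and the σ-weighted loss form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Irregularly sampled light curve: times t, flux m, per-point noise sigma
t = np.sort(rng.uniform(0, 10, 30))
m = np.sin(t) + rng.normal(0, 0.1, 30)
sigma = np.full_like(t, 0.1)

H = 16  # hidden-state size (illustrative)

def rnn(inputs, Wx, Wh, b):
    """Plain tanh RNN; returns the full hidden-state sequence."""
    h = np.zeros(Wh.shape[0])
    out = []
    for u in inputs:
        h = np.tanh(Wx @ u + Wh @ h + b)
        out.append(h)
    return np.array(out)

# Encoder consumes (delta_t, flux, sigma) triples, so sampling cadence and
# heteroskedastic noise are explicit inputs rather than assumed uniform.
dt = np.diff(t, prepend=t[0])
enc_in = np.column_stack([dt, m, sigma])
Wx_e, Wh_e, b_e = rng.normal(0, 0.3, (H, 3)), rng.normal(0, 0.3, (H, H)), np.zeros(H)
embedding = rnn(enc_in, Wx_e, Wh_e, b_e)[-1]   # fixed-length learned feature vector

# Decoder is conditioned on the embedding and each target time step, and
# emits one reconstructed flux value per observation time.
dec_in = np.column_stack([dt, np.tile(embedding, (len(t), 1))])
Wx_d, Wh_d, b_d = rng.normal(0, 0.3, (H, 1 + H)), rng.normal(0, 0.3, (H, H)), np.zeros(H)
Wo = rng.normal(0, 0.3, H)
recon = rnn(dec_in, Wx_d, Wh_d, b_d) @ Wo

# Training would minimize the sigma-weighted reconstruction error, so
# noisier points contribute less to the objective.
loss = np.mean(((recon - m) / sigma) ** 2)
```

The fixed-length `embedding` is the unsupervised feature vector that the abstract reports feeding into downstream supervised classifiers, which is also what makes the features transferable between surveys.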