
    Whole-brain substitute CT generation using Markov random field mixture models

    Computed tomography (CT) equivalent information is needed for attenuation correction in PET imaging and for dose planning in radiotherapy. Prior work has shown that Gaussian mixture models can be used to generate a substitute CT (s-CT) image from a specific set of MRI modalities. This work introduces a more flexible class of mixture models for s-CT generation that incorporates spatial dependency in the data through a Markov random field prior on the latent field of class memberships associated with the mixture model. Furthermore, the mixture distributions are extended from Gaussian to normal inverse Gaussian (NIG), allowing heavier tails and skewness. The amount of data needed to train a model for s-CT generation is of the order of 100 million voxels, so the computational efficiency of the parameter estimation and prediction methods is paramount, especially when spatial dependency is included in the models. A stochastic Expectation Maximization (EM) gradient algorithm is proposed to tackle this challenge. The advantages of the spatial model and the NIG distributions are evaluated in a cross-validation study based on data from 14 patients. The study shows that the proposed model improves the predictive quality of the s-CT images, reducing the mean absolute error by 17.9%. The distribution of CT values conditioned on the MR images is also better explained by the proposed model, as evaluated using continuous ranked probability scores.
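    As context for the mixture-model approach, the sketch below shows only the plain Gaussian-mixture baseline for s-CT prediction (no Markov random field prior and no NIG components, which are the paper's contributions): a joint mixture over (MR, CT) voxel intensities is fitted and the substitute CT is predicted as the conditional mean of CT given the MR channels. All function names and settings here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a Gaussian-mixture baseline for substitute-CT prediction.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_mixture(mr, ct, n_components=10):
    """Fit a joint mixture over (MR, CT) voxel intensities.
    mr: (n_voxels, n_mr_channels), ct: (n_voxels,)."""
    X = np.column_stack([mr, ct])
    return GaussianMixture(n_components=n_components,
                           covariance_type="full").fit(X)

def predict_sct(gmm, mr):
    """Predict the substitute CT as the conditional mean E[CT | MR]."""
    d = mr.shape[1]                                  # MR channels first, CT last
    n, K = mr.shape[0], gmm.n_components
    cond_means = np.zeros((n, K))
    resp = np.zeros((n, K))
    for k in range(K):
        mu_mr, mu_ct = gmm.means_[k, :d], gmm.means_[k, d]
        S = gmm.covariances_[k]
        S_mm, S_cm = S[:d, :d], S[d, :d]
        beta = np.linalg.solve(S_mm, S_cm)           # CT-on-MR regression within class k
        cond_means[:, k] = mu_ct + (mr - mu_mr) @ beta
        resp[:, k] = gmm.weights_[k] * multivariate_normal.pdf(mr, mu_mr, S_mm)
    resp /= resp.sum(axis=1, keepdims=True)          # class responsibilities given MR only
    return (resp * cond_means).sum(axis=1)
```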

    On assessing accuracy of numerical solutions of porous media models

    Error estimation techniques [1, 2] are widely used for solution verification in numerical simulations. One can employ either a posteriori error estimation, based on known quantities in the computed solution, or a priori error estimation, which controls the error through parameters of the unknown quantities. This study proposes a posteriori criteria for finite element approximations of problems in flow through porous media. We introduce a technique to verify numerical solutions of the Darcy and Darcy–Brinkman models and their modifications. We show that, among all kinematically admissible vector fields satisfying the given boundary conditions, the Darcy and Darcy–Brinkman velocities have minimum total dissipation. The proposed dissipation therefore serves as a parameter for accuracy assessment and grid convergence studies. This solution verification technique is not only a mathematical device, such as error measurement in the energy or other standard norms, but also has a firm physical basis. Moreover, the assessment is applicable to adaptive (h-version) finite element approximations and to domains containing nonsmooth or polluted solutions [3]. To support the theory, the proposed dissipation is used to verify the numerical solution of several problems in flow through porous media.
    REFERENCES
    [1] Ainsworth, M., Oden, J.T. A posteriori error estimation in finite element analysis. Computer Methods in Applied Mechanics and Engineering. 1997, 142:1–88.
    [2] Roache, P.J. Verification and Validation in Computational Science and Engineering. Hermosa Publishers, New Mexico; 1998.
    [3] Babuška, I., Oh, H.S. Pollution problem of the p- and hp-versions of the finite element method. Communications in Applied Numerical Methods. 1987, 3:553–561.
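    To make the dissipation-based check concrete, here is a minimal numerical sketch (not the paper's finite element formulation): the total dissipation, the integral of (mu/k)*|v|^2 over the domain, of a Darcy-type velocity field is approximated by midpoint quadrature on successively finer grids and its convergence toward a limit is monitored. The test velocity field and the viscosity and permeability values are illustrative assumptions.

```python
# Grid-convergence check based on total dissipation of a Darcy-type velocity field.
import numpy as np

def total_dissipation(v, weights, mu=1.0e-3, k=1.0e-12):
    """Approximate D = integral of (mu/k)*|v|^2 with quadrature.
    v: (n_points, dim) velocities at quadrature points, weights: (n_points,)."""
    return (mu / k) * np.sum(weights * np.sum(v**2, axis=1))

def dissipation_on_grid(n):
    """Dissipation of a smooth test velocity on an n x n cell grid of the
    unit square, using midpoint quadrature."""
    h = 1.0 / n
    xc = (np.arange(n) + 0.5) * h
    X, Y = np.meshgrid(xc, xc, indexing="ij")
    # illustrative divergence-free test field
    v = np.stack([np.sin(np.pi * X) * np.cos(np.pi * Y),
                  -np.cos(np.pi * X) * np.sin(np.pi * Y)], axis=-1).reshape(-1, 2)
    weights = np.full(v.shape[0], h * h)
    return total_dissipation(v, weights)

# The computed dissipation should approach a limit as the mesh is refined.
for n in (8, 16, 32, 64):
    print(n, dissipation_on_grid(n))
```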

    When to Impute? Imputation before and during cross-validation

    Cross-validation (CV) is a technique used to estimate generalization error for prediction models. For pipeline modeling algorithms (i.e. modeling procedures with multiple steps), it has been recommended that the entire sequence of steps be carried out during each replicate of CV to mimic the application of the entire pipeline to an external testing set. While theoretically sound, following this recommendation can lead to high computational costs when a pipeline modeling algorithm includes computationally expensive operations, e.g. imputation of missing values. There is a general belief that unsupervised variable selection (i.e. ignoring the outcome) can be applied before conducting CV without incurring bias, but there is less consensus for unsupervised imputation of missing values. We empirically assessed whether conducting unsupervised imputation prior to CV would result in biased estimates of generalization error or in poorly selected tuning parameters, and would thus degrade the external performance of downstream models. Results show that, despite an optimistic bias, the reduced variance of imputation before CV, compared to imputation during each replicate of CV, leads to a lower overall root mean squared error when estimating the true external R-squared. Moreover, the performance of models tuned using CV with imputation before versus during each replication is minimally different. In conclusion, unsupervised imputation before CV appears valid in certain settings and may be a helpful strategy that enables analysts to use more flexible imputation techniques without incurring high computational costs.
    Comment: 11 pages (main text, not including references), 6 tables, and 4 figures. Code to replicate the manuscript is available at https://github.com/bcjaeger/Imputation-and-C
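    The two strategies compared in the abstract can be sketched with scikit-learn as follows; this is an illustrative setup (synthetic data, mean imputation, ridge regression), not the study's actual pipeline.

```python
# Imputation before CV vs. imputation inside each CV replicate.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.impute import SimpleImputer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=300, n_features=20, noise=10.0, random_state=0)
X[rng.random(X.shape) < 0.1] = np.nan          # introduce 10% missing values

# Imputation *during* CV: the imputer is refit on each training fold.
pipe = make_pipeline(SimpleImputer(strategy="mean"), Ridge())
r2_during = cross_val_score(pipe, X, y, cv=5, scoring="r2")

# Imputation *before* CV: one unsupervised imputation on the full design matrix.
X_imputed = SimpleImputer(strategy="mean").fit_transform(X)
r2_before = cross_val_score(Ridge(), X_imputed, y, cv=5, scoring="r2")

print("during CV:", r2_during.mean(), "before CV:", r2_before.mean())
```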

    Bootstrapping the Out-of-sample Predictions for Efficient and Accurate Cross-Validation

    Cross-Validation (CV), and out-of-sample performance-estimation protocols in general, are often employed both for (a) selecting the optimal combination of algorithms and values of hyper-parameters (called a configuration) for producing the final predictive model, and (b) estimating the predictive performance of the final model. However, the cross-validated performance of the best configuration is optimistically biased. We present an efficient bootstrap method that corrects for this bias, called Bootstrap Bias Corrected CV (BBC-CV). The main idea of BBC-CV is to bootstrap the whole process of selecting the best-performing configuration on the out-of-sample predictions of each configuration, without additional training of models. In comparison to the alternatives, namely nested cross-validation and the method by Tibshirani and Tibshirani, BBC-CV is computationally more efficient, has smaller variance and bias, and is applicable to any performance metric (accuracy, AUC, concordance index, mean squared error). Subsequently, we employ the idea of bootstrapping the out-of-sample predictions again to speed up the CV process itself: using a bootstrap-based hypothesis test, we stop training models on new folds for configurations that are statistically significantly inferior. We name this method Bootstrap Corrected with Early Dropping CV (BCED-CV); it is both efficient and provides accurate performance estimates.
    Comment: Added acknowledgment
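    The bias-correction step can be sketched in a few lines of NumPy. This is one plausible reading of the abstract (bootstrap the pooled out-of-sample predictions, select the best configuration on each bootstrap sample, and score it on the out-of-bag samples), not the authors' reference implementation, and the loss matrix below is synthetic.

```python
# Bootstrap bias correction of the cross-validated performance estimate.
import numpy as np

def bbc_cv_estimate(oos_loss, n_boot=1000, rng=None):
    """oos_loss[i, j]: pooled out-of-sample loss of configuration j on sample i."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = oos_loss.shape[0]
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)              # bootstrap sample of rows
        oob = np.setdiff1d(np.arange(n), idx)         # out-of-bag rows
        if oob.size == 0:
            continue
        best = oos_loss[idx].mean(axis=0).argmin()    # select config on the bootstrap
        estimates.append(oos_loss[oob, best].mean())  # score it out-of-bag
    return float(np.mean(estimates))

# Example: 200 samples, 30 configurations, synthetic losses.
rng = np.random.default_rng(1)
oos_loss = rng.normal(loc=1.0, scale=0.3, size=(200, 30))
naive = oos_loss.mean(axis=0).min()                   # optimistically biased
print("naive:", naive, "BBC-CV:", bbc_cv_estimate(oos_loss, rng=rng))
```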

    Simplification of multibody models by parameter reduction

    Model selection methods are used in different scientific contexts to represent a characteristic data set in terms of a reduced number of parameters. These methods have apparently not found their way into the literature on multibody system dynamics. Multibody models can be considered parametric models in terms of their dynamic parameters, and model selection techniques can then be used to express these models with a reduced number of parameters. Such parameter-reduced models are expected to have smaller computational complexity than the original one while preserving the desired level of accuracy; they are also known to be good candidates for parameter estimation. In this work, simulations of the actual model are used to define a data set that is representative of the system's standard working conditions. A parameter-reduced model is chosen and its parameter values are estimated so that they minimize the prediction error on these data. To that end, model selection heuristics and normalized error measures are proposed. Using this methodology, two multibody systems with very different mobility characteristics are analyzed. Considerable reductions in the number of parameters and in computational cost are obtained without severely compromising the accuracy of the reduced model. As an additional result, a generalization of the base parameter concept to the context of parameter-reduced models is proposed.
    Comment: 24 pages, 14 figures
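    As a generic illustration of this kind of model selection (not the paper's particular heuristics), the sketch below greedily adds dynamic parameters that most reduce a normalized prediction error, assuming the regressor-matrix form tau = Phi(q, dq, ddq) @ p common in dynamics identification; data and dimensions are synthetic.

```python
# Greedy forward selection of dynamic parameters by normalized prediction error.
import numpy as np

def forward_select(Phi, tau, max_params):
    """Pick columns of Phi (dynamic parameters) that best explain tau."""
    selected = []
    for _ in range(max_params):
        remaining = [j for j in range(Phi.shape[1]) if j not in selected]
        errs = []
        for j in remaining:
            cols = selected + [j]
            p, *_ = np.linalg.lstsq(Phi[:, cols], tau, rcond=None)
            errs.append(np.linalg.norm(tau - Phi[:, cols] @ p) / np.linalg.norm(tau))
        selected.append(remaining[int(np.argmin(errs))])
        print(len(selected), "params, normalized error:", min(errs))
    return selected

# Synthetic example: only 5 of 20 parameters actually matter.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(500, 20))
p_true = np.zeros(20)
p_true[:5] = rng.normal(size=5)
tau = Phi @ p_true + 0.01 * rng.normal(size=500)
forward_select(Phi, tau, max_params=5)
```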

    Fast and Accurate Uncertainty Estimation in Chemical Machine Learning

    We present a scheme to obtain an inexpensive and reliable estimate of the uncertainty associated with the predictions of a machine-learning model of atomic and molecular properties. The scheme is based on resampling, with multiple models being generated by sub-sampling the same training data. The accuracy of the uncertainty prediction can be benchmarked by maximum likelihood estimation, which can also be used to correct for correlations between resampled models and to improve the performance of the uncertainty estimation through a cross-validation procedure. In the case of sparse Gaussian process regression models, this resampled estimator can be evaluated at negligible cost. We demonstrate the reliability of these estimates for the prediction of molecular energetics and for the estimation of nuclear chemical shieldings in molecular crystals. Extension to the estimation of uncertainty in energy differences, forces, or other correlated predictions is straightforward. The method can easily be applied to other machine learning schemes and will help make data-driven predictions more reliable, as well as facilitate training-set optimization and active-learning strategies.
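    A minimal sketch of the resampling idea is given below: an ensemble of models trained on sub-samples gives a spread that serves as the uncertainty, and a single scale factor is fitted by maximum likelihood on held-out data. The model (a gradient-boosting regressor rather than sparse Gaussian process regression), the sub-sampling fraction, and the calibration form are illustrative assumptions.

```python
# Resampling ensemble with a maximum-likelihood calibration factor.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor  # stand-in model

def resampled_ensemble(X, y, n_models=8, frac=0.7, rng=None):
    """Train n_models copies on random sub-samples of the training data."""
    if rng is None:
        rng = np.random.default_rng(0)
    models = []
    for _ in range(n_models):
        idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
        models.append(GradientBoostingRegressor().fit(X[idx], y[idx]))
    return models

def predict_with_uncertainty(models, X):
    preds = np.stack([m.predict(X) for m in models])      # (n_models, n_points)
    return preds.mean(axis=0), preds.std(axis=0)

def ml_calibration(mean, sigma, y_val):
    """Scale factor alpha maximizing the Gaussian log-likelihood of the
    validation errors under N(mean, (alpha * sigma)^2)."""
    return np.sqrt(np.mean(((y_val - mean) / sigma) ** 2))

# usage sketch:
# models = resampled_ensemble(X_train, y_train)
# mu, sig = predict_with_uncertainty(models, X_val)
# sig_calibrated = ml_calibration(mu, sig, y_val) * sig
```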

    Bayesian bandwidth estimation for a nonparametric functional regression model with mixed types of regressors and unknown error density

    We investigate the issue of bandwidth estimation in a nonparametric functional regression model with function-valued, continuous real-valued and discrete-valued regressors, under the framework of an unknown error density. Extending the recent work of Shang (2013, Computational Statistics & Data Analysis), we approximate the unknown error density by a kernel density estimator of the residuals, where the regression function is estimated by the functional Nadaraya-Watson estimator that admits mixed types of regressors. We derive a kernel likelihood and posterior density for the bandwidth parameters under the kernel-form error density, and put forward a Bayesian bandwidth estimation approach that can estimate all bandwidths simultaneously. Simulation studies demonstrate the estimation accuracy of the regression function and error density under the proposed Bayesian approach. Using a spectroscopy data set from food quality control, we apply the proposed Bayesian approach to select the optimal bandwidths in a nonparametric functional regression model with mixed types of regressors.
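    A heavily simplified sketch of the kernel-likelihood idea is given below for a single scalar regressor (the paper treats functional and mixed-type regressors): leave-one-out Nadaraya-Watson residuals define a kernel-form error density, whose likelihood is explored by random-walk Metropolis over the regression and error-density bandwidths. The flat prior on the log-bandwidths, the proposal scale, and any data are assumptions.

```python
# Kernel likelihood over (regression bandwidth h, error-density bandwidth b).
import numpy as np

def nw_loo_residuals(x, y, h):
    """Leave-one-out Nadaraya-Watson residuals with a Gaussian kernel."""
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(W, 0.0)
    return y - (W @ y) / W.sum(axis=1)

def log_kernel_likelihood(x, y, h, b):
    e = nw_loo_residuals(x, y, h)
    D = np.exp(-0.5 * ((e[:, None] - e[None, :]) / b) ** 2) / (b * np.sqrt(2 * np.pi))
    np.fill_diagonal(D, 0.0)
    dens = D.sum(axis=1) / (len(e) - 1)        # leave-one-out residual density
    return np.sum(np.log(dens + 1e-300))

def sample_bandwidths(x, y, n_iter=2000, rng=None):
    """Random-walk Metropolis on (log h, log b) with a flat prior."""
    if rng is None:
        rng = np.random.default_rng(0)
    theta = np.array([0.0, 0.0])
    logp = log_kernel_likelihood(x, y, *np.exp(theta))
    draws = []
    for _ in range(n_iter):
        prop = theta + 0.1 * rng.normal(size=2)
        logp_prop = log_kernel_likelihood(x, y, *np.exp(prop))
        if np.log(rng.random()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        draws.append(np.exp(theta))
    return np.array(draws)

# usage sketch:
# rng = np.random.default_rng(1)
# x = rng.uniform(0, 1, 100)
# y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=100)
# draws = sample_bandwidths(x, y)    # posterior draws of (h, b)
```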

    Multiresolution Tensor Learning for Efficient and Interpretable Spatial Analysis

    Efficient and interpretable spatial analysis is crucial in many fields such as geology, sports, and climate science. Large-scale spatial data often contain complex higher-order correlations across features and locations. While tensor latent factor models can describe such higher-order correlations, they are inherently computationally expensive to train. Furthermore, for spatial analysis, these models should not only be predictive but also spatially coherent. However, latent factor models are sensitive to initialization and can yield inexplicable results. We develop a novel Multi-resolution Tensor Learning (MRTL) algorithm for efficiently learning interpretable spatial patterns. MRTL initializes the latent factors from an approximate full-rank tensor model for improved interpretability and progressively learns from coarse to fine resolution for a large computational speedup. We also prove the theoretical convergence and computational complexity of MRTL. When applied to two real-world datasets, MRTL demonstrates a 4-5x speedup compared to training at a fixed resolution while yielding accurate and interpretable models.
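    The coarse-to-fine training idea alone (without the paper's tensor latent-factor model) can be sketched as follows: a spatial weight map is first fitted on average-pooled features at a coarse grid, then upsampled and refined at full resolution. The data, the linear model, and the resolutions here are illustrative assumptions.

```python
# Coarse-to-fine fitting of a spatial weight map.
import numpy as np

def fit_weights(features, y, w, lr=0.1, n_steps=300):
    """Gradient descent on squared error for y ~ sum over grid cells of features * w."""
    F = features.reshape(len(y), -1)
    for _ in range(n_steps):
        resid = F @ w.ravel() - y
        grad = F.T @ resid / len(y)
        w = w - lr * grad.reshape(w.shape)
    return w

def pool(features, factor):
    """Average-pool spatial features (n, H, W) down by `factor`."""
    n, H, W = features.shape
    return features.reshape(n, H // factor, factor, W // factor, factor).mean(axis=(2, 4))

rng = np.random.default_rng(0)
n, H, W = 500, 16, 16
features = rng.normal(size=(n, H, W))
w_true = np.zeros((H, W))
w_true[4:8, 4:8] = 1.0                                      # a coherent spatial pattern
y = features.reshape(n, -1) @ w_true.ravel() + 0.1 * rng.normal(size=n)

# Coarse stage on a 4x4 grid, then upsample the weights to 16x16 and refine.
w_coarse = fit_weights(pool(features, 4), y, np.zeros((4, 4)))
w_init = np.repeat(np.repeat(w_coarse, 4, axis=0), 4, axis=1) / 16.0
w_fine = fit_weights(features, y, w_init)
```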

    A plug-in rule for bandwidth selection in circular density estimation

    A new plug-in rule for bandwidth selection in kernel circular density estimation is introduced. The performance of this proposal is checked through a simulation study considering a variety of circular distributions exhibiting multimodality, peakedness and/or skewness. The behaviour of the plug-in rule is also compared with that of other existing bandwidth selectors. The method is illustrated with two classical datasets on cross-bed layers and animal orientation.
    Comment: 15 pages, 3 figures
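    For reference, the estimator whose bandwidth is being selected can be sketched as a von Mises kernel density estimator; the concentration parameter nu below plays the role of the bandwidth, and the fixed value used here is an arbitrary illustration rather than the proposed plug-in selector. The data are synthetic.

```python
# Von Mises kernel density estimator on the circle.
import numpy as np
from scipy.special import i0   # modified Bessel function of order 0

def vonmises_kde(theta_grid, data, nu):
    """Kernel density estimate on [0, 2*pi) with von Mises kernels of concentration nu."""
    diff = theta_grid[:, None] - data[None, :]
    kernels = np.exp(nu * np.cos(diff)) / (2 * np.pi * i0(nu))
    return kernels.mean(axis=1)

rng = np.random.default_rng(0)
data = np.concatenate([rng.vonmises(0.0, 4.0, 150),        # bimodal circular sample
                       rng.vonmises(np.pi, 6.0, 100)]) % (2 * np.pi)
grid = np.linspace(0, 2 * np.pi, 360, endpoint=False)
density = vonmises_kde(grid, data, nu=10.0)                 # nu chosen by hand here
```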

    Stochastic Volatility Models using Hamiltonian Monte Carlo Methods and Stan

    This paper presents a study of the Bayesian approach to stochastic volatility models for modeling financial time series, using Hamiltonian Monte Carlo (HMC) methods. We propose the use of distributions other than the Gaussian for the errors in the observation equation of stochastic volatility models, to address problems such as heavy tails and asymmetry in the returns. Moreover, we use the recently developed information criteria WAIC and LOO, which approximate the cross-validation methodology, to perform model selection. Throughout this work, we study the quality of the HMC methods through examples, simulation studies and applications to real data sets.
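    A minimal sketch of a basic stochastic volatility model is shown below, written with PyMC (as a Python stand-in for Stan) together with the WAIC and LOO criteria mentioned in the abstract. The priors, placeholder data, and sampler settings are assumptions, and alternative error distributions (e.g. Student-t) would replace the Normal likelihood to address heavy tails.

```python
# Basic stochastic volatility model: latent log-volatility as a random walk.
import numpy as np
import pymc as pm
import arviz as az

returns = np.random.default_rng(0).normal(scale=0.01, size=400)   # placeholder data

with pm.Model() as sv_model:
    step_sigma = pm.Exponential("step_sigma", 10.0)
    # latent log-volatility follows a Gaussian random walk
    h = pm.GaussianRandomWalk("h", sigma=step_sigma, shape=len(returns))
    pm.Normal("obs", mu=0.0, sigma=pm.math.exp(h / 2), observed=returns)
    idata = pm.sample(1000, tune=1000, target_accept=0.9,
                      idata_kwargs={"log_likelihood": True})

# Model comparison criteria mentioned in the abstract:
print(az.waic(idata))
print(az.loo(idata))
```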