
    A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images

    Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of 'partial' imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion-compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated.
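
    As a rough illustration of the conventional two-step pipeline that this framework generalizes, the sketch below fits a simple linear correspondence model to motion fields that are assumed to have already been estimated by image registration. The function names, array shapes and the linear model form are illustrative assumptions, not the paper's unified optimization.

        # Minimal sketch of the conventional two-step pipeline (step 2 only):
        # registration is assumed to have produced per-voxel motion vectors, and a
        # linear correspondence model relating them to the surrogate signals is fit.
        import numpy as np

        def fit_correspondence_model(motion, surrogates):
            """motion:     (T, V, 3) displacement per time point, voxel, axis (from registration)
               surrogates: (T, S)    surrogate signal values per time point (e.g. skin-surface motion)
               returns     (S+1, V, 3) linear model coefficients, including a constant term."""
            T = surrogates.shape[0]
            design = np.hstack([surrogates, np.ones((T, 1))])   # add intercept column
            coeffs, *_ = np.linalg.lstsq(design, motion.reshape(T, -1), rcond=None)
            return coeffs.reshape(design.shape[1], *motion.shape[1:])

        def predict_motion(coeffs, surrogate_sample):
            """Estimate the motion field for a newly measured surrogate value."""
            design = np.append(surrogate_sample, 1.0)
            return np.tensordot(design, coeffs, axes=1)         # (V, 3) displacement field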

    Impact of incomplete ventricular coverage on diagnostic performance of myocardial perfusion imaging.

    In the context of myocardial perfusion imaging (MPI) with cardiac magnetic resonance (CMR), there is ongoing debate on the merits of using technically complex acquisition methods to achieve whole-heart spatial coverage, rather than conventional 3-slice acquisition. An adequately powered comparative study is difficult to achieve given the requirement for two separate stress CMR studies in each patient. The aim of this work is to draw relevant conclusions from SPECT MPI by comparing whole-heart versus simulated 3-slice coverage in a large existing dataset. SPECT data from 651 patients with suspected coronary artery disease who underwent invasive angiography were analyzed. A computational approach was designed to model 3-slice MPI by retrospective subsampling of whole-heart data. For both whole-heart and 3-slice approaches, the diagnostic performance and the stress total perfusion deficit (TPD) score, a measure of ischemia extent/severity, were quantified and compared. Diagnostic accuracy for the 3-slice and whole-heart approaches was similar (area under the curve: 0.843 vs. 0.855, respectively; P = 0.07). The majority (54%) of cases missed by 3-slice imaging had primarily apical ischemia. Whole-heart and 3-slice TPD scores were strongly correlated (R² = 0.93, P < 0.001), but 3-slice TPD showed a small yet significant bias compared to whole-heart TPD (-1.19%; P < 0.0001) and the 95% limits of agreement were relatively wide (-6.65% to 4.27%). Incomplete ventricular coverage as typically acquired in 3-slice CMR MPI does not significantly affect diagnostic accuracy. However, 3-slice MPI may fail to detect severe apical ischemia and may underestimate the extent/severity of perfusion defects. Our results suggest that caution is required when comparing ischemic burden between 3-slice and whole-heart datasets, and corroborate the need to establish prognostic thresholds specific to each approach.
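
    The bias and 95% limits of agreement quoted above correspond to a standard Bland-Altman style calculation over per-patient TPD scores; the short sketch below shows one such computation, with variable names that are assumptions rather than the study's actual code.

        # Illustrative agreement analysis between 3-slice and whole-heart TPD scores.
        import numpy as np

        def bland_altman(tpd_3slice, tpd_whole_heart):
            """Both arguments are 1-D arrays of per-patient stress TPD scores (%)."""
            diff = np.asarray(tpd_3slice, dtype=float) - np.asarray(tpd_whole_heart, dtype=float)
            bias = diff.mean()                              # mean difference (bias)
            sd = diff.std(ddof=1)
            loa = (bias - 1.96 * sd, bias + 1.96 * sd)      # 95% limits of agreement
            return bias, loa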

    Metamodel-based importance sampling for structural reliability analysis

    Structural reliability methods aim at computing the probability of failure of systems with respect to some prescribed performance functions. In modern engineering such functions usually resort to running an expensive-to-evaluate computational model (e.g. a finite element model). In this respect, simulation methods, which may require $10^{3-6}$ runs, cannot be used directly. Surrogate models such as quadratic response surfaces, polynomial chaos expansions or kriging (which are built from a limited number of runs of the original model) are then introduced as substitutes for the original model to cope with the computational cost. In practice, however, it is almost impossible to quantify the error made by this substitution. In this paper we propose to use a kriging surrogate of the performance function as a means to build a quasi-optimal importance sampling density. The probability of failure is eventually obtained as the product of an augmented probability, computed by substituting the meta-model for the original performance function, and a correction term which ensures that there is no bias in the estimation even if the meta-model is not fully accurate. The approach is applied to analytical and finite element reliability problems and proves efficient up to 100 random variables.
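
    One way to write the factorization described in the last sentences is sketched below, where $g$ is the performance function, $f_X$ the input density, $\Phi$ the standard normal CDF, and $\mu_{\hat g}$, $\sigma_{\hat g}$ the kriging mean and standard deviation. The Gaussian-classification form chosen for $\pi$ is an assumption based on common kriging practice, not a quotation from the paper; the factorization itself is an algebraic identity consistent with the abstract's description.

        \[
          \pi(x) \;=\; \Phi\!\left(\frac{-\,\mu_{\hat g}(x)}{\sigma_{\hat g}(x)}\right),
          \qquad
          P_{f\varepsilon} \;=\; \int \pi(x)\, f_X(x)\,\mathrm{d}x ,
          \qquad
          h(x) \;=\; \frac{\pi(x)\, f_X(x)}{P_{f\varepsilon}} ,
        \]
        \[
          P_f \;=\; \int \mathbf 1_{\{g(x)\le 0\}}\, f_X(x)\,\mathrm{d}x
              \;=\; P_{f\varepsilon}\;\cdot\;
                    \mathbb{E}_{h}\!\left[\frac{\mathbf 1_{\{g(X)\le 0\}}}{\pi(X)}\right].
        \]

    Here $P_{f\varepsilon}$ plays the role of the augmented probability computed from the meta-model alone, $h$ is the quasi-optimal importance sampling density, and the expectation is the correction term that removes the bias introduced by the surrogate.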

    Color fusion of magnetic resonance images improves intracranial volume measurement in studies of aging

    Background: Comparison of intracranial volume (ICV) measurements in different subpopulations offers insight into age-related atrophic change and pathological loss of neuronal tissue. For such comparisons to be meaningful, the accuracy of ICV measurement is paramount. Color magnetic resonance images (MRI) have been utilised in several research applications and are reported to show promise in the clinical arena. Methods: We selected a sample of 150 older community-dwelling individuals (age 71 to 72 years) representing a wide range of ICV, white matter lesions and atrophy. We compared the extraction of ICV by thresholding on T2-weighted MR images followed by manual editing (reference standard), done by an analyst trained in brain anatomy, with three further approaches, each done by two image analysts: thresholding plus computational morphological operations followed by manual editing within the framework of a color fusion technique (MCMxxxVI), and two widely used automatic brain segmentation methods. Results: The range of ICV was 1074 to 1921 cm³ for the reference standard. The mean difference between the reference standard and the ICV measured using the technique that involved color fusion was 2.7%, while it was 5.4% compared with either fully automatic technique. However, the 95% confidence interval of the difference between the reference standard and each method was similar: 7% for the segmentation aided by color fusion, and 7% and 8.3% for the two fully automatic methods tested. Conclusion: For studies of aging, the use of color fusion MRI for ICV segmentation in a semi-automatic framework delivered the best results compared with the reference standard manual method. Fully automated methods, while fast, all require manual editing to avoid significant errors, and for this post-processing step color fusion MRI is recommended.
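
    The agreement figures quoted above (mean percentage difference from the reference standard and the 95% interval of the differences) can be computed along the lines of the sketch below. Whether the study reports signed or absolute mean differences is not stated in the abstract, so both are returned; function and variable names are illustrative assumptions.

        # Hedged sketch of per-method ICV agreement metrics against the reference standard.
        import numpy as np

        def icv_agreement(icv_method, icv_reference):
            """Per-subject intracranial volumes (cm^3) for one method and the reference standard."""
            method = np.asarray(icv_method, dtype=float)
            reference = np.asarray(icv_reference, dtype=float)
            pct_diff = 100.0 * (method - reference) / reference
            return {
                "mean_pct_diff": pct_diff.mean(),              # signed mean difference
                "mean_abs_pct_diff": np.abs(pct_diff).mean(),  # absolute mean difference
                "ci95_half_width": 1.96 * pct_diff.std(ddof=1),
            }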

    Graphics Processing Units and High-Dimensional Optimization

    This paper discusses the potential of graphics processing units (GPUs) in high-dimensional optimization problems. A single GPU card with hundreds of arithmetic cores can be inserted in a personal computer and dramatically accelerates many statistical algorithms. To exploit these devices fully, optimization algorithms should reduce to multiple parallel tasks, each accessing a limited amount of data. These criteria favor EM and MM algorithms that separate parameters and data. To a lesser extent, block relaxation and coordinate descent and ascent also qualify. We demonstrate the utility of GPUs in nonnegative matrix factorization, PET image reconstruction, and multidimensional scaling. Speedups of 100-fold can easily be attained. Over the next decade, GPUs will fundamentally alter the landscape of computational statistics. It is time for more statisticians to get on board.
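
    The MM structure that makes such algorithms GPU-friendly is easy to see in nonnegative matrix factorization: the multiplicative updates below consist entirely of matrix products and elementwise operations, so the same lines run unchanged with GPU array libraries (e.g. CuPy or JAX arrays in place of NumPy). This is an illustrative sketch, not the paper's implementation.

        # Multiplicative-update NMF (an MM algorithm): every step is a matrix product
        # or an elementwise operation, which maps directly onto GPU arithmetic cores.
        import numpy as np

        def nmf(X, rank, iters=200, eps=1e-10):
            """Factorize a nonnegative matrix X (m x n) as W (m x rank) @ H (rank x n)."""
            rng = np.random.default_rng(0)
            W = rng.random((X.shape[0], rank))
            H = rng.random((rank, X.shape[1]))
            for _ in range(iters):
                H *= (W.T @ X) / (W.T @ W @ H + eps)   # MM update for H
                W *= (X @ H.T) / (W @ H @ H.T + eps)   # MM update for W
            return W, H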

    Efficient Computation of Expected Hypervolume Improvement Using Box Decomposition Algorithms

    In the field of multi-objective optimization algorithms, multi-objective Bayesian Global Optimization (MOBGO) is an important branch, in addition to evolutionary multi-objective optimization algorithms (EMOAs). MOBGO utilizes Gaussian Process models learned from previous objective function evaluations to decide the next evaluation site by maximizing or minimizing an infill criterion. A common criterion in MOBGO is the Expected Hypervolume Improvement (EHVI), which shows good performance on a wide range of problems with respect to exploration and exploitation. However, so far it has been a challenge to calculate exact EHVI values efficiently. In this paper, an efficient algorithm for the computation of the exact EHVI for a generic case is proposed. This efficient algorithm is based on partitioning the integration volume into a set of axis-parallel slices. Theoretically, the upper-bound time complexities are improved from the previous $O(n^2)$ and $O(n^3)$, for two- and three-objective problems respectively, to $\Theta(n \log n)$, which is asymptotically optimal. This article generalizes the scheme to the higher-dimensional case by utilizing a new hyperbox decomposition technique, which was proposed by Dächert et al., EJOR, 2017. It also utilizes a generalization of the multilayered integration scheme that scales linearly in the number of hyperboxes of the decomposition. The speed comparison shows that the proposed algorithm significantly reduces computation time. Finally, this decomposition technique is applied in the calculation of the Probability of Improvement (PoI).
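
    To make the quantity concrete, the sketch below estimates EHVI for a two-objective minimization problem by naive Monte Carlo over the Gaussian predictive distribution of a candidate point. It is only a baseline for intuition and does not implement the exact box-decomposition algorithm proposed in the paper; names and defaults are assumptions.

        # Naive Monte Carlo EHVI for two objectives (minimization).
        import numpy as np

        def hypervolume_2d(front, ref):
            """Dominated hypervolume of a list of 2-objective points w.r.t. reference point ref."""
            pts = sorted((p for p in front if p[0] <= ref[0] and p[1] <= ref[1]), key=lambda p: p[0])
            hv, prev_y = 0.0, ref[1]
            for x, y in pts:
                if y < prev_y:                      # skip dominated points
                    hv += (ref[0] - x) * (prev_y - y)
                    prev_y = y
            return hv

        def ehvi_mc(front, ref, mu, sigma, n_samples=2000, seed=0):
            """EHVI of a candidate whose two objectives are modelled as independent Gaussians."""
            rng = np.random.default_rng(seed)
            base = hypervolume_2d(front, ref)
            samples = rng.normal(mu, sigma, size=(n_samples, 2))
            gains = [hypervolume_2d(list(front) + [tuple(s)], ref) - base for s in samples]
            return float(np.mean(gains))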