35 research outputs found

    Experimental Investigation of Stochastic Parafoil Guidance using a Graphics Processing Unit

    Control of autonomous systems subject to stochastic uncertainty is a challenging task. In guided airdrop applications, random wind disturbances play a crucial role in determining landing accuracy and terrain avoidance. This paper describes a stochastic parafoil guidance system which couples uncertainty propagation with optimal control to protect against wind and parameter uncertainty in the presence of impact area obstacles. The algorithm uses real-time Monte Carlo simulation performed on a graphics processing unit (GPU) to evaluate the robustness of candidate trajectories in terms of delivery accuracy, obstacle avoidance, and other considerations. Building upon prior theoretical developments, this paper explores the performance of the stochastic guidance law compared to standard deterministic guidance schemes, particularly with respect to obstacle avoidance. Flight test results are presented comparing the proposed stochastic guidance algorithm with a standard deterministic one. Through a comprehensive set of simulation results, key implementation aspects of the stochastic algorithm are explored, including tradeoffs between the number of candidate trajectories considered, algorithm runtime, and overall guidance performance. Overall, simulation and flight test results demonstrate that the stochastic guidance scheme provides a more robust approach to obstacle avoidance while largely maintaining delivery accuracy.
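    To make the approach concrete, the following toy sketch (not the authors' implementation; the single-axis dynamics, Gaussian wind model, obstacle interval, and all numbers are invented for illustration) scores each candidate trajectory by Monte Carlo sampling of wind disturbances and picks the most robust one:

```python
import random

def score_candidate(turn_cmd, n_samples=1000, obstacle=(40.0, 60.0), seed=0):
    """Monte Carlo robustness score for one candidate trajectory:
    mean miss distance plus a large penalty for samples that land
    inside the obstacle interval. All dynamics here are toy stand-ins."""
    rng = random.Random(seed)  # common random numbers across candidates
    total = 0.0
    for _ in range(n_samples):
        wind = rng.gauss(0.0, 15.0)        # random wind disturbance (assumed Gaussian)
        x = turn_cmd * 10.0 - wind          # toy landing coordinate; target at 0
        penalty = 1000.0 if obstacle[0] <= x <= obstacle[1] else 0.0
        total += abs(x) + penalty
    return total / n_samples

# pick the candidate trajectory with the best Monte Carlo score
candidates = [-2.0, -1.0, 0.0, 1.0, 2.0]
best = min(candidates, key=score_candidate)
```

    On a GPU, the per-sample loop is what would be parallelized, with each thread propagating one wind realization.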

    GPU-based Fast Cone Beam CT Reconstruction from Undersampled and Noisy Projection Data via Total Variation

    Purpose: Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients, who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. Methods: The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. We developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multi-grid technique is also employed. Results: It is found that 20~40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 sec on a NVIDIA Tesla C1060 GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that our algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mAs/projection. Compared with the currently widely used full-fan head and neck scanning protocol of ~360 projections with 0.4 mAs/projection, it is estimated that an overall 36~72 times dose reduction has been achieved by our fast CBCT reconstruction algorithm. Conclusions: This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computational efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments. Comment: Accepted as a letter in Med. Phys.; brief clarifying comments and updated references. 6 pages and 2 figures.
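    The quoted 36~72 times dose reduction follows directly from the protocol parameters; a quick check of the arithmetic:

```python
# Reference protocol: ~360 projections at 0.4 mAs each.
reference_mAs = 360 * 0.4          # 144 mAs total
# Proposed protocol: 20-40 projections at 0.1 mAs each.
low_mAs_40 = 40 * 0.1              # 4 mAs total
low_mAs_20 = 20 * 0.1              # 2 mAs total
reduction_40 = reference_mAs / low_mAs_40   # lower bound of the claimed range
reduction_20 = reference_mAs / low_mAs_20   # upper bound of the claimed range
```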

    Physically-based interactive schlieren flow visualization

    Understanding fluid flow is a difficult problem and of increasing importance as computational fluid dynamics produces an abundance of simulation data. Experimental flow analysis has employed techniques such as shadowgraph and schlieren imaging for centuries, which allow empirical observation of inhomogeneous flows. Shadowgraphs provide an intuitive way of looking at small changes in flow dynamics through caustic effects, while schlieren cutoffs introduce an intensity gradation for observing large-scale directional changes in the flow. The combination of these shading effects provides an informative global analysis of overall fluid flow. Computational solutions for these methods have proven too complex until recently due to the fundamental physical interaction of light refracting through the flow field. In this paper, we introduce a novel method to simulate the refraction of light to generate synthetic shadowgraphs and schlieren images of time-varying scalar fields derived from computational fluid dynamics (CFD) data. Our method computes physically accurate schlieren and shadowgraph images at interactive rates by utilizing a combination of GPGPU programming, acceleration methods, and data-dependent probabilistic schlieren cutoffs. Results comparing this method to previous schlieren approximations are presented.
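    As a rough sketch of the underlying idea (not the paper's GPU renderer), a schlieren-style intensity can be approximated by projecting the density gradient of a scalar field onto the knife-edge cutoff direction; the field, grid, and cutoff direction below are invented for illustration:

```python
def schlieren_intensity(field, cutoff_dir=(1.0, 0.0)):
    """Return per-pixel intensity proportional to the central-difference
    gradient of a 2-D scalar field, projected onto the knife-edge
    cutoff direction (a crude stand-in for full light refraction)."""
    h, w = len(field), len(field[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = (field[i][j + 1] - field[i][j - 1]) / 2.0  # d(field)/dx
            gy = (field[i + 1][j] - field[i - 1][j]) / 2.0  # d(field)/dy
            out[i][j] = gx * cutoff_dir[0] + gy * cutoff_dir[1]
    return out

# toy density field: a horizontal ramp, i.e. constant gradient in x
field = [[float(j) for j in range(5)] for _ in range(5)]
img = schlieren_intensity(field)
```

    A vertical knife edge (cutoff along x, as above) highlights horizontal density gradients; rotating `cutoff_dir` changes which flow features are emphasized, mirroring how a physical schlieren cutoff is oriented.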

    Fast Monte Carlo Simulations for Quality Assurance in Radiation Therapy

    Monte Carlo (MC) simulation is generally considered to be the most accurate method for dose calculation in radiation therapy. However, it suffers from low simulation efficiency (hours to days) and complex configuration, which impede its application in clinical studies. The recent rise of MRI-guided radiation platforms (e.g. ViewRay's MRIdian system) brings an urgent need for fast MC algorithms, because the introduced strong magnetic field may cause large errors in other algorithms. My dissertation focuses on resolving the conflict between the accuracy and efficiency of MC simulations through 4 different approaches: (1) GPU parallel computation, (2) transport mechanism simplification, (3) variance reduction, (4) DVH constraint. Accordingly, we took several steps to thoroughly study the performance and accuracy impact of these methods. As a result, three Monte Carlo simulation packages named gPENELOPE, gDPMvr and gDVH were developed to strike a subtle balance between performance and accuracy in different application scenarios. For example, the most accurate, gPENELOPE, is usually used as a gold standard for the radiation meter model, while the fastest, gDVH, is usually used for quick in-patient dose calculation, which significantly reduces the calculation time from 5 hours to 1.2 minutes (250 times faster) with only 1% error introduced. In addition, a cross-platform GUI integrating the simulation kernels and 3D visualization was developed to make the toolkit more user-friendly. After the fast MC infrastructure was established, we successfully applied it to four radiotherapy scenarios: (1) validate the vendor-provided Co-60 radiation head model by comparing the dose calculated by gPENELOPE to experimental data; (2) quantitatively study the effect of the magnetic field on the dose distribution and propose a strategy to improve treatment planning efficiency; (3) evaluate the accuracy of the built-in MC algorithm of MRIdian's treatment planning system; (4) perform quick quality assurance (QA) for "online adaptive radiation therapy", which does not permit enough time for experimental QA. Many other time-sensitive applications (e.g. motional dose accumulation) will also benefit greatly from our fast MC infrastructure.
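    For readers unfamiliar with MC dose calculation, the core idea can be sketched in a few lines (this is a toy analog simulation, not any of the dissertation's packages; the attenuation coefficient, slab geometry, and photon count are invented):

```python
import math
import random

def deposit_dose(n_photons=20000, mu=0.2, slab=10.0, nbins=10, seed=1):
    """Analog Monte Carlo: photons enter a 1-D slab, travel an
    exponentially distributed free path (attenuation coefficient mu,
    per unit length), and deposit one unit of energy locally at the
    first interaction site. Photons crossing the slab deposit nothing."""
    rng = random.Random(seed)
    dose = [0.0] * nbins
    width = slab / nbins
    for _ in range(n_photons):
        # inverse-transform sampling of the exponential free path
        path = -math.log(1.0 - rng.random()) / mu
        if path < slab:
            dose[int(path // width)] += 1.0
    return dose

dose = deposit_dose()
```

    Real engines such as PENELOPE track secondary particles, energy loss, and (here) magnetic deflection; GPU parallelization and variance reduction attack exactly the per-photon loop above, which is embarrassingly parallel.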

    GPU-based Iterative Cone Beam CT Reconstruction Using Tight Frame Regularization

    X-ray imaging dose from serial cone-beam CT (CBCT) scans raises a clinical concern in most image guided radiation therapy procedures. It is the goal of this paper to develop a fast GPU-based algorithm to reconstruct high quality CBCT images from undersampled and noisy projection data so as to lower the imaging dose. For this purpose, we have developed an iterative tight frame (TF) based CBCT reconstruction algorithm. A condition that a real CBCT image has a sparse representation under a TF basis is imposed in the iteration process as regularization on the solution. To speed up the computation, a multi-grid method is employed. Our GPU implementation has achieved high computational efficiency, and a CBCT image of resolution 512×512×70 can be reconstructed in ~5 min. We have tested our algorithm on a digital NCAT phantom and a physical Catphan phantom. It is found that our TF-based algorithm is able to reconstruct CBCT images in the context of undersampling and low mAs levels. We have also quantitatively analyzed the reconstructed CBCT image quality in terms of the modulation transfer function and contrast-to-noise ratio under various scanning conditions. The results confirm the high CBCT image quality obtained from our TF algorithm. Moreover, our algorithm has also been validated in a real clinical context using a head-and-neck patient case. Comparisons between the developed TF algorithm and the current state-of-the-art TV algorithm have also been made for the various cases studied, in terms of reconstructed image quality and computational efficiency. Comment: 24 pages, 8 figures, accepted by Phys. Med. Biol.
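    The core regularization step, enforcing sparsity of the image under a basis by shrinking small transform coefficients, can be sketched in 1-D with a single-level Haar transform standing in for the paper's tight frame (the signal and threshold below are invented for illustration):

```python
def haar_1d(x):
    """One level of the orthonormal Haar transform of an even-length list."""
    s = 2 ** -0.5
    avg = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
    det = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
    return avg, det

def inv_haar_1d(avg, det):
    s = 2 ** -0.5
    out = []
    for a, d in zip(avg, det):
        out += [(a + d) * s, (a - d) * s]
    return out

def soft(coeffs, t):
    """Soft-threshold: small coefficients (noise) are zeroed, large ones kept."""
    return [max(abs(c) - t, 0.0) * (1.0 if c >= 0 else -1.0) for c in coeffs]

# piecewise-constant signal with a small noise bump in one sample
x = [1.0, 1.0, 1.0, 1.2, 5.0, 5.0, 5.0, 5.0]
avg, det = haar_1d(x)
den = inv_haar_1d(avg, soft(det, 0.2))   # shrink detail coefficients, invert
```

    In the iterative algorithm this transform/shrink/invert step alternates with a data-fidelity update against the measured projections.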

    Estimating Gaussian Mixture Autoregressive model with Sequential Monte Carlo algorithm: A parallel GPU implementation

    In this paper, we propose using a Bayesian sequential Monte Carlo (SMC) algorithm to estimate the univariate Gaussian mixture autoregressive (GMAR) model. A prominent benefit of the Bayesian approach is that the stationarity restriction required by the GMAR model can be straightforwardly imposed via the prior distribution. In addition, compared to MCMC (Markov Chain Monte Carlo) and other simulation-based algorithms, SMC is robust to multimodal posteriors and capable of providing fast on-line estimation when new data are available. Furthermore, it has linear computational complexity and is ready for parallelism. To demonstrate the SMC, an empirical application with US GDP growth data is considered. After estimation, we conduct Bayesian model selection to evaluate the empirical evidence for different GMAR models. To facilitate the realization of this compute-intensive estimation, we parallelize the SMC algorithm on an NVIDIA CUDA-compatible Graphics Processing Unit (GPU) card.
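    A minimal sketch of the sequential reweighting idea, applied to a plain AR(1) coefficient rather than the full GMAR model, and omitting the rejuvenation moves a production SMC sampler would need (all data and settings are simulated for illustration):

```python
import math
import random

rng = random.Random(7)

# simulate AR(1) data: y_t = phi * y_{t-1} + eps_t, eps_t ~ N(0, 1)
true_phi, y = 0.6, [0.0]
for _ in range(200):
    y.append(true_phi * y[-1] + rng.gauss(0.0, 1.0))

# SMC over the static parameter phi: draw particles from a uniform prior
# on the stationary region, then reweight by each new observation's likelihood
n = 2000
particles = [rng.uniform(-0.99, 0.99) for _ in range(n)]
weights = [1.0 / n] * n
for t in range(1, len(y)):
    # sequential update: log weight += log N(y_t | phi * y_{t-1}, 1)
    logw = [math.log(w + 1e-300) - 0.5 * (y[t] - p * y[t - 1]) ** 2
            for w, p in zip(weights, particles)]
    m = max(logw)                       # stabilize before exponentiating
    w = [math.exp(v - m) for v in logw]
    tot = sum(w)
    weights = [v / tot for v in w]
    # resample when the effective sample size degenerates
    if 1.0 / sum(v * v for v in weights) < n / 2:
        particles = rng.choices(particles, weights=weights, k=n)
        weights = [1.0 / n] * n

phi_hat = sum(p * w for p, w in zip(particles, weights))  # posterior mean
```

    The per-particle weight update is independent across particles, which is why the algorithm maps naturally onto one GPU thread per particle.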

    GPU-based Fast Low-dose Cone Beam CT Reconstruction via Total Variation

    Cone-beam CT (CBCT) has been widely used in image guided radiation therapy (IGRT) to acquire updated volumetric anatomical information before treatment fractions for accurate patient alignment purposes. However, the excessive x-ray imaging dose from serial CBCT scans raises a clinical concern in most IGRT procedures. The excessive imaging dose can be effectively reduced by reducing the number of x-ray projections and/or lowering the mAs level in a CBCT scan. The goal of this work is to develop a fast GPU-based algorithm to reconstruct high quality CBCT images from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. We developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multi-grid technique is also employed. We test our CBCT reconstruction algorithm on a digital NCAT phantom and a head-and-neck patient case. The performance under low mAs is also validated using a physical Catphan phantom and a head-and-neck Rando phantom. It is found that 40 x-ray projections are sufficient to reconstruct CBCT images with satisfactory quality for IGRT patient alignment purposes. Phantom experiments indicated that CBCT images can be successfully reconstructed with our algorithm at a level as low as 0.1 mAs/projection. Compared with the currently widely used full-fan head-and-neck scanning protocol of about 360 projections with 0.4 mAs/projection, it is estimated that an overall 36 times dose reduction has been achieved with our algorithm. Moreover, the reconstruction time is about 130 sec on an NVIDIA Tesla C1060 GPU card, which is estimated to be ~100 times faster than similar iterative reconstruction approaches. Comment: 20 pages, 10 figures; the paper was revised and more test cases were added.
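    The forward-backward splitting structure, a gradient step on the data-fidelity term followed by a proximal step on the regularizer, can be sketched on a tiny least-squares problem; here an l1 prox stands in for the TV prox to keep the example short, and the matrix, data, and parameters are invented:

```python
# minimize 0.5 * ||A x - b||^2 + lam * ||x||_1  by forward-backward splitting

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def fbs(A, b, lam=0.1, tau=0.1, iters=500):
    n = len(A[0])
    x = [0.0] * n
    At = list(zip(*A))                   # columns of A, i.e. rows of A^T
    for _ in range(iters):
        r = [ax - bi for ax, bi in zip(matvec(A, x), b)]        # residual A x - b
        g = [sum(a * v for a, v in zip(col, r)) for col in At]  # gradient A^T r
        x = [xi - tau * gi for xi, gi in zip(x, g)]             # forward (gradient) step
        # backward (proximal) step: soft-thresholding, the prox of tau*lam*||.||_1
        x = [max(abs(v) - tau * lam, 0.0) * (1.0 if v >= 0 else -1.0) for v in x]
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 0.0, 1.0]
x = fbs(A, b)
```

    In the paper's setting, A is the (huge, matrix-free) cone-beam projection operator and the prox is taken with respect to the TV norm; the alternation of a cheap gradient step and a separable prox is what makes the scheme GPU-friendly.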

    GPU-based Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization

    High radiation dose in CT scans increases a lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, TV regularization may lead to over-smoothed images and lost edge information. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy functional consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term is proposed to preferentially perform smoothing only on the non-edge part of the image in order to avoid over-smoothing, which is realized by introducing a penalty weight into the original total variation norm. Our iterative algorithm is implemented on GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in the low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it can preserve more information about fine structures and therefore maintain acceptable spatial resolution. Comment: 21 pages, 6 figures, 2 tables.
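    The idea of weighting the TV norm so that large jumps (edges) are penalized less can be sketched in 1-D; the particular edge-stopping weight below is one common choice and not necessarily the paper's, and the signals are invented:

```python
def edge_preserving_tv(x, delta=0.5):
    """Weighted TV of a 1-D signal: each finite difference is scaled by a
    weight that decays for large jumps, so genuine edges contribute little
    to the penalty while small (noise-like) variation is still smoothed."""
    tv = 0.0
    for a, b in zip(x, x[1:]):
        d = abs(b - a)
        w = 1.0 / (1.0 + (d / delta) ** 2)   # edge-stopping weight (one possible choice)
        tv += w * d
    return tv

flat_noise = [0.0, 0.1, 0.0, 0.1, 0.0]   # small oscillations: should be penalized
sharp_edge = [0.0, 0.0, 0.0, 5.0, 5.0]   # a genuine edge: should be preserved
a = edge_preserving_tv(flat_noise)
b = edge_preserving_tv(sharp_edge)
```

    Under plain TV the sharp edge would dominate the penalty (a difference of 5 versus four differences of 0.1); with the weight, the edge contributes less than the noise, which is exactly the behavior that prevents over-smoothing.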