
    The grid-dose-spreading algorithm for dose distribution calculation in heavy charged particle radiotherapy

    A new variant of the pencil-beam (PB) algorithm for dose distribution calculation in radiotherapy with protons and heavier ions, the grid-dose-spreading (GDS) algorithm, is proposed. The GDS algorithm is intrinsically faster than conventional PB algorithms due to approximations in the convolution integral, where the physical calculations are decoupled from simple grid-to-grid energy transfer. It was readily implemented in a carbon-ion radiotherapy treatment planning system to enable realistic beam blurring in the field, which was absent with the broad-beam (BB) algorithm. For a typical prostate treatment, the slowing factor of the GDS algorithm relative to the BB algorithm was 1.4, a great improvement over conventional PB algorithms, whose typical slowing factor is several tens. The GDS algorithm is mathematically equivalent to the PB algorithm for the horizontal and vertical coplanar beams commonly used in carbon-ion radiotherapy, while for angled beams dose deformation within the size of the pristine spread occurs; this deformation was within 3 mm for a single proton pencil beam at 30° incidence and needs to be assessed against the clinical requirements and tolerances in practical situations.
    Comment: 7 pages, 3 figures
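    The grid-to-grid energy transfer at the heart of this approach can be illustrated with a minimal sketch (the function name, geometry, and Gaussian lateral kernel here are illustrative assumptions, not the authors' implementation): each physically computed beam sample deposits its energy onto nearby dose-grid nodes with a normalized weight, so the physics calculation is decoupled from the spreading step.

    ```python
    import numpy as np

    def spread_to_grid(points, energies, sigma, grid_shape, spacing=1.0):
        """Toy energy-conserving spreading of beam samples onto a 2D dose grid.

        points   : list of (x, y) sample positions (same units as spacing)
        energies : energy carried by each sample
        sigma    : lateral Gaussian spread
        """
        dose = np.zeros(grid_shape)
        radius = int(np.ceil(3 * sigma / spacing))  # truncate kernel at 3 sigma
        for (px, py), e in zip(points, energies):
            ix, iy = int(round(px / spacing)), int(round(py / spacing))
            xs = np.arange(max(ix - radius, 0), min(ix + radius + 1, grid_shape[0]))
            ys = np.arange(max(iy - radius, 0), min(iy + radius + 1, grid_shape[1]))
            XX, YY = np.meshgrid(xs * spacing, ys * spacing, indexing="ij")
            w = np.exp(-((XX - px) ** 2 + (YY - py) ** 2) / (2 * sigma ** 2))
            w /= w.sum()  # normalize so each sample's energy is conserved
            dose[np.ix_(xs, ys)] += e * w
        return dose
    ```

    Because the lateral weights are normalized per sample, the deposited energy sums exactly to the input energy regardless of grid resolution, which is what makes a fast grid-based spreading step safe to decouple from the physics.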

    Kernels for sequentially ordered data

    We present a novel framework for learning with sequential data of any kind, such as multivariate time series, strings, or sequences of graphs. The main result is a "sequentialization" that transforms any kernel on a given domain into a kernel for sequences in that domain. This procedure preserves properties such as positive definiteness, the associated kernel feature map is an ordered variant of sample (cross-)moments, and this sequentialized kernel is consistent in the sense that it converges to a kernel for paths if sequences converge to paths (by discretization). Further, classical kernels for sequences arise as special cases of this method. We use dynamic programming and low-rank techniques for tensors to provide efficient algorithms to compute this sequentialized kernel.
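    As a hedged illustration of the dynamic-programming idea (the function name and the particular level truncation are assumptions for this sketch, not the paper's algorithm in full), a truncated sequence kernel can be computed from increments of the base kernel's Gram matrix, summing over ordered index tuples level by level:

    ```python
    import numpy as np

    def seq_kernel(x, y, base_kernel, levels=2):
        """Level-truncated sequence kernel built from a base kernel.

        Sums, over levels 0..levels, products of Gram-matrix increments
        taken along strictly increasing index pairs in both sequences.
        """
        # Gram matrix of the base kernel between all sequence elements
        G = np.array([[base_kernel(a, b) for b in y] for a in x])
        # Second-order finite differences: the "increments" of G
        D = np.diff(np.diff(G, axis=0), axis=1)
        m, n = D.shape
        R = np.ones((m, n))   # level-0 state
        total = 1.0           # level-0 contribution
        for _ in range(levels):
            R = D * R                     # extend each tuple by one increment
            total += R.sum()              # add this level's contribution
            # prefix sums over strictly smaller indices feed the next level
            S = np.cumsum(np.cumsum(R, axis=0), axis=1)
            R = np.zeros((m, n))
            R[1:, 1:] = S[:-1, :-1]
        return total
    ```

    For scalar sequences with the linear base kernel `a * b`, the level-1 term reduces to the product of total increments `(x[-1] - x[0]) * (y[-1] - y[0])`, which makes the dynamic program easy to sanity-check by hand.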

    BioEM: GPU-accelerated computing of Bayesian inference of electron microscopy images

    In cryo-electron microscopy (EM), molecular structures are determined from large numbers of projection images of individual particles. To harness the full power of this single-molecule information, we use the Bayesian inference of EM (BioEM) formalism. By ranking structural models using posterior probabilities calculated for individual images, BioEM in principle addresses the challenge of working with highly dynamic or heterogeneous systems not easily handled in traditional EM reconstruction. However, the calculation of these posteriors for large numbers of particles and models is computationally demanding. Here we present highly parallelized, GPU-accelerated computer software that performs this task efficiently. Our flexible formulation employs CUDA, OpenMP, and MPI parallelization combined with both CPU and GPU computing. The resulting BioEM software scales nearly ideally both on pure CPU and on CPU+GPU architectures, thus enabling Bayesian analysis of tens of thousands of images in a reasonable time. The general mathematical framework and robust algorithms are not limited to cryo-electron microscopy but can be generalized to electron tomography and other imaging experiments.
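    The posterior-based model ranking can be sketched in a few lines (a deliberately simplified illustration with assumed names and a Gaussian-noise likelihood; the actual BioEM posterior also integrates over orientations, normalization, and other nuisance parameters): each image contributes a per-model log-likelihood, and these accumulate into a log-posterior per model.

    ```python
    import numpy as np

    def rank_models(images, model_projections, noise_sigma=1.0):
        """Score structural models against particle images.

        Accumulates a Gaussian-noise log-likelihood per model over all
        images (flat prior assumed) and returns normalized posteriors.
        """
        log_post = np.zeros(len(model_projections))
        for img in images:
            # log-likelihood of each model for this image (up to a constant)
            ll = np.array([-np.sum((img - proj) ** 2) / (2 * noise_sigma ** 2)
                           for proj in model_projections])
            log_post += ll  # accumulate per-image evidence per model
        # stabilized softmax: convert log-posteriors to probabilities
        p = np.exp(log_post - log_post.max())
        return p / p.sum()
    ```

    The per-image likelihood evaluations are independent across images and models, which is exactly the structure the paper exploits with CUDA/OpenMP/MPI parallelism.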