
    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    In the last 30 years, medical imaging has become one of the most used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from the head to the feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are volume rendering methods, each with associated advantages and disadvantages. Raycasting is widely regarded as the highest-quality renderer of these methods. Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has improved to allow a set of scans capable of capturing anatomical movements like a beating heart. The capturing of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time. While fMRI can be used to capture any anatomical data over time, one of its more common uses is to capture brain activity. The fMRI scanning process is typically broken up into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time. Academic research has advanced volume rendering, and specifically fMRI volume rendering. Unfortunately, academic research is typically a one-off solution to a singular medical case or set of data, causing any advances to be problem-specific as opposed to a general capability. Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding number of different computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones or tablets. This research will investigate the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices will harness the power of the increasing number of mobile computational devices being used by medical professionals. Developing support for immersive virtual reality can enhance medical professionals’ interpretation of 3D physiology with the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field. Developing the same 4D volume rendering capabilities across dissimilar platforms has many challenges. Each platform relies on its own coding languages, libraries, and hardware support. There are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries will generally be more efficient during application run-time, but they require different coding implementations for each platform.
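
    Since raycasting is central to the work described above, the following is a minimal CPU-side sketch of the core idea: marching a ray through the volume and accumulating color with front-to-back alpha compositing. It is an illustrative reconstruction, not the dissertation's GPU code; the sampleVolume and transferFunction stand-ins are invented for the example.

```cpp
#include <array>
#include <cmath>
#include <cstddef>

// Stand-in volume: a soft sphere of density centered in the unit cube.
// In the actual applications this would be a texture fetch from the fMRI volume.
static float sampleVolume(const std::array<float, 3>& p) {
    const float dx = p[0] - 0.5f, dy = p[1] - 0.5f, dz = p[2] - 0.5f;
    const float r = std::sqrt(dx * dx + dy * dy + dz * dz);
    return r < 0.4f ? 1.0f - r / 0.4f : 0.0f;
}

// Stand-in transfer function mapping density to RGBA (color plus per-step opacity).
static std::array<float, 4> transferFunction(float d) {
    return {d, 0.5f * d, 1.0f - d, 0.05f * d};
}

// Front-to-back compositing along one ray: march from the entry point, accumulate
// color and opacity, and stop early once the ray is effectively opaque.
static std::array<float, 4> castRay(std::array<float, 3> pos,
                                    const std::array<float, 3>& step,
                                    std::size_t numSteps) {
    std::array<float, 4> accum{0.0f, 0.0f, 0.0f, 0.0f};   // RGB + accumulated alpha
    for (std::size_t i = 0; i < numSteps && accum[3] < 0.99f; ++i) {
        const std::array<float, 4> s = transferFunction(sampleVolume(pos));
        const float w = (1.0f - accum[3]) * s[3];          // remaining transparency
        accum[0] += w * s[0];
        accum[1] += w * s[1];
        accum[2] += w * s[2];
        accum[3] += w;
        for (int k = 0; k < 3; ++k) pos[k] += step[k];     // advance one step along the ray
    }
    return accum;
}
```
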
    The decision was made to use platform-native languages and libraries in this research, whenever practical, in an attempt to achieve the best possible frame rates. 4D volume raycasting provides unique challenges independent of the platform, specifically fMRI data loading, volume animation, and multi-volume rendering. Additionally, real-time raycasting has never been successfully performed on a mobile device. Previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges will be addressed as the contributions of this research. The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and to store functional imagery with a single high-resolution anatomical scan and a set of low-resolution anatomical scans. Allowing input of the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform. Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes. Visualizing two volumes at the same time is known as multi-volume visualization. Therefore, the ability to correctly align and scale the volumes relative to each other was necessary. It was also necessary to develop a compositing method to combine data from both volumes into a single cohesive representation. Three prototype applications, one each for desktop, mobile, and virtual reality, were built to test the feasibility of 4D volume raycasting. Although the backend implementations were required to be different between the three platforms, the raycasting functionality and features were identical. Therefore, the same fMRI dataset resulted in the same 3D visualization independent of the platform itself. Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to “scrub” through the different time steps of the data. The prototype applications’ data load times and frame rates were tested to determine if they achieved the real-time interaction goal. Real-time interaction was defined as achieving 10 frames per second (fps) or better, based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-graphics-node computer cluster composed of NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7” iPad Pro running iOS 9.3.4. The iPad had a 64-bit Apple A9X dual-core processor with 2 GB of built-in memory. Two different fMRI brain activity datasets with different voxel resolutions were used as test datasets.
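
    To make the NIfTI input step concrete, here is a sketch in the spirit of the lightweight, C++ Standard Library-only loader described above. It reads the fixed 348-byte NIfTI-1 header and pulls out the grid dimensions, datatype, and voxel data offset; the field offsets follow the public NIfTI-1 specification, while error handling and byte-swapping for foreign-endian files are omitted, and this is not the author's actual routine.

```cpp
#include <cstdint>
#include <cstring>
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

// Minimal NIfTI-1 header fields needed to locate and size the voxel data.
struct NiftiInfo {
    int16_t dim[8];      // dim[0] = number of dimensions, dim[1..4] = x, y, z, t
    int16_t datatype;    // NIfTI datatype code (e.g. 4 = int16, 16 = float32)
    int16_t bitpix;      // bits per voxel
    float   voxOffset;   // byte offset of the voxel data within the .nii file
};

// Reads the fixed 348-byte NIfTI-1 header; offsets follow the NIfTI-1 spec.
// Assumes native byte order (a full loader would check and swap if needed).
NiftiInfo readNiftiHeader(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::vector<char> hdr(348);
    if (!in.read(hdr.data(), static_cast<std::streamsize>(hdr.size())))
        throw std::runtime_error("cannot read NIfTI-1 header: " + path);

    int32_t sizeofHdr = 0;
    std::memcpy(&sizeofHdr, hdr.data(), sizeof(sizeofHdr));              // offset 0
    if (sizeofHdr != 348 || std::strncmp(hdr.data() + 344, "n+1", 3) != 0)
        throw std::runtime_error("not a single-file NIfTI-1 dataset: " + path);

    NiftiInfo info{};
    std::memcpy(info.dim,        hdr.data() + 40,  sizeof(info.dim));      // dim[8]
    std::memcpy(&info.datatype,  hdr.data() + 70,  sizeof(info.datatype));
    std::memcpy(&info.bitpix,    hdr.data() + 72,  sizeof(info.bitpix));
    std::memcpy(&info.voxOffset, hdr.data() + 108, sizeof(info.voxOffset));
    return info;
}
```
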
    Datasets were tested using the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently. This is a marked improvement for 3D mobile volume raycasting, which was previously only able to achieve under one frame per second [2]. Both VR and mobile platforms were able to raycast the 4D-only data at real-time frame rates, but did not consistently meet 10 fps when rendering both the 3D structural and 4D functional data simultaneously. However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances will allow consistent real-time raycasting of 4D fMRI data in the near future.

    Time dependent cone-beam CT reconstruction via a motion model optimized with forward iterative projection matching

    The purpose of this work is to present the development and validation of a novel method for reconstructing time-dependent, or 4D, cone-beam CT (4DCBCT) images. 4DCBCT can have a variety of applications in the radiotherapy of moving targets, such as lung tumors, including treatment planning, dose verification, and real-time treatment adaptation. However, in its current incarnation it suffers from poor reconstruction quality and limited temporal resolution that may restrict its efficacy. Our algorithm remedies these issues by deforming a previously acquired high quality reference fan-beam CT (FBCT) to match the projection data in the 4DCBCT data-set, essentially creating a 3D animation of the moving patient anatomy. This approach combines the high image quality of the FBCT with the fine temporal resolution of the raw 4DCBCT projection data-set. Deformation of the reference CT is accomplished via a patient-specific motion model. The motion model is constrained spatially using eigenvectors generated by a principal component analysis (PCA) of patient motion data, and is regularized in time using parametric functions of a patient breathing surrogate recorded simultaneously with 4DCBCT acquisition. The parametric motion model is constrained using forward iterative projection matching (FIPM), a scheme which iteratively alters model parameters until digitally reconstructed radiographs (DRRs) cast through the deforming CT optimally match the projections in the raw 4DCBCT data-set. We term our method FIPM-PCA 4DCBCT. In developing our algorithm we proceed through three stages of development. In the first, we establish the mathematical groundwork for the algorithm and perform proof-of-concept testing on simulated data. In the second, we tune the algorithm for real-world use; specifically, we improve our DRR algorithm to achieve maximal realism by incorporating physical principles of image formation combined with empirical measurements of system properties. In the third stage we test our algorithm on actual patient data and evaluate its performance against gold standard and ground truth data-sets. In this phase we use our method to track the motion of an implanted fiducial marker and observe agreement with our gold standard data that is typically within a millimeter.
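
    In generic notation (not necessarily the authors' exact parameterization), the PCA-constrained motion model and the FIPM fitting objective described above can be summarized as:

```latex
% Generic PCA motion model (notation assumed for this sketch):
% displacement of reference-CT point x at time t as a mean field plus K PCA modes,
% with weights w_k driven by the breathing surrogate s(t).
\[
  u(\mathbf{x}, t) = \bar{u}(\mathbf{x}) + \sum_{k=1}^{K} w_k\bigl(s(t)\bigr)\, e_k(\mathbf{x})
\]
% FIPM fitting: choose model parameters \theta so that DRRs cast through the
% deforming CT best match the measured cone-beam projections P_j.
\[
  \hat{\theta} = \arg\min_{\theta} \sum_{j} \bigl\lVert \mathrm{DRR}_j\!\bigl(\mathrm{CT} \circ u_{\theta}\bigr) - P_j \bigr\rVert^{2}
\]
```

    Here \bar{u} is the mean displacement field, e_k are the PCA eigenvector fields, w_k are parametric functions of the surrogate s(t), and P_j is the measured projection at angle j; the number of modes K and the squared-difference metric are assumptions of this sketch, not necessarily the authors' exact choices.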

    Proof-of-Concept For Converging Beam Small Animal Irradiator

    The Monte Carlo particle simulator TOPAS, the multiphysics solver COMSOL, and several analytical radiation transport methods were employed to perform an in-depth proof-of-concept for a high dose rate, high precision converging beam small animal irradiation platform. In the first aim of this work, a novel carbon nanotube-based compact X-ray tube optimized for high output and high directionality was designed and characterized. In the second aim, an optimization algorithm was developed to customize a collimator geometry for this unique X-ray source to simultaneously maximize the irradiator’s intensity and precision. Then, a full converging beam irradiator apparatus was fit with a multitude of these X-ray tubes in a spherical array and designed to deliver converged dose spots to any location within a small animal model. This aim also included dose leakage calculations for estimation of appropriate external shielding. The result of this research will be the blueprints for a full preclinical radiation platform that pushes the boundaries of dose localization in small animal trials.

    Enhancing Monte Carlo Particle Transport for Modern Many-Core Architectures

    Since near the very beginning of electronic computing, Monte Carlo particle transport has been a fundamental approach for solving computational physics problems. Due to the high computational demands and inherently parallel nature of these applications, Monte Carlo transport applications are often performed in the supercomputing environment. That said, supercomputers are changing, as parallelism within each supercomputer node has dramatically increased, often through the inclusion of many-core devices. Monte Carlo transport, like all applications that run on supercomputers, will be forced to make significant changes to its design in order to utilize these new architectures effectively. This dissertation presents solutions for central challenges that face Monte Carlo particle transport in this changing environment, specifically in the areas of threading models, tracking algorithms, tally data collection, and heterogeneous load balancing. In addition, the dissertation culminates with a study that combines all of the presented techniques in a production application at scale on Lawrence Livermore National Laboratory's RZAnsel supercomputer.
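
    As one concrete illustration of the tally data collection challenge mentioned above, the sketch below uses per-thread tally arrays that are reduced after the batch, rather than having every thread score into shared memory; the bin count, history loop, and scoring rule are placeholders, and this is not the production code discussed in the dissertation.

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Per-thread tally bins that are reduced once at the end of the batch.
// This avoids contention on shared atomics, which can become the bottleneck
// when thousands of threads score into the same mesh on many-core devices.
std::vector<double> scoreBatch(std::size_t numBins,
                               std::size_t numThreads,
                               std::size_t historiesPerThread) {
    std::vector<std::vector<double>> local(numThreads, std::vector<double>(numBins, 0.0));
    std::vector<std::thread> workers;

    for (std::size_t t = 0; t < numThreads; ++t) {
        workers.emplace_back([&, t] {
            for (std::size_t h = 0; h < historiesPerThread; ++h) {
                // Placeholder "transport": deposit a unit score into a pseudo-random bin.
                const std::size_t bin = (t * 2654435761u + h * 40503u) % numBins;
                local[t][bin] += 1.0;
            }
        });
    }
    for (auto& w : workers) w.join();

    // Reduction step: sum the per-thread tallies into one global tally.
    std::vector<double> global(numBins, 0.0);
    for (const auto& l : local)
        for (std::size_t b = 0; b < numBins; ++b) global[b] += l[b];
    return global;
}
```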

    Technological developments allowing for the widespread clinical adoption of proton radiotherapy

    External beam radiation therapy using accelerated protons has undergone significant development since the first patients were treated with accelerated protons in 1954. Widespread adoption of proton therapy is now taking place and is fully justified based on early clinical and technical research and development. Two of the main advantages of proton radiotherapy are improved healthy tissue sparing and increased dose conformation. The latter has been improved dramatically through the clinical realization of Pencil Beam Scanning (PBS). Other significant advancements in the past 30 years have also helped to establish proton radiotherapy as a major clinical modality in the cancer-fighting arsenal. Proton radiotherapy technologies are constantly evolving, and several major breakthroughs have been accomplished which could allow for a major revolution in proton therapy if clinically implemented. In this thesis, I will present research and innovative developments that I personally initiated or participated in that brought proton radiotherapy to its current state, as well as my ongoing involvement in leading research and technological developments which will aid in the mass adoption of proton radiotherapy. These include beam dosimetry, patient positioning technologies, and creative methods that verify the Monte Carlo dose calculations which are now used in proton treatment planning. I will also discuss major technological advances concerning beam delivery that should be implemented clinically and new paradigms towards patient positioning. Many of these developments and technologies can benefit the cancer patient population worldwide and are now ready for mass clinical implementation. These developments will improve proton radiotherapy efficiencies and further reduce the cost of proton therapy facilities. This thesis therefore reflects my historical and ongoing efforts to meet market costs and time demands so that the clinical benefit of proton radiotherapy can be realized by a more significant fraction of cancer patients worldwide.

    Development of tools for quality control on therapeutic carbon beams with a fast-MC code (FRED)

    In the fight against tumors, different types of cancer require different ways of treatment: surgery, radiotherapy, chemotherapy, hormone therapy and immunotherapy, often used in combination with each other. About 50% of cancer patients undergo radiotherapy treatment, which exploits the ability of ionizing radiation to damage the genetic material of cancer cells, causing apoptosis and preventing their reproduction. The non-invasive nature of radiation represents a viable alternative for those tumors that are not surgically operable because they are localized in hard-to-reach anatomical sites or on organs whose removal would be too disabling for the patient. A new frontier of radiotherapy is represented by Particle Therapy (PT). It consists of the use of accelerated charged particle beams (in particular protons and carbon ions) to irradiate solid tumors. The main advantage of such a technique with respect to standard radiotherapy using x-ray/electron beams lies in the different longitudinal energy release profiles. While photons’ longitudinal dose release is characterized by a slow exponential decrease, for charged particles a sharp peak at the end of the path provides a more selective energy release. By conveniently controlling the peak position it is possible to concentrate the dose (expressed as the energy released per unit mass) on tumors and, at the same time, preserve surrounding healthy tissues. In particle therapy treatments, the achieved steep dose gradients demand highly accurate modelling of the interaction of beam particles with tissues. The high ballistic precision of hadrons may result in a superior delivered dose distribution compared to conventional radiotherapy only if accompanied by precise patient positioning and highly accurate treatment planning. This second operation is performed by the Treatment Planning System (TPS), sophisticated software that provides the position, intensity and direction of the beams to the accelerator control system. Nowadays, one of the major issues with Monte Carlo (MC) based TPSs is the high computational time required to meet the demand for high accuracy. The code FRED (Fast paRticle thErapy Dose evaluator) has been developed to allow a fast optimization of treatment plans in proton therapy while profiting from the dose release accuracy of an MC tool. Within FRED, the proton interactions are described with the precision level available in leading-edge MC tools used for medical physics applications, with the advantage of reducing the simulation time by up to a factor of 1000. In this way, it allows an MC plan recalculation in a few minutes on GPU (Graphics Processing Unit) cards, instead of several hours on CPU (Central Processing Unit) hardware. Thanks to the exceptional speed of the proton tracking algorithms implemented in FRED and to the excellent results achieved, the door has been opened to several applications within the particle therapy field. In particular, the success of FRED with protons prompted the interest of the CNAO (Centro Nazionale di Adroterapia Oncologica) center in Pavia in developing FRED also for carbon therapy applications, to recalculate treatment plans with carbon ions. Among the several differences between proton and carbon beams, the nuclear fragmentation of the projectile in a 12C treatment, which does not occur with protons, is certainly the most important. The simulation of the ion beam fragmentation makes an important contribution to the dose deposition.
The total dose released is due not only to the primary beam but also to secondary and tertiary particles. Also for proton beams there are secondary particles, mostly secondary protons from target fragmentation, which contribute at the level of a few percent to the dose deposition at higher proton beam energies. However, fragments of the projectile, produced only by carbon beams, having on average the same energy per nucleon as the primary beam and a lower mass, can release dose after the peak, causing the well-known fragmentation tail. This thesis is focused on the development of a fast-MC simulating the carbon treatment in particle therapy, with an entirely new nuclear interaction model of carbon on light target nuclei. The model has been developed to be implemented in the GPU-based MC code FRED. For this reason, in developing the algorithms the goal has been to balance accuracy, calculation time and GPU execution guidelines. In particular, maximum attention has been given to physical processes relevant for dose and RBE-weighted dose computation. Moreover, where possible, look-up tables have been implemented instead of performing an explicit calculation, in view of the GPU implementation. Some aspects of the interaction of carbon ions with matter are analogous to the ones already used in FRED for proton beams. In particular, for ionization energy loss and multiple scattering, only a few adjustments were necessary. On the contrary, the nuclear model was built from scratch. The approach has been to develop the nuclear model by parameterizing existing data and applying physical scaling in the energy ranges where data are missing. The elastic cross-section has been obtained from ENDF/B-VII data, while the calculation of the non-elastic cross-section was based on results reported in the Tacheki, Zhang and Kox papers. The data used for sampling the combination of emitted fragments and their energy and angle distributions come from the Dudouet and Divay experiments. To fill the gaps in the experimental data, an intercomparison between FRED and the full-MC code FLUKA helped to check the adopted scaling. The model has been tested against the full-MC code FLUKA, commonly used in particle therapy, and then against two of the few relevant experiments available in the literature. The agreement with FLUKA is excellent, especially for lower energies.
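
    To illustrate the look-up-table approach favored above for GPU execution, here is a minimal CPU-side sketch of an energy-indexed cross-section table with linear interpolation; the class name, units, and example numbers are invented for the illustration and do not reproduce FRED's tables or the ENDF/B-VII data.

```cpp
#include <algorithm>
#include <cstddef>
#include <stdexcept>
#include <utility>
#include <vector>

// Minimal tabulated cross section with linear interpolation in energy.
// On the GPU the same data would typically live in constant or texture memory.
class CrossSectionTable {
public:
    CrossSectionTable(std::vector<double> energies, std::vector<double> sigma)
        : e_(std::move(energies)), s_(std::move(sigma)) {
        if (e_.size() != s_.size() || e_.size() < 2)
            throw std::invalid_argument("table needs at least two (E, sigma) points");
    }

    // Clamp outside the tabulated range, interpolate linearly inside it.
    double at(double energy) const {
        if (energy <= e_.front()) return s_.front();
        if (energy >= e_.back())  return s_.back();
        const auto hi = std::upper_bound(e_.begin(), e_.end(), energy);
        const std::size_t i = static_cast<std::size_t>(hi - e_.begin());
        const double f = (energy - e_[i - 1]) / (e_[i] - e_[i - 1]);
        return s_[i - 1] + f * (s_[i] - s_[i - 1]);
    }

private:
    std::vector<double> e_;  // tabulated kinetic energies (e.g. MeV/u), ascending
    std::vector<double> s_;  // corresponding cross sections (e.g. mb)
};

// Example use with made-up numbers:
//   CrossSectionTable nonElastic({10, 50, 100, 200, 400}, {1400, 1150, 1000, 900, 850});
//   double sigma = nonElastic.at(250.0);  // interpolates between the 200 and 400 entries
```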

    Automation of the Monte Carlo simulation of medical linear accelerators

    The main result of this thesis is a software system, called PRIMO, which simulates clinical linear accelerators and the subsequent dose distributions using the Monte Carlo method. PRIMO has the following features: (i) it is self-contained, that is, it does not require additional software libraries or coding; (ii) it includes a geometry library with most Varian and Elekta linacs; (iii) it is based on the general-purpose Monte Carlo code PENELOPE; (iv) it provides a suite of variance-reduction techniques and distributed parallel computing to enhance the simulation efficiency; (v) it has a graphical user interface; and (vi) it is freely distributed through the website http://www.primoproject.net. In order to endow PRIMO with these features, the following tasks were conducted:
    - PRIMO was conceived with a layered structure. The topmost layer, named the GLASS, was developed in this thesis. The GLASS implements the GUI, drives all the functions of the system and performs the analysis of results. Lower layers generate geometry files, provide input data and execute the Monte Carlo simulation.
    - The geometries of Elekta linacs from the SLi and MLCi series were coded into the PRIMO system.
    - A geometrical model of the Varian TrueBeam linear accelerator was developed and validated. This model was created to surmount the limitations of the phase-space files distributed by Varian and the absence of released information about the actual geometry of that machine. This geometry model was incorporated into PRIMO.
    - Two new variance-reduction techniques, named splitting roulette and selective splitting, were developed and validated. In a test made with an Elekta linac it was found that when both techniques are used in conjunction the simulation efficiency improves by a factor of up to 45.
    - A method to automatically distribute the simulation among the available CPU cores of a computer was implemented.
    The following investigations were done using PRIMO as a research tool:
    - The configuration of the condensed-history transport algorithm for charged particles in PENELOPE was optimized for linac simulation. Dose distributions in the patient were found to be particularly sensitive to the values of the transport parameters in the linac target. Use of inadequate values of these parameters may lead to an incorrect determination of the initial beam configuration or to biased dose distributions.
    - PRIMO was used to simulate phase-space files distributed by Varian for the TrueBeam linac. The results were compared with experimental data provided by five European radiotherapy centers. It was concluded that the latent variance and the accuracy of the phase-space files were adequate for routine clinical practice. However, for research purposes where low statistical uncertainties are required, the phase-space files are not large enough.
    To the best of our knowledge, PRIMO is the only fully Monte Carlo-based linac and dose simulation system, aimed at research and dose verification, that does not require coding tasks from end users and is publicly available.
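
    As background for the variance-reduction work mentioned above, the sketch below shows the standard textbook building blocks of particle splitting and Russian roulette that weight-based techniques of this kind rely on; it is a generic illustration, not PRIMO's splitting roulette or selective splitting algorithms.

```cpp
#include <cstddef>
#include <random>
#include <vector>

// A particle reduced to the fields needed for weight-based variance reduction.
struct Particle {
    double weight;
    // ... position, direction, energy, etc. omitted for brevity
};

// Split one particle into n copies, each carrying 1/n of the statistical weight,
// so more histories reach the region of interest without biasing the estimate.
std::vector<Particle> split(const Particle& p, int n) {
    std::vector<Particle> out(static_cast<std::size_t>(n), p);
    for (auto& q : out) q.weight = p.weight / n;
    return out;
}

// Russian roulette: kill low-weight particles with probability 1 - w/wSurvive,
// and boost the survivors' weight so the expected contribution is unchanged.
bool roulette(Particle& p, double wThreshold, double wSurvive, std::mt19937& rng) {
    if (p.weight >= wThreshold) return true;       // leave heavy particles alone
    std::uniform_real_distribution<double> u(0.0, 1.0);
    if (u(rng) < p.weight / wSurvive) {
        p.weight = wSurvive;                       // survivor is re-weighted
        return true;
    }
    return false;                                  // particle is terminated
}
```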

    Proceedings Virtual Imaging Trials in Medicine 2024

    This submission comprises the proceedings of the 1st Virtual Imaging Trials in Medicine (VITM) conference, organized by Duke University on April 22-24, 2024. The listed authors serve as the program directors for this conference. The VITM conference is a pioneering summit uniting experts from academia, industry, and government in the fields of medical imaging and therapy to explore the transformative potential of in silico virtual trials and digital twins in revolutionizing healthcare. The proceedings are categorized by the respective days of the conference: Monday presentations, Tuesday presentations, and Wednesday presentations, followed by the abstracts for the posters presented on Monday and Tuesday.