
    Current and Future Trends in Magnetic Resonance Imaging Assessments of the Response of Breast Tumors to Neoadjuvant Chemotherapy

    The current state-of-the-art assessment of treatment response in breast cancer is based on the Response Evaluation Criteria in Solid Tumors (RECIST). RECIST reports on changes in gross morphology and divides response into one of four categories. In this paper we highlight how dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and diffusion-weighted MRI (DW-MRI) may be able to offer earlier, and more precise, information on treatment response in the neoadjuvant setting than RECIST. We then describe how longitudinal registration of breast images and the incorporation of intelligent bioinformatics approaches with imaging data have the potential to increase the sensitivity of assessing treatment response. We conclude with a discussion of the potential benefits of breast MRI at the higher field strength of 3T. For each of these areas, we provide a review, illustrative examples from clinical trials, and insights into future research directions.
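The four RECIST categories mentioned above follow simple size-change rules. As a minimal sketch, simplified from the RECIST 1.1 target-lesion criteria (the full guideline also covers non-target lesions and uses the nadir sum as the progression reference; the function name is illustrative):

```python
def recist_category(baseline_sum_mm, followup_sum_mm, new_lesions=False):
    """Simplified RECIST 1.1 target-lesion response classification.

    baseline_sum_mm: sum of longest lesion diameters at baseline (mm)
    followup_sum_mm: the same sum at the follow-up scan (mm)
    """
    if followup_sum_mm == 0 and not new_lesions:
        return "CR"  # complete response: all target lesions disappeared
    change = (followup_sum_mm - baseline_sum_mm) / baseline_sum_mm
    absolute_increase = followup_sum_mm - baseline_sum_mm
    if new_lesions or (change >= 0.20 and absolute_increase >= 5):
        return "PD"  # progressive disease: >=20% and >=5 mm increase
    if change <= -0.30:
        return "PR"  # partial response: >=30% decrease
    return "SD"  # stable disease: neither PR nor PD
```

This size-only logic is exactly what the DCE-MRI and DW-MRI biomarkers discussed in the paper aim to improve upon.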

    Improved Abdominal Multi-Organ Segmentation via 3D Boundary-Constrained Deep Neural Networks

    Quantitative assessment of the abdominal region from clinically acquired CT scans requires the simultaneous segmentation of abdominal organs. Thanks to the availability of high-performance computational resources, deep learning-based methods have achieved state-of-the-art performance for the segmentation of 3D abdominal CT scans. However, the complex characterization of organs with fuzzy boundaries prevents deep learning methods from accurately segmenting these anatomical organs. Specifically, the voxels on the boundary of organs are more vulnerable to misprediction due to the highly varying intensity of inter-organ boundaries. This paper investigates the possibility of improving the abdominal image segmentation performance of existing 3D encoder-decoder networks by leveraging organ-boundary prediction as a complementary task. To address the problem of abdominal multi-organ segmentation, we train a 3D encoder-decoder network to simultaneously segment the abdominal organs and their corresponding boundaries in CT scans via multi-task learning. The network is trained end-to-end using a loss function that combines two task-specific losses, i.e., a complete organ segmentation loss and a boundary prediction loss. We explore two different network topologies based on the extent of weights shared between the two tasks within a unified multi-task framework. To evaluate the utility of the complementary boundary prediction task in improving abdominal multi-organ segmentation, we use three state-of-the-art encoder-decoder networks: 3D UNet, 3D UNet++, and 3D Attention-UNet. The effectiveness of utilizing the organs' boundary information for abdominal multi-organ segmentation is evaluated on two publicly available abdominal CT datasets. A maximum relative improvement of 3.5% and 3.6% is observed in Mean Dice Score for the Pancreas-CT and BTCV datasets, respectively. Comment: 15 pages, 16 figures, journal paper
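The combined loss described above (organ segmentation plus boundary prediction) can be sketched as follows. The choice of a soft Dice segmentation term, a binary cross-entropy boundary term, and the weight `lam` are assumptions for illustration, not necessarily the paper's exact formulation:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over probability maps of shape (N, C, D, H, W)."""
    axes = (2, 3, 4)  # reduce over the spatial dimensions
    inter = (pred * target).sum(axis=axes)
    denom = pred.sum(axis=axes) + target.sum(axis=axes)
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

def multitask_loss(seg_prob, boundary_prob, seg_target, boundary_target, lam=0.5):
    """L_total = L_seg + lam * L_boundary (lam is an assumed weighting)."""
    eps = 1e-7
    bce = -(boundary_target * np.log(boundary_prob + eps)
            + (1.0 - boundary_target) * np.log(1.0 - boundary_prob + eps)).mean()
    return soft_dice_loss(seg_prob, seg_target) + lam * bce
```

A perfect prediction on both tasks drives the combined loss toward zero, while boundary errors add a penalty even when the bulk organ masks overlap well, which is the intuition behind the boundary-constrained training.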

    3-D lung deformation and function from respiratory-gated 4-D x-ray CT images: application to radiation treatment planning.

    Many lung diseases or injuries can cause biomechanical or material property changes that can alter lung function. While the mechanical changes associated with the change of the material properties originate at a regional level, they remain largely asymptomatic and are invisible to global measures of lung function until they have advanced significantly and have aggregated. In the realm of external beam radiation therapy of patients suffering from lung cancer, determination of patterns of pre- and post-treatment motion, and measures of regional and global lung elasticity and function, are clinically relevant. In this dissertation, we demonstrate that 4-D CT derived ventilation images, including mechanical strain, provide an accurate and physiologically relevant assessment of regional pulmonary function which may be incorporated into the treatment planning process. Our contributions are as follows: (i) A new volumetric deformable image registration technique based on 3-D optical flow (MOFID) has been designed and implemented which permits the possibility of enforcing physical constraints on the numerical solutions for computing the motion field from respiratory-gated 4-D CT thoracic images. The proposed optical flow framework is an accurate motion model for the thoracic CT registration problem. (ii) A large-displacement landmark-based elastic registration method has been devised for thoracic CT volumetric image sets containing large deformations or changes, as encountered for example in registration of pre-treatment and post-treatment images or multi-modality registration. (iii) Based on deformation maps from MOFID, a novel framework for regional quantification of mechanical strain as an index of lung functionality has been formulated for measurement of regional pulmonary function.
    (iv) In a cohort consisting of seven patients with non-small cell lung cancer, validation of the physiologic accuracy of the 4-D CT derived quantitative images, including the Jacobian metric of ventilation, Vjac, and the principal strains (Vε1, Vε2, Vε3), has been performed through correlation of the derived measures with SPECT ventilation and perfusion scans. The statistical correlations with SPECT have shown that the maximum principal strain pulmonary function map derived from MOFID outperforms all previously established ventilation metrics from 4-D CT. It is hypothesized that use of CT-derived ventilation images in the treatment planning process will help predict and prevent pulmonary toxicity due to radiation treatment. It is also hypothesized that measures of regional and global lung elasticity and function obtained during the course of treatment may be used to adapt radiation treatment. Having objective methods with which to assess pre-treatment global and regional lung function and biomechanical properties, the radiation treatment dose can potentially be escalated to improve tumor response and local control.
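The Jacobian ventilation metric above (Vjac) measures local volume change from the registration-derived displacement field: voxels with det(I + ∇u) above 1 expand locally, those below 1 compress. A minimal sketch, assuming unit voxel spacing and a displacement field in voxel units (function and variable names are illustrative, not from the dissertation):

```python
import numpy as np

def jacobian_ventilation(u):
    """Jacobian-determinant map from a 3-D displacement field.

    u: displacement field of shape (3, Z, Y, X), components (u_z, u_y, u_x).
    Returns det(I + grad u) at every voxel.
    """
    # grads[i, j] = d u_i / d x_j, each of shape (Z, Y, X)
    grads = np.stack([np.stack(np.gradient(u[i]), axis=0) for i in range(3)], axis=0)
    # Deformation gradient F = I + grad u, arranged as (Z, Y, X, 3, 3)
    F = grads.transpose(2, 3, 4, 0, 1) + np.eye(3)
    return np.linalg.det(F)
```

Zero displacement yields a map of ones (no volume change), and a uniform 10% stretch along each axis yields 1.1³ ≈ 1.331 everywhere, matching the expected local volume ratio.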

    BeyondPixels: A Comprehensive Review of the Evolution of Neural Radiance Fields

    Neural rendering combines ideas from classical computer graphics and machine learning to synthesize images from real-world observations. NeRF, short for Neural Radiance Fields, is a recent innovation that uses AI algorithms to create 3D objects from 2D images. By leveraging an interpolation approach, NeRF can produce new 3D reconstructed views of complicated scenes. Rather than directly recovering the whole 3D scene geometry, NeRF generates a volumetric representation called a ``radiance field,'' which produces a color and density for every point within the relevant 3D space. The broad appeal and prominence of NeRF make it imperative to examine the existing research on the topic comprehensively. While previous surveys on 3D rendering have primarily focused on traditional computer vision-based or deep learning-based approaches, only a handful of them discuss the potential of NeRF. However, such surveys have predominantly focused on NeRF's early contributions and have not explored its full potential. NeRF is a relatively new technique continuously being investigated for its capabilities and limitations. This survey reviews recent advances in NeRF and categorizes them according to their architectural designs, especially in the field of novel view synthesis. Comment: 22 pages, 1 figure, 5 tables
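The ``radiance field'' described above is turned into an image by compositing the per-point colors and densities along each camera ray. A minimal sketch of NeRF's volume-rendering quadrature (the sampling of ray positions and the network that produces `sigmas` and `colors` are omitted; names are illustrative):

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Composite S samples along one ray into a single RGB color.

    sigmas: (S,) volume densities at the sample points
    colors: (S, 3) RGB values at the sample points
    deltas: (S,) distances between adjacent samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)  # opacity of each ray segment
    # T_i: transmittance, i.e. probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)  # expected color C(r)
```

A single fully opaque sample returns its own color, and a ray through empty space (all densities zero) returns black, which matches the behavior of the rendering integral NeRF approximates.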

    Neural Radiance Fields: Past, Present, and Future

    The challenges of modeling and interpreting 3D environments and surroundings have driven research in 3D Computer Vision, Computer Graphics, and Machine Learning. The work of Mildenhall et al. on NeRFs (Neural Radiance Fields) led to a boom in Computer Graphics, Robotics, and Computer Vision, and the prospect of high-resolution, low-storage Augmented Reality and Virtual Reality-based 3D models has gained traction among researchers, with more than 1000 preprints related to NeRFs published. This paper serves as a bridge for people starting to study these fields by building from the basics of Mathematics, Geometry, Computer Vision, and Computer Graphics to the difficulties encountered in Implicit Representations at the intersection of all these disciplines. This survey provides the history of rendering, Implicit Learning, and NeRFs, the progression of research on NeRFs, and the potential applications and implications of NeRFs in today's world. In doing so, this survey categorizes all the NeRF-related research in terms of the datasets used, objective functions, applications solved, and evaluation criteria for these applications. Comment: 413 pages, 9 figures, 277 citations