
    Computed Tomography in the Modern Slaughterhouse


    Accurate, robust and harmonized implementation of morpho-functional imaging in treatment planning for personalized radiotherapy

    In this work we present a methodology for using harmonized PET/CT imaging in a dose painting by numbers (DPBN) approach by means of a robust and accurate treatment planning system. Image processing and treatment planning were performed with a Matlab-based platform, called CARMEN, which includes a full Monte Carlo simulation. A linear programming formulation was developed for voxel-by-voxel robust optimization, and a specific direct aperture optimization was designed for an efficient adaptive radiotherapy implementation. The DPBN approach was tested for its ability to reduce the uncertainties associated with both the absolute and the relative values of the information in the functional image. For the same H&N case, a single robust treatment was planned for dose prescription maps corresponding to standardized uptake value distributions from two different image reconstruction protocols: one fulfilling EARL accreditation for harmonized [18F]FDG PET/CT imaging, and the other using the highest available spatial resolution. A robust treatment was also planned to fulfill dose prescription maps from both approaches: volume-based dose painting by contours and our voxel-by-voxel DPBN. Adaptive planning was also carried out to check the suitability of our proposal. The resulting plans proved robust across a range of scenarios for implementing harmonization strategies while using the highest available resolution. Robustness to the discretization level of the dose prescription, whether by contours or by numbers, was also achieved. All plans showed excellent quality index histograms and quality factors below 2%. An efficient solution for adaptive radiotherapy, driven directly by changes in the functional image, was obtained. We showed that a voxel-by-voxel DPBN approach can overcome the typical drawbacks of PET/CT images, giving the clinical specialist enough confidence to routinely implement functional imaging for personalized radiotherapy. Funding: Junta de Andalucía (FISEVI, reference project CTS 2482); European Regional Development Fund (FEDER).
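
    The abstract does not spell out the prescription rule, but a common linear dose painting by numbers mapping assigns each voxel a dose proportional to its standardized uptake value. The sketch below illustrates that idea in Python; the dose bounds and the linear form are illustrative assumptions, not the CARMEN implementation.

        import numpy as np

        def dpbn_prescription(suv, d_min=60.0, d_max=80.0):
            # Common linear DPBN rule: the voxel with the lowest uptake
            # receives d_min, the voxel with the highest uptake receives
            # d_max, and every other voxel is interpolated in between.
            # d_min and d_max are placeholder values, not from the paper.
            suv_lo, suv_hi = float(suv.min()), float(suv.max())
            if suv_hi == suv_lo:  # degenerate uniform-uptake case
                return np.full_like(suv, d_min, dtype=float)
            return d_min + (suv - suv_lo) / (suv_hi - suv_lo) * (d_max - d_min)

        # Toy example: a 4x4x4 SUV volume mapped to a dose prescription in Gy.
        dose_map = dpbn_prescription(np.random.uniform(2.0, 12.0, size=(4, 4, 4)))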

    Enhanced Digital Breast Tomosynthesis diagnosis using 3D visualization and automatic classification of lesions

    Breast cancer is the leading cause of cancer-related deaths in women. Nonetheless, the mortality rate of this disease has been decreasing over the last three decades, largely due to screening programs for early detection. For many years, both screening and clinical diagnosis were mostly done through Digital Mammography (DM). Approved in 2011, Digital Breast Tomosynthesis (DBT) is similar to DM but allows a 3D reconstruction of the breast tissue, which aids diagnosis by reducing tissue overlap. Currently, DBT is firmly established and is approved as a stand-alone modality to replace DM. The main objective of this thesis is to develop computational tools to improve the visualization and interpretation of DBT data. Several methods for enhanced visualization of DBT data through volume rendering were studied and developed. First, important rendering parameters were considered: a new approach for automatic generation of transfer functions was implemented, and two other parameters that strongly affect the quality of volume-rendered images were explored, the voxel size in the Z direction and the sampling distance. Next, new image processing methods were developed that improve rendering quality through noise regularization and the reduction of out-of-plane artifacts. The interpretation of DBT data with automatic detection of lesions was approached through artificial intelligence methods. Several deep learning Convolutional Neural Networks (CNNs) were implemented and trained to classify a complete DBT image for the presence or absence of microcalcification clusters (MCs). Then, a Faster R-CNN (region-based CNN) was trained to detect and accurately locate the MCs in the DBT images. The detected MCs were rendered with the developed 3D rendering software, which provided an enhanced visualization of the volume of interest. The combination of volume visualization with lesion detection may, in the future, both improve diagnostic accuracy and reduce analysis time. This thesis promotes the development of new computational imaging methods to increase the diagnostic value of DBT, with the aim of assisting radiologists in their task of analyzing DBT volumes and diagnosing breast cancer.
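
    The thesis's transfer-function algorithm is not described in the abstract; as a rough illustration of what automatic opacity transfer-function generation can look like, the sketch below assigns low opacity to frequent (background and bulk tissue) intensities and high opacity to rare, bright ones. The heuristic and its parameters are assumptions for illustration only, not the method developed in the thesis.

        import numpy as np

        def histogram_opacity_tf(volume, n_bins=256):
            # Build an opacity lookup table from the volume histogram:
            # frequent intensities (background, bulk tissue) get low
            # opacity, rare intensities (small bright structures such as
            # microcalcifications) get high opacity.
            hist, edges = np.histogram(volume, bins=n_bins)
            opacity = 1.0 - hist / hist.max()
            centers = 0.5 * (edges[:-1] + edges[1:])
            return centers, opacity  # sample points and opacity in [0, 1]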

    Evaluation of Interpolation Effects on Upsampling and Accuracy of Cost Functions-Based Optimized Automatic Image Registration

    Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed whenever the fractional unit of motion does not match and fall on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed sinc, 3rd-order B-spline, and 4th-order B-spline) and to compare the effect of four cost functions (least squares (LS), normalized mutual information (NMI), normalized cross-correlation (NCC), and correlation ratio (CR)) on optimized automatic image registration (OAIR) of 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired on a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. The low resolution datasets were then upsampled using the different interpolation methods and compared to the high resolution data. The mean squared error, peak signal-to-noise ratio, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment.
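
    For concreteness, the sketch below computes three of the quantitative measures named above, mean squared error, peak signal-to-noise ratio, and normalized mutual information, between an upsampled volume and the HR original. The bin count and peak value are illustrative defaults, not settings from the paper.

        import numpy as np

        def mse(a, b):
            # Mean squared error between two same-shaped images/volumes.
            return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

        def psnr(a, b, peak=255.0):
            # Peak signal-to-noise ratio in dB for a given dynamic-range peak.
            return 10.0 * np.log10(peak ** 2 / mse(a, b))

        def nmi(a, b, bins=64):
            # Normalized mutual information, (H(A) + H(B)) / H(A, B),
            # estimated from the joint intensity histogram.
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            def entropy(p):
                p = p[p > 0]
                return -np.sum(p * np.log2(p))
            return (entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0))) / entropy(pxy.ravel())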

    Multimodal breast imaging: Registration, visualization, and image synthesis

    The benefit of registering and fusing functional images with anatomical images is well appreciated with the advent of combined positron emission tomography and x-ray computed tomography scanners (PET/CT). This is especially true in breast cancer imaging, where modalities such as high-resolution and dynamic contrast-enhanced magnetic resonance imaging (MRI) and F-18-FDG positron emission tomography (PET) have steadily gained acceptance in addition to x-ray mammography, the primary detection tool. The increased interest in combined PET/MRI images has fueled the demand for appropriate registration and fusion algorithms. A new approach to MRI-to-PET non-rigid breast image registration was developed and evaluated, based on the locations of a small number of fiducial skin markers (FSMs) visible in both modalities. The observed FSM displacement vectors between MRI and PET, distributed piecewise linearly over the breast volume, produce a deformed finite-element mesh that reasonably approximates the non-rigid deformation of the breast tissue between the MRI and PET scans. The method does not require a biomechanical breast tissue model, and is robust and fast. It was evaluated both qualitatively and quantitatively on patients and on a deformable breast phantom, yielding quality images with an average target registration error (TRE) below 4 mm. The importance of appropriately jointly displaying (i.e., fusing) the registered images has often been neglected and underestimated. A combined MRI/PET image directly shows the spatial relationships between the two modalities, increasing the sensitivity, specificity, and accuracy of diagnosis. It can provide additional information on the morphology and dynamic behavior of a suspicious lesion, allowing more accurate lesion localization, including mapping of hyper- and hypo-metabolic regions, as well as better lesion-boundary definition, improving accuracy when grading the breast cancer and assessing the need for biopsy. Eight promising fusion-for-visualization techniques were evaluated by radiologists from University Hospital in Syracuse, NY. Preliminary results indicate that the radiologists were better able to perform a series of tasks when reading fused PET/MRI data sets using color tables generated by a newly developed genetic algorithm, as compared to other commonly used schemes. The lack of a known ground truth hinders the development and evaluation of new algorithms for tasks such as registration and classification. A preliminary mesh-based breast phantom was therefore created, containing 12 distinct tissue classes along with the tissue properties necessary for simulating dynamic positron emission tomography scans. The phantom contains multiple components that can be manipulated separately, using geometric transformations, to represent populations or a single individual imaged in multiple positions. This phantom will support future multimodal breast imaging work.
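
    A minimal stand-in for the FSM-driven warp is sketched below: rather than deforming a finite-element mesh as the paper does, it simply interpolates the marker displacements piecewise linearly with a Delaunay-based interpolator, plus a TRE helper for evaluation. Function names and behavior outside the marker hull are illustrative assumptions.

        import numpy as np
        from scipy.interpolate import LinearNDInterpolator

        def warp_points(fsm_mri, fsm_pet, points):
            # fsm_mri, fsm_pet: (N, 3) corresponding marker positions in
            # the two scans; points: (M, 3) MRI-space coordinates to map
            # into PET space. Needs at least 4 non-coplanar markers.
            disp = fsm_pet - fsm_mri                      # per-marker displacement
            interp = LinearNDInterpolator(fsm_mri, disp)  # linear per Delaunay simplex
            return points + interp(points)                # NaN outside the marker hull

        def mean_tre(mapped, truth):
            # Target registration error: mean Euclidean distance to ground truth.
            return np.linalg.norm(mapped - truth, axis=1).mean()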

    Three-dimensional medical imaging: Algorithms and computer systems

    This paper presents an introduction to the field of three-dimensional medical imaging. It presents medical imaging terms and concepts, summarizes the basic operations performed in three-dimensional medical imaging, and describes sample algorithms for accomplishing these operations. The paper contains a synopsis of the architectures and algorithms used in eight machines to render three-dimensional medical images, with particular emphasis on their distinctive contributions. It compares the performance of the machines along several dimensions, including image resolution, elapsed time to form an image, imaging algorithms used in the machine, and the degree of parallelism used in the architecture. The paper concludes with general trends for future developments in this field and references on three-dimensional medical imaging.
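
    As a flavor of the basic rendering operations such surveys cover, the sketch below implements one of the simplest volume rendering algorithms, a maximum intensity projection. It is a generic illustration, not tied to any of the eight machines compared in the paper.

        import numpy as np

        def mip(volume, axis=0):
            # Maximum intensity projection: collapse a 3D volume to a 2D
            # image by keeping the brightest voxel along each ray (here,
            # rays run along one coordinate axis).
            return volume.max(axis=axis)

        # Example: project a 64^3 volume along the axial direction.
        image = mip(np.random.rand(64, 64, 64), axis=0)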

    Lung nodule modeling and detection for computerized image analysis of low dose CT imaging of the chest.

    From a computerized image analysis perspective, early diagnosis of lung cancer involves detection of doubtful nodules and their classification into different pathologies. The detection stage involves a detection approach, usually by template matching, and an authentication step to reduce false positives, usually conducted by a classifier of one form or another; statistical, fuzzy-logic, and support vector machine approaches have been tried. The classification stage matches, according to a particular approach, the characteristics (e.g., shape, texture, and spatial distribution) of the detected nodules to common characteristics (again, shape, texture, and spatial distribution) of nodules with known pathologies (confirmed by biopsies). This thesis focuses on the first step, i.e., nodule detection. Specifically, the thesis addresses three issues: a) understanding the CT data of typical low dose CT (LDCT) scanning of the chest, and devising an image processing approach to reduce the inherent artifacts in the scans; b) devising an image segmentation approach to isolate the lung tissues from the rest of the chest and thoracic regions in the CT scans; and c) devising a nodule modeling methodology to enhance the detection rate and benefit the ultimate step in computerized image analysis of LDCT of the lungs, namely associating a pathology with the detected nodule. The methodology for reducing the noise artifacts is based on noise analysis and examination of typical LDCT scans, which may be gathered in a repetitive fashion, since a reduction in resolution is inevitable to avoid excessive radiation. Two optimal filtering methods were tested on samples of the ELCAP screening data: the Wiener and the anisotropic diffusion filters. Preference is given to the anisotropic diffusion filter, which can be implemented on 7x7 blocks/windows of the CT data. The methodology for lung segmentation is based on the inherent characteristics of the LDCT scans, which show a distinct bi-modal gray-scale histogram. A linear model is used to describe the histogram (the joint probability density function of the lung and non-lung tissues) by a linear combination of weighted kernels. Gaussian kernels were chosen, and the classic Expectation-Maximization (EM) algorithm was employed to estimate the marginal probability densities of the lung and non-lung tissues and to select an optimal segmentation threshold. The segmentation is further enhanced using standard shape analysis based on mathematical morphology, which improves the continuity of the outer and inner borders of the lung tissues. This approach (a preliminary version of which appeared in [14]) is found to be adequate for lung segmentation as compared to more sophisticated approaches developed at the CVIP Lab (e.g., [15][16]) and elsewhere. The methodology developed for nodule modeling is based on understanding the physical characteristics of the nodules in LDCT scans, as identified by human experts. An empirical model is introduced for the probability density of the image intensity (or Hounsfield units) versus the radial distance measured from the centroid (center of mass) of typical nodules. This probability density showed that the nodule spatial support lies within a circle/square of size 10 pixels, i.e., limited to 5 mm in length, which is within the range that radiologists specify to be of concern. This probability density is used to fill in the intensity (or Hounsfield units) of parametric nodule models. For these models (e.g., circles or semi-circles), given a certain radius, we calculate the intensity (or Hounsfield units) using an exponential expression for the radial distance, with parameters specified from the histogram of an ensemble of typical nodules. This work is similar in spirit to the earlier work of Farag et al., 2004 and 2005 [18][19], except that the empirical density of the radial distance and the histogram of typical nodules provide a data-driven guide for estimating the intensity (or Hounsfield units) of the nodule models. We examined the sensitivity and specificity of parametric nodules in a template-matching framework for nodule detection. We show that false positives are an inevitable problem with typical machine learning methods for automatic lung nodule detection, which invites further effort and perhaps fresh thinking about automatic nodule detection. A new approach for nodule modeling is introduced in Chapter 5 of this thesis, which shows high promise for both the detection and the classification of nodules. Using the ELCAP study, we created an ensemble of four types of nodules and generated a nodule model for each type based on optimal data reduction methods. The resulting nodule model for each type has led to drastic improvements in the sensitivity and specificity of nodule detection. This approach may be used for classification as well. In conclusion, the methodologies in this thesis are based on understanding LDCT scans and what is to be expected in terms of image quality. Noise reduction and image segmentation are standard. The thesis illustrates that proper nodule models are possible, and that a computerized image analysis approach to detect and classify lung nodules is indeed feasible. Extensions of the results in this thesis are immediate, and the CVIP Lab has devised plans to pursue subsequent steps using clinical data.
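
    The exponential radial model lends itself to a short sketch: build a circular template whose intensity decays exponentially with distance from the centroid, then correlate it with a lung slice to score candidate nodule locations. The decay rate and peak value below are placeholders; the thesis estimates such parameters from histograms of real ELCAP nodules.

        import numpy as np
        from scipy.signal import fftconvolve

        def nodule_template(radius_px=5, decay=2.0, peak_hu=1.0):
            # Circular parametric nodule: intensity (or Hounsfield units)
            # falls off exponentially with radial distance from the
            # centroid; support is limited to a ~10-pixel circle.
            y, x = np.mgrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
            r = np.hypot(x, y)
            t = peak_hu * np.exp(-decay * r / radius_px)
            t[r > radius_px] = 0.0
            return t

        def correlation_map(ct_slice, template):
            # Zero-mean cross-correlation via FFT; local maxima in the
            # output mark candidate nodule locations.
            t = template - template.mean()
            return fftconvolve(ct_slice, t[::-1, ::-1], mode='same')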