4 research outputs found

    A New Application for Displaying and Fusing Multimodal Data Sets

    A recently developed, freely available application specifically designed for the visualization of multimodal data sets is presented. The application allows multiple 3D data sets, such as CT (x-ray computed tomography), MRI (magnetic resonance imaging), PET (positron emission tomography), and SPECT (single photon emission computed tomography), of the same subject to be viewed simultaneously. This is done by keeping the spatial location viewed synchronized across all modalities, and by providing fused views in which multiple data sets are displayed as a single volume. Different options for the fused views are provided by plug-ins. Commonly used plug-ins include color overlays and interlacing, but more complex plug-ins, such as those based on different color spaces and component analysis techniques, are also supported. Corrections are made for resolution differences and for user preferences in contrast and brightness. Pre-defined and custom color tables can be used to enhance the viewing experience. In addition to these essential capabilities, multiple options are provided for mapping 16-bit data sets onto an 8-bit display, including windowing, automatically and dynamically defined tone transfer functions, and histogram-based techniques. The 3D data sets can be viewed not only as a stack of images, but also as three orthogonal cross sections through the volume. More advanced volumetric displays of both individual data sets and fused views are also provided, including the common MIP (maximum intensity projection), with and without depth correction, for both individual data sets and multimodal data sets created using a fusion plug-in.
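    Two of the display operations described above, windowing a 16-bit data set onto an 8-bit display and the maximum intensity projection, can be sketched in a few lines of NumPy. The function names and the toy volume below are illustrative assumptions, not code from the application itself:

```python
import numpy as np

def window_to_8bit(volume, level, width):
    """Map a 16-bit volume to 8-bit display values via windowing.

    Voxels below (level - width/2) map to 0, voxels above
    (level + width/2) map to 255, with a linear ramp in between.
    """
    lo = level - width / 2.0
    hi = level + width / 2.0
    scaled = (volume.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

def mip(volume, axis=0):
    """Maximum intensity projection along one axis of a 3D volume."""
    return volume.max(axis=axis)

# Toy 16-bit "volume" (2 slices of 2x2 voxels), purely for illustration.
vol = np.array([[[100, 2000], [4000, 60000]],
                [[0, 3000], [500, 1000]]], dtype=np.uint16)
disp = window_to_8bit(vol, level=2000, width=4000)  # window covers [0, 4000]
proj = mip(vol, axis=0)                             # 2x2 projection image
```

    Depth correction for the MIP, as mentioned in the abstract, would additionally weight voxel intensities by their distance along the projection axis before taking the maximum.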

    Multimodal display techniques with application to breast imaging

    Application of a multimodality approach is advantageous for the detection, diagnosis, and management of breast cancer. In this context, F-18-FDG positron emission tomography (PET) and high-resolution and dynamic contrast-enhanced magnetic resonance imaging (MRI) have steadily gained clinical acceptance. Obtaining the spatial relationships between these modalities and conveying them to the observer maximizes the benefit that can be achieved. Traditionally, the registered images are displayed side by side. However, a combined MRI/PET display may be more beneficial: the advantage of a combined image lies in our inability to visually judge spatial relationships between images when they are viewed side by side. The process of combining the MRI and PET 3D images into a single 3D image is called image fusion. Color tables were defined for the fusion of MRI/PET images, including tables satisfying specific requirements that were generated by a previously developed genetic algorithm. Radiologists were asked to evaluate images created using the selected fusion-for-visualization techniques. The study determined radiologists’ preference, ease of use, understanding, efficiency, and accuracy when reading images using each technique. The data studied, the data collected, the applications used to administer the study and analyze the results, and the processed results are provided through this interactive document.
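    As a rough illustration of the image fusion described above, the sketch below blends a grayscale MRI slice with a PET slice mapped through a simple red-to-yellow ("hot") color table. The blending scheme, function name, and color table here are assumptions for illustration, not the study's actual techniques:

```python
import numpy as np

def overlay_fuse(mri, pet, alpha=0.5):
    """Fuse two co-registered slices, normalized to [0, 1], into one RGB image.

    MRI provides a grayscale base; PET is mapped through a simple
    red-to-yellow color table and alpha-blended on top.
    """
    base = np.stack([mri, mri, mri], axis=-1)        # grayscale RGB base
    hot = np.stack([np.clip(2 * pet, 0, 1),          # red ramps up first,
                    np.clip(2 * pet - 1, 0, 1),      # then green (red+green=yellow)
                    np.zeros_like(pet)], axis=-1)
    return (1 - alpha) * base + alpha * hot

# Single-pixel example: mid-gray MRI, maximal PET uptake.
mri = np.array([[0.5]])
pet = np.array([[1.0]])
fused = overlay_fuse(mri, pet, alpha=0.5)
```

    With equal weighting, a maximal-uptake voxel over mid-gray tissue blends to a warm yellow-gray, so hot spots remain visible without completely hiding the underlying anatomy.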

    Genetic Algorithm Automated Generation of Multivariate Color Tables for Visualization of Multimodal Medical Data Sets

    In many applications there is a need to visualize multidimensional data sets. Using spatial relationships alone, display is limited to two or three dimensions. The use of color, in addition to spatial relationships, increases the dimensionality of the data that can be effectively visualized. Use of color is usually achieved through the application of color tables. Generating color tables is not an easy task, and since the color space is relatively large, it is nearly impossible for an individual to consider all of the possible options. A genetic algorithm was developed to automate this task, generating color tables for the joint display of high-resolution and dynamic contrast-enhanced magnetic resonance imaging and F-18-FDG positron emission tomography data sets. The results are promising, producing new color tables that meet the defined requirements.
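    The genetic-algorithm idea can be illustrated with a toy sketch: a population of random color tables is scored by a fitness function, the best half survives, and mutated copies refill the population. The fitness used here (minimum separation between consecutive table entries) is only a stand-in; the paper's actual requirements, fitness, and genetic operators are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(table):
    """Stand-in fitness: reward color tables whose consecutive entries
    are well separated in RGB space (illustrative only)."""
    diffs = np.diff(table, axis=0)
    return np.linalg.norm(diffs, axis=1).min()

def evolve(pop_size=20, length=8, generations=50):
    """Evolve a color table of `length` RGB entries in [0, 1]."""
    pop = rng.random((pop_size, length, 3))          # random initial tables
    for _ in range(generations):
        scores = np.array([fitness(t) for t in pop])
        order = np.argsort(scores)[::-1]             # best first
        elite = pop[order[: pop_size // 2]]          # keep the best half
        # Refill with mutated (Gaussian-perturbed) copies of the elite.
        children = np.clip(elite + rng.normal(0, 0.05, elite.shape), 0, 1)
        pop = np.concatenate([elite, children])
    scores = np.array([fitness(t) for t in pop])
    return pop[np.argmax(scores)]

best = evolve()
```

    Real variants would add crossover and multi-objective constraints (e.g., perceptual uniformity, monotonic luminance), but the select-mutate-refill loop above is the core of the approach.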

    Multimodal breast imaging: Registration, visualization, and image synthesis

    The benefit of registration and fusion of functional images with anatomical images has become well appreciated with the advent of combined positron emission tomography and x-ray computed tomography (PET/CT) scanners. This is especially true in breast cancer imaging, where modalities such as high-resolution and dynamic contrast-enhanced magnetic resonance imaging (MRI) and F-18-FDG positron emission tomography (PET) have steadily gained acceptance in addition to x-ray mammography, the primary detection tool. The increased interest in combined PET/MRI images has increased the demand for appropriate registration and fusion algorithms. A new approach to MRI-to-PET non-rigid breast image registration was developed and evaluated based on the locations of a small number of fiducial skin markers (FSMs) visible in both modalities. The observed FSM displacement vectors between MRI and PET, distributed piecewise-linearly over the breast volume, produce a deformed finite-element mesh that reasonably approximates the non-rigid deformation of the breast tissue between the MRI and PET scans. The method does not require a biomechanical breast tissue model, and is robust and fast. It was evaluated both qualitatively and quantitatively on patients and on a deformable breast phantom, and yields quality images with an average target registration error (TRE) below 4 mm. The importance of appropriately jointly displaying (i.e., fusing) the registered images has often been neglected and underestimated. A combined MRI/PET image has the benefit of directly showing the spatial relationships between the two modalities, increasing the sensitivity, specificity, and accuracy of diagnosis.
Additional information on the morphology and dynamic behavior of a suspicious lesion can be provided, allowing more accurate lesion localization, including mapping of hyper- and hypo-metabolic regions, as well as better lesion-boundary definition, improving accuracy when grading the breast cancer and assessing the need for biopsy. Eight promising fusion-for-visualization techniques were evaluated by radiologists from University Hospital in Syracuse, NY. Preliminary results indicate that the radiologists were better able to perform a series of tasks when reading fused PET/MRI data sets using color tables generated by a newly developed genetic algorithm, as compared to other commonly used schemes. The lack of a known ground truth hinders the development and evaluation of new algorithms for tasks such as registration and classification. A preliminary mesh-based breast phantom containing 12 distinct tissue classes, along with the tissue properties necessary for the simulation of dynamic positron emission tomography scans, was created. The phantom contains multiple components that can be separately manipulated, using geometric transformations, to represent populations or a single individual imaged in multiple positions. This phantom will support future multimodal breast imaging work.
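    The target registration error (TRE) reported above is commonly computed as the mean Euclidean distance between corresponding target landmarks after registration. A minimal sketch, with hypothetical landmark coordinates rather than any data from the study:

```python
import numpy as np

def target_registration_error(fixed_pts, moved_pts):
    """Mean Euclidean distance (mm) between corresponding target
    landmarks after registration; one common definition of TRE."""
    fixed_pts = np.asarray(fixed_pts, dtype=float)
    moved_pts = np.asarray(moved_pts, dtype=float)
    return np.linalg.norm(fixed_pts - moved_pts, axis=1).mean()

# Hypothetical landmark coordinates (mm): targets in the PET frame
# versus the same targets in the registered-MRI frame.
pet_pts = [[10.0, 20.0, 30.0], [40.0, 50.0, 60.0]]
mri_pts = [[13.0, 20.0, 30.0], [40.0, 54.0, 60.0]]
tre = target_registration_error(pet_pts, mri_pts)  # -> 3.5 mm
```

    Note that TRE is measured at targets not used to drive the registration (here, points other than the FSMs), which is what makes it a meaningful accuracy metric.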