
    06311 Abstracts Collection -- Sensor Data and Information Fusion in Computer Vision and Medicine

    From 30.07.06 to 04.08.06, the Dagstuhl Seminar 06311 "Sensor Data and Information Fusion in Computer Vision and Medicine" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. Sensor data fusion is of increasing importance for many research fields and applications. Multi-modal imaging is routine in medicine, and in robotics it is common to use multi-sensor data fusion. During the seminar, researchers and application experts working in the field of sensor data fusion presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are collected in this paper. The first section describes the seminar topics and goals in general; the second part briefly summarizes the contributions.

    06311 Executive Summary -- Sensor Data and Information Fusion in Computer Vision and Medicine

    Today many technical systems are equipped with multiple sensors and information sources, such as cameras, ultrasound sensors, or web databases. Generating an exorbitantly large amount of data is no problem; what remains largely unsolved is how to exploit the expectation that the collected data provide more information than the sum of their parts. The design and analysis of algorithms for sensor data and information acquisition and fusion, as well as their use across a diverse range of applications, was the major focus of the seminar held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. 24 researchers, practitioners, and application experts from different areas met to summarize the current state of the art in data and information fusion, to discuss current research problems in fusion, and to envision future demands of this challenging research field. The application scenarios considered for data and information fusion were in the fields of computer vision and medicine.

    Fast Iterative Reconstruction in MVCT

    Statistical iterative reconstruction is expected to improve the image quality of computed tomography (CT). However, one of the challenges of iterative reconstruction is its large computational cost. The purpose of this review is to summarize a fast iterative reconstruction algorithm obtained by optimizing reconstruction parameters. Megavolt projection data were acquired from a TomoTherapy system and reconstructed using an in-house statistical iterative reconstruction algorithm. Total variation was used as the regularization term, and its weight was determined by evaluating the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and a visual assessment of spatial resolution using Gammex and Cheese phantoms. Gradient descent with an adaptive convergence parameter, ordered-subset expectation maximization (OSEM), and CPU/GPU parallelization were applied to accelerate the reconstruction algorithm. The SNR and CNR of the iterative reconstruction were several times better than those of filtered back projection (FBP). The GPU-parallelized code combined with the OSEM algorithm reconstructed an image several hundred times faster than a CPU calculation. With 500 iterations, which provided good convergence, our method produced a 512 × 512 pixel image within a few seconds. The image quality of the present algorithm was much better than that of FBP for patient data.
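    The core update in such an approach is compact. Below is a minimal sketch, assuming a generic linear forward projector A standing in for the TomoTherapy system matrix and plain projected gradient descent in place of the paper's ordered-subset and GPU-parallelized implementation; the function names (tv_grad, reconstruct) are illustrative.

    ```python
    # Minimal sketch of TV-regularized statistical iterative reconstruction.
    # Assumptions: a generic linear projector A stands in for the real system
    # matrix; plain gradient descent replaces the paper's OSEM/GPU scheme.
    import numpy as np

    def tv_grad(img, eps=1e-6):
        """Gradient of smoothed isotropic total variation of a 2-D image."""
        dx = np.zeros_like(img); dx[:, :-1] = img[:, 1:] - img[:, :-1]
        dy = np.zeros_like(img); dy[:-1, :] = img[1:, :] - img[:-1, :]
        mag = np.sqrt(dx * dx + dy * dy + eps)
        px, py = dx / mag, dy / mag
        g = np.zeros_like(img)
        g[:, :-1] -= px[:, :-1]; g[:, 1:] += px[:, :-1]  # adjoint of x-difference
        g[:-1, :] -= py[:-1, :]; g[1:, :] += py[:-1, :]  # adjoint of y-difference
        return g

    def reconstruct(A, b, shape, lam=0.05, iters=500):
        """Minimize 0.5*||A x - b||^2 + lam*TV(x) by projected gradient descent."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # safe fixed step; the paper adapts it
        x = np.zeros(shape)
        for _ in range(iters):
            resid = A @ x.ravel() - b
            grad = (A.T @ resid).reshape(shape) + lam * tv_grad(x)
            x = np.clip(x - step * grad, 0.0, None)  # keep attenuation non-negative
        return x

    # Toy usage: 32x32 square phantom, random matrix as projector stand-in.
    rng = np.random.default_rng(0)
    n = 32
    truth = np.zeros((n, n)); truth[8:24, 8:24] = 1.0
    A = rng.standard_normal((2 * n * n, n * n)) / n
    b = A @ truth.ravel() + 0.01 * rng.standard_normal(2 * n * n)
    recon = reconstruct(A, b, (n, n), lam=0.05, iters=200)
    ```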

    Intensity-based non-rigid registration using adaptive multilevel free-form deformation with an incompressibility constraint

    A major problem with non-rigid image registration techniques in many applications is their tendency to reduce the volume of contrast-enhancing structures [10]. Contrast enhancement is an intensity inconsistency, which is precisely what intensity-based registration algorithms are designed to minimize. Therefore, contrast-enhanced structures typically shrink substantially during registration, which affects the use of the resulting transformation for volumetric analysis, image subtraction, and multispectral classification. A common approach to address this problem is to constrain the deformation. In this paper we present a novel incompressibility constraint approach that is based on the Jacobian determinant of the deformation and can be computed rapidly. We apply our intensity-based non-rigid registration algorithm with this incompressibility constraint to two clinical applications (MR mammography, CT-DSA) and demonstrate that it produces high-quality deformations (as judged by visual assessment) while preserving the volume of contrast-enhanced structures.
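    As a rough illustration of the constraint's main ingredient, the sketch below computes per-pixel Jacobian determinants of a dense 2-D displacement field by finite differences and penalizes local volume change. The paper formulates the constraint on a multilevel B-spline free-form deformation and its exact penalty form is not reproduced here; the |log det J| penalty is one common choice, used as an assumption.

    ```python
    # Sketch of an incompressibility penalty from the Jacobian determinant of
    # a deformation T(p) = p + u(p), on a dense 2-D displacement field u.
    # Assumption: |log det J| as penalty; the paper's constraint is defined
    # on a B-spline free-form deformation and may use a different form.
    import numpy as np

    def jacobian_determinant(u):
        """u: (H, W, 2) displacement field; returns per-pixel det of dT/dp."""
        u0_d0, u0_d1 = np.gradient(u[..., 0])  # derivatives of row displacement
        u1_d0, u1_d1 = np.gradient(u[..., 1])  # derivatives of column displacement
        return (1.0 + u0_d0) * (1.0 + u1_d1) - u0_d1 * u1_d0

    def incompressibility_penalty(u, eps=1e-6):
        """Zero for volume-preserving fields; grows with expansion/shrinkage."""
        detj = np.clip(jacobian_determinant(u), eps, None)
        return float(np.mean(np.abs(np.log(detj))))

    # A shear preserves volume (det J = 1); a uniform expansion does not.
    yy, xx = np.mgrid[0:64, 0:64].astype(float)
    shear = np.stack([0.2 * xx, np.zeros_like(xx)], axis=-1)
    expand = np.stack([0.1 * yy, 0.1 * xx], axis=-1)
    print(incompressibility_penalty(shear))   # ~0.0
    print(incompressibility_penalty(expand))  # ~log(1.21)
    ```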

    Shape-Based Averaging

    A new method for averaging multidimensional images is presented, which is based on signed Euclidean distance maps computed for each of the pixel values. We refer to the algorithm as “shape-based averaging” (SBA) because of its similarity to Raya and Udupa's shape-based interpolation method. The new method does not introduce pixel intensities that were not present in the input data, which makes it suitable for averaging nonnumerical data such as label maps (segmentations). Using segmented human brain magnetic resonance images, SBA is compared to label voting for the purpose of averaging image segmentations in a multiclassifier fashion. SBA, on average, performed as well as label voting in terms of recognition rates of the averaged segmentations. SBA produced more regular and contiguous structures with less fragmentation than did label voting. SBA was also more robust for small numbers of atlases and for low atlas resolutions, in particular when combined with shape-based interpolation. We conclude that SBA improves the contiguity and accuracy of averaged image segmentations. Index Terms: combination of segmentations, shape-based averaging (SBA), shape-based interpolation (SBI), signed Euclidean distance transform.
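    A compact sketch of the idea, assuming SciPy's Euclidean distance transform: each input label map is converted, per label, to a signed distance map (negative inside, positive outside), the maps are averaged across inputs, and each pixel takes the label with the smallest mean signed distance. This follows the abstract's description, not the authors' reference implementation, and assumes every candidate label occurs in every input map.

    ```python
    # Sketch of shape-based averaging (SBA) for label maps, per the abstract:
    # average signed Euclidean distance maps, then take the per-pixel minimum.
    # Not the authors' implementation; names here are illustrative.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def signed_distance(mask):
        """Negative inside the structure, positive outside (in pixels)."""
        return distance_transform_edt(~mask) - distance_transform_edt(mask)

    def shape_based_average(label_maps):
        """label_maps: list of same-shape integer arrays; returns fused labels."""
        labels = np.unique(np.stack(label_maps))
        mean_sd = np.stack([
            np.mean([signed_distance(lm == lab) for lm in label_maps], axis=0)
            for lab in labels
        ])
        # Only label values already present in the inputs appear in the output.
        return labels[np.argmin(mean_sd, axis=0)]

    # Toy usage: fuse three slightly shifted disk segmentations.
    yy, xx = np.mgrid[0:64, 0:64]
    disks = [((yy - 32) ** 2 + (xx - c) ** 2 < 100).astype(int) for c in (30, 32, 34)]
    fused = shape_based_average(disks)
    ```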

    Model for defining and reporting Reference-based Validation Protocols in Medical Image Processing

    Objectives. Image processing tools are often embedded in larger systems. Validation of image processing methods is important because the performance of such methods can have an impact on the performance of the larger systems, and consequently on decisions and actions based on the use of these systems. Most validation studies compare the direct or indirect results of a method with a reference that is assumed to be very close or equal to the correct solution. In this paper, we propose a model for defining and reporting reference-based validation protocols in medical image processing. Materials and Methods. The model was built using an ontological approach. Its components were identified from the analysis of initial publications (mainly reviews) on medical image processing, especially registration and segmentation, and from discussions with experts from the medical imaging community during international conferences and workshops. The model was validated by its instantiation for 38 selected papers that include a validation study, mainly in medical image registration and segmentation.
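    To make the idea concrete, here is a minimal sketch of what a reference-based validation record could look like in code: a method's output compared against a reference assumed close to the correct solution. The class and field names are hypothetical illustrations, not the components of the paper's ontology.

    ```python
    # Hypothetical sketch of a reference-based validation protocol record;
    # field and method names are illustrative, not the paper's ontology terms.
    from dataclasses import dataclass, field
    from typing import Any, Callable

    @dataclass
    class ReferenceBasedValidation:
        method_name: str                         # method under test
        reference: Any                           # assumed-correct reference
        metric: Callable[[Any, Any], float]      # e.g. Dice overlap
        scores: list = field(default_factory=list)

        def evaluate(self, method_output: Any) -> float:
            """Compare one result against the reference and record the score."""
            score = self.metric(method_output, self.reference)
            self.scores.append(score)
            return score
    ```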