
    Adaptive transfer functions: improved multiresolution visualization of medical models

    The final publication is available at Springer via http://dx.doi.org/10.1007/s00371-016-1253-9.
    Medical datasets are continuously increasing in size. Although larger models may be available for certain research purposes, in common clinical practice the models are usually up to 512x512x2000 voxels. These resolutions exceed the capabilities of conventional GPUs, the ones usually found in medical doctors' desktop PCs. Commercial solutions typically reduce the data by downsampling the dataset iteratively until it fits the available target specifications. The data loss reduces visualization quality, and this is not commonly compensated with other actions that might alleviate its effects. In this paper, we propose adaptive transfer functions, an algorithm that improves the transfer function in downsampled multiresolution models so that rendering quality is greatly improved. The technique is simple and lightweight, and it is suitable not only for visualizing huge models that would not fit in a GPU, but also for rendering not-so-large models on mobile GPUs, which are less capable than their desktop counterparts. Moreover, it can also be used to accelerate rendering frame rates by using lower levels of the multiresolution hierarchy while still maintaining high-quality results in a focus-and-context approach. We also show an evaluation of these results based on perceptual metrics.
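    The abstract describes compensating for downsampling by adjusting the transfer function at coarser resolution levels. The sketch below is a minimal illustration of that general idea, not the paper's algorithm: it block-averages a volume and builds a coarse-level opacity lookup that averages the fine-level opacities mapped into each coarse intensity bin. The helper names and all parameter values are hypothetical, and dimensions are assumed divisible by the downsampling factor.

```python
# Minimal sketch (not the paper's algorithm): downsample a normalized volume by
# block averaging and build a coarse-level transfer function whose opacity per
# intensity bin approximates the average fine-level opacity of the voxels it covers.
import numpy as np

def downsample(volume, factor=2):
    """Average non-overlapping factor^3 blocks of a 3D scalar volume."""
    x, y, z = (s - s % factor for s in volume.shape)
    v = volume[:x, :y, :z]
    v = v.reshape(x // factor, factor, y // factor, factor, z // factor, factor)
    return v.mean(axis=(1, 3, 5))

def adapt_transfer_function(tf, volume, coarse, bins=256):
    """For each coarse intensity bin, average the fine-level opacities of the
    original voxels that fall into it; assumes dims divisible by the factor."""
    fine_idx = np.clip((volume * (bins - 1)).astype(int), 0, bins - 1)
    coarse_up = np.repeat(np.repeat(np.repeat(coarse, 2, 0), 2, 1), 2, 2)
    coarse_idx = np.clip((coarse_up * (bins - 1)).astype(int), 0, bins - 1)
    counts = np.bincount(coarse_idx.ravel(), minlength=bins)
    sums = np.bincount(coarse_idx.ravel(), weights=tf[fine_idx].ravel(), minlength=bins)
    adapted = tf.copy()                       # fall back to the original TF
    nonzero = counts > 0
    adapted[nonzero] = sums[nonzero] / counts[nonzero]
    return adapted

volume = np.random.rand(64, 64, 64)           # stand-in for a normalized CT volume
tf = np.linspace(0.0, 1.0, 256)               # toy opacity ramp
coarse = downsample(volume)
adapted_tf = adapt_transfer_function(tf, volume, coarse)
```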

    MIRACLE’s Naive Approach to Medical Images Annotation

    One of the tasks proposed in the ImageCLEF 2005 campaign was an Automatic Annotation Task. The objective is to classify a given set of 1,000 previously unseen medical (radiological) images into 57 predefined categories covering different medical pathologies. 9,000 classified training images are provided and can be used in any way to train a classifier. The Automatic Annotation Task uses no textual information, only image-content information. This paper describes our participation in the automatic annotation task of ImageCLEF 2005.
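    As a rough illustration of a content-only annotation baseline for this kind of task (not MIRACLE's actual pipeline), the sketch below downscales each radiograph to a small grey-level grid and classifies it with k-nearest neighbours; the helper names, file paths and parameter values are assumptions.

```python
# Minimal content-only baseline for an ImageCLEF-style annotation task:
# downscale each greyscale radiograph and classify with k-nearest neighbours.
import numpy as np
from PIL import Image
from sklearn.neighbors import KNeighborsClassifier

def image_vector(path, size=(32, 32)):
    """Load an image, convert to greyscale, downscale, and flatten to a vector."""
    img = Image.open(path).convert("L").resize(size)
    return np.asarray(img, dtype=np.float32).ravel() / 255.0

def train_and_predict(train_paths, train_labels, test_paths, k=5):
    X_train = np.stack([image_vector(p) for p in train_paths])
    X_test = np.stack([image_vector(p) for p in test_paths])
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X_train, train_labels)      # e.g. the 9,000 labelled training images
    return clf.predict(X_test)          # predicted categories for unseen images
```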

    Automatic annotation of X-ray images: a study on attribute selection

    Advances in medical imaging technology have led to an exponential growth in the number of digital images that need to be acquired, analyzed, classified, stored and retrieved in medical centers. As a result, medical image classification and retrieval have recently gained high interest in the scientific community. Despite several attempts, the proposed solutions are still far from being sufficiently accurate for real-life implementations. In a previous work, the performance of different feature types was investigated in an SVM-based learning framework for classifying X-ray images into classes corresponding to body parts, and local binary patterns were observed to outperform the others. In this paper, we extend that work by exploring the effect of attribute selection on classification performance. Our experiments show that principal component analysis based attribute selection yields prediction values comparable to the baseline (all-features case) with considerably smaller subsets of the original features, leading to lower processing times and reduced storage space.
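    A hedged sketch of the kind of pipeline the abstract describes, using scikit-image and scikit-learn: local binary pattern histograms as features, PCA for attribute selection, and an SVM classifier. The multi-radius feature configuration and the number of retained components are illustrative choices, not the paper's settings.

```python
# Illustrative LBP + PCA + SVM pipeline; parameters are not the paper's settings.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def lbp_features(image, radii=(1, 2, 3), points=8):
    """Concatenated uniform LBP histograms of a greyscale X-ray at several radii."""
    feats = []
    for r in radii:
        lbp = local_binary_pattern(image, points, r, method="uniform")
        hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

def build_classifier(n_components=20):
    """SVM on PCA-reduced features; n_components controls the attribute subset size."""
    return make_pipeline(StandardScaler(), PCA(n_components=n_components), SVC(kernel="rbf"))

# images: list of 2D arrays, labels: body-part classes (assumed to be provided)
# X = np.stack([lbp_features(img) for img in images])
# clf = build_classifier().fit(X, labels)
```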

    A Community-Driven Validation Service for Standard Medical Imaging Objects

    Digital medical imaging laboratories contain many distinct types of equipment provided by different manufacturers. Interoperability is a critical issue, and the DICOM protocol is a de facto standard in those environments. However, manufacturers' implementations of the standard may have non-conformities at several levels, which hinder systems' integration. Moreover, medical staff may be responsible for data inconsistencies when entering data. Those situations severely affect the quality of healthcare services, since they can disrupt system operations. Software able to confirm data quality and compliance with the DICOM standard is therefore important for programmers, IT staff and healthcare technicians. Although there are a few solutions that try to accomplish this goal, they are unable to deal with certain situations that require user input. Furthermore, these cases usually require the setup of a working environment, which makes the sharing of validation information more difficult. This article proposes and describes the development of a Web DICOM validation service for the community. This solution requires no configuration by the user, promotes the shareability of validation results in the community, and preserves patient data privacy since files are de-identified on the client side.
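    The client-side de-identification step can be illustrated with pydicom. The sketch below is a minimal example under stated assumptions, not the article's implementation: it blanks a handful of directly identifying attributes before upload, whereas a production tool would follow the full DICOM de-identification profile.

```python
# Minimal client-side de-identification of a DICOM file before submitting it to
# a web validation service; a real tool would cover many more tags.
import pydicom

# A few directly identifying attributes; the DICOM de-identification profile lists many more.
IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "PatientAddress",
                    "ReferringPhysicianName", "InstitutionName"]

def deidentify(in_path, out_path):
    ds = pydicom.dcmread(in_path)
    for tag in IDENTIFYING_TAGS:
        if tag in ds:
            setattr(ds, tag, "")          # blank the value but keep the element
    ds.remove_private_tags()              # drop vendor-specific private elements
    ds.save_as(out_path)
    return out_path

# deidentify("study.dcm", "study_anon.dcm")  # then upload study_anon.dcm for validation
```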

    Conditional weighted universal source codes: second order statistics in universal coding

    Get PDF
    We consider the use of second-order statistics in two-stage universal source coding. Examples of two-stage universal codes include the weighted universal vector quantization (WUVQ), weighted universal bit allocation (WUBA), and weighted universal transform coding (WUTC) algorithms. Second-order statistics are incorporated into two-stage universal source codes in a manner analogous to the way they are incorporated into entropy-constrained vector quantization (ECVQ) to yield conditional ECVQ (CECVQ). In this paper, we describe an optimal two-stage conditional entropy-constrained universal source code, along with its associated optimal design algorithm and a fast (but nonoptimal) variation of the original code. The design technique and coding algorithm presented here result in a new family of conditional entropy-constrained universal codes, including but not limited to the conditional entropy-constrained WUVQ (CWUVQ), the conditional entropy-constrained WUBA (CWUBA), and the conditional entropy-constrained WUTC (CWUTC). The fast variation of the conditional entropy-constrained universal codes allows the designer to trade off performance gains against storage and delay costs. We demonstrate the performance of the proposed codes on a collection of medical brain scans. On the given data set, the CWUVQ achieves up to 7.5 dB performance improvement over variable-rate WUVQ and up to 12 dB improvement over ECVQ. On the same data set, the fast variation of the CWUVQ achieves performance identical to that of the original code at all but the lowest rates (below 0.125 bits per pixel).
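    As an illustration of how second-order statistics enter the encoding rule, the sketch below implements a generic conditional entropy-constrained nearest-codeword search: each vector is assigned the codeword minimizing distortion plus lambda times the conditional code length -log2 p(index | previous index). It is a toy version of the CECVQ idea, not the paper's CWUVQ design algorithm; the codebook, conditional probabilities and lambda are assumed inputs.

```python
# Toy conditional entropy-constrained encoding: Lagrangian cost with a
# conditional (second-order) code length replacing the unconditional one.
import numpy as np

def cecvq_encode(blocks, codebook, cond_probs, lam):
    """blocks: (N, d) vectors; codebook: (K, d); cond_probs: (K, K) row-stochastic
    matrix with cond_probs[j, i] = P(index i | previous index j); lam >= 0."""
    prev = 0                                                  # assumed initial context
    indices = []
    for x in blocks:
        dist = np.sum((codebook - x) ** 2, axis=1)            # squared-error distortion
        rate = -np.log2(np.maximum(cond_probs[prev], 1e-12))  # conditional code length
        i = int(np.argmin(dist + lam * rate))                 # Lagrangian cost D + lam*R
        indices.append(i)
        prev = i
    return indices
```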