
    An Overview of Rendering from Volume Data --- including Surface and Volume Rendering

    Volume rendering is a term often used ambiguously in science. One commonly quoted meaning is 'to render any three-dimensional volume data set'; however, this categorisation also contains 'surface rendering'. Surface rendering is a technique for visualising a geometric representation of a surface extracted from a three-dimensional volume data set. A more precise definition of volume rendering would cover only the direct visualisation of volumes, without the use of intermediate surface geometry. Hence we state: 'Volume Rendering is the direct visualisation of any three-dimensional volume data set, without the use of an intermediate geometric representation for isosurfaces'; 'Surface Rendering is the visualisation of a surface, from a geometric approximation of an isosurface, within a volume data set'; where an isosurface is a surface formed by connecting data points of equal value or density within a volume. This paper is an overview of both surface rendering and volume rendering techniques. Surface rendering mainly consists of contouring lines over data points and triangulating between contours. Volume rendering methods consist of ray casting techniques in which a ray is cast from the viewing plane into the object and the transparency, opacity and colour are calculated for each cell; rays are often cast until an opaque object is 'hit' or the ray exits the volume.
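    The ray casting loop described above can be sketched in a few lines. The transfer function here (sample value mapped directly to grey level and opacity) and the early-termination threshold are illustrative stand-ins, not those of any particular system:

```python
import numpy as np

def cast_ray(volume, origin, direction, step=0.5, n_steps=64):
    """Front-to-back compositing of scalar samples along one ray."""
    colour, alpha = 0.0, 0.0
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(n_steps):
        idx = np.round(pos).astype(int)
        # the ray has exited the volume
        if np.any(idx < 0) or np.any(idx >= volume.shape):
            break
        s = volume[tuple(idx)]
        # toy transfer function: sample value -> (grey level, opacity)
        c_s, a_s = s, min(1.0, s)
        colour += (1.0 - alpha) * a_s * c_s
        alpha += (1.0 - alpha) * a_s
        if alpha >= 0.99:  # early termination: an opaque cell was 'hit'
            break
        pos += step * d
    return colour, alpha
```

    Front-to-back compositing allows the early-termination test, which is what makes stopping at the first opaque object cheap.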

    Lesion boundary segmentation using level set methods

    This paper addresses the issue of accurate lesion segmentation in retinal imagery, using level set methods and a novel stopping mechanism - an elementary features scheme. Specifically, the curve propagation is guided by a gradient map built using a combination of histogram equalisation and robust statistics. The stopping mechanism uses elementary features gathered as the curve deforms over time, and then, using a lesionness measure defined herein, 'looks back in time' to find the point at which the curve best fits the real object. We implement the level set using a fast upwind scheme and compare the proposed method against five other segmentation algorithms on 50 randomly selected images of exudates, with a database of clinician marked-up boundaries as ground truth.
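    The upwind evolution mentioned above can be sketched as a single explicit Godunov step for the level set equation phi_t + F|grad phi| = 0. The unit grid spacing, the time step, and the periodic boundary handling via np.roll are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def level_set_step(phi, F, dt=0.1):
    """One explicit Godunov upwind update of phi_t + F|grad phi| = 0."""
    # one-sided differences (grid spacing 1, periodic boundaries via roll)
    Dxm = phi - np.roll(phi, 1, axis=1)   # backward difference in x
    Dxp = np.roll(phi, -1, axis=1) - phi  # forward difference in x
    Dym = phi - np.roll(phi, 1, axis=0)
    Dyp = np.roll(phi, -1, axis=0) - phi
    # Godunov gradient magnitudes selecting upwind differences by sign of F
    grad_plus = np.sqrt(np.maximum(Dxm, 0)**2 + np.minimum(Dxp, 0)**2 +
                        np.maximum(Dym, 0)**2 + np.minimum(Dyp, 0)**2)
    grad_minus = np.sqrt(np.minimum(Dxm, 0)**2 + np.maximum(Dxp, 0)**2 +
                         np.minimum(Dym, 0)**2 + np.maximum(Dyp, 0)**2)
    return phi - dt * (np.maximum(F, 0) * grad_plus +
                       np.minimum(F, 0) * grad_minus)
```

    With F > 0 the zero level set (the propagating curve) expands; in the paper's setting F would be modulated by the gradient map so the curve slows near lesion boundaries.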

    Intima-Media Thickness: Setting a Standard for a Completely Automated Method of Ultrasound Measurement

    The intima-media thickness (IMT) of the common carotid artery is a widely used clinical marker of severe cardiovascular diseases. IMT is usually measured manually on longitudinal B-Mode ultrasound images. Many computer-based techniques for IMT measurement have been proposed to overcome the limits of manual segmentation. Most of these, however, require a certain degree of user interaction. In this paper we describe a new completely automated layers extraction (CALEXia) technique for the segmentation and IMT measurement of the carotid wall in ultrasound images. CALEXia is based on an integrated approach consisting of feature extraction, line fitting, and classification that enables the automated tracing of the carotid adventitial walls. IMT is then measured by relying on a fuzzy K-means classifier. We tested CALEXia on a database of 200 images. We compared CALEXia's performance to that of a previously developed methodology based on signal analysis (CULEXsa). Three trained operators manually segmented the images, and the average profiles were considered as the ground truth. The average errors from CALEXia for the lumen-intima (LI) and media-adventitia (MA) interface tracings were 1.46 ± 1.51 pixels (0.091 ± 0.093 mm) and 0.40 ± 0.87 pixels (0.025 ± 0.055 mm), respectively. The corresponding errors for CULEXsa were 0.55 ± 0.51 pixels (0.035 ± 0.032 mm) and 0.59 ± 0.46 pixels (0.037 ± 0.029 mm). The IMT measurement error was equal to 0.87 ± 0.56 pixels (0.054 ± 0.035 mm) for CALEXia and 0.12 ± 0.14 pixels (0.01 ± 0.01 mm) for CULEXsa. Thus, CALEXia showed limited performance in segmenting the LI interface, but outperformed CULEXsa on the MA interface and in the number of incorrectly processed images (10 for CALEXia versus 16 for CULEXsa). Since the two methodologies rely on complementary strategies, we anticipate fusing them for further improvement of IMT measurement.
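    The fuzzy K-means classifier used in the measurement step above can be sketched as standard fuzzy c-means on 1-D intensity samples. The fuzziness exponent m = 2, the iteration count, and the random initialisation are generic assumptions, not CALEXia's actual parameters:

```python
import numpy as np

def fuzzy_kmeans(x, k=3, m=2.0, iters=50, seed=0):
    """Fuzzy c-means on 1-D samples x: returns centroids and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), k))           # random initial memberships
    u /= u.sum(axis=1, keepdims=True)     # each row sums to 1
    for _ in range(iters):
        w = u ** m
        # centroids: membership-weighted means of the samples
        c = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        # update memberships from distances to centroids
        d = np.abs(x[:, None] - c[None, :]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
    return c, u
```

    Applied to the intensity profile of a carotid image column, the soft memberships let a pixel belong partially to lumen, intima-media, and adventitia classes before a final crisp assignment.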

    An optimisation of a freeform lens design for LED street lighting


    Enhancement of dronogram aid to visual interpretation of target objects via intuitionistic fuzzy hesitant sets

    In this paper, we address the hesitant information in the enhancement task often caused by differences in image contrast. Enhancement approaches generally use certain filters which generate artifacts or are unable to recover all the object details in images. Typically, the contrast of an image quantifies the ratio between the amounts of black and white at a single pixel; however, contrast is better represented over a group of pixels. We have proposed a novel image enhancement scheme based on intuitionistic hesitant fuzzy sets (IHFSs) for drone images (dronograms) to facilitate better interpretation of target objects. First, a given dronogram is divided into foreground and background areas based on an estimated threshold, from which the proposed model measures the amount of black/white intensity levels. Next, we fuzzify both of them and determine the hesitant score, indicated by the distance between the two areas, for each point in the fuzzy plane. Finally, a hyperbolic operator is adopted for each membership grade to improve the photographic quality, leading to enhanced results via defuzzification. The proposed method is tested on a large drone image database. Results demonstrate better contrast enhancement, improved visual quality, and better recognition compared to the state-of-the-art methods.
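    The pipeline above (threshold split, fuzzification, membership reweighting, hyperbolic operator, defuzzification) can be sketched generically. The mean-based threshold, the beta reweighting of the two areas, and the operator form are illustrative assumptions, not the paper's IHFS formulation:

```python
import numpy as np

def fuzzy_hyperbolic_enhance(img, beta=1.0):
    """Generic fuzzy hyperbolisation sketch for contrast enhancement."""
    img = np.asarray(img, dtype=float)
    t = img.mean()                           # stand-in global threshold
    lo, hi = img.min(), img.max()
    mu = (img - lo) / (hi - lo + 1e-12)      # fuzzification to [0, 1]
    # raise foreground grades, lower background grades (assumed scheme)
    mu = np.where(img > t, mu ** (1.0 / (1.0 + beta)), mu ** (1.0 + beta))
    out = (np.exp(mu) - 1.0) / (np.e - 1.0)  # hyperbolic operator
    return lo + out * (hi - lo)              # defuzzification to grey range
```

    The reweighting pushes the two areas' membership grades apart before the hyperbolic mapping, which is what increases the apparent contrast between them while leaving the extreme grey levels fixed.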