    Composition of the cross-dataset training and testing.

    <p>*The annotations SH and DH are combined to form the training set in DR1, totaling 180 images because of the overlap between them.</p>

    Regions of interest (dashed black regions) and the points of interest (blue circles).

    <p>Points of interest falling within the regions marked by the specialist are considered for creating the class-aware codebook – half of the codebook is learned from local features sampled inside the regions marked as lesions, and half from local features sampled outside those regions.</p>
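The class-aware codebook described above can be sketched as two independent clusterings whose centroids are concatenated. This is a minimal illustration, not the paper's implementation: the function names, the tiny Lloyd's k-means, and the 50/50 split interface are assumptions; the paper only specifies k-means with Euclidean distance and the half-inside/half-outside split.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Minimal Lloyd's k-means (Euclidean distance); returns k centroids.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest centroid.
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Move each centroid to the mean of its assigned samples.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(0)
    return centroids

def class_aware_codebook(feats_inside, feats_outside, size):
    # Half the codewords are learned from features inside lesion regions,
    # half from features outside them, then stacked into one codebook.
    half = size // 2
    return np.vstack([kmeans(feats_inside, half),
                      kmeans(feats_outside, half)])
```

With this split, a lesion-specific codebook keeps codewords that describe pathological texture separate from codewords that describe healthy retinal background.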

    State of the art for the detection of bright lesions.

    <p>*HEI-MED dataset.</p><p>**MESSIDOR dataset.</p><p>***ROC dataset.</p><p>****AUC obtained for training on the HEI-MED dataset and testing on the MESSIDOR dataset.</p>

    The BoVW model illustrated as a matrix.

    <p>The figure highlights the relationship between the low-level features <b>x</b><i><sub>j</sub></i>, the codewords <b>c</b><i><sub>m</sub></i> of the visual dictionary, the encoded features <i>α<sub>m</sub></i>, the coding function <i>f</i> and the pooling function <i>g</i>.</p>
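The matrix view above can be made concrete: coding <i>f</i> fills an M×N activation matrix (one column per local feature, one row per codeword), and pooling <i>g</i> collapses it to a single M-dimensional image descriptor. The sketch below, with assumed function names, shows the simplest instance: hard (nearest-codeword) coding and sum pooling.

```python
import numpy as np

def hard_coding(X, codebook):
    # f: each column of A is a one-hot activation of the nearest codeword.
    d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # N x M distances
    A = np.zeros((codebook.shape[0], X.shape[0]))              # M x N matrix
    A[d.argmin(1), np.arange(X.shape[0])] = 1.0
    return A

def sum_pooling(A):
    # g: collapse the N activation columns into one M-dim image descriptor.
    return A.sum(axis=1)
```

Other choices of <i>f</i> (soft or sparse assignment) and <i>g</i> (max pooling) fit the same matrix template; only the two functions change.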

    Annotation occurrences for the three datasets.

    <p>*“Red Lesion” is a more general annotation that encompasses SH and DH, as well as microaneurysms.</p><p>**The lesion counts do not sum to this value because an image can present several types of lesion at once.</p>

    Standardized AUCs per lesion, for six combinations of feature extraction and coding (horizontal axis).

    <p>In the box-plots (black), the whiskers show the range up to 1.5× the interquartile range, and outliers are shown as small circles. Averages (small squares) and 95%-confidence intervals (error bars) are also shown, in red, for the same data. The strong synergy between sparse feature extraction and semi-soft coding is evident: it consistently improved results for all lesions, while the other combinations improve the results for some lesions at the cost of decreasing them for others (as shown by the spread of the standardized effects on the vertical axis). This plot is based on a balanced design with the DR2 dataset and all lesions; the other balanced design, with both datasets and two lesions, shows similar results.</p>
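As background for the semi-soft coding mentioned above: a common "semi-soft" scheme assigns soft (distance-weighted) activations only to each feature's few nearest codewords and zeroes the rest. The sketch below illustrates that general idea; the Gaussian kernel, the bandwidth, the neighborhood size, and the function name are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def semi_soft_coding(X, codebook, knn=2, sigma=1.0):
    # Soft assignment restricted to the knn nearest codewords per feature;
    # all other activations are set to zero ("semi-soft" compromise between
    # hard one-hot coding and fully soft assignment).
    d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # N x M distances
    w = np.exp(-d / (2 * sigma ** 2))                          # soft weights
    A = np.zeros_like(w)
    idx = np.argsort(d, axis=1)[:, :knn]                       # knn nearest codewords
    rows = np.arange(X.shape[0])[:, None]
    A[rows, idx] = w[rows, idx]                                # keep only knn weights
    A /= A.sum(axis=1, keepdims=True)                          # renormalize per feature
    return A.T  # M x N, matching the codewords-by-features matrix view
```

Compared with hard coding, this keeps ambiguity between similar codewords while preserving the sparsity that makes pooling discriminative.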

    Pipeline of the BoVW-based automated diabetic retinopathy classification system for identifying abnormal retinal images.

    <p>For Low-Level feature extraction, the proposed method identifies points of interest (points in high-contrast or context-changing areas) within regions of interest that contain specific lesions marked and reviewed by specialists during the training of the method. For Codebook Learning (vocabulary creation), the method applies k-means clustering with Euclidean distance to a sample of the points of interest; the resulting centroids serve as representative codewords (the most informative points of interest, for instance). With the codebook and the low-level description of a set of training images for a specific DR-related lesion, the Mid-Level feature extraction step employs the classical coding/pooling combination (hard/sum), which builds a histogram of the number of activations of each visual word in each analyzed image. For Decision model training, the current method requires one decision model per task: normal vs. bright lesions, normal vs. red lesions, or normal vs. multi-lesion classification.</p>
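The mid-level step named above (hard coding with sum pooling) reduces each image to a histogram of codeword activations. A minimal sketch of that descriptor, with an assumed function name, under the assumption that local features and codewords are plain numpy arrays:

```python
import numpy as np

def bovw_histogram(features, codebook):
    # Hard coding: assign each local feature to its nearest codeword;
    # sum pooling: count activations per codeword to get the image descriptor.
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    nearest = d.argmin(axis=1)
    return np.bincount(nearest, minlength=len(codebook)).astype(float)
```

One such histogram per image then feeds each of the binary decision models (e.g., normal vs. bright lesions), with a separate model trained per lesion group as described above.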