UOLO - automatic object detection and segmentation in biomedical images
We propose UOLO, a novel framework for the simultaneous detection and
segmentation of structures of interest in medical images. UOLO consists of an
object segmentation module whose intermediate abstract representations are
processed and used as input for object detection. The resulting system is
optimized simultaneously for detecting a class of objects and segmenting an
optionally different class of structures. UOLO is trained on a set of bounding
boxes enclosing the objects to detect, as well as pixel-wise segmentation
information, when available. A new loss function is devised, taking into
account whether a reference segmentation is accessible for each training image,
in order to suitably backpropagate the error. We validate UOLO on the task of
simultaneous optic disc (OD) detection, fovea detection, and OD segmentation
from retinal images, achieving state-of-the-art performance on public datasets.
Comment: Published at DLMIA 2018. Licensed under the Creative Commons
CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0
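
A minimal sketch of the kind of availability-aware loss described above, assuming a PyTorch setup with one detection head and one segmentation head; the function names, tensor shapes, and the smooth-L1 detection term are illustrative assumptions, not the authors' implementation:

import torch
import torch.nn.functional as F

def uolo_style_loss(det_pred, det_target, seg_logits, seg_target, has_mask, seg_weight=1.0):
    # det_pred/det_target: already-matched box parameters; seg_logits/seg_target: (B, 1, H, W);
    # has_mask: (B,) boolean tensor, True where a reference segmentation exists.
    det_loss = F.smooth_l1_loss(det_pred, det_target)  # detection term, always applied
    # Segmentation term: per-image BCE, averaged only over images that have a mask.
    per_image = F.binary_cross_entropy_with_logits(
        seg_logits, seg_target, reduction="none").mean(dim=(1, 2, 3))
    if has_mask.any():
        seg_loss = per_image[has_mask].mean()
    else:
        seg_loss = seg_logits.sum() * 0.0  # no reference masks in this batch
    return det_loss + seg_weight * seg_loss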
Multidataset Incremental Training for Optic Disc Segmentation
When convolutional neural networks are applied to image segmentation, the
results depend greatly on the data sets used to train the networks. Cloud
providers support multi-GPU and TPU virtual machines, making the idea of
cloud-based segmentation as a service attractive. In this paper we study the
problem of building a segmentation service, where images would come from
different acquisition instruments, by training a generalized U-Net with images
from a single dataset or from several datasets. We also study the possibility
of training with a single instrument and performing quick retrains when more
data is available. As our example we perform segmentation of the optic disc in
fundus images, which is useful for glaucoma diagnosis. We use two publicly
available data sets (RIM-One V3, DRISHTI) for individual, mixed, or incremental
training. We show that multidataset or incremental training can produce results
that are similar to those published by researchers who use the same dataset for
both training and validation.
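
A minimal sketch of the incremental-training idea, assuming a generic PyTorch U-Net (unet) and two data loaders (rimone_loader, drishti_loader); these names and the hyperparameters are illustrative, not the paper's actual training code:

import torch

def train(model, loader, epochs, lr):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(epochs):
        for images, masks in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            optimizer.step()

# Usage (assuming unet, rimone_loader, and drishti_loader are defined):
# train(unet, rimone_loader, epochs=50, lr=1e-3)   # stage 1: single instrument/dataset
# train(unet, drishti_loader, epochs=5, lr=1e-4)   # stage 2: quick retrain on new data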
Accurate, fast, data efficient and interpretable glaucoma diagnosis with automated spatial analysis of the whole cup to disc profile
Background: Glaucoma is the leading cause of irreversible blindness worldwide. It is a heterogeneous group of conditions with a common optic neuropathy and associated loss of peripheral vision. Both over- and under-diagnosis carry high costs in terms of healthcare spending and preventable blindness. The characteristic clinical feature of glaucoma is asymmetrical optic nerve rim narrowing, which is difficult for humans to quantify reliably. Strategies to improve and automate optic disc assessment are therefore needed to prevent sight loss.
Methods: We developed a novel glaucoma detection algorithm that segments and analyses colour photographs to quantify optic nerve rim consistency around the whole disc at 15-degree intervals. This provides a profile of the cup/disc ratio, in contrast to the vertical cup/disc ratio in common use. We introduce a spatial probabilistic model to account for the optic nerve shape, and then use this model to derive a disc deformation index and a decision rule for glaucoma. We tested our algorithm on two separate image datasets (ORIGA and RIM-ONE).
Results: The spatial algorithm accurately distinguished glaucomatous and healthy discs on internal and external validation (AUROC 99.6% and 91.0% respectively). It achieves this using a dataset 100 times smaller than that required for deep learning algorithms, is flexible to the type of cup and disc segmentation (automated or semi-automated), utilises images with missing data, and is correlated with the disc size (p = 0.02) and the rim-to-disc ratio at the narrowest rim (p < 0.001, in external validation).
Discussion: The spatial probabilistic algorithm is highly accurate and highly data efficient, and it extends to any imaging hardware in which the boundaries of the cup and disc can be segmented. This makes the algorithm particularly applicable to research into disease mechanisms, and also to glaucoma screening in low-resource settings.
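
A minimal sketch of how a whole-disc cup/disc profile at 15-degree intervals could be computed from binary cup and disc segmentation masks; this is an illustrative reconstruction under my own assumptions (ray casting from the disc centroid), not the authors' spatial probabilistic model:

import numpy as np

def cup_disc_profile(cup_mask, disc_mask, step_deg=15, n_samples=500):
    # Use the disc-mask centroid as the common origin for all rays.
    ys, xs = np.nonzero(disc_mask)
    cy, cx = ys.mean(), xs.mean()
    max_r = float(max(disc_mask.shape))
    ratios = []
    for angle in np.deg2rad(np.arange(0, 360, step_deg)):
        rs = np.linspace(0.0, max_r, n_samples)
        yy = np.clip((cy + rs * np.sin(angle)).astype(int), 0, disc_mask.shape[0] - 1)
        xx = np.clip((cx + rs * np.cos(angle)).astype(int), 0, disc_mask.shape[1] - 1)
        on_disc = disc_mask[yy, xx] > 0
        on_cup = cup_mask[yy, xx] > 0
        disc_r = rs[on_disc].max() if on_disc.any() else np.nan
        cup_r = rs[on_cup].max() if on_cup.any() else 0.0
        ratios.append(cup_r / disc_r if disc_r else np.nan)  # one cup/disc ratio per sector
    return np.array(ratios)  # 24 values describing the whole cup/disc profile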
Optic Disc and Fovea Localisation in Ultra-widefield Scanning Laser Ophthalmoscope Images Captured in Multiple Modalities
We propose a convolutional neural network for localising the centres of the optic disc (OD) and fovea in ultra-wide field of view scanning laser ophthalmoscope (UWFoV-SLO) images of the retina. Images captured in both reflectance and autofluorescence (AF) modes, and in central-pole and eye-steered gazes, were used. The method achieved an OD localisation accuracy of 99.4% within one OD radius, and a fovea localisation accuracy of 99.1% within one OD radius, on a test set comprising 1790 images. The performance of fovea localisation in AF images was comparable to the variation between human annotators at this task. The laterality of the image (whether it shows the left or right eye) was inferred from the OD and fovea coordinates with an accuracy of 99.9%.
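
The reported laterality inference lends itself to a very simple geometric rule; the sketch below is my own illustration, assuming a conventionally oriented (non-mirrored) image in which x increases to the right, and is not necessarily the paper's exact method:

def infer_laterality(od_xy, fovea_xy):
    # In a conventionally displayed image of the right eye (OD) the fovea lies
    # to the left of the optic disc; in the left eye (OS) it lies to the right.
    od_x, _ = od_xy
    fovea_x, _ = fovea_xy
    return "OD" if fovea_x < od_x else "OS"

# Example: infer_laterality((620, 400), (380, 410)) -> "OD"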
Micromechanical Properties of Injection-Molded Starch–Wood Particle Composites
The micromechanical properties of injection molded starch–wood particle composites were investigated as a function of particle content and humidity conditions.
The composite materials were characterized by scanning electron microscopy and X-ray diffraction methods. The microhardness
of the composites was shown to increase notably with the concentration of the wood particles. In addition, creep behavior under the indenter and temperature dependence
were evaluated in terms of the independent contribution of the starch matrix and the wood microparticles to the hardness value. The influence of drying time on the density
and weight uptake of the injection-molded composites was highlighted. The results revealed the role of the mechanism of water evaporation, showing that the dependence of water uptake on temperature was greater for the starch–wood composites than for the pure starch sample. Experiments performed during the drying process at 70°C indicated that
the wood in the starch composites did not prevent water loss from the samples.
Deep Learning Models for Automatic Makeup Detection
Makeup can disguise facial features, which degrades the performance of many face-related analysis systems, including face recognition, facial landmark characterisation, aesthetic quantification, and automated age estimation methods. Facial makeup is therefore likely to directly affect several real-life applications such as cosmetology and virtual cosmetics recommendation systems, security and access control, and social interaction. In this work, we conduct a comparative study and design automated facial makeup detection systems that leverage multiple learning schemes from a single unconstrained photograph. We investigate the efficacy of deep learning models for makeup detection, combining a transfer learning strategy with semi-supervised learning on labelled and unlabelled data. First, during supervised learning, the VGG16 convolutional neural network, pre-trained on a large dataset, is fine-tuned on makeup-labelled data. Second, two unsupervised learning methods, self-learning and a convolutional auto-encoder, are trained on unlabelled data and then incorporated with supervised learning during semi-supervised learning. Comprehensive experiments and comparative analysis have been conducted on 2479 labelled images and 446 unlabelled images collected from six challenging makeup datasets. The results reveal that the convolutional auto-encoder combined with supervised learning gives the best makeup detection performance, achieving an accuracy of 88.33% and an area under the ROC curve of 95.15%. These promising results reflect the benefit of combining different learning strategies by harnessing labelled and unlabelled data, and such computational intelligence methods would also be advantageous to the beauty industry.
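
A minimal sketch of the supervised fine-tuning stage, assuming a PyTorch/torchvision setup; the frozen-base strategy, layer choice, and hyperparameters are illustrative assumptions rather than the authors' exact configuration:

import torch
import torchvision

# Load an ImageNet-pretrained VGG16 and freeze the convolutional base.
vgg = torchvision.models.vgg16(weights="DEFAULT")
for p in vgg.features.parameters():
    p.requires_grad = False

# Replace the final classifier layer with a single makeup / no-makeup logit.
vgg.classifier[6] = torch.nn.Linear(4096, 1)

optimizer = torch.optim.Adam(
    (p for p in vgg.parameters() if p.requires_grad), lr=1e-4)
criterion = torch.nn.BCEWithLogitsLoss()
# Training would then iterate over the labelled makeup images, computing
# criterion(vgg(batch).squeeze(1), labels) and stepping the optimizer.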
Enhancement of medical images using fuzzy logic
Image enhancement is one of the most critical subjects in the computer vision and image processing fields. It can be considered a means to enrich the perception of images for human viewers. All kinds of images typically suffer from problems such as weak contrast and noise, and the primary purpose of image enhancement is to improve an image's visual appearance. Although many algorithms have recently been proposed for enhancing medical images, image enhancement is still deemed a challenging task. In this paper, the fuzzy c-means (FCM) clustering technique is utilized to enhance medical images. The enhancement method consists of two stages: the proposed algorithm first clusters the image pixels, and then increases the gray-level difference between the different objects to achieve the enhancement of the medical images. The algorithm was tested on various images; it enhanced small targets in the images to a reasonable degree and showed favorable performance. The enhancement results were evaluated using criteria such as peak signal-to-noise ratio (PSNR), mean square error (MSE), and average information content (AIC), showing promising performance.
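
Since the abstract evaluates enhancement with MSE and PSNR, a small sketch of those two metrics may be useful; this is my own implementation of the standard definitions, not code from the paper:

import numpy as np

def mse(original, enhanced):
    # Mean square error between two images of the same shape.
    original = original.astype(np.float64)
    enhanced = enhanced.astype(np.float64)
    return np.mean((original - enhanced) ** 2)

def psnr(original, enhanced, max_value=255.0):
    # Peak signal-to-noise ratio in decibels, assuming 8-bit images by default.
    err = mse(original, enhanced)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / err)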