Supervised machine learning based multi-task artificial intelligence classification of retinopathies
Artificial intelligence (AI) classification holds promise as a novel and
affordable screening tool for clinical management of ocular diseases. Rural and
underserved areas, which suffer from a lack of access to experienced
ophthalmologists, may particularly benefit from this technology. Quantitative
optical coherence tomography angiography (OCTA) imaging provides excellent
capability to identify subtle vascular distortions, which are useful for
classifying retinovascular diseases. However, application of AI for
differentiation and classification of multiple eye diseases is not yet
established. In this study, we demonstrate supervised machine learning based
multi-task OCTA classification. We sought 1) to differentiate normal from
diseased ocular conditions, 2) to differentiate different ocular disease
conditions from each other, and 3) to stage the severity of each ocular
condition. Quantitative OCTA features, including blood vessel tortuosity (BVT),
blood vascular caliber (BVC), vessel perimeter index (VPI), blood vessel
density (BVD), foveal avascular zone (FAZ) area (FAZ-A), and FAZ contour
irregularity (FAZ-CI) were fully automatically extracted from the OCTA images.
A stepwise backward elimination approach was employed to identify sensitive
OCTA features and optimal-feature-combinations for the multi-task
classification. For proof-of-concept demonstration, diabetic retinopathy (DR)
and sickle cell retinopathy (SCR) were used to validate the supervised machine
learning classifier. The presented AI classification methodology is applicable
and can be readily extended to other ocular diseases, holding promise to enable
a mass-screening platform for clinical deployment and telemedicine.
Comment: Supplemental material attached at the end
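The stepwise backward elimination described above can be sketched as a greedy loop. The sketch below is illustrative only: it uses a toy least-squares R² scorer on synthetic data, whereas the actual study would score cross-validated classification performance over the six OCTA features (BVT, BVC, VPI, BVD, FAZ-A, FAZ-CI).

```python
import numpy as np

def backward_elimination(X, y, score_fn, n_keep):
    """Greedy stepwise backward elimination: starting from all features,
    repeatedly drop the feature whose removal hurts the score least,
    until only n_keep features remain."""
    features = list(range(X.shape[1]))
    while len(features) > n_keep:
        # best (score, feature-to-drop) among all one-feature removals
        _, drop = max(
            (score_fn(X[:, [f for f in features if f != d]], y), d)
            for d in features)
        features.remove(drop)
    return features

def r2_score_fn(Xs, y):
    # toy scorer: R^2 of an ordinary least-squares fit; a real study
    # would use cross-validated classifier performance instead
    yc = y - y.mean()
    coef, *_ = np.linalg.lstsq(Xs, yc, rcond=None)
    resid = yc - Xs @ coef
    return 1.0 - (resid @ resid) / (yc @ yc)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))          # columns 1 and 3 are pure noise
y = (2 * X[:, 0] - 3 * X[:, 2] > 0).astype(float)
kept = backward_elimination(X, y, r2_score_fn, n_keep=2)  # expected: [0, 2]
```

Because removing a noise column barely changes the score while removing an informative one degrades it sharply, the loop retains the two informative features.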
Detection of Hard Exudates in Retinal Fundus Images using Deep Learning
Diabetic Retinopathy (DR) is a retinal disorder that affects people who have
had diabetes mellitus for a long time (around 20 years). DR is one of the main
causes of preventable blindness worldwide. If not detected early, the patient
may progress to severe stages of irreversible blindness. The lack of
ophthalmologists poses a serious problem for the growing number of diabetes
patients. It is therefore advisable to develop an automated DR screening system
to assist the ophthalmologist in decision making. Hard exudates develop when DR
is present.
It is important to detect hard exudates in order to detect DR in an early
stage. Research has been done to detect hard exudates using regular image
processing techniques and machine-learning techniques. In this paper, a deep
learning algorithm is presented that detects hard exudates in fundus images of
the retina.
Comment: 5 pages, 3 figures, 2 tables, International Conference on Systems, Computation, Automation and Networking http://icscan.in
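As a point of contrast with the deep-learning approach, the "regular image processing techniques" mentioned above can be as simple as intensity thresholding. The sketch below is illustrative only: hard exudates appear as bright lesions most visible in the green channel, and the threshold factor `k` is an arbitrary assumption, not a value from the paper.

```python
import numpy as np

def exudate_candidates(fundus_rgb, k=2.0):
    """Toy classical baseline: flag pixels whose green-channel intensity
    is more than k standard deviations above the image mean.  Real
    pipelines add optic-disc masking and morphological cleanup."""
    green = fundus_rgb[..., 1].astype(float)
    return green > green.mean() + k * green.std()

# synthetic 64x64 "fundus": dim background with one small bright lesion
img = np.full((64, 64, 3), 40, dtype=np.uint8)
img[10:14, 20:24, :] = 250
mask = exudate_candidates(img)  # True only on the bright 4x4 patch
```

A learned detector replaces this hand-tuned threshold with features fitted to annotated lesions, which is what motivates the deep-learning formulation.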
An Interpretable Machine Vision Approach to Human Activity Recognition using Photoplethysmograph Sensor Data
The current gold standard for human activity recognition (HAR) is based on
the use of cameras. However, the poor scalability of camera systems renders
them impractical in pursuit of the goal of wider adoption of HAR in mobile
computing contexts. Consequently, researchers instead rely on wearable sensors
and, in particular, inertial sensors. A particularly prevalent wearable is the
smartwatch, which, owing to its integrated inertial and optical sensing
capabilities, holds great potential for realising better HAR in a
non-obtrusive way. This paper seeks to simplify the wearable approach to HAR by
determining if the wrist-mounted optical sensor alone typically found in a
smartwatch or similar device can be used as a useful source of data for
activity recognition. The approach has the potential to eliminate the need for
the inertial sensing element, which would in turn reduce the cost and
complexity of smartwatches and fitness trackers. This could potentially
commoditise the hardware requirements for HAR while retaining the functionality
of both heart rate monitoring and activity capture all from a single optical
sensor. Our approach relies on the adoption of machine vision for activity
recognition based on suitably scaled plots of the optical signals. We take this
approach so as to produce classifications that are easily explainable and
interpretable by non-technical users. More specifically, images of
photoplethysmography signal time series are used to retrain the penultimate
layer of a convolutional neural network which has initially been trained on the
ImageNet database. We then use the 2048 dimensional features from the
penultimate layer as input to a support vector machine. Results from the
experiment yielded an average classification accuracy of 92.3%. This result
outperforms that of an optical and inertial sensor combined (78%) and
illustrates the capability of HAR systems using...
Comment: 26th AIAI Irish Conference on Artificial Intelligence and Cognitive Science
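The feature-then-classifier pipeline described in this abstract (2048-dimensional penultimate-layer CNN activations fed into a support vector machine) can be sketched with scikit-learn. The random clustered "features" below are stand-ins for real CNN activations, since the PPG images and the pretrained network are not included here; the class count and cluster spread are arbitrary assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes, per_class, dim = 3, 40, 2048

# stand-in for 2048-d penultimate-layer features: one cluster per activity
centers = rng.normal(scale=5.0, size=(n_classes, dim))
X = np.vstack([c + rng.normal(size=(per_class, dim)) for c in centers])
y = np.repeat(np.arange(n_classes), per_class)

clf = SVC(kernel="linear").fit(X[::2], y[::2])  # train on even-indexed samples
acc = clf.score(X[1::2], y[1::2])               # evaluate on the rest
```

Training only the final classifier on frozen ImageNet features is what keeps this transfer-learning setup cheap: the SVM fit takes seconds, while the CNN is used purely as a fixed feature extractor.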