77,854 research outputs found
Automatic Diagnosis of Myocarditis Disease in Cardiac MRI Modality using Deep Transformers and Explainable Artificial Intelligence
Myocarditis is a significant cardiovascular disease (CVD) that threatens the
health of many individuals by causing damage to the myocardium. Infection by
microbes and viruses, such as HIV, plays a crucial role in the development of
myocarditis disease (MCD). The images produced during cardiac magnetic
resonance imaging (CMRI) scans are low contrast, which can make diagnosing
cardiovascular diseases challenging. Moreover, reviewing numerous CMRI slices
for each CVD patient is a demanding task
for medical doctors. To overcome the existing challenges, researchers have
suggested the use of artificial intelligence (AI)-based computer-aided
diagnosis systems (CADS). The presented paper outlines a CADS for the detection
of MCD from CMR images, utilizing deep learning (DL) methods. The proposed CADS
consists of several steps: dataset selection, preprocessing, feature
extraction, classification, and post-processing. First, the Z-Alizadeh dataset
was selected for the experiments. Subsequently, the CMR images underwent
various preprocessing steps, including denoising, resizing, as well as data
augmentation (DA) via the CutMix and MixUp techniques. Next, state-of-the-art
deep pre-trained and transformer models are used for feature extraction
and classification on the CMR images. The findings of our study reveal that
transformer models exhibit superior performance in detecting MCD as opposed to
pre-trained architectures. In terms of DL architectures, the Turbulence Neural
Transformer (TNT) model exhibited impressive accuracy, reaching 99.73%
utilizing a 10-fold cross-validation approach. Additionally, to highlight
regions suspicious for MCD in CMRI images, the explainability method Grad-CAM
was employed.
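The CutMix and MixUp augmentations used in the preprocessing stage can be sketched as follows; a minimal NumPy illustration, assuming single-channel images as 2-D float arrays and one-hot label vectors (the mixing ratio `lam` is drawn from a Beta distribution in the original methods, and is passed in explicitly here):

```python
import numpy as np

def mixup(img1, lab1, img2, lab2, lam):
    """MixUp: convex combination of two images and their one-hot labels."""
    return lam * img1 + (1.0 - lam) * img2, lam * lab1 + (1.0 - lam) * lab2

def cutmix(img1, lab1, img2, lab2, lam, rng):
    """CutMix: paste a random rectangle from img2 into img1; labels are
    mixed in proportion to the pasted area."""
    h, w = img1.shape
    cut_h = int(h * np.sqrt(1.0 - lam))
    cut_w = int(w * np.sqrt(1.0 - lam))
    cy, cx = int(rng.integers(h)), int(rng.integers(w))
    r0, r1 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    c0, c1 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    out = img1.copy()
    out[r0:r1, c0:c1] = img2[r0:r1, c0:c1]
    area = (r1 - r0) * (c1 - c0) / (h * w)
    return out, (1.0 - area) * lab1 + area * lab2
```

Note that CutMix mixes labels by pasted area while MixUp mixes pixel intensities directly, which is why the two are often combined in augmentation pipelines.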
On the segmentation and classification of hand radiographs
This research is part of a wider project to build predictive models of bone age using hand radiograph images. We examine ways of finding the outline of a hand from an X-ray as the first stage in segmenting the image into constituent bones. We assess a variety of algorithms including contouring, which has not previously been used in this context. We introduce a novel ensemble algorithm for combining outlines using two voting schemes, a likelihood ratio test and dynamic time warping (DTW). Our goal is to minimise the human intervention required, hence we investigate alternative ways of training a classifier to determine whether an outline is correct. We evaluate outlining and classification on a set of 1370 images. We conclude that ensembling with DTW improves the performance of all outlining algorithms, that the contouring algorithm used with the DTW ensemble performs the best of those assessed, and that the most effective classifier of hand outlines assessed is a random forest applied to outlines transformed into principal components.
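The dynamic time warping distance used in the ensemble voting scheme can be computed with the standard dynamic program; a minimal pure-Python sketch over 1-D sequences (the paper applies DTW to outlines, which would first be reduced to 1-D representations):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences,
    via the standard O(len(a) * len(b)) dynamic program."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = DTW cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m] ** 0.5
```

Because the warping path may stretch one sequence against the other, a sequence and a time-dilated copy of itself can have zero DTW distance even though their Euclidean distance is large.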
Predictive Modelling of Bone Age through Classification and Regression of Bone Shapes
Bone age assessment is a task performed daily in hospitals worldwide. This involves a clinician estimating the age of a patient from a radiograph of the non-dominant hand. Our approach to automated bone age assessment is to modularise the algorithm into the following three stages: segment and verify hand outline; segment and verify bones; use the bone outlines to construct models of age. In this paper we address the final question: given outlines of bones, can we learn how to predict the bone age of the patient? We examine two alternative approaches. Firstly, we attempt to train classifiers on individual bones to predict the bone stage categories commonly used in bone ageing. Secondly, we construct regression models to directly predict patient age. We demonstrate that models built on summary features of the bone outline perform better than those built using the one dimensional representation of the outline, and also do at least as well as other automated systems. We show that models constructed on just three bones are as accurate at predicting age as expert human assessors using the standard technique. We also demonstrate the utility of the model by quantifying the importance of ethnicity and sex on age development. Our conclusion is that the feature based system of separating the image processing from the age modelling is the best approach for automated bone ageing, since it offers flexibility and transparency and produces accurate estimates.
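The feature-based regression approach can be sketched as follows; a minimal NumPy illustration, assuming illustrative summary features (perimeter, shoelace area, bounding-box aspect ratio) that stand in for the paper's bone-shape features, which are not specified here:

```python
import numpy as np

def outline_features(outline):
    """Summary features of a closed 2-D outline given as an
    (n_points, 2) array: perimeter, area, and aspect ratio."""
    closed = np.vstack([outline, outline[:1]])
    perimeter = np.sum(np.hypot(*np.diff(closed, axis=0).T))
    x, y = outline[:, 0], outline[:, 1]
    # shoelace formula for the enclosed area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    aspect = np.ptp(x) / np.ptp(y)
    return np.array([perimeter, area, aspect])

def fit_age_model(outlines, ages):
    """Least-squares linear model from summary features to age."""
    X = np.array([outline_features(o) for o in outlines])
    X = np.hstack([X, np.ones((len(X), 1))])  # intercept column
    coef, *_ = np.linalg.lstsq(X, np.array(ages), rcond=None)
    return coef

def predict_age(coef, outline):
    features = np.append(outline_features(outline), 1.0)
    return float(features @ coef)
```

A tree ensemble such as a random forest would typically replace the linear model in practice; the point of the sketch is the separation of shape summarisation from age modelling that the abstract argues for.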
Transformation Based Ensembles for Time Series Classification
Until recently, the vast majority of data mining time series classification (TSC) research has focused on alternative distance measures for 1-Nearest Neighbour (1-NN) classifiers based on either the raw data, or on compressions or smoothings of the raw data. Despite the extensive evidence in favour of 1-NN classifiers with Euclidean or Dynamic Time Warping distance, there has also been a flurry of recent research publications proposing classification algorithms for TSC. Generally, these classifiers describe different ways of incorporating summary measures in the time domain into more complex classifiers. Our hypothesis is that the easiest way to gain improvement on TSC problems is simply to transform into an alternative data space where the discriminatory features are more easily detected. To test our hypothesis, we perform a range of benchmarking experiments in the time domain, before evaluating nearest neighbour classifiers on data transformed into the power spectrum, the autocorrelation function, and the principal component space. We demonstrate that on some problems there is dramatic improvement in the accuracy of classifiers built on the transformed data over classifiers built in the time domain, but that there is also a wide variance in accuracy for a particular classifier built on different data transforms. To overcome this variability, we propose a simple transformation based ensemble, then demonstrate that it improves performance and reduces the variability of classifiers built in the time domain only. Our advice to a practitioner with a real world TSC problem is to try transforms before developing a complex classifier; it is the easiest way to get a potentially large increase in accuracy, and may provide further insights into the underlying relationships that characterise the problem.
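The transform-then-classify idea is straightforward to illustrate; a minimal sketch using 1-NN with Euclidean distance on the power spectrum, one of the transforms evaluated in the paper (discarding phase makes the representation invariant to circular shifts, which is one way a transform can expose discriminatory features hidden in the time domain):

```python
import numpy as np

def power_spectrum(x):
    """Transform a series into its power spectrum; phase is discarded,
    so the representation is invariant to circular shifts."""
    return np.abs(np.fft.rfft(x)) ** 2

def one_nn_predict(train_X, train_y, query, transform=power_spectrum):
    """1-NN classification with Euclidean distance in the
    transformed space."""
    q = transform(query)
    dists = [np.linalg.norm(transform(x) - q) for x in train_X]
    return train_y[int(np.argmin(dists))]
```

An ensemble in the spirit of the paper would run the same 1-NN classifier over several transforms (spectrum, autocorrelation, principal components, raw series) and combine the votes.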
DART: Distribution Aware Retinal Transform for Event-based Cameras
We introduce a generic visual descriptor, termed as distribution aware
retinal transform (DART), that encodes the structural context using log-polar
grids for event cameras. The DART descriptor is applied to four different
problems, namely object classification, tracking, detection and feature
matching: (1) The DART features are directly employed as local descriptors in a
bag-of-features classification framework and testing is carried out on four
standard event-based object datasets (N-MNIST, MNIST-DVS, CIFAR10-DVS,
NCaltech-101). (2) Extending the classification system, tracking is
demonstrated using two key novelties: (i) For overcoming the low-sample problem
for the one-shot learning of a binary classifier, statistical bootstrapping is
leveraged with online learning; (ii) To achieve tracker robustness, the scale
and rotation equivariance property of the DART descriptors is exploited for the
one-shot learning. (3) To solve the long-term object tracking problem, an
object detector is designed using the principle of cluster majority voting. The
detection scheme is then combined with the tracker to result in a high
intersection-over-union score with augmented ground truth annotations on the
publicly available event camera dataset. (4) Finally, the event context encoded
by DART greatly simplifies the feature correspondence problem, especially for
spatio-temporal slices far apart in time, which has not been explicitly tackled
in the event-based vision domain.
Comment: 12 pages; revision submitted to TPAMI in Nov 201
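The log-polar sampling at the heart of DART can be sketched as follows; a minimal illustration, assuming a neighbourhood of event coordinates is binned by log-radius and angle around the event location (the ring/wedge counts and radius cutoff here are free parameters for illustration, not the paper's):

```python
import numpy as np

def log_polar_histogram(center, neighbours, n_rings=4, n_wedges=8, r_max=16.0):
    """Bin neighbouring event coordinates into a log-polar grid
    centred on `center`, returning a flattened count histogram."""
    hist = np.zeros((n_rings, n_wedges))
    cx, cy = center
    for (x, y) in neighbours:
        dx, dy = x - cx, y - cy
        r = np.hypot(dx, dy)
        if r == 0 or r > r_max:
            continue
        # logarithmic radial bins: fine near the centre, coarse far away
        ring = int(np.log1p(r) / np.log1p(r_max) * n_rings)
        wedge = int((np.arctan2(dy, dx) + np.pi) / (2 * np.pi) * n_wedges)
        hist[min(ring, n_rings - 1), min(wedge, n_wedges - 1)] += 1
    return hist.ravel()
```

The logarithmic spacing is what gives the descriptor its retina-like structure: resolution is highest near the event and falls off with distance, which also makes the histogram robust to small deformations far from the centre.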
Decoding Complex Imagery Hand Gestures
Brain computer interfaces (BCIs) offer individuals suffering from major
disabilities an alternative method to interact with their environment.
Sensorimotor rhythm (SMR)-based BCIs can successfully perform control tasks;
however, the traditional SMR paradigms intuitively disconnect the control and
real task, making them non-ideal for complex control scenarios. In this study,
we design a new, intuitively connected motor imagery (MI) paradigm using
hierarchical common spatial patterns (HCSP) and context information to
effectively predict intended hand grasps from electroencephalogram (EEG) data.
Experiments with 5 participants yielded an aggregate classification accuracy
(intended grasp prediction probability) of 64.5% for 8 different hand
gestures, more than 5 times the chance level.
Comment: This work has been submitted to EMBC 201
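Common spatial patterns, the building block of the hierarchical HCSP scheme used above, finds spatial filters that maximise projected variance for one class while minimising it for the other; a minimal plain two-class CSP sketch via whitening and eigendecomposition (the hierarchical and context-aware parts of HCSP are not reproduced here):

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_filters=1):
    """Two-class common spatial patterns. Each trial is a
    (channels, samples) array; returns 2 * n_filters spatial filters
    taken from both ends of the eigenvalue spectrum."""
    ca = np.mean([np.cov(t) for t in trials_a], axis=0)
    cb = np.mean([np.cov(t) for t in trials_b], axis=0)
    # whiten the composite covariance: P.T @ (ca + cb) @ P = I
    d, U = np.linalg.eigh(ca + cb)
    P = U / np.sqrt(d)
    # diagonalise the whitened class-A covariance
    vals, V = np.linalg.eigh(P.T @ ca @ P)
    W = (P @ V).T  # rows are spatial filters, eigenvalues ascending
    idx = np.concatenate([np.arange(n_filters),
                          np.arange(len(vals) - n_filters, len(vals))])
    return W[idx]
```

Log-variances of the filtered signals would then serve as features for the grasp classifier, with the hierarchy routing each binary decision through its own filter set.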