26,527 research outputs found
Intelligent Neutrosophic Diagnostic System for Cardiotocography data
Handling uncertainty in cardiotocography data is a critical task for classification in the biomedical field. Constructing a good and efficient
classifier via machine learning algorithms is necessary to help doctors diagnose the state of the fetal heart rate. The proposed neutrosophic diagnostic system is an Interval Neutrosophic Rough Neural Network framework based on the backpropagation algorithm. It benefits from the advantages of neutrosophic set theory not only to improve the performance of rough neural networks but also to achieve better performance than the other algorithms. The experimental results visualize the data using boxplots for a better understanding of the attribute distributions. The confusion-matrix performance measurements of the proposed framework are 95.1, 94.95, 95.2, and 95.1 for accuracy, precision, recall, and F1-score, respectively. The WEKA application is used to analyse the cardiotocography performance measurements of different algorithms, e.g., the neural network, decision table, nearest neighbor, and rough neural network. The comparison with other algorithms shows that the proposed framework is both a feasible and an efficient classifier. Additionally, the receiver operating characteristic curve shows that the proposed framework classifies the pathologic, normal, and suspicious states with areas under the curve of 0.93, 0.90, and 0.85, respectively, which are considered high and acceptable. Improving the performance measurements of the proposed framework by removing ineffective attributes via feature selection would be a suitable advancement in the future. Moreover, the proposed framework can also be applied to various real-life problems, such as the classification of coronavirus, social media, and satellite images.
Dense semantic labeling of sub-decimeter resolution images with convolutional neural networks
Semantic labeling (or pixel-level land-cover classification) in ultra-high
resolution imagery (< 10 cm) requires statistical models able to learn
high-level concepts from spatial data with large appearance variations.
Convolutional Neural Networks (CNNs) achieve this goal by learning
discriminatively a hierarchy of representations of increasing abstraction.
In this paper we present a CNN-based system relying on a
downsample-then-upsample architecture. Specifically, it first learns a rough
spatial map of high-level representations by means of convolutions and then
learns to upsample them back to the original resolution by deconvolutions. By
doing so, the CNN learns to densely label every pixel at the original
resolution of the image. This results in many advantages, including i)
state-of-the-art numerical accuracy, ii) improved geometric accuracy of
predictions and iii) high efficiency at inference time.
We test the proposed system on the Vaihingen and Potsdam sub-decimeter
resolution datasets, involving semantic labeling of aerial images of 9cm and
5cm resolution, respectively. These datasets are composed of many large and
fully annotated tiles allowing an unbiased evaluation of models making use of
spatial information. We do so by comparing two standard CNN architectures to
the proposed one: standard patch classification; prediction of local label
patches employing only convolutions; and full patch labeling employing
deconvolutions. All the systems compare favorably to or outperform a
state-of-the-art baseline relying on superpixels and powerful appearance
descriptors. The proposed full patch labeling CNN outperforms these models by a
large margin, also showing a very appealing inference time. Comment: Accepted in IEEE Transactions on Geoscience and Remote Sensing, 201
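The downsample-then-upsample idea above rests on a simple shape relationship: strided convolutions shrink the spatial map while building high-level features, and matching deconvolutions (transposed convolutions) grow it back so every pixel receives a label. The sketch below traces only the output sizes through such a pipeline; the kernel sizes, strides, and number of stages are illustrative assumptions, not the paper's actual network.

```python
# Sketch of the downsample-then-upsample shape arithmetic: three stride-2
# convolutions reduce a 256x256 tile to a coarse 32x32 map of high-level
# representations; three stride-2 deconvolutions restore full resolution,
# enabling dense per-pixel labeling. Layer hyperparameters are illustrative.

def conv_out(size, kernel, stride, pad):
    """Output size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

def deconv_out(size, kernel, stride, pad):
    """Output size of a transposed convolution: (n - 1)*s - 2p + k."""
    return (size - 1) * stride - 2 * pad + kernel

size = 256                  # input tile, e.g. 256x256 pixels
for _ in range(3):          # downsampling path: 256 -> 128 -> 64 -> 32
    size = conv_out(size, kernel=3, stride=2, pad=1)
print("coarse map:", size)

for _ in range(3):          # upsampling path: 32 -> 64 -> 128 -> 256
    size = deconv_out(size, kernel=2, stride=2, pad=0)
print("restored:", size)
```

Because the deconvolution strides mirror the convolution strides, the restored map has exactly the input resolution, which is what makes dense per-pixel prediction possible in a single forward pass.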