Multi-level Feature Fusion-based CNN for Local Climate Zone Classification from Sentinel-2 Images: Benchmark Results on the So2Sat LCZ42 Dataset
As a unique classification scheme for urban forms and functions, the local
climate zone (LCZ) system provides essential general information for any
studies related to urban environments, especially on a large scale. Remote
sensing data-based classification approaches are the key to large-scale mapping
and monitoring of LCZs. The potential of deep learning-based approaches is not
yet fully explored, even though advanced convolutional neural networks (CNNs)
continue to push the frontiers for various computer vision tasks. One reason is
that published studies are based on different datasets, usually at a regional
scale, which makes it impossible to fairly and consistently compare the
potential of different CNNs for real-world scenarios. This study is based on
the big So2Sat LCZ42 benchmark dataset dedicated to LCZ classification. Using
this dataset, we studied a range of CNNs of varying sizes. In addition, we
proposed a CNN to classify LCZs from Sentinel-2 images, Sen2LCZ-Net. Using this
base network, we propose fusing multi-level features using the extended
Sen2LCZ-Net-MF. With this proposed simple network architecture and the highly
competitive benchmark dataset, we obtain results that are better than those
obtained by the state-of-the-art CNNs, while requiring less computation with
fewer layers and parameters. Large-scale LCZ classification examples of
completely unseen areas are presented, demonstrating the potential of our
proposed Sen2LCZ-Net-MF as well as the So2Sat LCZ42 dataset. We also
intensively investigated the influence of network depth and width and the
effectiveness of the design choices made for Sen2LCZ-Net-MF. Our work will
provide important baselines for future CNN-based algorithm developments for
both LCZ classification and other urban land cover/land use classification tasks.
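The multi-level fusion idea described above can be sketched minimally in numpy: pool feature maps taken from several depths of a network and concatenate them into one descriptor before classification. The stage shapes and function names below are illustrative assumptions, not the actual Sen2LCZ-Net-MF architecture.

```python
import numpy as np

def global_avg_pool(fmap):
    # fmap: (channels, height, width) -> (channels,)
    return fmap.mean(axis=(1, 2))

def fuse_multi_level(feature_maps):
    """Concatenate globally pooled features from several network stages."""
    return np.concatenate([global_avg_pool(f) for f in feature_maps])

# Toy feature maps from three hypothetical stages of a CNN
stages = [
    np.ones((16, 32, 32)),      # early stage: fine spatial detail
    np.ones((32, 16, 16)) * 2,  # middle stage
    np.ones((64, 8, 8)) * 3,    # late stage: coarse semantics
]
fused = fuse_multi_level(stages)
print(fused.shape)  # (112,)
```

The fused vector carries information from every depth, which is the motivation for multi-level fusion: early layers keep spatial detail that the deepest layer alone discards.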
Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment
We present a deep neural network-based approach to image quality assessment
(IQA). The network is trained end-to-end and comprises ten convolutional layers
and five pooling layers for feature extraction, and two fully connected layers
for regression, which makes it significantly deeper than related IQA models.
Unique features of the proposed architecture are that: 1) with slight
adaptations it can be used in a no-reference (NR) as well as in a
full-reference (FR) IQA setting and 2) it allows for joint learning of local
quality and local weights, i.e., relative importance of local quality to the
global quality estimate, in a unified framework. Our approach is purely
data-driven and does not rely on hand-crafted features or other types of prior
domain knowledge about the human visual system or image statistics. We evaluate
the proposed approach on the LIVE, CSIQ, and TID2013 databases as well as the
LIVE In the Wild Image Quality Challenge database and show superior performance
to state-of-the-art NR and FR IQA methods. Finally, cross-database evaluation
shows a high ability to generalize between different databases, indicating a
high robustness of the learned features.
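The joint local-quality/local-weight aggregation described above amounts to a weighted average: each patch contributes its quality score scaled by a learned importance weight. A minimal numpy sketch of that pooling step (the values are toy inputs, not the paper's learned outputs):

```python
import numpy as np

def global_quality(local_quality, local_weights):
    """Weighted average of patchwise quality scores.

    local_weights encode the relative importance of each patch
    to the global quality estimate.
    """
    q = np.asarray(local_quality, dtype=float)
    w = np.asarray(local_weights, dtype=float)
    return float((w * q).sum() / w.sum())

# Three patches: the heavily weighted third patch dominates the estimate
print(global_quality([0.9, 0.5, 0.1], [1.0, 1.0, 8.0]))  # approx. 0.22
```

Because the weights are learned jointly with the quality scores, the network can downweight patches (e.g. flat sky regions) that say little about perceived quality.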
An Interpretable Deep Hierarchical Semantic Convolutional Neural Network for Lung Nodule Malignancy Classification
While deep learning methods are increasingly being applied to tasks such as
computer-aided diagnosis, these models are difficult to interpret, do not
incorporate prior domain knowledge, and are often considered as a "black-box."
The lack of model interpretability hinders them from being fully understood by
target users such as radiologists. In this paper, we present a novel
interpretable deep hierarchical semantic convolutional neural network (HSCNN)
to predict whether a given pulmonary nodule observed on a computed tomography
(CT) scan is malignant. Our network provides two levels of output: 1) low-level
radiologist semantic features, and 2) a high-level malignancy prediction score.
The low-level semantic outputs quantify the diagnostic features used by
radiologists and serve to explain how the model interprets the images in an
expert-driven manner. The information from these low-level tasks, along with
the representations learned by the convolutional layers, is then combined and
used to infer the high-level task of predicting nodule malignancy. This unified
architecture is trained by optimizing a global loss function including both
low- and high-level tasks, thereby learning all the parameters within a joint
framework. Our experimental results using the Lung Image Database Consortium
(LIDC) show that the proposed method not only produces interpretable lung
cancer predictions but also achieves significantly better results compared to
common 3D CNN approaches.
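The global loss described above combines the low-level semantic tasks with the high-level malignancy task so all parameters are trained jointly. A hedged numpy sketch of such a combined objective, assuming binary cross-entropy per task and an illustrative weighting factor `lam` (the paper's exact loss terms and weights are not reproduced here):

```python
import numpy as np

def bce(p, y):
    # Binary cross-entropy for a single prediction p against label y
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def joint_loss(low_preds, low_labels, high_pred, high_label, lam=0.5):
    """Global loss = mean loss over low-level semantic tasks
    plus a lam-weighted high-level malignancy loss."""
    low = np.mean([bce(p, y) for p, y in zip(low_preds, low_labels)])
    high = bce(high_pred, high_label)
    return float(low + lam * high)

# Toy example: two semantic-feature predictions and one malignancy score
loss = joint_loss([0.8, 0.3], [1, 0], high_pred=0.6, high_label=1)
print(loss)
```

Minimizing a single combined objective is what ties the interpretable low-level outputs to the malignancy prediction: gradients from both task groups flow into the shared convolutional layers.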