9 research outputs found
Chest X-Ray Image Classification with Deep Learning
University of Technology Sydney, Faculty of Engineering and Information Technology. Computer-aided diagnosis (CAD) systems have successfully assisted clinical diagnosis. This dissertation addresses one essential task in CAD, the chest X-ray (CXR) image classification problem, using deep learning technologies from the following three aspects.
First, considering that most diseases in CXRs occur in small, localized areas, we propose to localize the discriminative regions and integrate global and local cues into an attention-guided convolutional neural network (AG-CNN) to identify thorax diseases. AG-CNN consists of three branches (global, local, and fusion). The global branch learns global features for classification. The local branch localizes the discriminative regions, which avoids noise and mitigates the misalignment present in the global branch. The fusion branch then fuses the global and local features for diagnosis.
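As a rough illustration of the localize-then-fuse idea (not the authors' implementation), the steps might be sketched with NumPy as follows; the helper names, the fixed threshold, and the use of simple concatenation for fusion are all assumptions:

```python
import numpy as np

def locate_discriminative_region(heatmap, thresh=0.7):
    """Binarize the global branch's attention heatmap and return the
    bounding box of the activated area (hypothetical helper)."""
    mask = heatmap >= thresh * heatmap.max()
    ys, xs = np.where(mask)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def fuse(global_feat, local_feat):
    """Fusion branch sketch: concatenate pooled global and local features
    before the final classifier."""
    return np.concatenate([global_feat, local_feat])

# Toy example: a 7x7 heatmap with a hot spot near the top-left corner.
heatmap = np.zeros((7, 7))
heatmap[1:3, 1:3] = 1.0
y0, y1, x0, x1 = locate_discriminative_region(heatmap)
# The crop heatmap[y0:y1, x0:x1] would be fed to the local branch.
```

In the actual AG-CNN the crop is re-fed through a CNN and the fusion branch is trained jointly; this sketch only shows the data flow.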
Second, because multiple diseases in CXRs commonly exhibit complex relationships, it is worth exploiting their correlations to aid diagnosis. This thesis presents a category-wise residual attention learning method that concentrates on learning the correlations among multiple diseases. It is expected to suppress interference from irrelevant categories while strengthening the relevant features.
Last, a robust and stable CXR image analysis system should be able to: 1) automatically focus on the disease-critical regions, which are usually small; 2) adaptively capture the intrinsic relationships among different disease features and jointly utilize them to boost multi-label disease recognition rates. We introduce a discriminative feature learning framework, ConsultNet, to achieve those two purposes simultaneously. ConsultNet consists of a variational selective information bottleneck branch and a spatial-and-channel encoding branch. These two branches learn discriminative features collaboratively.
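The two ingredients named above can be illustrated in miniature (a sketch only, not ConsultNet itself): the variational bottleneck reduces to the standard reparameterization trick, and the spatial-and-channel encoding is approximated here by a simple squeeze-and-excite-style channel weighting; all function names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def vib_sample(mu, log_var):
    """Variational information bottleneck sketch: sample
    z = mu + sigma * eps via the reparameterization trick."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def channel_attention(feat):
    """Channel-encoding sketch: weight each channel by a sigmoid of its
    global-average-pooled response (squeeze-and-excite style)."""
    pooled = feat.mean(axis=(1, 2))            # (C,)
    weights = 1.0 / (1.0 + np.exp(-pooled))    # sigmoid gate per channel
    return feat * weights[:, None, None]

feat = rng.standard_normal((8, 4, 4))          # (C, H, W) toy feature map
att = channel_attention(feat)                  # re-weighted features
z = vib_sample(feat.mean(axis=(1, 2)), np.zeros(8))  # bottleneck sample
```

In ConsultNet the two branches are trained collaboratively with learned parameters; here the gating and sampling are shown in isolation.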
In addition, each of the proposed methods is comprehensively verified and analysed through various experiments.
Object Discovery From a Single Unlabeled Image by Mining Frequent Itemset With Multi-scale Features
The goal of our work is to discover dominant objects in a very general
setting where only a single unlabeled image is given. This is far more
challenging than typical co-localization or weakly-supervised localization tasks.
To tackle this problem, we propose a simple but effective pattern mining-based
method, called Object Location Mining (OLM), which exploits the advantages of
data mining and feature representation of pre-trained convolutional neural
networks (CNNs). Specifically, we first convert the feature maps from a
pre-trained CNN model into a set of transactions, and then discover frequent
patterns from the transaction database through pattern mining techniques. We
observe that those discovered patterns, i.e., co-occurrence highlighted
regions, typically hold appearance and spatial consistency. Motivated by this
observation, we can easily discover and localize possible objects by merging
relevant meaningful patterns. Extensive experiments on a variety of benchmarks
demonstrate that OLM achieves competitive localization performance compared
with state-of-the-art methods. We also compare our approach with
unsupervised saliency detection methods and achieve competitive results on
seven benchmark datasets. Moreover, we conduct experiments on fine-grained
classification to show that our proposed method can locate the entire object
and its parts accurately, which significantly improves the classification
results.
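The transaction-conversion and mining steps described above can be sketched as follows. This is an assumed, simplified reading of the pipeline: each spatial position becomes a transaction holding the indices of strongly activated channels, and frequent co-occurring channel pairs stand in for the mined patterns (the threshold, support value, and restriction to pairs are all illustrative choices, not OLM's):

```python
import numpy as np
from itertools import combinations
from collections import Counter

def feature_maps_to_transactions(fmap, thresh=0.5):
    """Turn a (C, H, W) feature map into transactions: each spatial
    position contributes the set of channels whose max-normalized
    activation exceeds `thresh`."""
    norm = fmap / (fmap.max(axis=(1, 2), keepdims=True) + 1e-8)
    transactions = []
    _, H, W = fmap.shape
    for y in range(H):
        for x in range(W):
            items = frozenset(np.nonzero(norm[:, y, x] > thresh)[0].tolist())
            if items:
                transactions.append(items)
    return transactions

def frequent_pairs(transactions, min_support=0.3):
    """Mine frequent channel pairs (co-occurrence patterns) by counting
    how often each pair appears across transactions."""
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            counts[pair] += 1
    n = len(transactions)
    return {p for p, c in counts.items() if c / n >= min_support}
```

Merging the spatial supports of frequent patterns would then yield candidate object regions, as the abstract describes.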
Multiscale Single Image Dehazing Based on Adaptive Wavelet Fusion
Removing haze effects from images or videos is a challenging and meaningful task for image processing and computer vision applications. In this paper, we propose a multiscale fusion method to remove the haze from a single image. Based on the existing dark channel prior and optics theory, two atmospheric veils with different scales are first derived from the hazy image. Then, a novel and adaptive local similarity-based wavelet fusion method is proposed for preserving the significant scene depth property and avoiding blocky artifacts. Finally, the clear haze-free image is restored by solving the atmospheric scattering model. Experimental results demonstrate that the proposed method can yield comparable or even better results than several state-of-the-art methods in both subjective and objective evaluations.
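Two of the building blocks mentioned above, the dark channel prior and the inversion of the atmospheric scattering model I = J·t + A·(1 - t), can be sketched in NumPy. This is a generic illustration of those standard components, not the paper's adaptive wavelet fusion; the patch size and lower bound t0 are common conventions, not values from the paper:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel prior: per-pixel minimum over color channels,
    followed by a local minimum filter (naive loop for clarity)."""
    m = img.min(axis=2)                     # (H, W) min over channels
    pad = patch // 2
    padded = np.pad(m, pad, mode='edge')
    out = np.empty_like(m)
    H, W = m.shape
    for y in range(H):
        for x in range(W):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def restore(I, A, t, t0=0.1):
    """Invert the scattering model I = J*t + A*(1 - t) to recover the
    haze-free image J, clamping t to avoid division blow-up."""
    t = np.maximum(t, t0)
    return (I - A) / t[..., None] + A
```

The paper's contribution lies in how the transmission/veil estimates at different scales are fused with wavelets before this final restoration step.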
Thorax disease classification with attention guided convolutional neural network
DOI: 10.1016/j.patrec.2019.11.040. Pattern Recognition Letters 131, 38-4