Deep Network for Simultaneous Decomposition and Classification in UWB-SAR Imagery
Classifying buried and obscured targets of interest from other natural and
manmade clutter objects in the scene is an important problem for the U.S. Army.
Targets of interest are often represented by signals captured using
low-frequency (UHF to L-band) ultra-wideband (UWB) synthetic aperture radar
(SAR) technology. This technology has been used in various applications,
including ground penetration and sensing-through-the-wall. However, the
technology still faces significant challenges: low-resolution SAR imagery in
this particular frequency band, low radar cross sections (RCS), objects that
are small relative to the radar wavelength, and heavy interference. The
classification problem was first, and only partially, addressed by the sparse
representation-based classification (SRC) method, which can separate noise from
signals and exploit cross-channel information. Despite promising results,
SRC-related methods have drawbacks in representing nonlinear relations and in
scaling to larger training sets. In this paper, we propose a Simultaneous
Decomposition and Classification Network (SDCN) to alleviate noise interference
and enhance classification accuracy. The network contains two jointly trained
sub-networks: the decomposition sub-network handles denoising, while the
classification sub-network discriminates targets from confusers. Experimental
results show significant improvements over both a network without the
decomposition sub-network and SRC-related methods.
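The jointly trained two-branch design described above can be illustrated with a deliberately minimal numerical sketch. Everything here is an assumption for illustration only: tiny linear layers stand in for the (unspecified) sub-network architectures, the data are synthetic, and the combined objective simply sums a reconstruction loss and a cross-entropy loss with an arbitrary weight; it does not reproduce the actual SDCN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 8 one-dimensional "SAR signatures" of length 32,
# with binary labels (target vs. confuser). All of this is illustrative.
X_clean = rng.normal(size=(8, 32))                    # hypothetical clean signals
X_noisy = X_clean + 0.3 * rng.normal(size=(8, 32))    # with additive noise
y = rng.integers(0, 2, size=8)

# Decomposition sub-network (denoiser): one linear layer as a placeholder.
W_dec = rng.normal(scale=0.1, size=(32, 32))
denoised = X_noisy @ W_dec

# Classification sub-network: linear layer + softmax over {target, confuser},
# operating on the decomposition sub-network's output.
W_cls = rng.normal(scale=0.1, size=(32, 2))
logits = denoised @ W_cls
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# Joint objective: reconstruction loss plus weighted cross-entropy, so that
# both sub-networks would be optimised together against a single loss.
recon_loss = np.mean((denoised - X_clean) ** 2)
ce_loss = -np.mean(np.log(probs[np.arange(8), y] + 1e-12))
joint_loss = recon_loss + 1.0 * ce_loss  # the weight 1.0 is an arbitrary choice
print(float(joint_loss))
```

In a real implementation both sub-networks would be optimised jointly by backpropagating this single combined loss, which is what couples the denoising and the discrimination objectives.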
Machine Learning Approaches for Semantic Segmentation on Partly-Annotated Medical Images
Semantic segmentation of medical images plays a crucial role in helping medical practitioners provide accurate and swift diagnoses; nevertheless, deep neural networks require extensive labelled data to learn and generalise appropriately. This is a major issue in medical imagery because most datasets are not fully annotated. Models trained on partly-annotated datasets produce many predictions that fall in correct but unannotated areas and are therefore categorised as false positives; as a result, standard segmentation metrics and objective functions do not work correctly, affecting the overall performance of the models. In this thesis, the semantic segmentation of partly-annotated medical datasets is extensively and thoroughly studied. The general objective is to improve the segmentation results of medical images via innovative supervised and semi-supervised approaches. The main contributions of this work are the following. Firstly, a new metric, designed specifically for this kind of dataset, provides a reliable score for partly-annotated datasets (confirmed by positive expert feedback on the generated predictions) by exploiting all the confusion-matrix values except the false positives. Secondly, an innovative approach generates better pseudo-labels when applying co-training with the disagreement selection strategy; it expands the pixels in disagreement, using the combined predictions as a guide. Thirdly, original attention mechanisms based on disagreement are designed for two cases: intra-model and inter-model. These attention modules leverage the disagreement between layers (from the same or different model instances) to enhance the overall learning process and the generalisation of the models. Lastly, innovative deep supervision methods improve the segmentation results by training neural networks one sub-network at a time, following the order of the supervision branches.
The methods are thoroughly evaluated on several histopathological datasets, showing significant improvements.
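The first contribution, a metric exploiting all confusion-matrix values except the false positives, is described only at a high level. One plausible instantiation, purely as an illustrative assumption (the thesis defines its own formula, and `partial_annotation_score` is a hypothetical name), simply drops FP from an accuracy-style ratio:

```python
import numpy as np

def partial_annotation_score(pred, gt):
    """Score a binary prediction against a partly-annotated ground truth.

    Positive predictions in unannotated areas may be correct structures,
    so false positives are excluded from the score entirely. This exact
    formula is an illustrative assumption, not the thesis's metric.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.sum(pred & gt)      # predicted positive, annotated positive
    tn = np.sum(~pred & ~gt)    # predicted negative, not annotated
    fn = np.sum(~pred & gt)     # missed annotated positives
    return (tp + tn) / (tp + tn + fn)  # FP never enters the ratio

gt = np.array([[1, 1, 0, 0],
               [0, 0, 0, 0]])   # partial annotation: bottom row unlabelled
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0]]) # extra positives fall in the unlabelled area
print(partial_annotation_score(pred, gt))  # → 1.0, FPs do not lower the score
```

Under a standard metric such as pixel accuracy, the two unannotated-area positives above would be penalised as false positives even if they are anatomically correct; excluding FP removes exactly that penalty.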