Compressed Sensing for Open-ended Waveguide Non-Destructive Testing and Evaluation
Ph.D. Thesis
Non-destructive testing and evaluation (NDT&E) systems using open-ended waveguides (OEW) face critical challenges. In the sensing stage, data acquisition by raster scan is time-consuming, which hinders on-line detection. The sensing stage also disregards the demands of the subsequent feature-extraction process, leading to an excessive amount of data and a heavy processing overhead for feature extraction. In the feature-extraction stage, efficient and robust segmentation of defect regions in the acquired image is difficult against a complex image background. Compressed sensing (CS) has demonstrated impressive data-compression ability in various applications using sparse models. How to develop CS models for OEW NDT&E that jointly consider sensing and processing, so as to achieve fast data acquisition, data compression, and efficient, robust feature extraction, remains an open challenge.
This thesis develops integrated sensing-processing CS models to address the drawbacks in OEW NDT systems and carries out their case studies in low-energy impact damage detection for carbon fibre reinforced plastics (CFRP) materials. The major contributions are:
(1) For the challenge of fast data acquisition, an online CS model is developed that offers faster data acquisition and a reduced data amount without any hardware modification. Images obtained with an OEW are usually smooth and can therefore be sparsely represented in a discrete cosine transform (DCT) basis. Based on this property, a customised 0/1 Bernoulli measurement matrix is designed for downsampling. The full data are reconstructed from the downsampled data with the orthogonal matching pursuit (OMP) algorithm, using the DCT basis and the customised 0/1 Bernoulli matrix. Because it is hard to determine the number of sampled pixels required for sparse reconstruction when training data are lacking, an accumulated sampling-and-recovery process is built into the CS model. After each recovery, the defect region can be extracted with the proposed histogram threshold edge detection (HTED) algorithm, which makes the whole procedure an online one. A case study on impact-damage detection in CFRP materials is carried out for validation. The results show that the data-acquisition time is reduced by an order of magnitude while maintaining image quality and a defect region equivalent to those of a raster scan.
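The sample-then-recover loop can be sketched in a minimal one-dimensional form. The following is an illustrative sketch, not the thesis implementation: the scan-line length, sparsity level, and DCT coefficients are invented, the 0/1 Bernoulli matrix is realised as plain random pixel selection, and OMP is written out directly in NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64          # full scan-line length (pixels)
m = 32          # number of CS measurements (sampled pixels)
k = 4           # sparsity level in the DCT domain

# Orthonormal DCT-II basis; a smooth OEW scan line is approximately
# sparse in this basis.
i = np.arange(n)
Psi = np.sqrt(2.0 / n) * np.cos(np.pi * (i[:, None] + 0.5) * i[None, :] / n)
Psi[:, 0] /= np.sqrt(2.0)

# Smooth test signal: k active low-frequency DCT coefficients (invented).
coeffs = np.zeros(n)
coeffs[:k] = [5.0, -3.0, 2.0, 1.5]
x = Psi @ coeffs

# Customised 0/1 Bernoulli measurement matrix realised as random pixel
# selection: each row picks one pixel, so sensing is plain downsampling.
rows = rng.choice(n, size=m, replace=False)
Phi = np.zeros((m, n))
Phi[np.arange(m), rows] = 1.0
y = Phi @ x                       # the downsampled measurements

# Orthogonal matching pursuit on the effective dictionary A = Phi @ Psi.
A = Phi @ Psi
residual, support = y.copy(), []
for _ in range(k):
    corr = np.abs(A.T @ residual)
    corr[support] = 0.0           # do not re-pick chosen atoms
    support.append(int(np.argmax(corr)))
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ sol

c_hat = np.zeros(n)
c_hat[support] = sol
x_hat = Psi @ c_hat               # reconstructed full scan line
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

In the accumulated scheme, the number of sampled pixels would grow step by step until successive recoveries agree; here a single fixed m is used for brevity.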
(2) For the challenge of data compression that takes the subsequent feature extraction into account, a feature-supervised CS data-acquisition method is proposed and evaluated. It preserves the features of interest while reducing the data amount. Because the frequencies that reveal the features occupy only a small part of the frequency band, the method first identifies this sparse frequency range and uses it to supervise the subsequent sampling. Then, based on the joint sparsity of neighbouring frames and the extracted frequency band, an aligned spatial-spectrum sampling scheme is proposed. The scheme samples only the frequency range of interest for the required features, using a customised 0/1 Bernoulli measurement matrix. The spectral-spatial data of interest are reconstructed jointly, which is much faster than frame-by-frame methods. The proposed feature-supervised CS data acquisition is implemented and compared with raster scanning and traditional CS reconstruction on impact-damage detection in CFRP materials. The results show that the data amount is greatly reduced without compromising feature quality, and that the gain in reconstruction speed grows linearly with the number of measurements.
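The joint reconstruction of neighbouring frames can be illustrated with a simultaneous-OMP sketch. This is a simplified stand-in, not the thesis method: a generic Gaussian measurement matrix replaces the customised 0/1 Bernoulli design, the sparsifying basis is the identity, and all sizes and amplitudes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k, frames = 64, 24, 3, 4   # signal length, measurements, sparsity, frames

# Neighbouring frames share one sparse support (joint sparsity): the same
# few active components, with different amplitudes in each frame.
support_true = sorted(rng.choice(n, size=k, replace=False).tolist())
X = np.zeros((n, frames))
X[support_true] = rng.uniform(1.0, 2.0, (k, frames)) * rng.choice([-1.0, 1.0], (k, frames))

# Generic Gaussian measurement matrix shared by all frames (a stand-in
# for the thesis's customised 0/1 Bernoulli design).
Phi = rng.normal(0.0, 1.0, (m, n)) / np.sqrt(m)
Y = Phi @ X                       # all frames measured at once

# Simultaneous OMP: at each step pick the atom most correlated with the
# residuals of *all* frames jointly, then re-fit every frame on that support.
R, support = Y.copy(), []
for _ in range(k):
    scores = np.linalg.norm(Phi.T @ R, axis=1)
    scores[support] = 0.0
    support.append(int(np.argmax(scores)))
    sol, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
    R = Y - Phi[:, support] @ sol

X_hat = np.zeros_like(X)
X_hat[support] = sol
rel_err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
```

Because all frames are solved in one least-squares fit per iteration, the cost grows with the support size rather than with the number of frames, which is the source of the speed advantage over frame-by-frame recovery.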
(3) Building on the above CS-based data-acquisition methods, CS models are developed to detect defects directly from the CS data rather than from reconstructed full spatial data. This approach is robust to textured backgrounds and more time-efficient than the HTED algorithm. First, exploiting the fact that the histogram is invariant under downsampling with the customised 0/1 Bernoulli measurement matrix, a qualitative method is developed that gives a binary judgement of whether a defect is present; it achieves a high probability of detection and high accuracy compared with other methods. Second, a new greedy algorithm, sparse orthogonal matching pursuit (spOMP), is developed for defect-region segmentation to extract the defect region quantitatively, because conventional sparse-reconstruction algorithms cannot properly exploit the sparse structure of the correlation between the measurement matrix and the CS data. The proposed algorithms are faster and more robust to interference than competing algorithms.
China Scholarship Council
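The histogram-invariance idea behind the qualitative binary judgement can be checked on synthetic data. The sketch below uses invented numbers and a crude threshold; the actual method operates on CFRP measurement data with a properly chosen decision rule.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for an OEW image: noisy background plus a brighter
# defect patch (invented values, not real CFRP data).
img = rng.normal(0.0, 0.1, size=(64, 64))
img[20:30, 24:36] += 1.0                      # defect region

# 0/1 Bernoulli downsampling: keep each pixel independently with prob. p.
p = 0.25
mask = rng.random(img.shape) < p
samples = img[mask]

# The value histogram is (approximately) invariant under this random
# downsampling, so a defect/no-defect decision can be made from the CS
# samples directly, with no full reconstruction.
bins = np.linspace(img.min(), img.max(), 32)
h_full, _ = np.histogram(img, bins=bins, density=True)
h_sub, _ = np.histogram(samples, bins=bins, density=True)
hist_gap = np.abs(h_full - h_sub).mean()      # small => histograms agree

# Crude qualitative judgement: enough samples lie far above the background.
defect_present = (samples > 0.5).mean() > 0.01
```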
Sonar image interpretation for sub-sea operations
Mine Counter-Measure (MCM) missions are conducted to neutralise underwater
explosives. Automatic Target Recognition (ATR) assists operators by
increasing the speed and accuracy of data review. ATR embedded on vehicles
enables adaptive missions which increase the speed of data acquisition. This
thesis addresses three challenges: the speed of data processing, the
robustness of ATR to environmental conditions, and the large quantities of
data required to train an algorithm.
The main contribution of this thesis is a novel ATR algorithm. The algorithm
uses features derived from the projection of 3D boxes to produce a set of 2D
templates. The template responses are independent of grazing angle, range
and target orientation. Integer skewed integral images are derived to accelerate
the calculation of the template responses. The algorithm is compared
to the Haar cascade algorithm. For a single model of sonar and cylindrical
targets, the algorithm reduces the Probability of False Alarm (PFA) by 80%
at a Probability of Detection (PD) of 85%. The algorithm is trained on target
data from another model of sonar. The PD is only 6% lower even though no
representative target data was used for training.
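The thesis's integer skewed integral images extend the summed-area table to skewed boxes; the sketch below shows only the standard axis-aligned version, on an invented random image, to illustrate why box-template responses cost four lookups regardless of box size.

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(128, 128)).astype(np.int64)  # invented image

# Summed-area table: ii[r, c] = sum of img[:r, :c]. Any axis-aligned box
# sum then costs four lookups, independent of box size, which is what
# makes template responses over many projected 2D boxes cheap.
ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)

def box_sum(r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

# A Haar-like template response: bright box minus the box below it.
response = box_sum(10, 10, 20, 20) - box_sum(20, 10, 30, 20)
```

Evaluating many such responses at every image position amortises the single cumulative-sum pass, which is the property the integer skewed variant preserves for non-axis-aligned projections.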
The second major contribution is an adaptive ATR algorithm that uses local
sea-floor characteristics to address the problem of ATR robustness with
respect to the local environment. A dual-tree wavelet decomposition of the
sea-floor and a Markov Random Field (MRF)-based graph-cut algorithm are
used to segment the terrain. A Neural Network (NN) is then trained to filter
ATR results based on the local sea-floor context. It is shown, for the Haar
Cascade algorithm, that the PFA can be reduced by 70% at a PD of 85%.
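The binary building block of such MRF graph-cut segmentation can be shown on a toy one-dimensional strip. This is a minimal sketch that assumes networkx is available; the class means and smoothness weight are invented, and the thesis pairs a three-class model with dual-tree wavelet features rather than raw intensities.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(4)

# Tiny 1D "sea-floor strip": dark sand then brighter ripples (invented).
signal = np.concatenate([rng.normal(0.2, 0.05, 10), rng.normal(0.8, 0.05, 10)])

# Binary MRF energy: unary terms = squared distance to each class mean,
# pairwise smoothness beta between neighbours. For two labels this energy
# is minimised exactly by an s-t minimum cut.
mu = (0.2, 0.8)                     # assumed class means (sand, ripple)
beta = 0.5                          # smoothness weight
G = nx.DiGraph()
for i, v in enumerate(signal):
    G.add_edge("s", i, capacity=float((v - mu[1]) ** 2))  # paid if label 1
    G.add_edge(i, "t", capacity=float((v - mu[0]) ** 2))  # paid if label 0
    if i > 0:                        # pairwise terms, both directions
        G.add_edge(i - 1, i, capacity=beta)
        G.add_edge(i, i - 1, capacity=beta)

cut_value, (S, T) = nx.minimum_cut(G, "s", "t")
labels = [1 if i in T else 0 for i in range(len(signal))]
```

The smoothness term is what suppresses isolated misclassified pixels: flipping one interior pixel saves at most its unary gap but pays two extra boundary penalties.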
Speed of data processing is addressed using novel pre-processing techniques.
The standard three-class MRF for sonar image segmentation is formulated
using graph-cuts. Consequently, a 1.2 million pixel image is segmented in
1.2 seconds. Additionally, local estimation of class models is introduced to
remove range-dependent variation in segmentation quality. Finally, an A* graph search
is developed to remove the surface return, a line of saturated pixels often
detected as false alarms by ATR. The A* search identifies the surface return
in 199 of 220 images tested with a runtime of 2.1 seconds. The algorithm is
robust to the presence of ripples and rocks.
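The surface-return search can be sketched as a shortest-path problem: find the cheapest left-to-right path through a cost image in which bright (saturated) pixels are cheap. The example below is a toy reconstruction on an invented synthetic frame, with a zero heuristic (remaining columns times the minimum step cost of 0), which is admissible and reduces A* to Dijkstra; the thesis's cost model and heuristic may differ.

```python
import heapq
import numpy as np

rng = np.random.default_rng(5)

# Synthetic sonar frame: weak speckle plus a bright surface-return line
# whose row drifts by at most one pixel per column (all values invented).
H, W = 32, 48
img = rng.random((H, W)) * 0.3
line = np.clip(5 + np.cumsum(rng.integers(-1, 2, W)), 1, H - 2)
img[line, np.arange(W)] = 1.0

def trace_surface_return(cost):
    """A* over (row, col) states, one column right per step, row -1/0/+1."""
    H, W = cost.shape
    heap = [(cost[r, 0], r, 0) for r in range(H)]   # (g, row, col)
    heapq.heapify(heap)
    best = {(r, 0): cost[r, 0] for r in range(H)}
    parent = {}
    while heap:
        g, r, c = heapq.heappop(heap)
        if g > best.get((r, c), np.inf):
            continue                                 # stale heap entry
        if c == W - 1:                               # cheapest full path found
            path = [(r, c)]
            while (r, c) in parent:
                r, c = parent[(r, c)]
                path.append((r, c))
            return path[::-1]
        for dr in (-1, 0, 1):
            nr = r + dr
            if 0 <= nr < H:
                ng = g + cost[nr, c + 1]
                if ng < best.get((nr, c + 1), np.inf):
                    best[(nr, c + 1)] = ng
                    parent[(nr, c + 1)] = (r, c)
                    heapq.heappush(heap, (ng, nr, c + 1))
    return []

# Low cost on bright pixels, so the cheapest path follows the return.
path = trace_surface_return(1.0 - img)
```

Once the path is identified, the pixels on it can be masked out so the saturated line no longer triggers false alarms in the ATR stage.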
Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections
Gaussian noise injections (GNIs) are a family of simple and widely-used
regularisation methods for training neural networks, where one injects additive
or multiplicative Gaussian noise to the network activations at every iteration
of the optimisation algorithm, which is typically chosen as stochastic gradient
descent (SGD). In this paper we focus on the so-called 'implicit effect' of
GNIs, which is the effect of the injected noise on the dynamics of SGD. We show
that this effect induces an asymmetric heavy-tailed noise on SGD gradient
updates. In order to model this modified dynamics, we first develop a
Langevin-like stochastic differential equation that is driven by a general
family of asymmetric heavy-tailed noise. Using this model we then formally
prove that GNIs induce an 'implicit bias', which varies depending on the
heaviness of the tails and the level of asymmetry. Our empirical results
confirm that different types of neural networks trained with GNIs are
well-modelled by the proposed dynamics and that the implicit effect of these
injections induces a bias that degrades the performance of networks.
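The mechanism can be seen in a deliberately tiny example: with multiplicative noise on an activation, the injected Gaussian enters the gradient both linearly and quadratically, so the induced gradient noise is skewed and heavier-tailed than a Gaussian. Everything below, the one-parameter model, the numbers, and the noise level, is invented for illustration and is not the paper's setup or analysis.

```python
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(6)

# One-parameter model y_hat = w * a with multiplicative Gaussian noise
# injected into the activation a, as in GNI training.
w, a, y, sigma = 1.0, 1.0, 0.5, 1.0
n_draws = 100_000

def grad(w, a_noisy, y):
    """d/dw of the squared loss (w * a_noisy - y)**2."""
    return 2.0 * (w * a_noisy - y) * a_noisy

eps = rng.normal(0.0, sigma, n_draws)
g_noisy = grad(w, a * (1.0 + eps), y)   # gradients SGD sees under GNI
g_clean = grad(w, a, y)                 # noise-free gradient
noise = g_noisy - g_clean               # induced gradient noise

# Both eps and eps**2 appear in `noise`, so its distribution is
# asymmetric and has positive excess kurtosis.
noise_skew = skew(noise)
noise_kurt = kurtosis(noise)            # Fisher definition: 0 for a Gaussian
```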
Wavelet and Multiscale Methods
Various scientific models demand finer and finer resolutions of relevant features. Paradoxically, increasing computational power serves only to heighten this demand: the wealth of available data itself becomes a major obstruction. Extracting essential information from complex structures, and developing rigorous models to quantify the quality of that information, leads to tasks that are not tractable by standard numerical techniques. The last decade has seen the emergence of several new computational methodologies to address this situation. Their common features are the nonlinearity of the solution methods and the ability to separate solution characteristics living on different length scales. Perhaps the most prominent examples are multigrid methods and adaptive grid solvers for partial differential equations, which have substantially advanced the frontiers of computability for certain problem classes in numerical analysis. Other highly visible examples are: regression techniques in nonparametric statistical estimation; the design of universal estimators in mathematical learning theory and machine learning; the investigation of greedy algorithms in complexity theory; compression techniques and encoding in signal and image processing; the solution of global operator equations through the compression of the fully populated matrices arising from boundary integral equations, with the aid of multipole expansions and hierarchical matrices; and attacking problems in high spatial dimensions with sparse-grid or hyperbolic-wavelet concepts. This workshop set out to deepen the understanding of the underlying mathematical concepts that drive this new evolution of computation and to promote the exchange of ideas emerging in various disciplines.