Total focussing method for volumetric imaging in immersion non-destructive evaluation
This paper describes the use of a 550-element (25x22), 2 MHz, 2D piezoelectric composite array in immersion mode to image an aluminium test block containing a collection of artificial defects. The defects included a 1 mm diameter side-drilled hole, a collection of 1 mm slot defects with varying degrees of skew to the normal, and a flat-bottomed hole. The data collection was carried out using full matrix capture; a scanning procedure was developed to allow the operation of the large-element-count array through a conventional 64-channel phased array controller. A 3D TFM algorithm capable of imaging in a dual-media environment was implemented in MATLAB for offline processing of the raw scan data. This algorithm facilitates the creation of 3D images of defects while accounting for refraction effects at material boundaries. In each of the test samples interrogated, the defects and their spatial positions are readily identified using TFM. Defect directional information has been characterised using volumetric TFM (VTFM) for defects exhibiting skew angles up to and including 45°.
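The delay-and-sum idea behind TFM can be illustrated in a much simpler setting than the paper's dual-media 3D algorithm. The sketch below is a minimal single-medium 2D version written in Python/numpy (the paper's implementation is in MATLAB): for each image pixel, the time of flight from every transmitter and receiver to the pixel is computed and the full-matrix-capture data are summed at the corresponding delays. There is no refraction correction at a water/metal interface here, and all names and the geometry are illustrative, not from the paper.

```python
import numpy as np

def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
    """Minimal single-medium, 2D delay-and-sum TFM sketch.
    fmc:    full matrix capture data, shape (n_tx, n_rx, n_samples)
    elem_x: array element x-positions (elements assumed at z = 0)
    c:      wave speed in the medium; fs: sampling frequency
    Returns an image of shape (len(grid_z), len(grid_x))."""
    n_tx, n_rx, n_samp = fmc.shape
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # one-way time of flight from each element to pixel (x, z)
            tof = np.sqrt((elem_x - x) ** 2 + z ** 2) / c
            total = 0.0
            for tx in range(n_tx):
                for rx in range(n_rx):
                    # sample index for the tx -> pixel -> rx path
                    idx = int(round((tof[tx] + tof[rx]) * fs))
                    if idx < n_samp:
                        total += fmc[tx, rx, idx]
            img[iz, ix] = abs(total)
    return img
```

A point scatterer produces an impulse in each transmit-receive trace at the corresponding two-way delay, so the summed image peaks at the scatterer's grid position. The paper's dual-media version replaces the straight-ray time of flight with a refracted path found via Fermat's principle at the interface.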
Interpreting Neural Networks Using Flip Points
Neural networks have been criticized for their lack of easy interpretation,
which undermines confidence in their use for important applications. Here, we
introduce a novel technique, interpreting a trained neural network by
investigating its flip points. A flip point is any point that lies on the
boundary between two output classes: e.g. for a neural network with a binary
yes/no output, a flip point is any input that generates equal scores for "yes"
and "no". The flip point closest to a given input is of particular importance,
and this point is the solution to a well-posed optimization problem. This paper
gives an overview of the uses of flip points and how they are computed. Through
results on standard datasets, we demonstrate how flip points can be used to
provide detailed interpretation of the output produced by a neural network.
Moreover, for a given input, flip points enable us to measure confidence in the
correctness of outputs much more effectively than the softmax score. They also
identify influential features of the inputs, identify bias, and find changes in
the input that change the output of the model. We show that distance between an
input and the closest flip point identifies the most influential points in the
training data. Using principal component analysis (PCA) and rank-revealing QR
factorization (RR-QR), the set of directions from each training input to its
closest flip point provides explanations of how a trained neural network
processes an entire dataset: what features are most important for
classification into a given class, which features are most responsible for
particular misclassifications, how an adversary might fool the network, etc.
Although we investigate flip points for neural networks, their usefulness is
actually model-agnostic.
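The closest-flip-point idea can be made concrete in the simplest possible case. For a linear binary classifier with score w·x + b, the equal-score set is the hyperplane w·x + b = 0, and the closest flip point is the orthogonal projection of the input onto it; for general networks the paper instead solves this as a constrained optimization problem. The sketch below shows only the closed-form linear case, and all names are illustrative.

```python
import numpy as np

def closest_flip_point(w, b, x):
    """For a linear classifier score(x) = w.x + b, every flip point
    (equal score for both classes) satisfies w.x + b = 0. The closest
    one to x is the orthogonal projection of x onto that hyperplane:
        x - ((w.x + b) / (w.w)) * w
    """
    w = np.asarray(w, dtype=float)
    x = np.asarray(x, dtype=float)
    return x - (w @ x + b) / (w @ w) * w

# Example: boundary x0 + x1 = 1; input (2, 2) projects to (0.5, 0.5)
fp = closest_flip_point([1.0, 1.0], -1.0, [2.0, 2.0])
# w @ fp + b == 0, i.e. fp lies exactly on the decision boundary;
# the distance |x - fp| measures how far the input is from flipping class
```

As in the paper, the distance from an input to its closest flip point serves as a confidence measure, and the direction x − fp indicates which input changes would flip the output.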
Mary Kenneth Keller: First US PhD in Computer Science
The first two doctoral-level degrees in Computer Science in the US were
awarded in June 1965. This paper discusses one of the degree recipients, Sister
Mary Kenneth Keller, BVM.