
    Depth estimation of inner wall defects by means of infrared thermography

    There are two common approaches to interpreting infrared thermography data: qualitative and quantitative. Under certain conditions the qualitative approach is sufficient, but an accurate interpretation requires the quantitative one. This report proposes a method for quantitatively estimating the depth of a defect on the inner wall of a petrochemical furnace. The finite element method (FEM) is used to model the multilayer wall and to simulate the temperature distribution caused by the defect. Five informative parameters are proposed for depth estimation: the maximum temperature over the defect area (Tmax-def), the average temperatures at the right, left, and top edges of the defect (Tavg-right, Tavg-left, Tavg-top), and the average temperature over the sound area (Tavg-so). An artificial neural network (ANN) was trained on these parameters to estimate the defect depth. Two ANN architectures, a multilayer perceptron (MLP) and a radial basis function (RBF) network, were trained for various defect depths and then used to estimate depth on controlled and testing data. For the controlled data, 100% accuracy of depth estimation was achieved; for the testing data, accuracy was above 90% for the MLP network and above 80% for the RBF network. The results show that the proposed informative parameters are useful for estimating defect depth and that an ANN can perform quantitative interpretation of thermography data.
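    The abstract names the five input features and the two network types but not the architectures or data, so the following is only a minimal sketch of the described pipeline: regress defect depth from the five informative parameters with an off-the-shelf MLP. All numbers, the feature ordering, and the layer size are hypothetical stand-ins, not the paper's values.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the FEM-simulated wall temperatures
    # (hypothetical values); columns are the five informative parameters:
    # [Tmax-def, Tavg-right, Tavg-left, Tavg-top, Tavg-so].
    n = 200
    depth = rng.uniform(1.0, 10.0, size=n)            # defect depth, mm
    X = np.column_stack([
        60 + 3.0 * depth + rng.normal(0, 0.3, n),     # Tmax-def
        55 + 2.0 * depth + rng.normal(0, 0.3, n),     # Tavg-right
        55 + 2.0 * depth + rng.normal(0, 0.3, n),     # Tavg-left
        57 + 2.5 * depth + rng.normal(0, 0.3, n),     # Tavg-top
        50 + rng.normal(0, 0.3, n),                   # Tavg-so (sound area)
    ])

    # MLP regressor standing in for the paper's MLP; the hidden size is a guess.
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    model.fit(X[:150], depth[:150])                   # training split
    print("held-out R^2:", model.score(X[150:], depth[150:]))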

    Group Invariance, Stability to Deformations, and Complexity of Deep Convolutional Representations

    The success of deep convolutional architectures is often attributed in part to their ability to learn multiscale and invariant representations of natural signals. However, a precise study of these properties and how they affect learning guarantees is still missing. In this paper, we consider deep convolutional representations of signals; we study their invariance to translations and to more general groups of transformations, their stability to the action of diffeomorphisms, and their ability to preserve signal information. This analysis is carried out by introducing a multilayer kernel based on convolutional kernel networks and by studying the geometry induced by the kernel mapping. We then characterize the corresponding reproducing kernel Hilbert space (RKHS), showing that it contains a large class of convolutional neural networks with homogeneous activation functions. This analysis allows us to separate data representation from learning, and to provide a canonical measure of model complexity, the RKHS norm, which controls both the stability and the generalization of any learned model. In addition to models in the constructed RKHS, our stability analysis also applies to convolutional networks with generic activations such as rectified linear units, and we discuss its relationship with recent generalization bounds based on spectral norms.
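    Why an RKHS norm controls stability follows from the reproducing property alone; the standard argument (not quoted from the paper, and with the bound's constants left abstract) is sketched below in LaTeX. For a kernel mapping $\Phi$ and any predictor $f$ in the RKHS $\mathcal{H}$, Cauchy-Schwarz gives

    \[
    |f(x) - f(x')| = |\langle f,\, \Phi(x) - \Phi(x') \rangle_{\mathcal{H}}|
      \le \|f\|_{\mathcal{H}} \, \|\Phi(x) - \Phi(x')\|_{\mathcal{H}},
    \]

    so if the representation satisfies a deformation bound of the generic Mallat-style form

    \[
    \|\Phi(L_\tau x) - \Phi(x)\|_{\mathcal{H}}
      \le \left( C_1 \|\nabla \tau\|_\infty + C_2 \|\tau\|_\infty \right) \|x\|
    \]

    for a deformation $L_\tau x(u) = x(u - \tau(u))$, then every predictor in the RKHS inherits the same stability, scaled by its norm $\|f\|_{\mathcal{H}}$.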

    Invariant set of weight of perceptron trained by perceptron training algorithm

    In this paper, an invariant set of the weights of a perceptron trained by the perceptron training algorithm is defined and characterized. The dynamic range of the steady-state weight values can be evaluated by finding the dynamic range of the weights inside the largest invariant set. In addition, the necessary and sufficient condition for the forward dynamics of the perceptron weights to be injective and the condition for the invariant set of the weights to be attractive are derived.
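    The weight dynamics being analyzed are those of the classic perceptron training algorithm; the paper's exact formulation is not given in the abstract, so the sketch below implements the textbook Rosenblatt update and records the weight trajectory, the object whose invariant sets and steady-state range the paper characterizes. The toy data and step size are hypothetical.

    import numpy as np

    def perceptron_train(X, y, epochs=50, eta=1.0):
        """Rosenblatt update: on a misclassified sample, w <- w + eta*y_i*x_i.
        The sequence of w values is the forward dynamics of the weight."""
        w = np.zeros(X.shape[1])
        trajectory = [w.copy()]
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                if yi * (w @ xi) <= 0:        # misclassified (or on the boundary)
                    w = w + eta * yi * xi     # weight update
                trajectory.append(w.copy())
        return w, np.array(trajectory)

    # Hypothetical linearly separable toy data with labels in {-1, +1}.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 2))
    y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)
    w, traj = perceptron_train(X, y)
    print("final weight:", w)
    print("weight range along trajectory:", traj.min(axis=0), traj.max(axis=0))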