
    Computational aspects of electromagnetic NDE phenomena

    The development of theoretical models that characterize physical phenomena is crucial in all engineering disciplines. In nondestructive evaluation (NDE), theoretical models are used extensively to understand the physics of material/energy interaction, to optimize experimental design parameters, and to solve the inverse problem of defect characterization. This dissertation describes methods for developing computational models for electromagnetic NDE applications. Two broad classes of issues are addressed: (i) problem formulation and (ii) implementation on computers.

    The two main approaches for solving physical problems in NDE are the differential and integral equation formulations. The relative advantages and disadvantages of the two approaches are illustrated, and models are developed to simulate electromagnetic scattering from objects or inhomogeneities embedded in multilayered media, a configuration that arises in many NDE problems. The low storage requirements of the differential approach and the finite solution domain of the integral approach are exploited. Hybrid techniques and other efficient modeling techniques are presented to minimize the storage requirements of both approaches.

    The second issue is the computational resources required for implementation. Implementations on conventional sequential computers, parallel-architecture machines, and more recent neural computers are presented. An example that requires massive parallel computing is given, in which a probability of detection (POD) model is built for eddy current testing of 3D objects. The POD model, based on a finite element formulation, is implemented on an NCUBE parallel computer. The linear system of equations is solved using both direct and iterative methods. The implementations are designed to minimize interprocessor communication and to optimize the number of simultaneous model runs for maximum effective speedup.

    Another form of parallel computing is the more recent neurocomputer, which is built around an artificial neural network composed of numerous simple neurons. Two classes of neural networks have been used to solve electromagnetic NDE inverse problems. The first approach solves the governing integral equation directly using a Hopfield-type neural network; the design of the network structure and parameters is presented. The second approach develops a mathematical transform between the input and output spaces of the problem, for which a multilayer perceptron type neural network is invoked. The network is augmented to build an incremental learning network, motivated by the dynamic and modular features of the human brain.
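    A minimal sketch of the Hopfield-type idea described above, under the standard formulation in which the discretized integral equation A x = b is solved by letting the network state descend the energy E(x) = ½‖Ax − b‖². The matrix, right-hand side, step size, and iteration count below are illustrative assumptions, not quantities from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized integral equation A x = b; A and b are synthetic stand-ins,
# not data from the dissertation.
n = 50
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))  # well-conditioned stand-in
x_true = rng.standard_normal(n)
b = A @ x_true

# Hopfield-style dynamics: the network energy is E(x) = 0.5 * ||A x - b||^2,
# and the state follows dx/dt = -grad E = -A.T (A x - b), integrated here
# with explicit Euler steps.
x = np.zeros(n)
step = 0.05
for _ in range(5000):
    x -= step * (A.T @ (A @ x - b))

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

    In this continuous-valued formulation the network's stable state is a least-squares solution of the system; a direct solver (e.g. LU factorization) or an iterative Krylov method would reach the same solution, which is the direct/iterative trade-off the dissertation's POD implementation exercises at scale.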

    High-performance Kernel Machines with Implicit Distributed Optimization and Randomization

    In order to fully utilize "big data", it is often required to use "big models". Such models tend to grow with the complexity and size of the training data, and do not make strong parametric assumptions upfront on the nature of the underlying statistical dependencies. Kernel methods fit this need well, as they constitute a versatile and principled statistical methodology for solving a wide range of non-parametric modelling problems. However, their high computational costs (in storage and time) pose a significant barrier to their widespread adoption in big data applications. We propose an algorithmic framework and high-performance implementation for massive-scale training of kernel-based statistical models, based on combining two key technical ingredients: (i) distributed general purpose convex optimization, and (ii) the use of randomization to improve the scalability of kernel methods. Our approach is based on a block-splitting variant of the Alternating Directions Method of Multipliers, carefully reconfigured to handle very large random feature matrices, while exploiting hybrid parallelism typically found in modern clusters of multicore machines. Our implementation supports a variety of statistical learning tasks by enabling several loss functions, regularization schemes, kernels, and layers of randomized approximations for both dense and sparse datasets, in a highly extensible framework. We evaluate the ability of our framework to learn models on data from applications, and provide a comparison against existing sequential and parallel libraries.
    Comment: Work presented at MMDS 2014 (June 2014) and JSM 201
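    A hedged illustration of ingredient (ii): the sketch below approximates a Gaussian kernel with random Fourier features and then fits an ordinary ridge regression in the randomized feature space, which is the generic pattern the abstract refers to. The dataset, feature count D, bandwidth s, and regularizer lam are illustrative assumptions, and the paper's ADMM-based distributed solver is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (stand-in for a real dataset).
n, d = 2000, 5
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# Random Fourier features for the Gaussian kernel
# k(x, z) = exp(-||x - z||^2 / (2 s^2)):
# z(x) = sqrt(2/D) * cos(W^T x + b), with W ~ N(0, 1/s^2 I)
# and b ~ Uniform[0, 2*pi).
D, s = 500, 1.0
W = rng.standard_normal((d, D)) / s
b = rng.uniform(0.0, 2.0 * np.pi, D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Kernel ridge regression reduces to linear ridge regression in feature
# space: solve (Z^T Z + lam I) w = Z^T y, a D x D system instead of n x n.
lam = 1e-3
w = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)

pred = Z @ w
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

    The point of the randomization is visible in the dimensions: the exact kernel method would factor an n × n Gram matrix, whereas the randomized feature map reduces the solve to D × D with D ≪ n, which is what makes the distributed block-splitting optimization in the paper tractable at massive scale.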