14 research outputs found

    Assessment of Linear Inverse Problems in Magnetocardiography and Lorentz Force Eddy Current Testing

    Linear inverse problems arise throughout a variety of branches of science and engineering. Efficient solution strategies for these inverse problems require knowledge of whether a problem is ill-conditioned and, if so, of its degree of ill-conditioning. In this thesis, a comprehensive theoretical analysis of existing figures of merit is carried out, from which two new figures of merit are derived. Both can be applied to a large variety of linear inverse problems, including biomedical applications and nondestructive testing of materials. The theoretical considerations on the conditioning of linear inverse problems are applied to two examples. The first is magnetocardiography, where the optimization of magnetic sensors in a vest-like sensor array is considered. When measuring magnetic flux density, mono-axial magnetic sensors are usually arranged in an array, perfectly in parallel. It is shown that a random variation of their orientations can improve the condition of the corresponding linear inverse problem. The thesis therefore presents a theoretical characterization of the case in which random variations of mono-axial sensor orientations improve the condition of the kernel matrix with probability one; this theoretical observation is valid in general. Positions and orientations of the magnetic sensors around the torso have been optimized by minimizing three figures of merit from the literature and a novel one proposed in the thesis. The best results are obtained for a non-uniform sensor distribution over the whole torso surface. In comparison with previous findings, it can be concluded that quite different sensor sets can perform equally well. The second application example is a nondestructive testing method known as Lorentz force eddy current testing, a new method for the contactless, nondestructive evaluation of electrically conductive materials: the Lorentz force exerted on a permanent magnet moving relative to the specimen is measured. A novel approximation method for the calculation of the magnetic fields and Lorentz forces is proposed, and based on it, a new inverse procedure for defect reconstruction is developed. Successful reconstructions are obtained using both finite element analysis data and measurements.
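    The central quantity behind such figures of merit is the conditioning of the kernel matrix. As a minimal sketch (standard notation, not the thesis's own symbols), the discretized forward problem and its condition number read:

```latex
% Discretized linear inverse problem: measurements b, sources x,
% kernel matrix K (e.g. mapping cardiac sources to sensor readings):
%   b = K x, \quad K \in \mathbb{R}^{m \times n}
% Condition number from the extreme singular values of K; the larger
% \kappa(K), the more ill-conditioned the inversion:
\kappa(K) = \frac{\sigma_{\max}(K)}{\sigma_{\min}(K)}
```

    In these terms, the claim above is that randomly perturbing the mono-axial sensor orientations decreases \kappa(K) with probability one.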

    Preconditioners for Generalized Saddle-Point Problems

    Generalized saddle point problems arise in a number of applications, ranging from optimization and metal deformation to fluid flow and PDE-governed optimal control. We focus our discussion on the most general case, making no assumption of symmetry or definiteness in the matrix or its blocks. As these problems are often large and sparse, preconditioners play a critical role in speeding the convergence of Krylov methods for these problems. We first examine two types of preconditioners for these problems, one block-diagonal and one indefinite, and present analyses of the eigenvalue distributions of the preconditioned matrices. We also investigate the use of approximations for the Schur complement matrix in these preconditioners and develop the corresponding eigenvalue analysis. Second, we examine new developments in probing methods, inspired by graph coloring methods for sparse Jacobians, for building approximations to Schur complement matrices. We then present an analysis of these techniques and their accuracy. In addition, we provide a mathematical justification for their use in approximating Schur complements and suggest the use of approximate factorization techniques to decrease the computational cost of applying the inverse of the probed matrix. Finally, we consider the effect of our preconditioners on four applications. Two of these applications come from the realm of fluid flow, one using a finite element discretization and the other using a spectral discretization. The third application involves the stress relaxation of aluminum strips at low stress levels. The final application involves mesh parameterization and flattening. For these applications, we present results illustrating the eigenvalue bounds on our preconditioners and demonstrating the theoretical justification of these methods. We also present convergence and timing results, showing the effectiveness of our methods in practice. Specifically, the use of probing methods for approximating the Schur complement matrices in our preconditioners is empirically justified. We also investigate the h-dependence of our preconditioners on one model fluid problem, and demonstrate empirically that our methods do not suffer from a deterioration in convergence as the problem size increases.
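    To fix ideas, the sketch below shows the generic saddle-point block structure and a block-diagonal preconditioner built from a Schur complement approximation; the notation is assumed for illustration and is not quoted from the thesis.

```latex
% Generalized saddle-point system; no symmetry or definiteness is
% assumed for A, B, C, or D:
\mathcal{A} = \begin{pmatrix} A & B^{T} \\ C & -D \end{pmatrix},
\qquad
\mathcal{A} \begin{pmatrix} u \\ p \end{pmatrix}
  = \begin{pmatrix} f \\ g \end{pmatrix}
% Block-diagonal preconditioner using an approximation \hat{S} of the
% Schur complement S = -(D + C A^{-1} B^{T}):
\mathcal{P} = \begin{pmatrix} A & 0 \\ 0 & \hat{S} \end{pmatrix}
```

    Probing methods in this spirit estimate \hat{S} from a small number of products S v_k with structured probe vectors v_k, much as graph-coloring techniques recover sparse Jacobians from few evaluations.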

    Efficient Recapitalization

    We analyze government interventions to recapitalize a banking sector that restricts lending to firms because of debt overhang. We find that the efficient recapitalization program injects capital against preferred stock plus warrants and conditions implementation on sufficient bank participation. Preferred stock plus warrants reduces opportunistic participation by banks that do not require recapitalization, while conditional implementation limits free riding by banks that benefit from lower credit risk because of other banks' participation. Efficient recapitalization is profitable if the benefits of lower aggregate credit risk exceed the cost of implicit transfers to bank debt holders. Firms invest too little if they are financed with too much debt. The reason is that the cash flow generated by new investments accrues to existing debt holders if the firm goes bankrupt. As a result, new investments can increase a firm's debt value while reducing its equity value. A firm that maximizes equity value may therefore forgo new investment opportunities.
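    A stylized worked example may clarify the debt-overhang mechanism; the numbers below are invented for illustration and do not come from the paper.

```latex
% Stylized example (numbers invented): debt face value F = 100,
% asset value without the project V_0 = 60; an equity-financed project
% costs c = 20 and pays \pi = 30 for sure (NPV = +10).
E_{\text{without}} = \max(V_0 - F,\, 0) = 0
\qquad
E_{\text{with}} = \max(V_0 + \pi - F,\, 0) - c = \max(-10,\, 0) - 20 = -20
% Equity holders reject the positive-NPV project: its payoff accrues
% entirely to the debt holders, raising debt value, lowering equity value.
```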

    Data Structures and Algorithms for Efficient Solution of Simultaneous Linear Equations from 3-D Ice Sheet Models

    Two current software packages for solving large systems of sparse simultaneous linear equations are evaluated in terms of their applicability to solving systems of equations generated by the University of Maine Ice Sheet Model. SuperLU, the first package, has been developed by researchers at the University of California at Berkeley and the Lawrence Berkeley National Laboratory. UMFPACK, the second package, has been developed by T. A. Davis of the University of Florida, who has ties with the U.C. Berkeley researchers as well as European researchers. Both packages are direct solvers that use LU factorization with forward and backward substitution. The University of Maine Ice Sheet Model uses the finite element method to solve partial differential equations that describe ice thickness, velocity, and temperature throughout glaciers as functions of position and time. The finite element method generates systems of linear equations having tens of thousands of variables and one hundred or so non-zero coefficients per equation. Matrices representing these systems of equations may be strictly banded or banded with right and lower borders. In order to efficiently interface the software packages with the ice sheet model, a modified compressed column data structure and supporting routines were designed and written. The data structure interfaces directly with both software packages and allows the ice sheet model to access matrix coefficients by row and column number in roughly 100 nanoseconds while only storing non-zero entries of the matrix. No a priori knowledge of the matrix's sparsity pattern is required. Both software packages were tested with matrices produced by the model, and performance characteristics were measured and compared with banded Gaussian elimination. When combined with high performance basic linear algebra subprograms (BLAS), the packages are as much as 5 to 7 times faster than banded Gaussian elimination. The BLAS produced by K. Goto of the University of Texas was used. Memory usage by the packages varied from slightly more than banded Gaussian elimination with UMFPACK to as much as a 40% savings with SuperLU. In addition, the packages provide componentwise backward error measures and estimates of the matrix's condition number. SuperLU is available for parallel computers as well as single processor computers; UMFPACK is only for single processor computers. Both packages are also capable of efficiently solving the bordered matrix problem.
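    A minimal sketch of a compressed column structure of the kind described, with lookup by (row, column), is given below in C. The names and the binary-search lookup are illustrative assumptions, not the model's actual routines.

```c
#include <stddef.h>

/* Compressed sparse column (CSC) storage: only non-zero entries are kept.
 * Entries of column j occupy positions col_ptr[j] .. col_ptr[j+1]-1 of
 * row_idx (row numbers, sorted within each column) and val (values). */
typedef struct {
    size_t  n;        /* matrix dimension */
    size_t *col_ptr;  /* length n + 1     */
    size_t *row_idx;  /* length nnz       */
    double *val;      /* length nnz       */
} csc_matrix;

/* Return A(i,j), or 0.0 for a structurally zero entry.  Binary search
 * within the column makes the access cost logarithmic in the number of
 * non-zeros of that column. */
static double csc_get(const csc_matrix *A, size_t i, size_t j)
{
    size_t lo = A->col_ptr[j], hi = A->col_ptr[j + 1];
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if      (A->row_idx[mid] < i) lo = mid + 1;
        else if (A->row_idx[mid] > i) hi = mid;
        else    return A->val[mid];
    }
    return 0.0;
}
```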

    Sensitivity Evaluation in Aerodynamic Optimal Design

    The ability to compute first and second derivatives of functionals subject to equality constraints given by state equations (in particular, non-linear systems of Partial Differential Equations) allows us to use efficient techniques to solve several industrial-strength problems. Among possible applications that require knowledge of the derivatives are: aerodynamic shape optimization with gradient-based descent algorithms, propagation of uncertainties using perturbation techniques, robust optimization, and improvement of the accuracy of a functional using the adjoint state. In this work, we develop and analyze several strategies to evaluate the first and second derivatives of constrained functionals, using techniques based on Automatic Differentiation. Furthermore, we propose a descent algorithm for aerodynamic shape optimization that is based on multi-level gradient techniques and can be applied to different kinds of parameterization.
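    The adjoint-state calculation underlying such gradient evaluations can be summarized as follows; the notation is the standard one and is assumed here rather than quoted from the work.

```latex
% Functional J(W, \gamma) with state W constrained by the (discretized)
% state equation \Psi(W, \gamma) = 0 and design parameters \gamma.
% The adjoint state \Pi solves one linear system, independent of the
% number of design parameters:
\left(\frac{\partial \Psi}{\partial W}\right)^{\!T} \Pi
  = -\left(\frac{\partial J}{\partial W}\right)^{\!T}
% The gradient of the constrained functional is then
\frac{\mathrm{d}J}{\mathrm{d}\gamma}
  = \frac{\partial J}{\partial \gamma}
  + \Pi^{T} \frac{\partial \Psi}{\partial \gamma}
```

    Second derivatives follow by differentiating this expression once more, which is where Automatic Differentiation pays off.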

    High performance Cholesky and symmetric indefinite factorizations with applications

    Factorizing a symmetric matrix A using the Cholesky (LL^T) or symmetric indefinite (LDL^T) factorization allows the efficient solution of systems Ax = b. This thesis describes the development of new serial and parallel techniques for this problem and demonstrates them in the setting of interior point methods. In serial, the effects of various scalings are reported, and a fast and robust mixed precision sparse solver is developed. In parallel, DAG-driven dense and sparse factorizations are developed for the positive definite case. These achieve performance comparable with other world-leading implementations using a novel algorithm in the same family as those given by Buttari et al. for the dense problem. Performance of these techniques in the context of an interior point method is assessed.
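    For orientation, the two factorizations reduce a symmetric solve to triangular substitutions; this is the standard formulation, not a detail specific to the thesis.

```latex
% Symmetric positive definite: A = L L^{T} (Cholesky);
% symmetric indefinite:        A = L D L^{T}, with D block-diagonal.
% Solving A x = b with the Cholesky factors:
L y = b \;\; \text{(forward substitution)}, \qquad
L^{T} x = y \;\; \text{(backward substitution)}
```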

    Reducing synchronization in distributed parallel programs

    Developers of scalable libraries and applications for distributed-memory parallel systems face many challenges to attaining high performance. These challenges include communication latency, critical path delay, suboptimal scheduling, load imbalance, and system noise. These challenges are often defined and measured relative to points of broad synchronization in the program’s execution. Given the way in which many algorithms are defined and systems are implemented, gauging the above challenges at synchronization points is not unreasonable. In this thesis, I attempt to demonstrate that in many cases, those synchronization points are themselves the core issue behind these challenges. In some cases, the synchronizing operations cause a program to incur the costs from these challenges. In other cases, the presence of synchronization potentially exacerbates these problems. Through a simple performance model, I demonstrate that making synchronization less frequent can greatly mitigate performance issues. My work and several results in the literature show that many motifs and whole applications can be successfully redesigned to operate with asymptotically less synchronization than their naïve starting points. In exploring these issues, I have identified recurrent patterns across many applications and multiple environments that can guide future efforts more directly toward synchronization-avoiding designs. Thus, I attempt to offer developers the beginnings of a high-level playbook to follow rather than having to rediscover application-specific instances of the patterns.
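    As a purely illustrative sketch of such a performance model (the form below is an assumption, not the thesis's actual model), one can charge every bulk-synchronous phase with the delay of its slowest participant:

```latex
% s synchronization points; T_{i,p} is the time process p spends in
% phase i. Synchronizing after every phase waits for the slowest
% process each time, whereas without synchronization the delays of a
% single process merely accumulate:
T_{\text{sync}} \approx \sum_{i=1}^{s} \max_{p} T_{i,p}
\;\;\ge\;\;
\max_{p} \sum_{i=1}^{s} T_{i,p} \approx T_{\text{async}}
% Reducing s (less frequent synchronization) shrinks this gap.
```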

    Uncertainty in Artificial Intelligence: Proceedings of the Thirty-Fourth Conference


    Measuring farmers’ risk and uncertainty attitudes: an interval prospect experiment

    Attitudes to risk have generated a lot of attention over the years due to their vital importance in the decision-making processes that are necessary for life and livelihoods. Attitudes towards uncertainty have received less attention, even though arguably most important decisions are made under uncertainty rather than risk. In addition, many studies modelling attitudes to risk have adopted experiments that place a significant cognitive burden on respondents. Crucially, they are also framed in a way that does not reflect everyday problems. Specifically, the most common way of eliciting attitudes is to ask decision makers to choose between discrete monetary lotteries with known probabilities attached to the payoffs. Yet, arguably, the vast majority of choices that people make in their day-to-day lives are with respect to continuous non-monetary outcomes. To address these gaps, this thesis investigates responses to continuous ‘prospects’ across different conditions (risk and uncertainty), contexts (monetary and time) and content domains (gain, loss and mixed). Further, this thesis examines the link between attitudes to risk and uncertainty and mental-health-related factors, and the effect of attitudes to risk and uncertainty on farmers’ decisions both for themselves and for others. This thesis uses both non-parametric methods, relating to the patterns that characterise participants’ choices and their determinants, and parametric models based upon cumulative prospect theory (CPT) as it extends to continuous prospects. The data were gathered using lab-in-field experiments in which Nigerian farmers chose between pairs of prospects with continuous distributions, which were not exclusively monetary in nature. Attitudes towards risk, as opposed to uncertainty, were elicited by specifying that all outcomes over the specified interval were ‘equally likely’ (thus a uniform probability density). Uncertainty was specified by indicating to farmers that one outcome within the specified interval would be realised, but without the specification of an associated probability density. Key findings are that attitudes differ under different conditions, contexts and content domains. Using continuous prospects, respondents did not treat equally likely outcomes as ‘equally likely’ and appear to demonstrate cumulative probability distribution warping consistent with CPT. However, there were behaviours that are difficult to reconcile with CPT; for example, the preferences of many respondents could only be modelled using “extreme curvature” of the value function, induced by what we term negligible gain avoidance (avoiding prospects with a zero lower bound in the gain domain) or negligible loss seeking (preferring prospects with a zero upper bound in the loss domain). Neither CPT, salience theory, heuristics, nor the other theories examined in this study could alone explain these behaviours. Results from investigating the effect of bipolar disorder tendencies (BD) on risk attitudes show that BD significantly affects the shape of the value and probability weighting functions, and that farmers with BD are more likely to make random choices. Other results show that risk aversion for losses increases participation in off-farm income generating activities, and that farmers’ likelihood of engaging in specific types of off-farm activities is determined by their risk and uncertainty attitudes.
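    The parametric CPT models referenced here typically build on the standard Tversky-Kahneman functional forms, shown below; the thesis's extension to continuous prospects is not reproduced.

```latex
% Value function: concave over gains, convex over losses, with loss
% aversion parameter \lambda > 1:
v(x) = \begin{cases} x^{\alpha}, & x \ge 0 \\ -\lambda (-x)^{\beta}, & x < 0 \end{cases}
% Inverse-S probability weighting, which warps the cumulative
% distribution as the abstract describes:
w(p) = \frac{p^{\gamma}}{\left( p^{\gamma} + (1 - p)^{\gamma} \right)^{1/\gamma}}
```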