
    Parallel schemes for global iterative zero-finding.

    by Luk Wai Shing. Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. Includes bibliographical references (leaves 44-45).
    Contents:
    ABSTRACT
    ACKNOWLEDGMENTS
    CHAPTER 1. INTRODUCTION
    CHAPTER 2. DRAWBACKS OF CLASSICAL THEORY
      2.1 Review of Sequential Iterative Methods
      2.2 Visualization Techniques
      2.3 Review of Deflation
    CHAPTER 3. THE IMPROVEMENT OF THE ABERTH METHOD
      3.1 The Durand-Kerner method and the Aberth method
      3.2 The generalized Aberth method
      3.3 The modified Aberth method for multiple zeros
      3.4 Choosing the initial approximations
      3.5 Multiplicity estimation
    CHAPTER 4. THE HIGHER-ORDER ITERATIVE METHODS
      4.1 Introduction
      4.2 Convergence analysis
      4.3 Numerical results
    CHAPTER 5. PARALLEL DEFLATION
      5.1 The Algorithm
      5.2 The Problem of Zero Component
      5.3 The Problem of Round-off Error
    CHAPTER 6. HOMOTOPY ALGORITHM
      6.1 Introduction
      6.2 Choosing Q(z)
      6.3 The arclength continuation method
      6.4 The bifurcation problem
      6.5 The suggested improvement
    CHAPTER 7. CONCLUSION
    REFERENCES
    APPENDIX A. PROGRAM LISTING
    APPENDIX B. COLOR PLATES
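    For context on Chapter 3, the basic Aberth iteration can be sketched in a few lines. This is a minimal serial implementation (not the thesis's parallel, generalized, or multiple-zero variants): each root estimate takes a Newton correction that is "repelled" by the other current estimates, and the circle-based initialization loosely follows the kind of strategy discussed in Section 3.4. Function and variable names are illustrative.

```python
import numpy as np

def aberth(coeffs, tol=1e-12, max_iter=100):
    """Approximate all zeros of a polynomial simultaneously (basic Aberth method).

    coeffs: polynomial coefficients, highest degree first (numpy convention).
    """
    n = len(coeffs) - 1
    p = np.poly1d(coeffs)
    dp = p.deriv()
    # Initial approximations: points on a circle of radius given by the Cauchy
    # bound, rotated half a step so they avoid coinciding with symmetric roots.
    radius = 1 + max(abs(c / coeffs[0]) for c in coeffs[1:])
    z = radius * np.exp(2j * np.pi * (np.arange(n) + 0.5) / n)
    for _ in range(max_iter):
        newton = p(z) / dp(z)                       # Newton corrections p/p'
        diff = z[:, None] - z[None, :]
        np.fill_diagonal(diff, 1.0)                 # avoid division by zero
        repulse = np.sum(1.0 / diff, axis=1) - 1.0  # sum_{j != i} 1/(z_i - z_j)
        w = newton / (1.0 - newton * repulse)       # Aberth correction
        z = z - w
        if np.max(np.abs(w)) < tol:
            break
    return z
```

    The repulsion term is what lets all zeros be refined in the same sweep, which is also what makes the method attractive for parallel schemes.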

    The solution of transcendental equations

    Some existing methods for globally approximating the roots of transcendental equations, namely Graeffe's method, are studied. The summation of reciprocated roots, the Whittaker-Bernoulli method, and the extension of Bernoulli's method via Koenig's theorem are presented. Aitken's delta-squared process is used to accelerate convergence. Finally, the suitability of these methods in various cases is discussed.
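    The Aitken delta-squared process mentioned above has a compact form: given a linearly convergent sequence s_n, the transformed sequence t_n = s_n - (s_{n+1} - s_n)^2 / (s_{n+2} - 2 s_{n+1} + s_n) typically converges faster. A minimal sketch (the function name and the fixed-point example are illustrative, not from the source):

```python
def aitken_delta_squared(seq):
    """Accelerate a linearly convergent sequence with Aitken's delta-squared process.

    Returns t_n = s_n - (s_{n+1} - s_n)^2 / (s_{n+2} - 2*s_{n+1} + s_n)
    for every window of three consecutive terms.
    """
    out = []
    for s0, s1, s2 in zip(seq, seq[1:], seq[2:]):
        denom = s2 - 2 * s1 + s0
        # fall back to the raw term if the denominator vanishes
        out.append(s0 if denom == 0 else s0 - (s1 - s0) ** 2 / denom)
    return out
```

    Applied to the iterates of a slowly converging fixed-point iteration such as x ← cos x, the transformed sequence approaches the limit noticeably faster than the raw one.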

    Laguerre's method in global iterative zero-finding.

    by Kwok, Wong-chuen Tony. Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. Includes bibliographical references (leaves [85-86]).
    Contents:
    Acknowledgement
    Abstract
    Part I. Laguerre's Method in Polynomial Zero-finding
      1 Background
      2 Introduction and Problems of Laguerre's Method
        2.1 Laguerre's Method in the Symmetric-Cluster Problem
        2.2 Cyclic Behaviour
        2.3 The Supercluster Problem
      3 Proposed Enhancement to Laguerre's Method
        3.1 Analysis of Adding a Zero or Pole
        3.2 Proposed Algorithm
      4 Conclusion
    Part II. Homotopy Methods Applied to Polynomial Zero-finding
      1 Introduction
      2 Overcoming Bifurcation
      3 Comparison of Homotopy Algorithms
      4 Conclusion
    Appendices
    Part I. Laguerre's Method in Polynomial Zero-finding
      0 Naming of Testing Polynomials
      1 Finding All Zeros using the Proposed Laguerre's Method
      2 Experiments: Selected Pictures Comparing the Proposed Strategy with Another Strategy
      3 Experiments: Tables Comparing the Proposed Strategy with Another Strategy
      4 Distance Colorations and Target Colorations
    Part II. Homotopy Methods Applied to Polynomial Zero-finding
      1 Comparison of Algorithms using the Homotopy Method
      2 Experiments: Selected Pictorial Comparison
    Part III. An Example Demonstrating the Effect of Round-off Errors
    References
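    For context, the plain Laguerre iteration that the thesis enhances can be sketched as follows. This is a single-root version without any of the proposed zero/pole modifications; the helper name is illustrative.

```python
import numpy as np

def laguerre(coeffs, x0=0.5, tol=1e-12, max_iter=100):
    """Find one zero of a polynomial by the classical Laguerre iteration.

    coeffs: highest degree first. Each step uses
        G = p'/p,  H = G^2 - p''/p,
        step = n / (G +/- sqrt((n-1)(n*H - G^2))),
    with the sign chosen to maximize the denominator's magnitude.
    """
    p = np.poly1d(coeffs)
    dp, ddp = p.deriv(), p.deriv(2)
    n = p.order
    x = complex(x0)
    for _ in range(max_iter):
        px = p(x)
        if abs(px) < tol:
            break
        G = dp(x) / px
        H = G * G - ddp(x) / px
        root = np.sqrt(complex((n - 1) * (n * H - G * G)))
        d1, d2 = G + root, G - root
        denom = d1 if abs(d1) >= abs(d2) else d2
        x -= n / denom
    return x
```

    The cubic local convergence of this iteration is what makes its global failure modes (symmetric clusters, cyclic behaviour, superclusters) worth the enhancements studied in Part I.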

    Energy Based Multi-Model Fitting and Matching Problems

    Feature matching and model fitting are fundamental problems in multi-view geometry. They are chicken-and-egg problems: if the models are known it is easier to find matches, and vice versa. Standard multi-view geometry techniques sequentially solve feature matching and model fitting as two independent problems after making fairly restrictive assumptions. For example, matching methods rely on the strong discriminative power of feature descriptors, which fails for stereo images with repetitive textures or a wide baseline. Also, model fitting methods assume given feature matches, which are not known a priori. Moreover, when the data support multiple models the fitting problem becomes challenging even with known matches, and current methods commonly resort to heuristics. One of the main contributions of this thesis is a joint formulation of the fitting and matching problems. We are the first to introduce an objective function combining both matching and multi-model estimation. We also propose an approximation algorithm for the corresponding NP-hard optimization problem using block-coordinate descent with respect to the matching and model-fitting variables. For fixed models, our method uses a min-cost-max-flow based algorithm to solve a generalization of the linear assignment problem with label costs (a sparsity constraint). The fixed-matching case reduces to a multi-model fitting subproblem, which is interesting in its own right. In contrast to standard heuristic approaches, we introduce global objective functions for multi-model fitting using various forms of regularization (spatial smoothness and sparsity) and propose a graph-cut based optimization algorithm, PEaRL. Experimental results show that our proposed mathematical formulations and optimization algorithms improve the accuracy and robustness of model estimation over the state of the art in computer vision.
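    The matching subproblem solved for fixed models can be illustrated, in a much simplified form, as a plain linear assignment over a cost matrix. The thesis solves a generalization with label costs via min-cost-max-flow; the brute-force sketch below is only meant to make the objective concrete and is practical only for tiny inputs.

```python
from itertools import permutations

def best_matching(cost):
    """Exhaustively solve the linear assignment subproblem for fixed models.

    cost[i][j] is the cost of matching feature i in one image to feature j
    in the other (e.g. descriptor distance plus geometric residual under the
    current models). Real systems solve this with min-cost-max-flow and an
    extra label-cost (sparsity) term; brute force is for illustration only.
    """
    n = len(cost)
    best_perm, best_val = None, float("inf")
    for perm in permutations(range(n)):
        val = sum(cost[i][perm[i]] for i in range(n))
        if val < best_val:
            best_perm, best_val = list(perm), val
    return best_perm, best_val
```

    In a block-coordinate descent scheme, a step like this alternates with re-fitting the models to the currently matched features.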

    Description of motor control using inverse models

    Humans can perform complicated movements like writing or running without giving them much thought. The scientific understanding of the principles guiding the generation of these movements is incomplete. How the nervous system ensures stability, or compensates for injury and constraints, is among the unanswered questions today. Furthermore, only through movement can a human impose their will and interact with the world around them. Damage to a part of the motor control system can lower a person's quality of life. Understanding how the central nervous system (CNS) forms control signals and executes them helps with the construction of devices and rehabilitation techniques. These allow the user, at least in part, to bypass the damaged area or replace its function, thereby improving their quality of life. The CNS forms motor commands that specify, for example, a locomotor velocity or another movement task. These commands are thought to be processed through an internal model of the body to produce patterns of motor-unit activity. An example of one such network in the spinal cord is the central pattern generator (CPG), which controls the rhythmic activation of synergistic muscle groups for overground locomotion. The descending drive from the brainstem and sensory feedback pathways initiate and modify the activity of the CPG. The interactions between its inputs and internal dynamics are still under debate in experimental and modelling studies. Even more complex neuromechanical mechanisms are responsible for some non-periodic voluntary movements. Most of the complexity stems from the internalization of the body's musculoskeletal (MS) system, which comprises hundreds of joints and muscles wrapping around each other in a sophisticated manner. Understanding their control signals requires a deep understanding of their dynamics and principles, both of which remain open problems.
This dissertation is organized into three research chapters with a bottom-up investigation of motor control, plus an introduction and a discussion chapter. Each of the three research chapters is organized as a stand-alone article either published or in preparation for submission to a peer-reviewed journal. Chapter two introduces a description of the MS kinematic variables of a human hand. In an effort to simulate human hand motor control, an algorithm was defined that approximated the moment arms and lengths of 33 musculotendon actuators spanning 18 degrees of freedom. The resulting model could be evaluated within 10 microseconds and required less than 100 KB of memory. The structure of the approximating functions embedded anatomical and functional features of the modelled muscles, providing a meaningful description of the system. The third chapter used the developments in musculotendon modelling to obtain muscle activity profiles controlling hand movements and postures. The agonist-antagonist coactivation mechanism was responsible for producing joint stability for most degrees of freedom, similar to experimental observations. Computed muscle excitations were used in offline control of a myoelectric prosthesis for a single subject. To investigate the higher-order generation of control signals, the fourth chapter describes an analytical model of a CPG. Its parameter space was investigated to produce forward locomotion when controlled with a desired speed. The model parameters were varied to produce asymmetric locomotion, and several control strategies were identified. Throughout the dissertation, the balance between analytical, simulation, and phenomenological modelling for the description of simple and complex behavior is a recurrent theme of discussion.
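To make the CPG idea concrete, here is a deliberately minimal sketch, not the dissertation's analytical model: two phase oscillators, one per antagonist muscle group, coupled toward antiphase, with the descending drive mapped linearly to oscillation frequency. The linear drive-to-frequency map and all parameter values are assumptions for illustration.

```python
import math

def simulate_cpg(speed_drive, steps=2000, dt=0.001, coupling=5.0):
    """Two coupled phase oscillators as a toy central pattern generator.

    The coupling term pulls the pair toward antiphase (phi2 - phi1 = pi),
    so the two rectified outputs alternate like flexor/extensor bursts.
    """
    freq = 1.0 + speed_drive            # assumed linear drive-to-frequency map
    phi1, phi2 = 0.0, 2.0               # arbitrary initial phases
    activity = []
    for _ in range(steps):
        dphi1 = 2 * math.pi * freq + coupling * math.sin(phi2 - phi1 - math.pi)
        dphi2 = 2 * math.pi * freq + coupling * math.sin(phi1 - phi2 + math.pi)
        phi1 += dphi1 * dt
        phi2 += dphi2 * dt
        # muscle activations as rectified oscillator outputs
        activity.append((max(0.0, math.sin(phi1)), max(0.0, math.sin(phi2))))
    return activity
```

Even this toy model exhibits the key property the chapter studies at far greater depth: a scalar descending command reorganizes into alternating, rhythmic activation of antagonist groups.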

    Simulation-based optimal Bayesian experimental design for nonlinear systems

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter estimation problems arising in detailed combustion kinetics.
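    The expected-information-gain objective and its two-stage (nested) Monte Carlo estimator can be illustrated on a toy linear-Gaussian model. This is a hedged sketch, not the paper's code: the real framework handles simulation-based forward models via polynomial chaos surrogates, whereas the model, function name, and sample sizes below are illustrative assumptions.

```python
import math, random

def eig_nested_mc(design, sigma=1.0, n_outer=2000, n_inner=200, seed=0):
    """Nested Monte Carlo estimate of expected information gain (EIG).

    Toy model: y = design * theta + noise, theta ~ N(0,1), noise ~ N(0, sigma^2).
    EIG = E_{theta,y}[ log p(y|theta) - log p(y) ], where the evidence p(y)
    is itself estimated by an inner average over fresh prior samples.
    """
    rng = random.Random(seed)

    def loglik(y, theta):
        r = y - design * theta
        return -0.5 * (r / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

    total = 0.0
    for _ in range(n_outer):
        theta = rng.gauss(0.0, 1.0)
        y = design * theta + rng.gauss(0.0, sigma)
        # inner loop: Monte Carlo estimate of the evidence p(y)
        inner = [math.exp(loglik(y, rng.gauss(0.0, 1.0))) for _ in range(n_inner)]
        total += loglik(y, theta) - math.log(sum(inner) / n_inner)
    return total / n_outer
```

    For this Gaussian model the analytic value is 0.5 * log(1 + design^2 / sigma^2), so the estimator can be checked directly; designs with larger signal-to-noise ratio yield larger expected information gain, which is what the optimization exploits.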

    Solving regularized nonlinear least-squares problem in dual space with application to variational data assimilation

    This thesis investigates the conjugate-gradient method and the Lanczos method for the solution of under-determined nonlinear least-squares problems regularized by a quadratic penalty term. Such problems often result from a maximum-likelihood approach, and involve a set of m physical observations and n unknowns that are estimated by nonlinear regression. We suppose here that n is large compared to m. These problems are encountered, for instance, when three-dimensional fields are estimated from physical observations, as in data assimilation for Earth system models. A widely used algorithm in this context is the Gauss-Newton (GN) method, known in the data assimilation community as incremental four-dimensional variational data assimilation. The GN method relies on the approximate solution of a sequence of linear least-squares problems in which the nonlinear least-squares cost function is approximated by a quadratic function in the neighbourhood of the current nonlinear iterate. However, it is well known that this simple variant of the Gauss-Newton algorithm does not ensure a monotonic decrease of the cost function and that its convergence is therefore not guaranteed.
Removing this difficulty is typically achieved by using a line-search (Dennis and Schnabel, 1983) or trust-region (Conn, Gould and Toint, 2000) strategy, which ensures global convergence to first-order critical points under mild assumptions. We consider the second of these approaches in this thesis. Moreover, taking into consideration the large-scale nature of the problem, we propose to use a particular trust-region algorithm relying on the Steihaug-Toint truncated conjugate-gradient method for the approximate solution of the subproblem (Conn, Gould and Toint, 2000, pp. 133-139). Solving this subproblem in the n-dimensional space (by CG or Lanczos) is referred to as the primal approach. Alternatively, a significant reduction in computational cost is possible by rewriting the quadratic approximation in the m-dimensional space associated with the observations. This is important for large-scale applications such as those solved daily in weather-prediction systems. This approach, which performs the minimization in the m-dimensional space using CG or variants thereof, is referred to as the dual approach. The first proposed dual approach (Courtier, 1997), known as the Physical-space Statistical Analysis System (PSAS) in the data assimilation community, starts by minimizing the dual cost function in the m-dimensional space with a standard preconditioned CG (PCG), and then recovers the step in the n-dimensional space through multiplication by an n-by-m matrix. Technically, the algorithm consists of recurrence formulas involving m-vectors instead of n-vectors. However, the use of PSAS can be unduly costly, since the linear least-squares cost function does not decrease monotonically along the nonlinear iterations when standard termination criteria are applied. Another dual approach, known as the Restricted Preconditioned Conjugate Gradient (RPCG) method, was proposed by Gratton and Tshimanga (2009). It generates, in exact arithmetic, the same iterates as the primal approach, again using recurrence formulas involving m-vectors. The main interest of RPCG is that it yields a significant reduction in both memory and computational costs while maintaining the desired convergence property, in contrast with the PSAS algorithm. The relation between these two dual approaches, and the derivation of efficient preconditioners (Gratton, Sartenaer and Tshimanga, 2011), essential when large-scale problems are considered, were not addressed by Gratton and Tshimanga (2009). The main motivation of this thesis is to address these open issues. In particular, we are interested in designing preconditioning techniques and a trust-region globalization that maintain the one-to-one correspondence between primal and dual iterates, thereby offering cost-effective computation in a globally convergent algorithm.
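The primal/dual distinction for the linearized subproblem can be made concrete with a small numerical check. Assuming the standard quadratic cost x'B^{-1}x + (Hx - y)'R^{-1}(Hx - y), with background covariance B and observation-error covariance R (the concrete matrices below are illustrative), the n-by-n primal solve and the m-by-m dual solve produce the same step, which is the source of the savings when n >> m:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 10                      # many unknowns, few observations
H = rng.standard_normal((m, n))     # linearized observation operator
B = np.eye(n)                       # background (prior) covariance
R = 0.1 * np.eye(m)                 # observation-error covariance
y = rng.standard_normal(m)

# primal: (B^{-1} + H' R^{-1} H) x = H' R^{-1} y   -- an n x n solve
x_primal = np.linalg.solve(np.linalg.inv(B) + H.T @ np.linalg.solve(R, H),
                           H.T @ np.linalg.solve(R, y))

# dual:   x = B H' (H B H' + R)^{-1} y             -- an m x m solve
x_dual = B @ H.T @ np.linalg.solve(H @ B @ H.T + R, y)

assert np.allclose(x_primal, x_dual)
```

The dual approaches in the thesis (PSAS, RPCG) exploit this equivalence iteratively, running CG recurrences on m-vectors rather than forming either matrix explicitly.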