9 research outputs found

    A family of root-finding methods with accelerated convergence

    Get PDF
    Abstract: A parametric family of iterative methods for the simultaneous determination of simple complex zeros of a polynomial is considered. The convergence of the basic method of the fourth order is accelerated using Newton's and Halley's corrections, thus generating total-step methods of orders five and six. Further improvements are obtained by applying the Gauss-Seidel approach. The accelerated convergence of all proposed methods is attained at the cost of a negligible number of additional operations. A detailed convergence analysis and two numerical examples are given.
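    The abstract does not reproduce the family itself, but the flavour of a simultaneous root finder with Gauss-Seidel-style acceleration can be sketched with the classical Durand-Kerner (Weierstrass) iteration. Everything below is illustrative, not the paper's fourth-order method or its Newton/Halley corrections:

```python
# Illustrative sketch only: the Durand-Kerner (Weierstrass) iteration, a
# classic method for determining all zeros of a polynomial simultaneously.
# Updating z[i] in place means later indices already use the newly updated
# earlier approximations -- the Gauss-Seidel (single-step) flavour.

def poly_eval(coeffs, z):
    """Evaluate a polynomial (leading coefficient first) at z by Horner's rule."""
    result = 0j
    for c in coeffs:
        result = result * z + c
    return result

def durand_kerner(coeffs, iterations=100, tol=1e-12):
    """Approximate all roots of a monic polynomial simultaneously."""
    n = len(coeffs) - 1
    # Conventional starting values: powers of a complex number off the real axis.
    z = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iterations):
        max_step = 0.0
        for i in range(n):
            denom = 1 + 0j
            for j in range(n):
                if j != i:
                    denom *= z[i] - z[j]
            step = poly_eval(coeffs, z[i]) / denom
            z[i] -= step          # in-place update: Gauss-Seidel style
            max_step = max(max_step, abs(step))
        if max_step < tol:
            break
    return z

# Example: z^3 - 1 = 0 has the three cube roots of unity as its zeros.
roots = durand_kerner([1, 0, 0, -1])
```

    The total-step (Jacobi) variant would instead compute all corrections from the previous sweep's values; the serial update above typically converges in fewer sweeps, which is the effect the abstract's Gauss-Seidel refinement exploits.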

    The computation of multiple roots of a polynomial using structure preserving matrix methods.

    Get PDF
    Solving polynomial equations is a fundamental problem in several fields of engineering and science. It has been studied by many researchers, and excellent algorithms have been proposed, but the computation of the roots of ill-conditioned polynomials still draws considerable attention. In particular, a small round-off error due to floating-point arithmetic is sufficient to break a multiple root of a polynomial into a cluster of simple, closely spaced roots, and the problem becomes more complicated if neighbouring roots are themselves closely spaced. This thesis develops a root finder that computes the multiple roots of an inexact polynomial whose coefficients are corrupted by noise. Its theoretical development involves structured matrix methods, parameter optimisation using linear programming, and the solution of least squares equality and nonlinear least squares problems. The root solver differs from classical methods because it first computes the multiplicities of the roots, after which the roots themselves are computed. Experimental results show that it gives very good results without the need for prior knowledge of the noise level imposed on the coefficients of the polynomial.
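    The "multiplicities first" idea has an exact-arithmetic ancestor: gcd(p, p') collects precisely the repeated factors of p, so dividing p by it leaves the square-free part. The thesis replaces the exact gcd with approximate GCD computations on noisy coefficients; the toy below uses exact rationals only, and all function names are this sketch's own:

```python
# Exact-arithmetic sketch: repeated factors of p are exposed by gcd(p, p').
# For p = (x - 1)^2 (x + 2), the gcd with p' is (x - 1), revealing the
# double root before any root is actually computed.

from fractions import Fraction

def polyder(p):
    """Derivative of a polynomial given with leading coefficient first."""
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

def polydiv(num, den):
    """Polynomial long division; returns (quotient, remainder)."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = []
    while len(num) >= len(den):
        factor = num[0] / den[0]
        quot.append(factor)
        for i in range(len(den)):
            num[i] -= factor * den[i]
        num = num[1:]            # leading term is now exactly zero
    return quot, num

def polygcd(p, q):
    """Euclidean algorithm over the rationals; returns a monic gcd."""
    p = [Fraction(c) for c in p]
    q = [Fraction(c) for c in q]
    while q and any(q):
        _, r = polydiv(p, q)
        while r and r[0] == 0:   # strip leading zeros of the remainder
            r = r[1:]
        p, q = q, r
    return [c / p[0] for c in p]

p = [1, 0, -3, 2]                   # (x - 1)^2 (x + 2)
g = polygcd(p, polyder(p))          # the repeated factor, x - 1
square_free, _ = polydiv(p, g)      # (x - 1)(x + 2) = x^2 + x - 2
```

    With noisy coefficients the exact gcd degenerates to a constant, which is why the thesis needs structured matrix methods and an optimisation framework to recover an *approximate* GCD instead.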

    Structured matrix methods for a polynomial root solver using approximate greatest common divisor computations and approximate polynomial factorisations.

    Get PDF
    This thesis discusses the use of structure-preserving matrix methods for the numerical approximation of all the zeros of a univariate polynomial in the presence of noise. In particular, a robust polynomial root solver is developed for the calculation of multiple roots and their multiplicities, such that knowledge of the noise level is not required. This root solver involves repeated approximate greatest common divisor computations and polynomial divisions, both of which are ill-posed computations. A detailed description of its implementation is presented as the main work of this thesis. The root solver, implemented in MATLAB using 32-bit floating-point arithmetic, solves non-trivial polynomials with a high degree of accuracy in numerical examples.
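    One structured-matrix ingredient behind approximate GCD computations can be illustrated without the thesis's machinery: the degree of gcd(p, q) equals deg p + deg q minus the rank of the Sylvester matrix S(p, q). The sketch below builds the matrix and estimates its rank with plain Gaussian elimination and a pivot tolerance, a stand-in for the thesis's more careful numerical rank estimation:

```python
# Sketch: deg gcd(p, q) = deg p + deg q - rank(S(p, q)), where S is the
# Sylvester matrix. Taking q = p' detects multiple roots: a rank-deficient
# S(p, p') means p has a repeated factor.

def sylvester(p, q):
    """Sylvester matrix of p (degree m) and q (degree n); size (m+n) x (m+n)."""
    m, n = len(p) - 1, len(q) - 1
    size = m + n
    rows = []
    for i in range(n):                         # n shifted copies of p
        rows.append([0] * i + list(p) + [0] * (size - m - 1 - i))
    for i in range(m):                         # m shifted copies of q
        rows.append([0] * i + list(q) + [0] * (size - n - 1 - i))
    return rows

def rank(matrix, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    a = [row[:] for row in matrix]
    r = 0
    for col in range(len(a[0])):
        if r == len(a):
            break
        pivot = max(range(r, len(a)), key=lambda i: abs(a[i][col]))
        if abs(a[pivot][col]) < tol:
            continue
        a[r], a[pivot] = a[pivot], a[r]
        for i in range(r + 1, len(a)):
            f = a[i][col] / a[r][col]
            for j in range(col, len(a[0])):
                a[i][j] -= f * a[r][j]
        r += 1
    return r

p = [1, 0, -3, 2]          # (x - 1)^2 (x + 2), one double root
dp = [3, 0, -3]            # p'
S = sylvester(p, dp)
gcd_degree = 3 + 2 - rank(S)   # rank deficiency of 1 => gcd of degree 1
```

    On exact coefficients the rank drop is clean; with noise the singular values of S decay gradually, which is the ill-posedness the thesis's structure-preserving methods are designed to handle.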

    Guidance and control using model predictive control for low altitude real-time terrain following flight

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2004. Includes bibliographical references (p. 123-125).
    This thesis presents the design and implementation of a model predictive control (MPC) based trajectory optimization method for Nap-of-the-Earth (NOE) flight. A NOE trajectory reference is generated over a subspace of the terrain and inserted into the cost function, and the resulting trajectory tracking error term is weighted, via the TF/TA ratio, for more precise longitudinal tracking than lateral tracking. The TF/TA ratio, control effort penalties, and MPC prediction horizon are tuned for this application through simulation and eigenvalue analysis for stability and performance. Steps are taken to reduce the complexity of the optimization problem, including perturbational linearization in the prediction model generation and the use of control basis functions, which are analyzed for their trade-off between approximation of the optimal cost/solution and reduction of optimization complexity. Obstacle avoidance, including preclusion of ground collision, is accomplished through hard state constraints, which create a 'safe envelope' within which the optimal trajectory can be found. Results over a variety of sample terrains are provided to investigate the sensitivity of tracking performance to nominal velocities. The mission objective of low-altitude, high-speed flight was met satisfactorily without terrain or obstacle collision; however, methods to preclude or deal with infeasibility must be investigated as terrain severity (measured by commanded flight path angle) is increased past 30 degrees or speed is increased to and past 30 knots.
    by Tiffany Rae Lapp. S.M.
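    The receding-horizon structure described above can be caricatured in one dimension: search control sequences over a short horizon, discard any that violate a hard "safe envelope" constraint (here, a clearance floor above the terrain), and apply only the first control of the cheapest feasible sequence. This is a toy with invented names, weights, and dynamics, not the thesis's continuous QP with linearized aircraft dynamics:

```python
# Hypothetical 1-D terrain-following MPC sketch. State = (altitude, climb
# rate); control = vertical acceleration from a small discrete set; the
# hard state constraint keeps the trajectory inside a safe envelope above
# the terrain, mirroring the thesis's constraint-based obstacle avoidance.

from itertools import product

def simulate(state, u, dt=0.1):
    """One Euler step of a double-integrator altitude model."""
    h, v = state
    return (h + v * dt, v + u * dt)

def mpc_step(state, terrain_ref, horizon=3, controls=(-2.0, 0.0, 2.0),
             clearance=1.0, track_weight=10.0, effort_weight=0.01):
    """Return the first control of the best feasible sequence, or None."""
    best_u, best_cost = None, float("inf")
    for seq in product(controls, repeat=horizon):
        s, cost, feasible = state, 0.0, True
        for k, u in enumerate(seq):
            s = simulate(s, u)
            if s[0] < terrain_ref[k] + clearance:     # hard state constraint
                feasible = False
                break
            # Track the reference (terrain plus clearance), penalize effort.
            cost += track_weight * (s[0] - (terrain_ref[k] + clearance)) ** 2
            cost += effort_weight * u * u
        if feasible and cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u

# Flying 1 m above the desired clearance: the controller commands a descent.
u = mpc_step((5.0, 0.0), [3.0, 3.2, 3.4])
```

    Returning `None` when every sequence violates the envelope is the discrete analogue of the infeasibility the abstract flags as an open problem for severe terrain.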

    Computation of the one-dimensional unwrapped phase

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 101-102). "Cepstrum bibliography" (p. 67-100).
    In this thesis, the computation of the unwrapped phase of the discrete-time Fourier transform (DTFT) of a one-dimensional finite-length signal is explored. The phase of the DTFT is not unique and may contain discontinuities of integer multiples of 2π. The unwrapped phase is the instance of the phase function chosen to ensure continuity. This thesis presents existing algorithms for computing the unwrapped phase, discussing their weaknesses and strengths. Two composite algorithms are then proposed that combine the strengths of the existing algorithms while avoiding their weaknesses. The core of the proposed methods is based on recent advances in polynomial factoring. The proposed methods are implemented and compared to the existing ones.
    by Zahi Nadim Karam. S.M.
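    The baseline that such composite algorithms improve on is sampling-based unwrapping: whenever the jump between consecutive wrapped samples exceeds π, add or subtract a multiple of 2π to restore continuity. A minimal sketch of that naive approach (the thesis's polynomial-factoring methods address the cases where sampling alone is unreliable):

```python
# Naive one-dimensional phase unwrapping: restore continuity by shifting
# each sample by an accumulated multiple of 2*pi whenever the jump from
# the previous sample exceeds pi in magnitude. Fails if the true phase
# changes by more than pi between samples -- the hard case the thesis treats.

import math

def unwrap(phases):
    """Return a continuous phase sequence from wrapped samples in (-pi, pi]."""
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        jump = cur - prev
        if jump > math.pi:
            offset -= 2 * math.pi
        elif jump < -math.pi:
            offset += 2 * math.pi
        out.append(cur + offset)
    return out

# Example: a linearly increasing phase 0.5*k, wrapped into (-pi, pi],
# is recovered exactly because successive samples differ by only 0.5 rad.
wrapped = [(0.5 * k + math.pi) % (2 * math.pi) - math.pi for k in range(20)]
recovered = unwrap(wrapped)
```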

    Methods for the Investigation of Spatial Clustering, With Epidemiological Applications

    Get PDF
    When analysing spatial data, it is often of interest to investigate whether or not the events under consideration show any tendency to form small aggregations, or clusters, that are unlikely to be the result of random variation. For example, the events might be the coordinates of the address at diagnosis of cases of a malignant disease, such as acute lymphoblastic leukaemia or non-Hodgkin's lymphoma. This thesis considers the usefulness of methods employing nonparametric kernel density estimation for the detection of clustering, as defined above, so that specific, and sometimes limiting, alternative hypotheses are not required, and the continuous spatial context of the problem is maintained. Two approaches, in particular, are considered; first, a generalisation of the Scan Statistic to two dimensions, with a correction for spatial heterogeneity under the null hypothesis, and secondly, a statistic measuring the squared difference between kernel estimates of the probability density functions of the principal events and a sample of controls. Chapter 1 establishes the background for this work, and identifies four different families of techniques that have been proposed, previously, for the study of clustering. Problems inherent in typical applications are discussed, and then used to motivate the approach taken subsequently. Chapter 2 describes the Scan Statistic for a one-dimensional problem, assuming that the distribution of events under the null hypothesis is uniform. A number of approximations to the statistic's distribution and methods of calculating critical values are compared, to enable significance testing to be carried out with minimum effort. A statistic based on the supremum of a kernel density estimate is also suggested, but an empirical study demonstrates that this has lower power than the Scan Statistic. 
Chapter 3 generalises the Scan Statistic to two dimensions and demonstrates empirically that existing bounds for the upper tail probability are not sufficiently sharp for significance testing purposes. As an aside, the chapter also describes a problem that can occur when a single pseudo-random number generator is used to produce parallel streams of uniform deviates. Chapter 4 investigates a method, suggested by Weinstock (1981), of correcting for a known, non-uniform null distribution when using the Scan Statistic in one dimension, and proposes that a kernel estimator replace the exact density, the estimate being calculated from a second set of (control) observations. The approach is generalised to two dimensions, and approximations are developed to simplify the computation required. However, simulation results indicate that the accuracy of these approximations is often poor, so an alternative implementation is suggested. For the case where two samples of observations are available, the events of interest and a group of control locations, Chapter 5 suggests the use of the integrated squared difference between the corresponding kernel density estimates as a measure of the departure of the events from null expectation. By exploiting its similarity to the integrated square error of a k.d.e., the statistic is shown to be asymptotically normal; the proof generalises a central limit theorem of Hall (1984) to the two-sample case. However, simulation results suggest that significance testing should use the bootstrap, since the exact distribution of the statistic appears to be noticeably skewed. A modified statistic, with the smoothing parameters of the two k.d.e.'s constrained to be equal and non-random, is also discussed, and shown, both asymptotically and empirically, to have greater power than the original. In Chapter 6, the two techniques are applied to the geographical distribution of cases of laryngeal cancer in South Lancashire for the period 1974 to 1983. 
The results are similar, for the most part, to a previous analysis of the data, described by Diggle (1990) and Diggle et al. (1990). The differences between the two analyses appear to be attributable to the bias or variability of the k.d.e.'s required to calculate the integrated squared difference statistic, and to the inaccuracy of the approximations used by the corrected Scan Statistic. Chapter 7 summarises the results obtained in the preceding sections, and considers the implications for further research of the observations made in Chapter 6 regarding the weaknesses of the two statistics. It also suggests extensions to the basic methodology presented here that would increase the range of problems to which the two methods could be applied.
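    The one-dimensional Scan Statistic of Chapter 2 is simple to state: the maximum number of events falling in any window of fixed width w. The sketch below computes it and attaches a Monte Carlo p-value under the uniform null hypothesis; the thesis's analytical approximations to this null distribution are not reproduced:

```python
# Illustrative sketch: the 1-D Scan Statistic with a Monte Carlo
# significance test under a uniform null on [0, 1). A maximal-count
# window can always be anchored at an event, so scanning event positions
# as window starts is sufficient.

import random

def scan_statistic(events, w):
    """Largest number of events in any half-open window [x, x + w)."""
    xs = sorted(events)
    best = 0
    for i, left in enumerate(xs):
        count = sum(1 for x in xs[i:] if x < left + w)
        best = max(best, count)
    return best

def scan_p_value(events, w, n_sim=500, seed=0):
    """Fraction of uniform simulations at least as extreme as the data."""
    rng = random.Random(seed)
    observed = scan_statistic(events, w)
    n = len(events)
    hits = sum(
        scan_statistic([rng.random() for _ in range(n)], w) >= observed
        for _ in range(n_sim)
    )
    return (hits + 1) / (n_sim + 1)   # add-one rule avoids a p-value of zero
```

    Correcting for a non-uniform null, as in Chapter 4, would amount to simulating from (or transforming by) a density estimated from the control sample instead of the uniform generator used here.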

    Evaluation of EEG-based depth of anaesthesia monitoring

    Get PDF
    In 2001 a University of Bristol team patented a novel data-reduction method for the EEG, intended to characterise categorical changes in consciousness. After pre-whitening the EEG signal with Gaussian white noise, a parametric spectral estimation technique was applied. Two frequency-domain indices were then proposed: the power between 8 Hz and 12 Hz relative to that between 0.5 Hz and 32 Hz, termed the 'alpha index', and the power between 0.5 Hz and 4 Hz relative to that between 0.5 Hz and 32 Hz, termed the 'delta index'. The research and development of a precision EEG monitoring device designed to embody the novel algorithm is described in this thesis. The efficacy of the technique was evaluated using simulated and real EEG data recorded during propofol anaesthesia. The simulated data showed that improvements could be made to the patented method. Real EEG data collected while patients were wakeful, and data from patients unresponsive to noxious stimuli, were cleaned of obvious artefacts and analysed using the proposed algorithm. A Bayesian diagnostic test showed that the alpha index had 65% sensitivity and selectivity to patient state; the delta index showed 72% sensitivity and selectivity. Taking a pragmatic approach, the literature is reviewed in this thesis to evaluate the use of EEG in depth-of-anaesthesia monitoring. Pertinent aspects of the underlying sciences are profiled to identify physiological links to the characteristics of the EEG signal. Methods of data reduction are also reviewed to identify useful features and possible sources of error. In conclusion, it is shown that the proposed indices do not provide a robust measure of depth of anaesthesia. An approach for further research is proposed based on the review work.
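    The two indices are ratios of band power to total power, and can be sketched directly from any precomputed power spectral density. The pre-whitening and parametric spectral estimation steps of the patented method are omitted, and the function names below are this sketch's own:

```python
# Sketch of the alpha and delta indices as relative band powers computed
# from a precomputed PSD. Inputs: parallel lists of frequency bins (Hz)
# and power values; the spectral estimation step itself is not shown.

def band_power(freqs, psd, lo, hi):
    """Total power in the closed frequency band [lo, hi] Hz."""
    return sum(p for f, p in zip(freqs, psd) if lo <= f <= hi)

def alpha_index(freqs, psd):
    """Power in 8-12 Hz relative to power in 0.5-32 Hz."""
    return band_power(freqs, psd, 8.0, 12.0) / band_power(freqs, psd, 0.5, 32.0)

def delta_index(freqs, psd):
    """Power in 0.5-4 Hz relative to power in 0.5-32 Hz."""
    return band_power(freqs, psd, 0.5, 4.0) / band_power(freqs, psd, 0.5, 32.0)

# Example: a flat spectrum over 0.5-32 Hz in 0.5 Hz bins, so each index
# reduces to the fraction of bins inside its band.
freqs = [0.5 * i for i in range(1, 65)]
psd = [1.0] * len(freqs)
```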

    Acoustic Waves

    Get PDF
    The concept of an acoustic wave is a pervasive one, emerging in any type of medium, from solids to plasmas, at length and time scales ranging from sub-micrometric layers in microdevices to seismic waves in the Sun's interior. This book presents several aspects of the active research ongoing in this field. Theoretical efforts are leading to a deeper understanding of phenomena, even in complicated environments like the solar surface boundary. Acoustic waves are a flexible probe for investigating the properties of very different systems, from thin inorganic layers to ripening cheese to biological systems. They are also a tool for manipulating matter, from the gentle evaporation of biomolecules for analysis to the phase transitions induced by intense shock waves. Moreover, a whole class of widespread microdevices, including filters and sensors, is based on the behaviour of acoustic waves propagating in thin layers; the search for better performance is driving research toward new materials for these devices and toward more refined tools for their analysis.

    Function theoretic methods in partial differential equations

    Get PDF