

    Relative conditioning of linear systems of ODEs with respect to perturbation in the matrix of the system and in the initial value

    The thesis is about how perturbations in the initial value $y_0$ or in the coefficient matrix $A$ propagate along the solutions of $n$-dimensional linear ordinary differential equations (ODEs) \begin{equation*} \left\{ \begin{array}{l} y^\prime(t) = Ay(t),\ t \geq 0,\\ y(0) = y_0, \end{array} \right. \end{equation*} where $A \in \mathbb{R}^{n\times n}$, $y_0 \in \mathbb{R}^n$, and $y(t) = e^{tA}y_0$ is the solution of the equation.

    We begin with a perturbation analysis for the case where the initial value $y_0$ is perturbed to $\tilde{y}_0$ with relative error $\varepsilon = \frac{\left\|\tilde{y}_0 - y_0\right\|}{\left\|y_0\right\|}$, where $\left\|\cdot\right\|$ is a vector norm on $\mathbb{R}^n$. Due to the perturbation in the initial value, the solution $y(t) = e^{tA}y_0$ is perturbed to $\tilde{y}(t) = e^{tA}\tilde{y}_0$ with relative error \begin{equation*} \delta(t) = \frac{\left\| e^{tA}\tilde{y}_0 - e^{tA}y_0 \right\|}{\left\| e^{tA}y_0 \right\|}. \end{equation*} In other words, we study the (relative) conditioning of the problem \begin{equation*} y_0 \mapsto e^{tA}y_0. \end{equation*} The relation between the error $\varepsilon$ and the error $\delta(t)$ is described by three condition numbers: the condition number for a given direction of perturbation, the condition number independent of the direction of perturbation, and the condition number independent of both the direction of perturbation and the specific initial value. How these condition numbers behave over long times is an important aspect of the study. The thesis then moves to perturbations in the matrix, as well as to componentwise relative errors, rather than normwise relative errors, for perturbations of the initial value.

    For the first topic of the thesis, we examine how perturbations propagate along the solution of the ODE when it is the coefficient matrix $A$, rather than the initial value, that is perturbed. In other words, the interest is to study the conditioning of the problem \begin{equation*} A \mapsto e^{tA}y_0. \end{equation*} When the matrix $A$ is perturbed to $\tilde{A}$, the relative error is given by $\epsilon = \frac{\left\|\tilde{A} - A\right\|}{\left\|A\right\|}$, with $\left\|\cdot\right\|$ a matrix norm, and the relative error in the solution of the ODE is given by \begin{equation*} \xi(t) = \frac{\left\| e^{t\tilde{A}}y_0 - e^{tA}y_0 \right\|}{\left\| e^{tA}y_0 \right\|}. \end{equation*} We introduce three condition numbers as before. The analysis of the condition numbers is carried out for a normal matrix $A$ in the $2$-norm. We give useful upper and lower bounds on these three condition numbers and study their asymptotic behavior as time goes to infinity.

    There are cases where one is interested in the relative errors \begin{equation*} \delta_l(t) = \frac{\vert \tilde{y}_l(t) - y_l(t) \vert}{\vert y_l(t) \vert}, \quad l = 1, \dots, n, \end{equation*} of the individual components of the perturbed solution. Motivated by the fact that componentwise relative errors give more information than the normwise relative error, we carry out a componentwise relative error analysis, which is the second topic of this thesis. We consider perturbations of the initial value $y_0$ with normwise relative error $\varepsilon$ and the relative errors $\delta_l(t)$ in the components of the solution. The interest is to study, for the $l$-th component, the conditioning of the problem \begin{equation*} y_0 \mapsto y_l(t) = e_l^T e^{tA} y_0, \end{equation*} where $e_l$ is the $l$-th vector of the canonical basis of $\mathbb{R}^n$. We carry out this analysis for a diagonalizable matrix $A$, diagonalizability being a generic situation for the matrix $A$. We give two condition numbers for this part of the thesis and study their asymptotic behavior as time goes to infinity.
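    The normwise sensitivity described in the abstract can be illustrated numerically. A minimal sketch, assuming NumPy and SciPy, with an arbitrary example matrix and initial value (not taken from the thesis): it perturbs $y_0$ by a relative error $\varepsilon$ in the $2$-norm, computes the resulting error $\delta(t)$, and checks it against the worst-case amplification factor $\|e^{tA}\|_2 \, \|y_0\| / \|e^{tA}y_0\|$.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Illustrative 2x2 normal matrix and initial value (arbitrary choices).
    A = np.array([[-1.0,  2.0],
                  [-2.0, -1.0]])
    y0 = np.array([1.0, 0.5])

    # Perturb y0 in the direction d with relative error eps in the 2-norm.
    eps = 1e-6
    d = np.array([1.0, -1.0])
    y0_tilde = y0 + eps * np.linalg.norm(y0) * d / np.linalg.norm(d)

    t = 1.0
    E = expm(t * A)               # matrix exponential e^{tA}
    y = E @ y0
    y_tilde = E @ y0_tilde

    # Relative error delta(t) in the solution.
    delta = np.linalg.norm(y_tilde - y) / np.linalg.norm(y)

    # Worst-case amplification over all perturbation directions:
    # delta(t) <= kappa * eps with kappa = ||e^{tA}||_2 ||y0|| / ||e^{tA} y0||.
    kappa = np.linalg.norm(E, 2) * np.linalg.norm(y0) / np.linalg.norm(y)
    ```

    The ratio `delta / eps` measures the amplification along this particular direction `d`; maximizing it over all directions gives the direction-independent condition number bounded here by `kappa`.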

    Does mathematics look certain in the front, but fallible in the back?

    In this paper we re-examine the implications of the differences between 'doing' and 'writing' science and mathematics, questioning whether the way science and mathematics are presented in textbooks or research articles creates a misleading picture of these differences. We focus our discussion on mathematics, in particular on Reuben Hersh's formulation of the contrast in terms of Goffman's dramaturgical frontstage-backstage analogy and his claim that various myths about mathematics fit only how mathematics is presented in the 'front', not how it is practised in the 'back'. By investigating examples of both the 'front' (graduate lectures in mathematical logic) and the 'back' (meetings between supervisor and doctoral students), we examine, first, whether the 'front' of mathematics presents a misleading picture of mathematics and, second, whether the 'front' and 'back' of mathematics are so discrepant that mathematics really does look certain in the 'front' but fallible in the 'back'.

    2D Grammar Extension of the CMP Mathematical Formulae On-line Recognition System

    Project carried out in collaboration with Czech Technical University in Prague. In recent years, the recognition of handwritten mathematical formulae has received an increasing amount of attention in pattern recognition research. However, the diversity of approaches to the problem and the lack of a commercially viable system indicate that there is still much research to be done in this area. In this thesis, I describe previous work on a system for on-line handwritten mathematical formula recognition based on the structural construction paradigm and two-dimensional grammars. In general, this approach can be used successfully in the analysis of inputs composed of objects that exhibit rich structural relations. An important benefit of structural construction is that it does not treat symbol segmentation and structural analysis as two separate processes, which allows the system to perform segmentation in the context of the whole formula structure, helping to resolve ambiguities more reliably. We exploit the opening provided by the polynomial-complexity parsing algorithm and extend the grammar with many new production rules, which make the system useful for formulae met in the real world. We propose several grammar extensions to support a wide range of real mathematical formulae, as well as new features implemented in the application. Our current approach can recognize functions, limits, derivatives, binomial coefficients, complex numbers, and more.