Accuracy and speed in computing the Chebyshev collocation derivative
We study several algorithms for computing the Chebyshev spectral derivative and compare their roundoff errors. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the elements of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We find that the fastest algorithm on a particular machine depends not only on the grid size but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
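As a brief sketch of one widely used remedy for inaccurate matrix entries (the "negative-sum trick", in which each diagonal entry is set so that every row annihilates constants; this is a standard technique and not necessarily the paper's exact construction), consider:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points.

    The diagonal is set by the "negative-sum trick": each row must sum to
    zero because the derivative of a constant is zero. This avoids the
    cancellation errors of the explicit diagonal formula.
    """
    x = np.cos(np.pi * np.arange(N + 1) / N)          # collocation points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # negative-sum trick
    return D, x

D, x = cheb(16)
# Derivative of exp is exp, so this measures the matrix-vector roundoff
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
```

For a smooth function such as e^x, the matrix-vector product then reproduces the derivative to near machine precision at moderate N.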
On the Gibbs phenomenon I: recovering exponential accuracy from the Fourier partial sum of a nonperiodic analytic function
It is well known that the Fourier series of an analytic and periodic function, truncated after 2N+1 terms, converges exponentially with N, even in the maximum norm. It is also known that if the function is not periodic, the rate of convergence deteriorates; in particular, there is no convergence in the maximum norm, although the function is still analytic. This is known as the Gibbs phenomenon. In this paper we show that the first 2N+1 Fourier coefficients contain enough information about the function that an exponentially convergent approximation (in the maximum norm) can be constructed. The proof is constructive and makes use of the Gegenbauer polynomials C_n^λ(x). It consists of two steps. In the first step we show that the first m coefficients of the Gegenbauer expansion (based on C_n^λ(x), for 0 ≤ n ≤ m) of any L2 function can be obtained, within exponential accuracy, provided that both λ and m are proportional to (but smaller than) N. In the second step we construct the Gegenbauer expansion based on C_n^λ, 0 ≤ n ≤ m, from the coefficients found in the first step. We show that this series converges exponentially with N, provided that the original function is analytic (though nonperiodic). Thus we prove that the Gibbs phenomenon can be completely overcome.
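The two-step procedure can be sketched numerically for the simple case f(x) = x, using small illustrative parameters (λ = m = 4, N = 16; the paper's proportionality constants are not reproduced here). Step 1 extracts Gegenbauer coefficients from the Fourier partial sum; step 2 re-expands in the Gegenbauer basis:

```python
import numpy as np
from scipy.special import eval_gegenbauer, roots_jacobi, gamma

lam, m, N = 4, 4, 16  # illustrative values only

def fourier_partial_sum(x, N):
    """Fourier partial sum of f(x) = x on [-1, 1] (nonperiodic -> Gibbs)."""
    k = np.arange(1, N + 1)
    coeff = 2.0 * (-1.0) ** (k + 1) / (k * np.pi)
    return np.sum(coeff[:, None] * np.sin(np.pi * np.outer(k, x)), axis=0)

def h(n):
    """Squared norm of C_n^lam under the weight (1 - x^2)^(lam - 1/2)."""
    return (np.pi * 2.0 ** (1 - 2 * lam) * gamma(n + 2 * lam)
            / (gamma(n + 1.0) * (n + lam) * gamma(lam) ** 2))

# Step 1: Gegenbauer coefficients of the partial sum via Gauss-Jacobi quadrature
xq, wq = roots_jacobi(200, lam - 0.5, lam - 0.5)
fq = fourier_partial_sum(xq, N)
g = [np.sum(wq * eval_gegenbauer(n, lam, xq) * fq) / h(n) for n in range(m + 1)]

# Step 2: re-expand in the Gegenbauer basis
def reproject(x):
    return sum(g[n] * eval_gegenbauer(n, lam, x) for n in range(m + 1))

x = np.linspace(-0.99, 0.99, 101)
err_fourier = np.max(np.abs(fourier_partial_sum(x, N) - x))
err_gegen = np.max(np.abs(reproject(x) - x))
```

Since x = C_1^λ(x)/(2λ) exactly, the reprojection essentially recovers a single coefficient, and the maximum-norm error drops far below that of the Gibbs-afflicted partial sum.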
Rhythmogenic neuronal networks, pacemakers, and k-cores
Neuronal networks are controlled by a combination of the dynamics of individual neurons and the connectivity of the network that links them together. We study a minimal model of the preBötzinger complex, a small neuronal network that controls the breathing rhythm of mammals through periodic firing bursts. We show that the properties of such a randomly connected network of identical excitatory neurons are fundamentally different from those of uniformly connected neuronal networks as described by mean-field theory. We show that (i) the connectivity properties of the network determine the location of emergent pacemakers that trigger the firing bursts and (ii) the collective desensitization that terminates the firing bursts is again determined by the network connectivity, through k-core clusters of neurons.
Comment: 4+ pages, 4 figures, submitted to Phys. Rev. Lett.
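The k-core of a network, the maximal subgraph in which every node has at least k neighbours within the subgraph, can be computed by iterative peeling. A minimal sketch on a random graph (illustrative only, not the paper's neuronal model):

```python
import random
from collections import deque

def k_core(adj, k):
    """Return the k-core node set of an undirected graph, found by
    repeatedly peeling off nodes with fewer than k surviving neighbours."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    alive = set(adj)
    queue = deque(v for v in adj if deg[v] < k)
    while queue:
        v = queue.popleft()
        if v not in alive:
            continue
        alive.discard(v)
        for u in adj[v]:           # removing v lowers its neighbours' degrees
            if u in alive:
                deg[u] -= 1
                if deg[u] < k:
                    queue.append(u)
    return alive

# Small Erdos-Renyi random graph (hypothetical example network)
rng = random.Random(1)
n, p = 60, 0.08
adj = {v: set() for v in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < p:
            adj[i].add(j)
            adj[j].add(i)

core = k_core(adj, 3)  # the 3-core: every member keeps >= 3 neighbours inside
```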
Exploring Statistical and Population Aspects of Network Complexity
The characterization and definition of the complexity of objects is an important but very difficult problem that has attracted much interest in many different fields. In this paper we introduce a new measure, called the network diversity score (NDS), which allows us to quantify structural properties of networks. We demonstrate numerically that our diversity score is capable of distinguishing ordered, random, and complex networks from each other and hence of categorizing networks with respect to their structural complexity. We study 16 additional network complexity measures and find that none of them has comparably good categorization capabilities. In contrast to many other measures suggested so far for characterizing the structural complexity of networks, our score differs in several respects. First, our score is multiplicatively composed of four individual scores, each assessing different structural properties of a network; the composite score therefore reflects the structural diversity of a network. Second, our score is defined for a population of networks instead of individual networks. We show that this removes an unwanted ambiguity inherently present in measures that are based on single networks. In order to apply our measure practically, we provide a statistical estimator for the diversity score, which is based on a finite number of samples.
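The abstract does not specify the four component scores, so the NDS itself cannot be reproduced here. As a generic, hypothetical sketch of the population idea, estimating a structural statistic from a finite sample of networks rather than from a single network, consider:

```python
import random
import statistics

def gnp_edges(n, p, rng):
    """Sample an Erdos-Renyi G(n, p) graph as an undirected edge list."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

def degree_variance(n, edges):
    """A simple structural statistic (stand-in for a component score):
    variance of the degree sequence."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return statistics.pvariance(deg)

rng = random.Random(0)
n, p, samples = 100, 0.1, 50

# Any single network yields a fluctuating value; averaging over a sampled
# population of networks gives a stable, well-defined estimate.
vals = [degree_variance(n, gnp_edges(n, p, rng)) for _ in range(samples)]
pop_estimate = statistics.mean(vals)
```

For G(n, p) the expected degree variance is roughly (n-1)p(1-p) ≈ 8.9 here, and the population mean concentrates near it while individual samples scatter around it.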
Application of Multipole Methods to Two Matrix Eigenproblems
For two different types of matrices, arrowhead matrices and rank-one perturbations of diagonal matrices, the eigenvalues are the roots of a function essentially of the form h(x) = Σ_i q_i/(x − x_i). It is shown that by using multipole methods to evaluate h(x) we can speed up the calculation of their eigenvalues. An improvement is seen for matrices as small as 70 × 70. In addition, multipole methods can be used to efficiently multiply the matrix of eigenvectors by a vector.
Key Words: Multipole Methods, Symmetric Eigenproblem, Rank-one Perturbation
AMS Subject Classifications: 65F15, 65F, 31C20
1 Introduction
Numerical evaluation of the function h(x) = Σ_i q_i/(x − x_i) is a task that occurs in several different situations. One example is computing the eigenvalues of arrowhead matrices. Arrowhead matrices are of the form

    A = [ D    z ]
        [ z^t  ρ ]    (1)

where D is an n × n diagonal matrix, z a vector, and ρ a scalar. Such matrices occur in the descrip..
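A minimal sketch of the underlying eigenvalue structure, without the multipole acceleration the paper adds: for an arrowhead matrix with distinct diagonal entries and nonzero z, the eigenvalues interlace the d_i and are the roots of a secular function of the stated form, so plain Brent root-finding on each interlacing interval recovers them (sizes and values here are illustrative):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
n = 6
d = np.arange(1.0, n + 1)          # distinct, sorted diagonal entries
z = rng.uniform(0.5, 1.5, n)       # nonzero coupling vector
rho = 2.5

# Arrowhead matrix A = [[diag(d), z], [z^t, rho]]
A = np.zeros((n + 1, n + 1))
A[:n, :n] = np.diag(d)
A[:n, n] = z
A[n, :n] = z
A[n, n] = rho

def secular(lmbda):
    """det(A - lambda I) = 0 reduces to this function of the form
    rho - x - sum_i q_i / (x - x_i), with q_i = z_i^2 and x_i = d_i."""
    return rho - lmbda - np.sum(z ** 2 / (d - lmbda))

# Exactly one root strictly between consecutive d_i, plus one beyond each end
bound = np.abs(A).sum()            # crude bound on the spectrum
brackets = [(-bound, d[0] - 1e-9)]
brackets += [(d[i] + 1e-9, d[i + 1] - 1e-9) for i in range(n - 1)]
brackets += [(d[-1] + 1e-9, bound)]
roots = np.sort([brentq(secular, a, b) for a, b in brackets])
```

The multipole idea in the paper replaces the O(n) evaluation of the sum at each point with a fast approximate evaluation, which is what yields the speedup for larger matrices.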
Accuracy Enhancement for Higher Derivatives Using Chebyshev ...
We study a new method for reducing the roundoff error in computing derivatives using Chebyshev collocation methods. By using a grid mapping derived by Kosloff and Tal-Ezer, and the proper choice of the mapping parameter α, the roundoff error of the k-th derivative can be reduced from O(εN^{2k}) to O(ε(N|ln ε|)^k), where ε is the machine precision and N is the number of collocation points. This drastic reduction of roundoff error makes mapped Chebyshev methods competitive with any other algorithm in computing second or higher derivatives with large N. We also study several other aspects of the mapped Chebyshev differentiation matrix. We find that (1) the mapped Chebyshev method requires far fewer points to resolve a wave, (2) the eigenvalues are less sensitive to perturbation by roundoff error, and (3) larger time steps can be used for solving PDEs. All these advantages of the mapped Chebyshev methods can be achieved while maintaining spectral accuracy.
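A minimal sketch of mapped Chebyshev differentiation, assuming the Kosloff-Tal-Ezer mapping x = arcsin(αy)/arcsin(α) with an illustrative parameter α = 0.9 (the paper's prescription for choosing α optimally is not reproduced here):

```python
import numpy as np

def cheb(N):
    """Standard Chebyshev differentiation matrix (negative-sum diagonal)."""
    y = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dY = y[:, None] - y[None, :]
    D = np.outer(c, 1.0 / c) / (dY + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, y

N, alpha = 64, 0.9                            # illustrative values
D, y = cheb(N)

# Kosloff--Tal-Ezer mapped points and the Jacobian of the mapping
x = np.arcsin(alpha * y) / np.arcsin(alpha)
dxdy = alpha / (np.arcsin(alpha) * np.sqrt(1.0 - (alpha * y) ** 2))

f = np.sin(x)
fp = (D @ f) / dxdy                           # chain rule: df/dx = (df/dy)/(dx/dy)
err = np.max(np.abs(fp - np.cos(x)))
```

The mapped points are more evenly spread than the raw Chebyshev points, which is what tames the O(N^2) grid clustering at the boundaries responsible for the roundoff growth.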