3,809 research outputs found

    On the Hilbert transform of wavelets

    A wavelet is a localized function having a prescribed number of vanishing moments. In this correspondence, we provide precise arguments as to why the Hilbert transform of a wavelet is again a wavelet. In particular, we provide sharp estimates of the localization, vanishing moments, and smoothness of the transformed wavelet. We work in the general setting of non-compactly supported wavelets. Our main result is that, in the presence of some minimal smoothness and decay, the Hilbert transform of a wavelet is again as smooth and oscillating as the original wavelet, whereas its localization is controlled by the number of vanishing moments of the original wavelet. We motivate our results using concrete examples.
    Comment: Appears in IEEE Transactions on Signal Processing, vol. 59, no. 4, pp. 1890-1894, 2011
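As a quick numerical illustration of the claim (not taken from the paper), the sketch below applies a discrete Hilbert transform to the Mexican-hat wavelet and checks that the transformed function still integrates to zero and remains localized. The use of `scipy.signal.hilbert`, the Mexican-hat example, and the grid parameters are all our own choices, not the paper's.

```python
import numpy as np
from scipy.signal import hilbert

# Mexican-hat (Ricker) wavelet: second derivative of a Gaussian,
# so it has two vanishing moments.
t = np.linspace(-20, 20, 4001)
psi = (1 - t**2) * np.exp(-t**2 / 2)

# scipy.signal.hilbert returns the analytic signal psi + i*H{psi};
# its imaginary part is a discrete approximation of the Hilbert transform.
h_psi = np.imag(hilbert(psi))

dt = t[1] - t[0]
# Zeroth moment (the integral) of the transformed wavelet is still ~0 ...
moment0 = np.sum(h_psi) * dt
# ... and the transform stays localized: the tails far from the origin are
# small relative to the peak, with decay governed by the vanishing moments.
tail = np.max(np.abs(h_psi[np.abs(t) > 15]))
peak = np.max(np.abs(h_psi))
```

Because the Ricker wavelet has two vanishing moments, its Hilbert transform decays like 1/t^3, which is why the tail-to-peak ratio above comes out tiny.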

    Predicted Residual Error Sum of Squares of Mixed Models: An Application for Genomic Prediction.

    Genomic prediction is a statistical method to predict phenotypes of polygenic traits using high-throughput genomic data. Most diseases and behaviors in humans and animals are polygenic traits, and the majority of agronomic traits in crops are also polygenic. Accurate prediction of these traits can help medical professionals diagnose acute diseases and breeders increase food production, and can therefore contribute significantly to human health and global food security. Best linear unbiased prediction (BLUP) is an important tool for analyzing high-throughput genomic data for prediction. However, to judge the efficacy of a BLUP model with a particular set of predictors for a given trait, one needs an unbiased mechanism to evaluate predictability. Cross-validation (CV) is an essential tool to achieve this goal: a sample is partitioned into K parts of roughly equal size, one part is predicted using parameters estimated from the remaining K - 1 parts, and eventually every part is predicted using a sample excluding that part. Such a CV is called K-fold CV. Unfortunately, CV imposes a substantial computational burden. We developed an alternative method, the HAT method, to replace CV. The new method corrects the estimated residual errors from the whole-sample analysis using the leverage values of a hat matrix of the random effects to obtain the predicted residual errors. Properties of the HAT method were investigated using seven agronomic and 1000 metabolomic traits of an inbred rice population. Results showed that the HAT method is a very good approximation of the CV method. The method was also applied to 10 traits in 1495 hybrid rice lines with 1.6 million SNPs, and to the height of 6161 human subjects with roughly 0.5 million SNPs from the Framingham heart study data. Predictabilities of the HAT and CV methods were all similar. The HAT method allows us to easily evaluate the predictabilities of genomic prediction for large numbers of traits in very large populations.
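The leverage correction behind the HAT method has a classical analogue in ordinary least squares, where the PRESS identity e_i/(1 - h_ii) reproduces leave-one-out CV residuals exactly. The sketch below demonstrates only that OLS analogue, with simulated data of our own; the paper's method applies the same idea to the hat matrix of the random effects in a mixed model, which this toy example does not implement.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta = rng.normal(size=p + 1)
y = X @ beta + rng.normal(scale=0.5, size=n)

# Hat matrix H = X (X'X)^{-1} X'; the leverages are its diagonal.
H = X @ np.linalg.solve(X.T @ X, X.T)
e = y - H @ y                        # whole-sample residuals
press_hat = e / (1.0 - np.diag(H))   # leverage-corrected residuals

# Explicit leave-one-out CV residuals, refitting n times for comparison.
press_cv = np.empty(n)
for i in range(n):
    keep = np.ones(n, dtype=bool)
    keep[i] = False
    b = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    press_cv[i] = y[i] - X[i] @ b
```

For OLS the two sets of residuals are identical, so the leverage correction delivers CV-quality predicted residuals from a single whole-sample fit, which is the computational saving the abstract describes.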

    A sparse-grid isogeometric solver

    Isogeometric Analysis (IGA) typically adopts tensor-product splines and NURBS as a basis for the approximation of the solution of PDEs. In this work, we investigate to what extent IGA solvers can benefit from the so-called sparse-grid construction in its combination-technique form, which was first introduced in the early 1990s in the context of the approximation of high-dimensional PDEs. The tests that we report show that, in accordance with the literature, a sparse-grid construction can indeed be useful if the solution of the PDE at hand is sufficiently smooth. Sparse grids can also be useful in the case of non-smooth solutions when some a priori knowledge of the location of the singularities of the solution can be exploited to devise suitable non-equispaced meshes. Finally, we remark that sparse grids can be seen as a simple way to parallelize pre-existing serial IGA solvers, which can be beneficial in many practical situations.
    Comment: updated version after review
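As a hedged illustration of the combination technique in its simplest setting (2D quadrature rather than an IGA solver), the sketch below combines anisotropic tensor-product trapezoidal rules with the usual +1/-1 weights: full-tensor results on grids of level |l| = L are added and those of level |l| = L - 1 subtracted. The function names and the smooth test integrand are our own; each anisotropic sub-problem could be run on a separate process, which is the parallelization opportunity the abstract mentions.

```python
import numpy as np

def trapz_grid(l):
    """Nodes and trapezoidal weights on [0, 1] with 2^l + 1 points."""
    n = 2**l + 1
    h = 1.0 / (n - 1)
    w = np.full(n, h)
    w[0] = w[-1] = h / 2
    return np.linspace(0.0, 1.0, n), w

def tensor_trapz(f, l1, l2):
    """Full tensor-product trapezoidal rule on a (2^l1+1) x (2^l2+1) grid."""
    x, wx = trapz_grid(l1)
    y, wy = trapz_grid(l2)
    return wx @ f(x[:, None], y[None, :]) @ wy

def combination_quadrature(f, L):
    """Sparse-grid quadrature via the combination technique:
    +1 weights on levels with |l| = L, -1 weights on |l| = L - 1."""
    total = 0.0
    for l1 in range(1, L):            # l1 + l2 = L
        total += tensor_trapz(f, l1, L - l1)
    for l1 in range(1, L - 1):        # l1 + l2 = L - 1
        total -= tensor_trapz(f, l1, L - 1 - l1)
    return total

f = lambda x, y: np.exp(x + y)        # smooth integrand, exact value known
exact = (np.e - 1.0) ** 2
approx = combination_quadrature(f, 8)
```

The anisotropic grids each have far fewer points than the full 129 x 129 tensor grid of the same finest resolution, yet for this smooth integrand the combined result is accurate to roughly the same order.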

    The Surface Laplacian Technique in EEG: Theory and Methods

    This paper reviews the method of surface Laplacian differentiation to study EEG. We focus on topics that are helpful for a clear understanding of the underlying concepts and its efficient implementation, which is especially important for EEG researchers unfamiliar with the technique. The popular methods of finite differences and splines are reviewed in detail. The former has the advantage of simplicity and low computational cost, but its estimates are prone to a variety of errors due to discretization. The latter eliminates all issues related to discretization and incorporates a regularization mechanism to reduce spatial noise, but at the cost of increased mathematical and computational complexity. These and several other issues deserving further development are highlighted, some of which we address to the extent possible. Here we develop a set of discrete approximations for Laplacian estimates at peripheral electrodes and a possible solution to the problem of multiple-frame regularization. We also provide the mathematical details of finite-difference approximations that are missing in the literature, and discuss the problem of computational performance, which is particularly important in the context of EEG splines, where data sets can be very large. Along these lines, the matrix representation of the surface Laplacian operator is carefully discussed, and some figures are given illustrating the advantages of this approach. In the final remarks, we briefly sketch a possible way to incorporate finite-size electrodes into Laplacian estimates that could guide further developments.
    Comment: 43 pages, 8 figures
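A minimal sketch of the finite-difference idea on an idealized regular electrode grid (our own simplification: real EEG montages are irregular, and the paper's one-sided approximations for peripheral electrodes are not reproduced here). The standard five-point stencil estimates the Laplacian at interior sites; the periphery is left undefined, which is exactly the gap the paper's peripheral-electrode approximations address.

```python
import numpy as np

def five_point_laplacian(V, h):
    """Five-point finite-difference Laplacian of potentials V sampled on a
    regular grid with spacing h. Interior sites get the standard stencil;
    peripheral sites are left as NaN (they need one-sided formulas)."""
    lap = np.full_like(V, np.nan, dtype=float)
    lap[1:-1, 1:-1] = (V[2:, 1:-1] + V[:-2, 1:-1] +
                       V[1:-1, 2:] + V[1:-1, :-2] -
                       4.0 * V[1:-1, 1:-1]) / h**2
    return lap

# Sanity check on a quadratic potential, for which the five-point stencil
# is exact: V = x^2 + y^2 has Laplacian 4 everywhere.
h = 0.1
x = np.arange(0.0, 1.0 + h / 2, h)
X, Y = np.meshgrid(x, x, indexing="ij")
lap = five_point_laplacian(X**2 + Y**2, h)
```

In matrix form this stencil becomes a sparse banded operator applied to the vector of electrode potentials, which is the representation the paper discusses for efficient computation.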

    Data acquisition and analysis of range-finding systems for space construction

    For future space missions, completely autonomous robotic machines will be required to free astronauts from routine chores such as equipment maintenance and the servicing of faulty systems, and to extend human capabilities in hazardous environments full of cosmic and other harmful radiation. In places with high radiation and uncontrollable ambient illumination, TV-camera-based vision systems cannot work effectively. However, a vision system utilizing range information measured directly with a time-of-flight laser rangefinder can operate successfully in these environments. Such a system is independent of illumination conditions, and the interfering effects of intense radiation of all kinds are eliminated by the tuned input of the laser instrument. By processing the range data according to certain decision, stochastic-estimation, and heuristic schemes, the laser-based vision system will recognize known objects and thus provide sufficient information to the robot's control system, which can develop strategies for various objectives.

    Comparison and Evaluation of Didactic Methods in Numerical Analysis for the Teaching of Cubic Spline Interpolation

    In mathematical education it is crucial to have a good teaching plan and to execute it correctly. This is particularly true in numerical analysis, where every teacher has a different style of teaching. This thesis studies how the basic material of a particular topic in numerical analysis is developed in four different textbooks, and compares and evaluates these presentations in order to arrive at a good teaching strategy. The topic chosen for this research is cubic spline interpolation: although it is a basic topic in numerical analysis, it can be complicated for students to understand. The aim of the thesis is to analyze the effectiveness of different approaches to teaching cubic spline interpolation and then use this insight to write our own chapter, channeling every-day thinking into a more technical and practical presentation. The didactic methodology used here can be extended to cover other topics in numerical analysis.
    What is cubic spline interpolation? Interpolation is a method of constructing a curve through given data points, and cubic spline interpolation is one such method. We chose it because, compared with other types of interpolation, it keeps the curvature of the interpolant small and therefore produces a smooth curve.
    To find a good way of presenting this potentially difficult topic, we compare each part of the different approaches in the books we studied by posing questions, answering them, and evaluating each answer. The insight gained from these evaluations prepares us for writing our own chapter presenting cubic spline interpolation in our own way.
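For concreteness, here is a minimal example of the topic itself using SciPy's `CubicSpline` (our own illustration, not drawn from the thesis). A natural cubic spline passes exactly through the data points and imposes zero second derivative at both endpoints, which is one standard way the textbooks under comparison close the system of equations.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# A few sample points from a smooth function.
x = np.linspace(0.0, 2.0 * np.pi, 8)
y = np.sin(x)

# Natural cubic spline: second derivative forced to zero at both ends.
cs = CubicSpline(x, y, bc_type="natural")

# The spline interpolates the data exactly at the knots ...
interp_ok = bool(np.allclose(cs(x), y))
# ... and its second derivative vanishes at the endpoints
# (evaluate the nu-th derivative by passing nu as the second argument).
d2_left = float(cs(x[0], 2))
d2_right = float(cs(x[-1], 2))
```

Between knots the spline is a cubic polynomial, and the pieces join with continuous first and second derivatives; that continuity is what gives the low-curvature, visually smooth interpolant the abstract refers to.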