
    Univariate interpolation by exponential functions and Gaussian RBFs for generic sets of nodes

    We consider interpolation of univariate functions on arbitrary sets of nodes by Gaussian radial basis functions or by exponential functions. We derive closed-form expressions for the interpolation error based on the Harish-Chandra-Itzykson-Zuber formula. We then prove the exponential convergence of interpolation for functions analytic in a sufficiently large domain. As an application, we prove the global exponential convergence of optimization by expected improvement for such functions.
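
    For illustration, a minimal numpy sketch of the scheme the abstract describes: Gaussian RBF interpolation on an arbitrary (non-uniform) set of nodes. The target function, node set, and shape parameter eps are illustrative choices, not taken from the paper.

```python
import numpy as np

def gaussian_rbf_interpolant(nodes, values, eps):
    """Return s(x) = sum_j c_j exp(-eps^2 (x - x_j)^2) with s(x_i) = values_i."""
    # Interpolation matrix A_ij = exp(-eps^2 (x_i - x_j)^2).
    A = np.exp(-(eps * (nodes[:, None] - nodes[None, :])) ** 2)
    coeffs = np.linalg.solve(A, values)  # enforce interpolation at the nodes
    def s(x):
        x = np.atleast_1d(x)
        B = np.exp(-(eps * (x[:, None] - nodes[None, :])) ** 2)
        return B @ coeffs
    return s

# A function analytic on a large domain, interpolated on arbitrary nodes.
f = lambda x: np.exp(np.sin(2.0 * x))
nodes = np.sort(np.random.default_rng(0).uniform(-1.0, 1.0, 12))
s = gaussian_rbf_interpolant(nodes, f(nodes), eps=2.0)

x = np.linspace(-1.0, 1.0, 400)
print("max interpolation error:", np.max(np.abs(s(x) - f(x))))
```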

    Regression methods in waveform modeling: a comparative study

    Gravitational-wave astronomy of compact binaries relies on theoretical models of the gravitational-wave signal that is emitted as binaries coalesce. These models not only need to be accurate, they also have to be fast to evaluate, in order to compare millions of signals in near real time with the data of gravitational-wave instruments. A variety of regression and interpolation techniques have been employed to build efficient waveform models, but no study has yet systematically compared the performance of these regression methods. Here we provide such a comparison of various techniques, including polynomial fits, radial basis functions, Gaussian process regression and artificial neural networks, specifically for the case of gravitational waveform modeling. We use all these techniques to regress analytical models of non-precessing and precessing binary black hole waveforms, and compare the accuracy as well as the computational speed. We find that most regression methods are reasonably accurate, but efficiency considerations in many cases favour the simplest approach. We conclude that sophisticated regression methods are not necessarily needed in standard gravitational-wave modeling applications, although problems of higher complexity than those tested here may be more suitable for machine-learning techniques, and more sophisticated methods may have side benefits.
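
    A toy version of the kind of accuracy/speed comparison the abstract describes, restricted to two of the listed methods (a polynomial fit and an RBF interpolant) on a damped chirp standing in for a waveform model. The data, fit degree, and timing setup are illustrative, not the paper's.

```python
import time
import numpy as np
from scipy.interpolate import RBFInterpolator

# Toy stand-in for a waveform model: a damped chirp.
def waveform(t):
    return np.exp(-t) * np.sin(2.0 * np.pi * (2.0 + 3.0 * t) * t)

t_train = np.linspace(0.0, 1.0, 60)
h_train = waveform(t_train)
t_eval = np.linspace(0.0, 1.0, 100_000)
h_true = waveform(t_eval)

# Method 1: Chebyshev polynomial least-squares fit.
cheb = np.polynomial.Chebyshev.fit(t_train, h_train, deg=25)
t0 = time.perf_counter()
h_cheb = cheb(t_eval)
cheb_ms = (time.perf_counter() - t0) * 1e3

# Method 2: thin-plate-spline RBF interpolation on the same samples.
rbf = RBFInterpolator(t_train[:, None], h_train)
t0 = time.perf_counter()
h_rbf = rbf(t_eval[:, None])
rbf_ms = (time.perf_counter() - t0) * 1e3

for name, h_fit, ms in [("chebyshev", h_cheb, cheb_ms), ("rbf", h_rbf, rbf_ms)]:
    print(f"{name:9s} max error {np.max(np.abs(h_fit - h_true)):.2e}  eval {ms:.1f} ms")
```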

    Some Basis Function Methods for Surface Approximation

    This thesis considers issues in surface reconstruction, such as identifying approximation methods that work well for certain applications and developing efficient methods to compute and manipulate these approximations. The first part of the thesis presents a new fast evaluation scheme to efficiently calculate thin-plate splines in two dimensions. In the fast multipole method scheme, exponential expansions/approximations are used as an intermediate step in converting far-field series to local polynomial approximations. The contributions here are the extension of the scheme to the thin-plate spline and a new error analysis. The error analysis covers the practically important case where truncated series are used throughout and, through offline computation of error constants, gives sharp error bounds. In the second part of this thesis, we investigate fitting a surface to an object using blobby models as a coarse-level approximation. The aim is to achieve a given quality of approximation with relatively few parameters. This process involves an optimization procedure where a number of blobs (ellipses or ellipsoids) are separately fitted to a cloud of points. The optimized blobs are then combined to yield an implicit surface approximating the cloud of points. The results for our test cases in 2 and 3 dimensions are very encouraging. For many applications, the coarse-level blobby model itself will be sufficient; for example, adding texture on top of the blobby surface can give a surprisingly realistic image. The last part of the thesis describes a method to reconstruct surfaces with known discontinuities. We fit a surface to the data points by performing scattered data interpolation using compactly supported RBFs with respect to a geodesic distance. Techniques from computational geometry, such as the visibility graph, are used to compute the shortest Euclidean distance between two points while avoiding any obstacles. Results have shown that discontinuities on the surface were clearly reconstructed…
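
    A minimal sketch of the 2-D thin-plate spline fit discussed in the first part of the thesis, evaluated directly; the direct evaluation below costs O(N·M) kernel evaluations, which is what fast multipole-type schemes reduce. The points and data are synthetic.

```python
import numpy as np

def tps_fit(centers, values):
    """Fit s(x) = sum_j c_j phi(|x - x_j|) + a0 + a1 x + a2 y,
    phi(r) = r^2 log r, with the usual polynomial side conditions."""
    n = len(centers)
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    K = np.where(r > 0.0, r**2 * np.log(np.where(r > 0.0, r, 1.0)), 0.0)
    P = np.hstack([np.ones((n, 1)), centers])          # linear polynomial part
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])    # saddle-point system
    sol = np.linalg.solve(A, np.concatenate([values, np.zeros(3)]))
    return sol[:n], sol[n:]

def tps_eval(x, centers, c, a):
    # Direct O(N*M) evaluation of the spline at the points in x.
    r = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    K = np.where(r > 0.0, r**2 * np.log(np.where(r > 0.0, r, 1.0)), 0.0)
    return K @ c + a[0] + x @ a[1:]

rng = np.random.default_rng(2)
centers = rng.uniform(0.0, 1.0, (100, 2))
values = np.sin(3.0 * centers[:, 0]) * np.cos(2.0 * centers[:, 1])
c, a = tps_fit(centers, values)
x = rng.uniform(0.0, 1.0, (5, 2))
print(tps_eval(x, centers, c, a))  # surface values at new points
```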

    I-theory on depth vs width: hierarchical function composition

    Deep learning networks with convolution, pooling and subsampling are a special case of hierarchical architectures, which can be represented by trees (such as binary trees). Hierarchical as well as shallow networks can approximate functions of several variables, in particular those that are compositions of low-dimensional functions. We show that the power of a deep network architecture with respect to a shallow network is rather independent of the specific nonlinear operations in the network and depends instead on the behavior of the VC-dimension. A shallow network can approximate compositional functions with the same error as a deep network, but at the cost of a VC-dimension that is exponential rather than quadratic in the dimensionality of the function. To complete the argument we argue that there exist visual computations that are intrinsically compositional. In particular, we prove that recognition invariant to translation cannot be computed by shallow networks in the presence of clutter. Finally, a general framework that includes the compositional case is sketched. The key condition that allows tall, thin networks to be nicer than short, fat networks is that the target input-output function must be sparse in a certain technical sense. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
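
    A small illustration of the compositional structure the abstract refers to: a function of d variables built as a binary tree of two-variable constituents, so a d-variable target is assembled from d-1 low-dimensional functions. The particular constituent g below is an arbitrary smooth choice, not taken from the paper.

```python
import numpy as np

def g(u, v):
    # One low-dimensional constituent node (any smooth 2-variable function).
    return np.tanh(u + 2.0 * v)

def tree_compose(x):
    """Evaluate the binary-tree composition over the entries of x
    (len(x) must be a power of two)."""
    level = list(x)
    while len(level) > 1:
        # Each pass halves the level: pairs of values feed one g node.
        level = [g(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

x = np.random.default_rng(3).standard_normal(8)
print(tree_compose(x))  # f(x1,...,x8) built from 7 two-variable nodes
```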

    High-dimensional data driven parameterized macromodeling

    The abstract is provided in the attachment.

    Meshless Methods for Option Pricing and Risks Computation

    In this thesis we price several financial derivatives by means of radial basis functions. Our main contribution consists in extending the usage of these numerical methods to the pricing of more complex derivatives, such as American and basket options with barriers, and in computing the associated risks. First, we derive the mathematical expressions for the prices and the Greeks of the given options; next, we implement the corresponding numerical algorithm in MATLAB and calculate the results. We compare our results to the most common techniques applied in practice, such as Finite Differences and Monte Carlo methods. We mostly use real data as input for our examples. We conclude that radial basis functions offer a valid alternative to current pricing methods, especially because of the efficiency deriving from the free, direct calculation of risks during the pricing process. Finally, we suggest directions for future research, such as applying radial basis functions to implied volatility surface reconstruction.
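
    A minimal sketch of the point about the free, direct calculation of risks: once a price curve is expressed in a Gaussian RBF basis, Delta follows by differentiating the basis in closed form, with no extra pricing runs. Black-Scholes European call values stand in for prices produced by a numerical pricer; all parameters are illustrative, not the thesis's test cases.

```python
import numpy as np
from scipy.stats import norm

K, r, sigma, T = 100.0, 0.02, 0.2, 1.0

def bs_call(S):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def bs_delta(S):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return norm.cdf(d1)

# Fit Gaussian RBFs to prices sampled on a spot grid.
S_nodes = np.linspace(60.0, 140.0, 25)
eps = 0.3
A = np.exp(-(eps * (S_nodes[:, None] - S_nodes[None, :])) ** 2)
c = np.linalg.solve(A, bs_call(S_nodes))

def rbf_delta(S):
    # d/dS of exp(-eps^2 (S - S_j)^2), taken in closed form.
    diff = S[:, None] - S_nodes[None, :]
    return (-2.0 * eps**2 * diff * np.exp(-(eps * diff) ** 2)) @ c

S = np.linspace(80.0, 120.0, 5)
print("RBF Delta:     ", rbf_delta(S))
print("analytic Delta:", bs_delta(S))
```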

    Multi-Objective Optimization of Mixed-Variable, Stochastic Systems Using Single-Objective Formulations

    Many problems exist where one desires to optimize systems with multiple, often competing, objectives. Further, these problems may not have a closed-form representation, and may also have stochastic responses. Recently, a method extended mixed-variable generalized pattern search/ranking and selection (MVPS-RS) and Mesh Adaptive Direct Search (MADS), developed for single-objective stochastic problems, to the multi-objective case by using aspiration and reservation levels. However, the success of this method in approximating the true Pareto solution set can depend on several factors, including the experimental design and ranges of the aspiration and reservation levels, and the approximation quality of the nadir point. Additionally, a termination criterion for this method does not yet exist. In this thesis, these aspects are explored. Furthermore, there may be alternatives or additions to this method that can save both computational time and function evaluations, including the use of surrogates as approximating functions and the extension of proven single-objective formulations. In this thesis, two new approaches are developed that make use of all of these previously existing methods in combination.
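
    A minimal sketch of the aspiration/reservation-level idea: scalarize two objectives with an achievement-type function and hand the result to a generic single-objective optimizer. The thesis works with MVPS-RS/MADS on stochastic mixed-variable problems; the toy problem and scipy's Nelder-Mead below are illustrative stand-ins.

```python
import numpy as np
from scipy.optimize import minimize

def objectives(x):
    # A classic bi-objective toy problem (both objectives minimized).
    f1 = x[0] ** 2 + x[1] ** 2
    f2 = (x[0] - 2.0) ** 2 + x[1] ** 2
    return np.array([f1, f2])

def achievement(x, asp, res, rho=1e-4):
    # Normalized distance past the aspiration level, per objective.
    z = (objectives(x) - asp) / (res - asp)
    return np.max(z) + rho * np.sum(z)

asp = np.array([0.0, 0.0])   # aspiration levels (hoped-for values)
res = np.array([4.0, 4.0])   # reservation levels (worst acceptable)

result = minimize(achievement, x0=np.array([1.0, 1.0]), args=(asp, res),
                  method="Nelder-Mead")
print("x* =", result.x, " f(x*) =", objectives(result.x))
# Sweeping (asp, res) over a design traces out an approximate Pareto set.
```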

    Learning Theory and Approximation

    The main goal of this workshop – the third of this type at the MFO – has been to blend mathematical results from statistical learning theory and approximation theory in order to strengthen both disciplines and use synergistic effects to work on current research questions. Learning theory aims at modeling unknown function relations and data structures from samples in an automatic manner. Approximation theory is naturally used for, and closely connected to, the further development of learning theory, in particular for the exploration of new useful algorithms and for the theoretical understanding of existing methods. Conversely, the study of learning theory also gives rise to interesting theoretical problems for approximation theory, such as the approximation and sparse representation of functions or the construction of rich reproducing kernel Hilbert spaces on general metric spaces. This workshop concentrated on the following recent topics: pitchfork bifurcation of dynamical systems arising from mathematical foundations of cell development; regularized kernel-based learning in the Big Data situation; deep learning; convergence rates of learning and online learning algorithms; numerical refinement algorithms for learning; and statistical robustness of regularized kernel-based learning.
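
    A minimal sketch of one workshop theme, regularized kernel-based learning: kernel ridge regression with a Gaussian kernel on synthetic data. The kernel, regularization strength, and data are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1.0, 1.0, (80, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(80)  # noisy samples

def gauss_kernel(A, B, gamma=10.0):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

lam = 1e-3                        # regularization strength
K = gauss_kernel(X, X)
# Regularized least squares in the RKHS: (K + lam*n*I) alpha = y.
alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)

X_test = np.linspace(-1.0, 1.0, 5)[:, None]
f_hat = gauss_kernel(X_test, X) @ alpha   # learned function at test points
print(np.column_stack([f_hat, np.sin(3.0 * X_test[:, 0])]))
```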