7 research outputs found

    A New Oscillating-Error Technique for Classifiers

    This paper describes a new method for reducing the error in a classifier. It uses an error-correction update with a very simple rule: the error adjustment is either added or subtracted, depending on whether the variable's value is currently larger or smaller than the desired value. While a traditional neuron sums its inputs and then applies a function to the total, the new method can change the function decision for each input value. This adds flexibility to the convergence procedure: through a series of transpositions, variables that are far away continue moving towards the desired value, whereas variables that start much closer oscillate from one side of it to the other. Tests show that the method successfully classifies some benchmark datasets. It also works in a batch mode with reduced training times, and it can be used as part of a neural network architecture. Some comparisons with an earlier wave-shape paper are also made.
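    The add-or-subtract rule described in the abstract can be sketched in a few lines. This is an illustrative toy only; the function name, step size, and loop below are my own assumptions, not code from the paper.

```python
def oscillating_update(value, desired, adjustment):
    """Subtract the adjustment if the value overshoots the target,
    add it if the value undershoots -- the paper's simple either/or rule."""
    if value > desired:
        return value - adjustment
    return value + adjustment

# A variable far from the target keeps moving toward it, while one that
# gets close starts oscillating from one side of the target to the other.
v = 0.0
trace = []
for _ in range(8):
    v = oscillating_update(v, 1.0, 0.3)
    trace.append(round(v, 2))
# trace climbs toward 1.0, then alternates around it:
# [0.3, 0.6, 0.9, 1.2, 0.9, 1.2, 0.9, 1.2]
```

    Note how the fixed step size produces exactly the behaviour the abstract describes: steady convergence while far away, oscillation once within one adjustment of the target.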

    An Approach for the Adaptive Solution of Optimization Problems Governed by Partial Differential Equations with Uncertain Coefficients

    Using derivative-based numerical optimization routines to solve optimization problems governed by partial differential equations (PDEs) with uncertain coefficients is computationally expensive due to the large number of PDE solves required at each iteration. In this thesis, I present an adaptive stochastic collocation framework for the discretization and numerical solution of these PDE-constrained optimization problems. The adaptive approach is based on dimension-adaptive sparse grid interpolation and employs trust regions to manage the adapted stochastic collocation models. Furthermore, I prove the convergence of sparse grid collocation methods applied to these optimization problems, as well as the global convergence of the retrospective trust-region algorithm under weakened assumptions on gradient inexactness. In fact, if one can bound the error between the actual and modeled gradients using reliable and efficient a posteriori error estimators, then the global convergence of the proposed algorithm follows. Moreover, I describe a high-performance implementation of my adaptive collocation and trust-region framework in the C++ programming language with the Message Passing Interface (MPI). Many PDE solves are required to accurately quantify the uncertainty in such optimization problems; it is therefore essential to choose appropriately inexpensive approximate models and large-scale nonlinear programming techniques throughout the optimization routine. Numerical results for the adaptive solution of these optimization problems are presented.
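    The trust-region model management referred to above rests on the classic actual-versus-predicted reduction test. The following is a generic textbook sketch of that test, not the thesis's implementation; the thresholds and the halving/doubling rule are conventional illustrative choices.

```python
def trust_region_update(actual_reduction, predicted_reduction,
                        radius, eta1=0.25, eta2=0.75):
    """Compare the reduction the (possibly inexact, e.g. sparse-grid)
    model predicted with the reduction actually achieved, then accept or
    reject the step and resize the trust region accordingly."""
    rho = actual_reduction / predicted_reduction
    if rho < eta1:               # model unreliable here: reject and shrink
        return False, 0.5 * radius
    if rho > eta2:               # model very accurate: accept and expand
        return True, 2.0 * radius
    return True, radius          # acceptable model: accept, keep radius

# A step whose actual decrease matches the model's prediction is
# accepted and the trust region grows:
accepted, new_radius = trust_region_update(1.0, 1.0, radius=1.0)
```

    The point of managing adapted collocation models this way is that a cheap, coarse model may be used as long as the ratio test keeps passing; the region shrinks (and the model is refined) only where the prediction fails.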

    Multiscale Simulation of Polymeric Fluids using Sparse Grids

    The numerical simulation of non-Newtonian fluids is of high practical relevance, since most complex fluids developed in the chemical industry are not correctly modeled by classical fluid mechanics. In this thesis, we implement a multiscale multi-bead-spring chain model in the three-dimensional Navier-Stokes solver NaSt3DGPF developed at the Institute for Numerical Simulation, University of Bonn. It is the first implementation of such a high-dimensional model for non-Newtonian fluids in a three-dimensional flow solver. Using this model, we present novel simulation results for a square-square contraction flow problem. We then compare the results of our 3D simulations with experimental measurements from the literature and obtain very good agreement. Up to now, high-dimensional multiscale approaches have hardly been used in practical applications, as they lead to computing times on the order of months even on massively parallel computers. This thesis combines two approaches to reduce this enormous computational complexity. First, we use a domain decomposition with MPI to allow for massively parallel computations. Second, we employ a dimension-adaptive sparse grid variant, the combination technique, to reduce the computational complexity of the multiscale model. Here, the combination technique is used in a general formulation that balances not only the different discretization errors but also considers the accuracy of the mathematical model.
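    The cost saving the combination technique buys can be illustrated with a toy 2-D count of unknowns. The assumptions below are my own simplification (nodal grids with 2^l + 1 points per direction, levels from 0); the thesis works in higher dimensions and with the dimension-adaptive variant.

```python
def points(l1, l2):
    """Unknowns on an anisotropic full grid of levels (l1, l2),
    with 2^l + 1 nodes per direction (boundary included)."""
    return (2 ** l1 + 1) * (2 ** l2 + 1)

def combination_unknowns(n):
    """Total unknowns the 2-D combination technique solves for:
    grids with |l|_1 = n (weight +1) plus grids with |l|_1 = n - 1
    (weight -1), each a small anisotropic full grid."""
    plus = sum(points(i, n - i) for i in range(n + 1))
    minus = sum(points(i, n - 1 - i) for i in range(n))
    return plus + minus

full = points(10, 10)                 # one isotropic level-10 full grid
combined = combination_unknowns(10)   # many small anisotropic grids
# combined is more than 40x smaller than full
```

    Each of the small grids can also be solved independently, which is what makes the technique combine so naturally with the MPI domain decomposition mentioned in the abstract.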

    Scientific Bulletin of the Tavria State Agrotechnological University: electronic professional scholarly publication. Issue 8, Vol. 2

    This issue presents the results of research in the fields of industrial machine building, power engineering, electrical engineering, electromechanics, food technologies, computer science, and information technology. The publication is intended for researchers, lecturers, postgraduate students, engineering and technical staff, and students specializing in the corresponding or related fields of science and areas of production.

    Classification with sparse grids using simplicial basis functions. Intelligent Data Analysis 6 (2002) 483–502

    Recently we presented a new approach [20] to the classification problem arising in data mining. It is based on the regularization network approach, but in contrast to other methods, which employ ansatz functions associated with data points, we use a grid in the usually high-dimensional feature space for the minimization process. To cope with the curse of dimensionality, we employ sparse grids [52]. Thus, only O(h_n^{-1} n^{d-1}) instead of O(h_n^{-d}) grid points and unknowns are involved. Here d denotes the dimension of the feature space and h_n = 2^{-n} gives the mesh size. We use the sparse grid combination technique [30], where the classification problem is discretized and solved on a sequence of conventional grids with uniform mesh sizes in each dimension. The sparse grid solution is then obtained by linear combination. The method computes a nonlinear classifier but scales only linearly with the number of data points, and it is well suited for data mining applications where the amount of data is very large but the dimension of the feature space is moderately high. In contrast to our former work, where d-linear functions were used, we now apply linear basis functions based on a simplicial discretization. This allows us to handle more dimensions, and the algorithm needs fewer operations per data point. We further extend the method to so-called anisotropic sparse grids, where different a priori chosen mesh sizes can be used for the discretization of each attribute. This can improve the run time of the method and the approximation results in the case of data sets whose attributes have different importance. We describe the sparse grid combination technique for the classification problem, give implementation details, and discuss the complexity of the algorithm. It turns out that the method scales linearly with the number of given data points. Finally, we report on the quality of the classifier built by our new method on data sets with up to 14 dimensions. We show that our new method achieves correctness rates which are competitive with those of the best existing methods.
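    The linear combination mentioned in the abstract has an explicit closed form. A minimal sketch of which anisotropic grids enter and with which coefficients follows; this is the general combination formula with levels starting at 0, written by me, not the authors' implementation.

```python
from itertools import product
from math import comb

def combination_terms(n, d):
    """Yield (coefficient, level multi-index) pairs of the sparse grid
    combination technique: grids with |l|_1 = n - q enter with
    coefficient (-1)^q * C(d-1, q), for q = 0, ..., d-1.  Each
    multi-index names one anisotropic full grid to discretize on."""
    for q in range(d):
        coeff = (-1) ** q * comb(d - 1, q)
        for l in product(range(n - q + 1), repeat=d):
            if sum(l) == n - q:
                yield coeff, l

# The coefficients always sum to 1, so a constant function is
# reproduced exactly by the combined solution:
assert sum(c for c, _ in combination_terms(4, 3)) == 1
```

    Because every partial problem is an ordinary full-grid discretization, an existing uniform-grid solver can be reused unchanged, which is part of the appeal of the technique for the classification setting described above.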