
    On error growth in the Bartels-Golub and Fletcher-Matthews algorithms for updating matrix factorizations

    The Bartels-Golub and Fletcher-Matthews methods are highly useful in iterative algorithms for constrained optimization calculations. We ask whether large error growth can occur if unfavorable conditions persist for several iterations. Pathological examples show serious loss of accuracy, which is worse for the Fletcher-Matthews algorithm. However, the calculations that may have large errors are less easy to recognize automatically if the Bartels-Golub algorithm is preferred. Some numerical results confirm the growth of errors, but they are atypical of ordinary calculations.

    A Multiclassifier Approach for Drill Wear Prediction

    Classification methods have been widely used in recent years to predict patterns and trends of interest in data. In the present paper, a multiclassifier approach that combines the output of some of the most popular data mining algorithms is presented. The approach is based on voting criteria: the confidence distribution of each algorithm is estimated individually, and the distributions are combined according to three different methods: confidence voting, weighted voting and majority voting. To illustrate its applicability to a real problem, drill wear detection in the machine-tool sector is addressed. In this study, the accuracy obtained by each isolated classifier is compared with the performance of the multiclassifier when characterizing the patterns of interest involved in the drilling process and predicting drill wear. Experimental results show that, in general, the false positives produced by the individual classifiers can be slightly reduced by using the multiclassifier approach.
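    The three voting schemes named in the abstract can be sketched as below. This is an illustrative combination of per-class confidence scores under generic assumptions, not the paper's exact implementation; the classifier outputs and class labels ("worn"/"sharp") are invented for the example.

```python
# Sketch of three ways to combine classifier outputs (illustrative,
# not the paper's exact scheme).

def majority_vote(predictions):
    """Pick the class label predicted by the most classifiers."""
    counts = {}
    for label in predictions:
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)

def confidence_vote(confidences):
    """Sum each classifier's confidence per class; pick the largest total."""
    totals = {}
    for dist in confidences:
        for label, p in dist.items():
            totals[label] = totals.get(label, 0.0) + p
    return max(totals, key=totals.get)

def weighted_vote(confidences, weights):
    """Like confidence voting, but each classifier gets a fixed weight."""
    totals = {}
    for w, dist in zip(weights, confidences):
        for label, p in dist.items():
            totals[label] = totals.get(label, 0.0) + w * p
    return max(totals, key=totals.get)

# Three hypothetical classifiers scoring a drilling signal:
dists = [
    {"worn": 0.6, "sharp": 0.4},
    {"worn": 0.3, "sharp": 0.7},
    {"worn": 0.7, "sharp": 0.3},
]
print(majority_vote([max(d, key=d.get) for d in dists]))  # worn (2 of 3)
print(confidence_vote(dists))                             # worn (1.6 vs 1.4)
```

    Weighted voting reduces to confidence voting when all weights are equal; in practice the weights would be estimated from each classifier's validation accuracy.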

    Isolating Stock Prices Variation with Neural Networks

    In this study we aim to define a mapping function that relates the general index value of a set of shares to the prices of the individual shares. In more general terms, this is the problem of defining the relationship between multivariate data distributions and a specific source of variation within these distributions, where the source of variation in question represents a quantity of interest related to a particular problem domain. In this respect we aim to learn a complex mapping function that can be used for mapping different values of the quantity of interest to typical novel samples of the distribution. In our investigation we compare the performance of standard neural network based methods such as Multilayer Perceptrons (MLPs) and Radial Basis Functions (RBFs), as well as Mixture Density Networks (MDNs) and a latent variable method, the Generative Topographic Mapping (GTM). As a reference benchmark for prediction accuracy we consider a simple method based on the average values over certain intervals of the quantity of interest that we are trying to isolate (the so-called Sample Average (SA) method). According to the results, MLPs and RBFs outperform MDNs and the GTM for this one-to-many mapping problem.
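    A baseline of the Sample Average kind can be sketched as a bucket-and-average scheme: partition the range of the quantity of interest (the index value) into intervals and predict the mean target within each interval. The interval edges, function names, and data below are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative Sample Average (SA) style baseline: bucket training
# samples by the index value and predict the per-bucket mean price.

def fit_sample_average(index_values, share_prices, edges):
    """Mean share price within each interval [edges[i], edges[i+1])."""
    sums = [0.0] * (len(edges) - 1)
    counts = [0] * (len(edges) - 1)
    for x, y in zip(index_values, share_prices):
        for i in range(len(edges) - 1):
            if edges[i] <= x < edges[i + 1]:
                sums[i] += y
                counts[i] += 1
                break
    return [s / c if c else None for s, c in zip(sums, counts)]

def predict_sample_average(means, edges, x):
    """Return the bucket mean for a new index value x (None if out of range)."""
    for i in range(len(edges) - 1):
        if edges[i] <= x < edges[i + 1]:
            return means[i]
    return None

# Made-up (index value, share price) pairs:
edges = [0, 100, 200, 300]
means = fit_sample_average([50, 80, 150, 160, 250], [10, 14, 20, 24, 31], edges)
print(predict_sample_average(means, edges, 90))  # 12.0, mean of the first bucket
```

    Such a baseline is deliberately crude, which is what makes it a useful reference point for the learned mappings.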

    Parallelization of the discrete gradient method of non-smooth optimization and its applications

    We investigate the parallelization and performance of the discrete gradient method of nonsmooth optimization. This derivative-free method is shown to be an effective optimization tool, able to skip many shallow local minima of nonconvex nondifferentiable objective functions. Although this is a sequential iterative method, we were able to parallelize critical steps of the algorithm, and this led to a significant improvement in performance on multiprocessor computer clusters. We applied this method to a difficult polyatomic clusters problem in computational chemistry, and found it to outperform other algorithms.
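    The parallelizable kernel of a derivative-free method like this is that each iteration needs one objective evaluation per coordinate perturbation, and those evaluations are mutually independent. A minimal sketch of that idea, not the authors' implementation (which targets multiprocessor clusters rather than threads):

```python
# Sketch: the per-coordinate function evaluations of a finite-difference
# style discrete gradient are independent, so they can be farmed out
# to a pool of workers.

from concurrent.futures import ThreadPoolExecutor

def discrete_gradient(f, x, h=1e-6):
    fx = f(x)
    def partial(i):
        xp = list(x)
        xp[i] += h
        return (f(xp) - fx) / h   # independent of every other coordinate
    with ThreadPoolExecutor() as pool:
        return list(pool.map(partial, range(len(x))))

# Nonsmooth test objective: f(x) = |x0| + |x1|
g = discrete_gradient(lambda x: abs(x[0]) + abs(x[1]), [1.0, -2.0])
print(g)  # approximately [1.0, -1.0]
```

    On a cluster the same pattern would use process- or MPI-based workers, since objective evaluations (e.g. cluster energies in computational chemistry) dominate the cost.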

    Correlation Testing in Nuclear Density Functional Theory

    Correlation testing provides a quick method of discriminating amongst potential terms to include in a nuclear mass formula or functional, and is a necessary tool for further nuclear mass models; however, a firm mathematical foundation for the method has not previously been set forth. Here, the necessary justification for correlation testing is developed and the motivation behind its use is given in more detail. Examples are provided to clarify the method analytically and for computational benchmarking. We provide a quantitative demonstration of the method's performance and shortcomings, highlighting also potential issues a user may encounter. In concluding, we suggest some possible future developments to address the limitations of the method. Comment: Accepted to EPJ-
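    The basic screening operation can be sketched as follows: compute the correlation between a candidate term, evaluated across a set of nuclei, and the residuals of the current mass model, and keep only terms that correlate strongly. This is a hedged illustration of the general idea with made-up numbers, not the paper's formalism.

```python
# Sketch of correlation testing: a candidate term that correlates
# strongly with the model residuals is a promising addition.

import math

def pearson(u, v):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

residuals = [0.9, 2.1, 2.9, 4.2]   # model minus experiment, per nucleus (made up)
candidate = [1.0, 2.0, 3.0, 4.0]   # candidate term evaluated on the same nuclei
r = pearson(candidate, residuals)
print(round(r, 3))  # close to 1: the term tracks the residual trend
```

    A shortcoming the abstract alludes to is visible even here: correlation with residuals says nothing about collinearity between candidate terms, so a high r alone does not guarantee the term adds independent information.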

    A bootstrap method for sum-of-poles approximations

    A bootstrap method is presented for finding efficient sum-of-poles approximations of causal functions. The method is based on a recursive application of the nonlinear least squares optimization scheme developed by Alpert et al. (SIAM J. Numer. Anal. 37:1138–1164, 2000), followed by the balanced truncation method for model reduction in computational control theory as a final optimization step. The method is expected to be useful for a fairly large class of causal functions encountered in engineering and applied physics. The performance of the method and its application to computational physics are illustrated via several numerical examples.
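    For orientation, a sum-of-poles representation writes f(s) as a finite sum of terms r_k/(s - p_k). The sketch below only evaluates such a representation and checks it against a known partial-fraction expansion; the paper's recursive least-squares fit and balanced-truncation step are not reproduced here.

```python
# Minimal illustration of a sum-of-poles representation:
#   f(s) ~ sum_k r_k / (s - p_k)
# checked against the exact partial fractions of 1/((s+1)(s+2)).

def sum_of_poles(residues, poles, s):
    return sum(r / (s - p) for r, p in zip(residues, poles))

# 1/((s+1)(s+2)) = 1/(s+1) - 1/(s+2): residues [1, -1], poles [-1, -2].
residues, poles = [1.0, -1.0], [-1.0, -2.0]
for s in [0.5, 1.0, 3.0]:
    exact = 1.0 / ((s + 1.0) * (s + 2.0))
    approx = sum_of_poles(residues, poles, s)
    print(abs(exact - approx) < 1e-12)  # True
```

    The point of the paper's method is to find a small set of (r_k, p_k) pairs approximating a causal function that is not already rational, where no exact finite expansion like this exists.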