254 research outputs found
On error growth in the Bartels-Golub and Fletcher-Matthews algorithms for updating matrix factorizations
The Bartels-Golub and Fletcher-Matthews methods are highly useful in iterative algorithms for constrained optimization calculations. We ask whether large error growth can occur if unfavorable conditions persist for several iterations. Pathological examples show serious loss of accuracy, which is worse for the Fletcher-Matthews algorithm; however, the calculations that may have large errors are harder to recognize automatically if the Bartels-Golub algorithm is preferred. Some numerical results confirm the growth of errors, but they are atypical of ordinary calculations.
A Multiclassifier Approach for Drill Wear Prediction
Classification methods have been widely used in recent years to predict patterns and trends of interest in data. In the present paper, a multiclassifier approach that combines the outputs of some of the most popular data mining algorithms is presented. The approach is based on voting criteria: the confidence distribution of each algorithm is estimated individually, and the distributions are combined according to three different methods: confidence voting, weighted voting and majority voting. To illustrate its applicability to a real problem, drill wear detection in the machine-tool sector is addressed. In this study, the accuracy obtained by each isolated classifier is compared with the performance of the multiclassifier when characterizing the patterns of interest involved in the drilling process and predicting drill wear. Experimental results show that, in general, the false positives produced by the individual classifiers can be slightly reduced by using the multiclassifier approach.
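The three vote-combination schemes named above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the class labels ("worn"/"ok") and the example confidence distributions are hypothetical stand-ins for real classifier outputs.

```python
def majority_voting(predictions):
    """Each classifier casts one vote for its top class."""
    tally = {}
    for label in predictions:
        tally[label] = tally.get(label, 0) + 1
    return max(tally, key=tally.get)

def confidence_voting(confidences):
    """Sum each classifier's confidence distribution per class."""
    tally = {}
    for dist in confidences:
        for label, p in dist.items():
            tally[label] = tally.get(label, 0.0) + p
    return max(tally, key=tally.get)

def weighted_voting(confidences, weights):
    """Like confidence voting, but each classifier is weighted,
    e.g. by its accuracy on a validation set."""
    tally = {}
    for dist, w in zip(confidences, weights):
        for label, p in dist.items():
            tally[label] = tally.get(label, 0.0) + w * p
    return max(tally, key=tally.get)

# Hypothetical example: three classifiers judging drill wear.
dists = [{"worn": 0.6, "ok": 0.4},
         {"worn": 0.3, "ok": 0.7},
         {"worn": 0.55, "ok": 0.45}]
print(majority_voting([max(d, key=d.get) for d in dists]))  # worn
print(confidence_voting(dists))
print(weighted_voting(dists, [0.9, 0.5, 0.8]))
```

Note how the schemes can disagree on the same inputs: majority voting sees two "worn" votes, while summing raw confidences favors "ok", which is exactly why combining them under different criteria is worth comparing.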
Isolating Stock Prices Variation with Neural Networks
In this study we aim to define a mapping function that relates the general index value of a set of shares to the prices of the individual shares. In more general terms, this is the problem of defining the relationship between multivariate data distributions and a specific source of variation within these distributions, where the source of variation in question represents a quantity of interest related to a particular problem domain. In this respect we aim to learn a complex mapping function that can be used to map different values of the quantity of interest to typical novel samples of the distribution. In our investigation we compare the performance of standard neural-network-based methods such as Multilayer Perceptrons (MLPs) and Radial Basis Functions (RBFs), as well as Mixture Density Networks (MDNs) and a latent variable method, the Generative Topographic Mapping (GTM). As a reference benchmark of prediction accuracy we consider a simple method based on average values over certain intervals of the quantity of interest that we are trying to isolate (the so-called Sample Average (SA) method). According to the results, MLPs and RBFs outperform MDNs and the GTM for this one-to-many mapping problem.
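The Sample Average (SA) baseline described above can be sketched simply: for a query value of the quantity of interest, return the mean of the training samples whose quantity falls in the same interval. The data, bin edges, and function name below are illustrative assumptions, not the paper's setup.

```python
def sample_average_predict(train, query, edges):
    """train: list of (q, x) pairs, x a feature vector;
    edges: sorted interval boundaries for the quantity q;
    returns the mean x over the bin containing query."""
    def bin_of(q):
        for i in range(len(edges) - 1):
            if edges[i] <= q < edges[i + 1]:
                return i
        return len(edges) - 2  # clamp out-of-range queries to the last bin
    b = bin_of(query)
    members = [x for q, x in train if bin_of(q) == b]
    n = len(members)
    dim = len(members[0])
    return [sum(x[d] for x in members) / n for d in range(dim)]

# Toy data: quantity of interest q paired with 2-dimensional samples.
train = [(0.1, [1.0, 2.0]), (0.2, [3.0, 4.0]), (0.8, [10.0, 12.0])]
print(sample_average_predict(train, 0.15, [0.0, 0.5, 1.0]))  # [2.0, 3.0]
```

Because it ignores everything except interval membership, SA gives a sensible floor on accuracy against which the learned one-to-many mappings (MLP, RBF, MDN, GTM) can be judged.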
Parallelization of the discrete gradient method of non-smooth optimization and its applications
We investigate the parallelization and performance of the discrete gradient method of nonsmooth optimization. This derivative-free method is shown to be an effective optimization tool, able to skip many shallow local minima of nonconvex nondifferentiable objective functions. Although it is a sequential iterative method, we were able to parallelize critical steps of the algorithm, and this led to a significant improvement in performance on multiprocessor computer clusters. We applied this method to a difficult polyatomic cluster problem in computational chemistry and found it to outperform other algorithms.
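The parallelizable "critical step" in methods of this kind is the many independent objective evaluations at displaced points. As a hedged illustration only, the sketch below uses plain forward differences as a stand-in for the discrete-gradient construction (an assumption; the actual discrete gradient method is more involved) and evaluates the displaced points concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_fd_gradient(f, x, h=1e-6, workers=4):
    """Approximate the gradient of f at x by forward differences,
    farming the independent displaced evaluations out to workers."""
    f0 = f(x)
    def partial(i):
        y = list(x)
        y[i] += h          # displace coordinate i
        return (f(y) - f0) / h
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(partial, range(len(x))))

def sphere(x):
    """Smooth test objective: sum of squares, gradient 2x."""
    return sum(t * t for t in x)

g = parallel_fd_gradient(sphere, [1.0, -2.0, 0.5])
print(g)  # close to [2.0, -4.0, 1.0]
```

On a cluster the same structure applies with processes or MPI ranks in place of threads; the speedup comes from the evaluations being embarrassingly parallel, while the outer iteration remains sequential.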
Correlation Testing in Nuclear Density Functional Theory
Correlation testing provides a quick method of discriminating among potential terms to include in a nuclear mass formula or functional, and is a necessary tool for further nuclear mass models; however, a firm mathematical foundation for the method has not previously been set forth. Here, the necessary justification for correlation testing is developed and more detail of the motivation behind its use is given. Examples are provided to clarify the method analytically and for computational benchmarking. We provide a quantitative demonstration of the method's performance and shortcomings, also highlighting potential issues a user may encounter. In concluding, we suggest some possible future developments to address the limitations of the method.
Comment: Accepted to EPJ-
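The core computation in a correlation test of this kind is a Pearson correlation between a candidate term and the residuals of the current model: a large |r| suggests the term captures structure the model misses. A minimal sketch, with synthetic data purely for illustration (the real test would use measured masses and model residuals):

```python
import math

def pearson(u, v):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

# Synthetic case: residuals constructed to be exactly linear in the
# candidate term, so the correlation should be 1 up to rounding.
term = [1.0, 2.0, 3.0, 4.0, 5.0]
residuals = [2.1 * t + 0.3 for t in term]
print(round(pearson(term, residuals), 6))  # 1.0
```

One known shortcoming the abstract alludes to is visible even here: correlation only detects linear association, so a term related to the residuals nonlinearly (or collinear with terms already in the model) can score misleadingly.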
Construction of radial basis function networks with diversified topologies
In this review we bring together some of our recent work from the angle of diversified RBF topologies, covering three variants: (i) the RBF network with tunable nodes; (ii) the Box-Cox output transformation based RBF network (Box-Cox RBF); and (iii) the RBF network with boundary value constraints (BVC-RBF). We show that the modified topologies have advantages over the conventional RBF topology for specific problems, and for each modified topology, model construction algorithms have been developed. These proposed RBF topologies aim, respectively, at: (i) flexible basis function shaping for improved model generalisation with a minimal model; (ii) effectively handling dynamical processes in which the model residuals exhibit heteroscedasticity; and (iii) achieving automatic constraint satisfaction so as to incorporate deterministic prior knowledge with ease. Advantageously, linear learning algorithms, e.g. the orthogonal forward selection (OFS) algorithm based on leave-one-out (LOO) criteria, remain applicable as part of the proposed algorithms.
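All three variants build on the conventional RBF network: a weighted sum of radial basis functions of the distance to learned centers. A minimal sketch with Gaussian bases follows; the centers, width, and weights here are arbitrary illustrative values, whereas in the work above they would be selected by construction algorithms such as OFS with LOO criteria.

```python
import math

def rbf_net(x, centers, width, weights):
    """Evaluate sum_k w_k * exp(-||x - c_k||^2 / width^2)."""
    out = 0.0
    for c, w in zip(centers, weights):
        d2 = sum((a - b) ** 2 for a, b in zip(x, c))  # squared distance to center
        out += w * math.exp(-d2 / width ** 2)
    return out

# Two Gaussian nodes in 2-D with hand-picked parameters.
centers = [[0.0, 0.0], [1.0, 1.0]]
weights = [1.0, -0.5]
print(rbf_net([0.0, 0.0], centers, width=1.0, weights=weights))
```

Because the output is linear in the weights once the nodes are fixed, linear techniques (least squares, OFS, LOO model selection) stay applicable, which is the property the modified topologies are designed to preserve.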
A bootstrap method for sum-of-poles approximations
A bootstrap method is presented for finding efficient sum-of-poles approximations of causal functions. The method is based on a recursive application of the nonlinear least squares optimization scheme developed in (Alpert et al. in SIAM J. Numer. Anal. 37:1138–1164, 2000), followed by the balanced truncation method for model reduction in computational control theory as a final optimization step. The method is expected to be useful for a fairly large class of causal functions encountered in engineering and applied physics. The performance of the method and its application to computational physics are illustrated via several numerical examples.
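The target representation is a sum of poles, f(s) ≈ Σ_k r_k / (s − p_k). As a small illustration of the form (not of the bootstrap method itself), the poles and residues below come from the exact partial-fraction expansion of 1/((s+1)(s+2)); the paper's method would determine such (p_k, r_k) for general causal functions by nonlinear least squares and balanced truncation.

```python
def sum_of_poles(s, poles, residues):
    """Evaluate sum_k r_k / (s - p_k) at a complex point s."""
    return sum(r / (s - p) for p, r in zip(poles, residues))

# Exact expansion: 1/((s+1)(s+2)) = 1/(s+1) - 1/(s+2).
poles = [-1.0, -2.0]
residues = [1.0, -1.0]

s = 0.5 + 1.0j
exact = 1.0 / ((s + 1.0) * (s + 2.0))
approx = sum_of_poles(s, poles, residues)
print(abs(exact - approx))  # ~0: the two-pole form is exact here
```

For genuinely non-rational causal functions the expansion is only approximate, and the payoff of a small pole count is that time-domain convolutions with f reduce to updating one scalar recurrence per pole.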