345 research outputs found

    Faster K-Means Cluster Estimation

    There has been considerable work on improving the popular clustering algorithm `K-means' in terms of both mean squared error (MSE) and speed. However, most k-means variants compute the distance of each data point to each cluster centroid in every iteration. We propose a fast heuristic to overcome this bottleneck with only a marginal increase in MSE. We observe that across all iterations of K-means, a data point changes its membership only among a small subset of clusters. Our heuristic predicts such clusters for each data point by looking at nearby clusters after the first iteration of k-means. We augment well-known variants of k-means with our heuristic to demonstrate its effectiveness. For various synthetic and real-world datasets, our heuristic achieves a speed-up of up to 3 times compared to efficient variants of k-means. Comment: 6 pages, Accepted at ECIR 201
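    The candidate-cluster idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the candidate-set size `t`, the plain Lloyd-style update, and the optional `init` parameter are all assumptions made for the sketch.

```python
import numpy as np

def kmeans_candidate_heuristic(X, k, t=3, n_iter=20, init=None, seed=0):
    """Sketch of the heuristic: after the first full iteration, each point
    only ever checks its t nearest centroids instead of all k of them."""
    rng = np.random.default_rng(seed)
    centroids = init.copy() if init is not None else X[rng.choice(len(X), k, replace=False)]

    # First iteration: full distance computation, record each point's
    # t nearest clusters as its candidate set for all later iterations.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    candidates = np.argsort(d, axis=1)[:, :t]
    labels = candidates[:, 0]

    for _ in range(n_iter):
        # Centroid update from the current assignment.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centroids[j] = pts.mean(axis=0)
        # Assignment restricted to each point's candidate clusters only.
        d_sub = np.linalg.norm(X[:, None, :] - centroids[candidates], axis=2)
        labels = candidates[np.arange(len(X)), np.argmin(d_sub, axis=1)]
    return labels, centroids
```

    The savings come from the restricted assignment step: it computes n·t distances per iteration instead of n·k.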

    Neural Network Methods for Boundary Value Problems Defined in Arbitrarily Shaped Domains

    Partial differential equations (PDEs) with Dirichlet boundary conditions defined on boundaries with simple geometry have been successfully treated using sigmoidal multilayer perceptrons in previous works. This article deals with the case of complex boundary geometry, where the boundary is determined by a number of points that belong to it and are closely located, so as to offer a reasonable representation. Two networks are employed: a multilayer perceptron and a radial basis function network. The latter is used to account for the satisfaction of the boundary conditions. The method has been successfully tested on two-dimensional and three-dimensional PDEs and has yielded accurate solutions.
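    The boundary-satisfaction role of the RBF network can be illustrated with a minimal sketch: one Gaussian unit per boundary point, with weights solved so the network reproduces the prescribed Dirichlet values exactly at those points. The architecture, the kernel width, and the function names here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def fit_rbf(boundary_pts, boundary_vals, width=1.0):
    """Fit an RBF network with one Gaussian centre per boundary point so
    that it interpolates the Dirichlet values at those points exactly."""
    P = np.asarray(boundary_pts, float)
    # Gram matrix of the Gaussian kernel over the boundary points.
    G = np.exp(-np.sum((P[:, None] - P[None]) ** 2, -1) / width ** 2)
    w = np.linalg.solve(G, np.asarray(boundary_vals, float))

    def rbf(x):
        g = np.exp(-np.sum((np.atleast_2d(x)[:, None] - P[None]) ** 2, -1) / width ** 2)
        return g @ w
    return rbf
```

    The Gaussian Gram matrix is positive definite for distinct points, so the interpolation weights always exist.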

    Global k-means++: an effective relaxation of the global k-means clustering algorithm

    The k-means algorithm is a very prevalent clustering method because of its simplicity, effectiveness, and speed, but its main disadvantage is its high sensitivity to the initial positions of the cluster centers. The global k-means is a deterministic algorithm proposed to tackle the random initialization problem of k-means, but it requires high computational cost. It partitions the data into K clusters by solving all k-means sub-problems incrementally for k = 1, …, K. For each k-cluster problem, the method executes the k-means algorithm N times, where N is the number of data points. In this paper, we propose the global k-means++ clustering algorithm, which is an effective way of acquiring clustering solutions of quality akin to those of global k-means with a reduced computational load. This is achieved by exploiting the center selection probability that is used in the effective k-means++ algorithm. The proposed method has been tested and compared on various well-known real and synthetic datasets, yielding very satisfactory results in terms of clustering quality and execution speed.
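    The incremental scheme described above can be sketched as follows: the solution grows from k = 1 to K, but instead of trying all N points as the new center (as global k-means does), a few candidates are sampled with the k-means++ d²-weighted probabilities. The function names, the number of candidates, and the plain Lloyd refinement are illustrative assumptions.

```python
import numpy as np

def lloyd(X, centers, n_iter=10):
    """Plain Lloyd iterations from a given initialization; returns the
    refined centers and the final sum of squared errors."""
    for _ in range(n_iter):
        d2 = ((X[:, None] - centers) ** 2).sum(-1)
        labels = d2.argmin(1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(len(centers))])
    return centers, ((X - centers[labels]) ** 2).sum()

def global_kmeanspp(X, K, n_candidates=5, seed=0):
    """Grow the solution from k=1 to K; each new center is picked from a
    few d^2-weighted candidates rather than from all N points."""
    rng = np.random.default_rng(seed)
    centers = [X.mean(axis=0)]                      # exact solution for k = 1
    for _ in range(2, K + 1):
        # k-means++ selection probability: proportional to squared distance
        # from the nearest existing center.
        d2 = np.min(((X[:, None] - np.array(centers)) ** 2).sum(-1), axis=1)
        idx = rng.choice(len(X), size=n_candidates, p=d2 / d2.sum())
        best, best_err = None, np.inf
        for i in idx:
            trial, err = lloyd(X, np.vstack([centers, X[i]]))
            if err < best_err:
                best, best_err = trial, err
        centers = list(best)
    return np.array(centers)
```

    Per added center this runs Lloyd only `n_candidates` times instead of N times, which is where the reduced computational load comes from.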

    A Variational Approach for Bayesian Blind Image Deconvolution


    A Set Membership Approach to Discovering Feature Relevance and Explaining Neural Classifier Decisions

    Neural classifiers are nonlinear systems providing decisions on the classes of patterns for a given problem that they have learned. The output computed by a classifier for each pattern constitutes an approximation of the output of some unknown function, mapping pattern data to their respective classes. The lack of knowledge of such a function, along with the complexity of neural classifiers, especially when these are deep learning architectures, makes it difficult to obtain information on how specific predictions have been made. Hence, these powerful learning systems are considered black boxes, and in critical applications their use tends to be considered inappropriate. Gaining insight into such a black-box operation is one way of interpreting the operation of neural classifiers and assessing the validity of their decisions. In this paper we tackle this problem by introducing a novel methodology for discovering which features are considered relevant by a trained neural classifier and how they affect the classifier's output, thus obtaining an explanation of its decision. Although feature relevance has received much attention in the machine learning literature, here we reconsider it in terms of nonlinear parameter estimation, targeted by a set membership approach which is based on interval analysis. Hence, the proposed methodology builds on sound mathematical approaches, and the results obtained constitute a reliable estimate of the classifier's decision premises.
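    The interval-analysis flavour of the approach can be illustrated with a small sketch: propagate guaranteed element-wise bounds through a one-hidden-layer tanh classifier, then score each feature by the width of the output interval when that feature alone is perturbed. This is a generic interval-arithmetic illustration, not the paper's exact estimator; the network shape and the relevance score are assumptions.

```python
import numpy as np

def interval_forward(W1, b1, W2, b2, lo, hi):
    """Propagate element-wise input bounds [lo, hi] through a one-hidden-
    layer tanh network, returning guaranteed bounds on the output."""
    # Affine layer: split weights by sign for tight interval images.
    Wp, Wn = np.maximum(W1, 0), np.minimum(W1, 0)
    h_lo = Wp @ lo + Wn @ hi + b1
    h_hi = Wp @ hi + Wn @ lo + b1
    # tanh is monotone, so it maps interval endpoints to endpoints.
    h_lo, h_hi = np.tanh(h_lo), np.tanh(h_hi)
    Wp, Wn = np.maximum(W2, 0), np.minimum(W2, 0)
    return Wp @ h_lo + Wn @ h_hi + b2, Wp @ h_hi + Wn @ h_lo + b2

def relevance(W1, b1, W2, b2, x, eps=0.1):
    """Crude per-feature relevance: width of the output interval when
    feature i alone varies by +/- eps around x."""
    scores = []
    for i in range(len(x)):
        lo, hi = x.copy(), x.copy()
        lo[i] -= eps
        hi[i] += eps
        out_lo, out_hi = interval_forward(W1, b1, W2, b2, lo, hi)
        scores.append(float((out_hi - out_lo).max()))
    return scores
```

    A feature the network never uses yields a zero-width output interval, so the score identifies irrelevant inputs with a guarantee rather than a heuristic.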

    Artificial Neural Networks for Solving Ordinary and Partial Differential Equations

    We present a method to solve initial and boundary value problems using artificial neural networks. A trial solution of the differential equation is written as a sum of two parts. The first part satisfies the boundary (or initial) conditions and contains no adjustable parameters. The second part is constructed so as not to affect the boundary conditions. This part involves a feedforward neural network containing adjustable parameters (the weights). Hence by construction the boundary conditions are satisfied and the network is trained to satisfy the differential equation. The applicability of this approach ranges from single ODEs to systems of coupled ODEs and also to PDEs. In this article we illustrate the method by solving a variety of model problems and present comparisons with finite elements for several cases of partial differential equations. Comment: LaTeX file, 26 pages, 21 figs, submitted to IEEE TN
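    The two-part trial-solution construction can be sketched for an initial value problem y(x0) = y0. The training loop is omitted; the network below is a stand-in with random weights, and the function names are assumptions made for the sketch.

```python
import numpy as np

def make_net(hidden=8, seed=0):
    """A stand-in one-hidden-layer network N(x); in the method its weights
    would be trained to minimise the differential-equation residual."""
    rng = np.random.default_rng(seed)
    w1, b1, w2 = rng.standard_normal((3, hidden))
    return lambda x: np.tanh(np.outer(x, w1) + b1) @ w2

def trial_solution(net, x, x0=0.0, y0=1.0):
    """psi(x) = y0 + (x - x0) * N(x): the first part satisfies the initial
    condition and has no adjustable parameters; the second part vanishes
    at x0, so it cannot disturb the condition however N is trained."""
    return y0 + (x - x0) * net(x)
```

    Because the factor (x - x0) kills the network term at x0, the condition holds exactly for any weights, and the optimizer is left with an unconstrained problem: minimize the residual of the differential equation alone.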

    Artificial Neural Network Methods in Quantum Mechanics

    In a previous article we have shown how one can employ Artificial Neural Networks (ANNs) in order to solve non-homogeneous ordinary and partial differential equations. In the present work we consider the solution of eigenvalue problems for differential and integrodifferential operators, using ANNs. We start by considering the Schrödinger equation for the Morse potential, which has an analytically known solution, to test the accuracy of the method. We then proceed with the Schrödinger and Dirac equations for a muonic atom, as well as with a non-local Schrödinger integrodifferential equation that models the n+α system in the framework of the resonating group method. In two dimensions we consider the well-studied Hénon-Heiles Hamiltonian, and in three dimensions the model problem of three coupled anharmonic oscillators. The method proved to be highly accurate, robust and efficient in all of the treated cases. Hence it is a promising tool for tackling problems of higher complexity and dimensionality. Comment: LaTeX file, 29 pages, 11 psfigs, submitted in CP

    Supersymmetric hybrid inflation in the braneworld scenario

    In this paper we reconsider supersymmetric hybrid inflation in the context of the braneworld scenario. The observational bounds are satisfied with an inflationary energy scale μ ≃ 4×10⁻⁴ M_p, without any fine-tuning of the coupling parameter, provided that the five-dimensional Planck scale is M₅ ≲ 2×10⁻³ M_p. We have also obtained an upper bound on the brane tension. Comment: 8 pages (LaTeX)