
    The Parameter-Less Self-Organizing Map algorithm

    The Parameter-Less Self-Organizing Map (PLSOM) is a new neural network algorithm based on the Self-Organizing Map (SOM). It eliminates the need for a learning rate and for annealing schemes for the learning rate and neighbourhood size. We discuss the relative performance of the PLSOM and the SOM and demonstrate some tasks in which the SOM fails but the PLSOM performs satisfactorily. Finally, we discuss some example applications of the PLSOM and present a proof of ordering under certain limited conditions. Comment: 29 pages, 27 figures. Based on a publication in IEEE Trans. on Neural Networks.
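    In the original formulation, the annealed learning rate is replaced by a data-driven scaling variable: the current quantization error divided by the largest error observed so far. A minimal sketch of one update step on a one-dimensional map follows; the exact neighbourhood function and the `beta` constant here are simplifications for illustration, not the paper's precise definitions:

```python
import numpy as np

def plsom_step(weights, x, r_max, beta=2.0):
    """One PLSOM update: no learning rate, no annealing schedule.

    weights : (n_nodes, dim) map weights (nodes on a 1-D lattice)
    x       : (dim,) input sample
    r_max   : largest quantization error seen so far
    beta    : neighbourhood range constant (the only free parameter)
    """
    dists = np.linalg.norm(weights - x, axis=1)
    winner = np.argmin(dists)
    err = dists[winner]                       # current quantization error
    r_max = max(r_max, err)                   # running maximum of the error
    eps = err / r_max if r_max > 0 else 0.0   # scaling variable in [0, 1]

    # Neighbourhood width shrinks automatically as the map fits the data.
    theta = beta * eps
    nodes = np.arange(len(weights))
    h = np.exp(-((nodes - winner) ** 2) / (theta ** 2 + 1e-12))

    # eps replaces the annealed learning rate of the classic SOM.
    weights = weights + eps * h[:, None] * (x - weights)
    return weights, r_max

# Usage: weights, r_max = plsom_step(weights, sample, r_max)
```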

    Geometry of Morphogenesis

    We introduce a formalism for the geometry of eukaryotic cells and organisms. Cells are taken to be star-convex, with good biological reason. This allows for a convenient description of their extent in space as well as of all manner of cell-surface gradients. We assume that a spectrum of such cell-surface markers determines an epigenetic code for organism shape. The union of cells in space at a moment in time is by definition the organism, taken as a metric subspace of Euclidean space, which can be further equipped with an arbitrary measure. Each cell determines a point in space, thus assigning a finite configuration of distinct points in space to an organism, and a bundle over this configuration space is introduced whose fiber is a Hilbert space recording specific epigenetic data. On this bundle, a Lagrangian formulation of morphogenetic dynamics is proposed, based on the Gromov-Hausdorff distance, which at once describes both embryo development and regenerative growth.
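    For reference, the Gromov-Hausdorff distance invoked for the dynamics is the standard one: the smallest Hausdorff distance achievable after isometrically embedding both spaces into a common metric space. This definition is general background, not taken from the abstract itself:

```latex
% Gromov-Hausdorff distance between compact metric spaces X and Y,
% where \varphi : X -> Z and \psi : Y -> Z range over isometric
% embeddings into a common metric space (Z, d_Z):
\[
d_{GH}(X, Y) \;=\; \inf_{Z,\,\varphi,\,\psi}\, d_H^{Z}\bigl(\varphi(X),\, \psi(Y)\bigr),
\]
% with the Hausdorff distance in Z given by
\[
d_H^{Z}(A, B) \;=\; \max\Bigl\{\, \sup_{a \in A}\inf_{b \in B} d_Z(a, b),\;
                                  \sup_{b \in B}\inf_{a \in A} d_Z(a, b) \Bigr\}.
\]
```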

    Mean-field optimal control and optimality conditions in the space of probability measures

    We derive a framework to compute optimal controls for problems with states in the space of probability measures. Since many optimal control problems constrained by a system of ordinary differential equations (ODE) modelling interacting particles converge, in the mean-field limit, to optimal control problems constrained by a partial differential equation (PDE), it is interesting to have a calculus directly on the mesoscopic level of probability measures which allows us to derive the corresponding first-order optimality system. In addition to this new calculus, we relate the resulting system to the first-order optimality system derived on the particle level, and to the first-order optimality system based on L^2-calculus under additional regularity assumptions. We further justify the use of the L^2-adjoint in numerical simulations by establishing a link between the adjoint in the space of probability measures and the adjoint corresponding to L^2-calculus. Moreover, we prove a convergence rate for the optimal controls of the particle formulation to the optimal controls of the mean-field problem as the number of particles tends to infinity.
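    For orientation, here is a generic instance of the setting, written as a sketch: the dynamics f and running cost L are placeholders, not the paper's specific model:

```latex
% Particle level: N interacting particles driven by a common control u
\[
\dot{x}_i(t) \;=\; f\bigl(x_i(t), \mu^N_t, u(t)\bigr), \qquad
\mu^N_t \;=\; \frac{1}{N}\sum_{i=1}^{N} \delta_{x_i(t)}.
\]
% Mean-field limit: the empirical measure converges to \mu_t solving the PDE
\[
\partial_t \mu_t + \nabla \cdot \bigl( f(x, \mu_t, u(t))\, \mu_t \bigr) \;=\; 0.
\]
% Control problem posed directly on the mesoscopic level of measures
\[
\min_{u} \; J(u) \;=\; \int_0^T \!\! \int L\bigl(x, \mu_t, u(t)\bigr)\, \mathrm{d}\mu_t(x)\, \mathrm{d}t.
\]
```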

    Pointwise convergence of the Lloyd algorithm in higher dimension

    We establish the pointwise convergence of the iterative Lloyd algorithm, also known as the k-means algorithm, when the quadratic quantization error of the starting grid (of size N ≥ 2) is lower than the minimal quantization error at level N−1 with respect to the input distribution. Such a protocol is known as the splitting method and allows for convergence even when the input distribution has unbounded support. We also show, under a very light assumption, that the resulting limiting grid still has full size N. These results are obtained without any continuity assumption on the input distribution. A variant of the procedure, taking advantage of the asymptotics of the optimal quantizer radius, is proposed which always guarantees the boundedness of the iterated grids.
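    Each Lloyd iteration alternates a nearest-neighbour assignment with a centroid update. A minimal empirical sketch, assuming i.i.d. samples stand in for the input distribution; the splitting initialization is only indicated in a comment:

```python
import numpy as np

def lloyd_step(grid, samples):
    """One Lloyd (k-means) iteration.

    grid    : (N, dim) current quantization grid
    samples : (M, dim) i.i.d. draws from the input distribution
    """
    # Assign each sample to its nearest grid point (its Voronoi cell).
    d2 = ((samples[:, None, :] - grid[None, :, :]) ** 2).sum(axis=2)
    cells = d2.argmin(axis=1)

    # Move each grid point to the centroid of its cell; an empty cell
    # keeps its old location (the paper shows the limiting grid keeps
    # full size N under a light assumption).
    new_grid = grid.copy()
    for j in range(len(grid)):
        pts = samples[cells == j]
        if len(pts) > 0:
            new_grid[j] = pts.mean(axis=0)
    return new_grid

# Splitting-method idea: start the size-N run from a grid whose quadratic
# quantization error already beats the optimal level-(N-1) error, e.g. by
# appending one well-placed point to an optimized size-(N-1) grid.
```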

    A Survey of Adaptive Resonance Theory Neural Network Models for Engineering Applications

    This survey samples from the ever-growing family of adaptive resonance theory (ART) neural network models used to perform the three primary machine learning modalities, namely, unsupervised, supervised and reinforcement learning. It comprises a representative list of ART models, from classic to modern, thereby painting a general picture of the architectures developed by researchers over the past 30 years. The learning dynamics of these ART models are briefly described, and their distinctive characteristics, such as code representation, long-term memory and the corresponding geometric interpretation, are discussed. Useful engineering properties of ART (speed, configurability, explainability, parallelization and hardware implementation) are examined along with current challenges. Finally, a compilation of online software libraries is provided. It is expected that this overview will be helpful to new and seasoned ART researchers.
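    As a concrete taste of the family's shared mechanics, here is a minimal sketch of Fuzzy ART, one classic unsupervised member: a vigilance test decides whether an input resonates with an existing category (whose weights form the long-term memory) or recruits a new one. The parameter values shown are illustrative defaults, not recommendations from the survey:

```python
import numpy as np

def fuzzy_art_train(data, rho=0.75, alpha=0.001, beta=1.0):
    """Minimal unsupervised Fuzzy ART: one pass over complement-coded data.

    data  : (M, dim) inputs scaled to [0, 1]
    rho   : vigilance -- higher values create more, tighter categories
    alpha : choice parameter
    beta  : learning rate (beta = 1.0 is "fast learning")
    """
    X = np.hstack([data, 1.0 - data])   # complement coding
    W = []                              # long-term memory: one weight row per category
    for x in X:
        # Category choice: activation of each committed category.
        T = [np.minimum(x, w).sum() / (alpha + w.sum()) for w in W]
        for j in np.argsort(T)[::-1]:   # search categories by decreasing activation
            match = np.minimum(x, W[j]).sum() / x.sum()
            if match >= rho:            # vigilance test passed: resonance, so learn
                W[j] = beta * np.minimum(x, W[j]) + (1 - beta) * W[j]
                break
        else:                           # no category matched: recruit a new one
            W.append(x.copy())
    return np.array(W)

# Usage: categories = fuzzy_art_train(np.random.rand(100, 2))
```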