Approximation with Random Bases: Pro et Contra
In this work we discuss the problem of selecting suitable approximators from
families of parameterized elementary functions that are known to be dense in a
Hilbert space of functions. We consider and analyze published procedures, both
randomized and deterministic, for selecting elements from these families that
have been shown to ensure a convergence rate in norm of order O(1/√N),
where N is the number of elements. We show that both randomized and
deterministic procedures are successful if additional information about the
families of functions to be approximated is provided. In the absence of such
additional information, one may observe exponential growth of the number of
terms needed to approximate the function and/or extreme sensitivity of the
outcome of the approximation to parameters. Implications of our analysis for
applications of neural networks in modeling and control are illustrated with
examples.
The Blessing of Dimensionality: Separation Theorems in the Thermodynamic Limit
We consider and analyze properties of large sets of randomly selected (i.i.d.) points in high dimensional spaces. In particular, we consider the problem of whether a single data point that is randomly chosen from a finite set of points can be separated from the rest of the data set by a linear hyperplane. We formulate and prove stochastic separation theorems, including: 1) with probability close to one a random point may be separated from a finite random set by a linear functional; 2) with probability close to one for every point in a finite random set there is a linear functional separating this point from the rest of the data. The total number of points in the random sets is allowed to be exponentially large with respect to dimension. Various laws governing distributions of points are considered, and explicit formulae for the probability of separation are provided. These theorems reveal an interesting implication for machine learning and data mining applications that deal with large data sets (big data) and high-dimensional data (many attributes): simple linear decision rules and learning machines are surprisingly efficient tools for separating and filtering out arbitrarily assigned points in large dimensions.
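A minimal numerical sketch of this separation effect (illustrative only; the dimension, sample size, distribution, and threshold factor below are arbitrary choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 200, 10_000                       # dimension and number of points (assumed)
X = rng.uniform(-0.5, 0.5, size=(n, d))  # i.i.d. points in a centred unit cube

x = X[0]                                 # the point we try to separate
# Candidate linear functional: l(y) = <x, y>, thresholded just below <x, x>.
scores = X[1:] @ x
separated = bool(np.all(scores < 0.9 * (x @ x)))
print(separated)
```

In d = 200 the self-score ⟨x, x⟩ concentrates near d/12 while the cross-scores ⟨x, y⟩ concentrate near 0 with a much smaller spread, so a single hyperplane separates x from all other points with probability close to one, as the theorems assert.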
Semi-passivity and synchronization of diffusively coupled neuronal oscillators
We discuss synchronization in networks of neuronal oscillators which are interconnected via diffusive coupling, i.e. linearly coupled via gap junctions. In particular, we present sufficient conditions for synchronization in these networks using the theory of semi-passive and passive systems. We show that the conductance based neuronal models of Hodgkin-Huxley, Morris-Lecar, and the popular reduced models of FitzHugh-Nagumo and Hindmarsh-Rose all satisfy a semi-passivity property, i.e. the state trajectories of such a model remain oscillatory but bounded provided that the supplied (electrical) energy is bounded. As a result, for a wide range of coupling configurations, networks of these oscillators are guaranteed to possess ultimately bounded solutions. Moreover, we demonstrate that when the coupling is strong enough the oscillators become synchronized. Our theoretical conclusions are confirmed by computer simulations with coupled Hindmarsh-Rose and Morris-Lecar oscillators. Finally, we discuss possible "instabilities" in networks of oscillators induced by the diffusive coupling. © 2009 Elsevier B.V. All rights reserved.
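A toy simulation in the spirit of these results (a sketch, not the paper's setup: two identical FitzHugh-Nagumo cells, forward-Euler integration, and an arbitrary coupling gain):

```python
import numpy as np

def fhn(v, w, I=0.5):
    # FitzHugh-Nagumo right-hand side (a standard parameter choice)
    return v - v**3 / 3.0 - w + I, 0.08 * (v + 0.7 - 0.8 * w)

dt, steps, k = 0.01, 60_000, 1.0   # step size, horizon, coupling gain (assumed)
v = np.array([1.0, -1.0])          # start the two cells far apart
w = np.zeros(2)
for _ in range(steps):
    dv, dw = fhn(v, w)
    dv = dv + k * (v[::-1] - v)    # diffusive (gap-junction) coupling on v
    v, w = v + dt * dv, w + dt * dw

sync_error = abs(v[0] - v[1])
print(sync_error)                  # close to 0: the cells have synchronized
```

The coupling term k·(v_other − v_self) vanishes on the synchronization manifold v₀ = v₁, and for a sufficiently large gain k the mismatch between the two trajectories contracts to zero, matching the "strong enough coupling" condition in the abstract.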
Invariant template matching in systems with spatiotemporal coding: A matter of instability
We consider the design principles of algorithms that match templates to images subject to spatiotemporal encoding. Both templates and images are encoded as temporal sequences of samplings from spatial patterns. Matching is required to be tolerant to various combinations of image perturbations. These include ones that can be modeled as parameterized uncertainties such as image blur, luminance, and, as special cases, invariant transformation groups such as translation and rotations, as well as unmodeled uncertainties (noise). For a system to deal with such perturbations in an efficient way, they are to be handled through a minimal number of channels and by simple adaptation mechanisms. These normative requirements can be met within the mathematical framework of weakly attracting sets. We discuss explicit implementation of this principle in neural systems and show that it naturally explains a range of phenomena in biological vision, such as mental rotation, visual search, and the presence of multiple time scales in adaptation. We illustrate our results with an application to a realistic pattern recognition problem
Reconstructing dynamics of spiking neurons from input-output measurements in vitro
We provide a method to reconstruct the neural spike-timing behavior from input-output measurements. The proposed method ensures an accurate fit of a class of neuronal models to the relevant data, which in our case are the dynamics of the neuron's membrane potential. Our method enables us to deal with the problem that neuronal models, in general, do not belong to the class of models that can be transformed into the observer canonical form. In particular, we present a technique that guarantees successful model reconstruction of the spiking behavior for an extended Hindmarsh-Rose neuronal model. The technique is validated on data recorded in vitro from neural cells in the hippocampal area of the mouse brain.
Spatially constrained adaptive rewiring in cortical networks creates spatially modular small world architectures
A modular small-world topology in functional and anatomical networks of the cortex is eminently suitable as an information processing architecture. This structure was shown in model studies to arise adaptively; it emerges through rewiring of network connections according to patterns of synchrony in ongoing oscillatory neural activity. However, in order to improve the applicability of such models to the cortex, spatial characteristics of cortical connectivity, which were previously neglected, need to be respected. For this purpose we consider networks endowed with a metric by embedding them into a physical space. We provide an adaptive rewiring model with a spatial distance function and a corresponding spatially local rewiring bias. The spatially constrained adaptive rewiring principle is able to steer the evolving network topology to small world status, even more consistently so than without spatial constraints. Locally biased adaptive rewiring results in a spatial layout of the connectivity structure, in which topologically segregated modules correspond to spatially segregated regions, and these regions are linked by long-range connections. The principle of locally biased adaptive rewiring, thus, may explain both the topological connectivity structure and spatial distribution of connections between neuronal units in a large-scale cortical architecture.
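A spatially local rewiring bias of this kind can be sketched as follows (a hypothetical minimal implementation; the node count, the decay length λ, and the exponential distance kernel are assumptions for illustration, not the paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 100, 0.1                  # nodes and spatial decay length (assumed)
pos = rng.uniform(size=(n, 2))     # embed the network in the unit square

# A sparse random symmetric adjacency matrix to start from.
adj = rng.random((n, n)) < 0.05
adj = np.triu(adj, 1)
adj = adj | adj.T

def rewire_target(i):
    """Pick a new neighbour for node i with probability decaying in distance."""
    d = np.linalg.norm(pos - pos[i], axis=1)
    p = np.exp(-d / lam)           # spatially local bias
    p[i] = 0.0                     # no self-loops
    p[adj[i]] = 0.0                # skip existing neighbours
    return rng.choice(n, p=p / p.sum())

j = rewire_target(0)
```

Repeatedly rewiring according to synchrony patterns while drawing new targets from such a distance-weighted distribution is what, per the abstract, steers the network toward a spatially modular small-world layout.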
Non-uniform small-gain theorems for systems with unstable invariant sets
We consider the problem of small-gain analysis of asymptotic behavior in interconnected nonlinear dynamic systems. Mathematical models of these systems are allowed to be uncertain and time-varying. In contrast to standard small-gain theorems that require global asymptotic stability of each interacting component in the absence of inputs, we consider interconnections of systems that can be critically stable and have infinite input-output L∞ gains. For this class of systems we derive small-gain conditions specifying state boundedness of the interconnection. Estimates of the domain in which the system's state remains are also provided. Conditions that follow from the main results of our paper are non-uniform in space. That is, they hold, in general, only for a set of initial conditions in the system's state space. We show that under some mild continuity restrictions this set has a non-zero volume, hence such bounded yet potentially globally unstable motions are realizable with a non-zero probability. The proposed results can be used for the design and analysis of intermittent, itinerant and meta-stable dynamics, as arises in the control of chemical kinetics, biological and complex physical systems, and non-linear optimization.
One-trial correction of legacy AI systems and stochastic separation theorems
We consider the problem of efficient “on the fly” tuning of existing, or legacy, Artificial Intelligence (AI) systems. The legacy AI systems are allowed to be of arbitrary class, albeit the data they are using for computing interim or final decision responses should possess an underlying structure of a high-dimensional topological real vector space. The tuning method that we propose enables dealing with errors without the need to re-train the system. Instead of re-training, a simple cascade of perceptron nodes is added to the legacy system. The added cascade modulates the AI legacy system’s decisions. If applied repeatedly, the process results in a network of modulating rules “dressing up” and improving performance of existing AI systems. The mathematical rationale behind the method is based on the fundamental property of measure concentration in high dimensional spaces. The method is illustrated with an example of fine-tuning a deep convolutional network that has been pre-trained to detect pedestrians in images.
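A schematic of such a single-node corrector (an illustrative sketch exploiting measure concentration; the Gaussian features, the dimensions, and the 0.9 threshold factor are assumptions, not the trained network from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 200, 1000
Z = rng.standard_normal((n, d))   # high-dimensional features from a legacy system
x = Z[0]                          # one input the legacy system classified wrongly

# One added perceptron node: fires only on (a neighbourhood of) the error x.
mu = Z.mean(axis=0)
w = x - mu                        # separating direction (concentration of measure)
theta = 0.9 * (w @ (x - mu))      # threshold just below the score of x itself

def corrector(z):
    # True = modulate (flip) the legacy decision for this input
    return bool(w @ (z - mu) >= theta)

flips = sum(corrector(z) for z in Z)
print(flips)                      # 1: only the erroneous sample triggers
```

Because ⟨w, x − μ⟩ = ‖x − μ‖² is of order d while ⟨w, z − μ⟩ for unrelated samples is of order √d, the node fires on the error alone; no re-training of the legacy system is needed, which is the point of the one-trial correction.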