
    Tangled Nature: A model of emergent structure and temporal mode among co-evolving agents

    Understanding the system-level behaviour of many interacting agents is challenging in various ways; here we focus on how the interaction between components can lead to hierarchical structures with different types of dynamics, or causations, at different levels. We use the Tangled Nature model to discuss the co-evolutionary aspects connecting the microscopic level of the individual to the macroscopic system level. At the microscopic level the individual agent may undergo evolutionary changes due to mutations of strategies. The micro-dynamics always run at a constant rate. Nevertheless, the system-level dynamics exhibit a completely different type of intermittent, abrupt dynamics in which major upheavals keep throwing the system between meta-stable configurations. These dramatic transitions are described by log-Poisson time statistics. The long-time effect is a collectively adapted state of the ecological network. We discuss the ecological and macroevolutionary consequences of the adaptive dynamics and briefly describe work using the Tangled Nature framework to analyse problems in economics, sociology, innovation and sustainability.
    Comment: Invited contribution to Focus on Complexity in European Journal of Physics. 25 pages, 1 figure
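    The microscopic update loop of the model is simple enough to sketch. Below is a minimal, illustrative Python rendering of the standard Tangled Nature dynamics (a sigmoid reproduction weight driven by the interaction network, plus a constant death rate); the parameter values and the random interaction matrix are assumptions chosen for the sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 10            # genome length -> 2**L possible types
MU = 0.05         # resource-limitation parameter
K = 40.0          # coupling strength of the interaction network
P_MUT = 0.01      # per-bit mutation probability
P_KILL = 0.2      # constant death probability per attempt
STEPS = 200_000   # microscopic update steps

n_types = 2 ** L
# Fixed random interaction matrix J(a, b); no self-interaction.
J = rng.normal(size=(n_types, n_types))
np.fill_diagonal(J, 0.0)

# Population as a list of genotype indices; start from a single small clone.
pop = [0] * 50

def p_offspring(a, counts, N):
    """Sigmoid of the weight H(a) = (K/N) * sum_b J[a,b] n(b) - MU*N."""
    h = K * (J[a] @ counts) / N - MU * N
    return 1.0 / (1.0 + np.exp(-np.clip(h, -500, 500)))

for step in range(STEPS):
    N = len(pop)
    if N < 2:
        break
    counts = np.bincount(pop, minlength=n_types)
    # Reproduction attempt: fitness depends on who else is present.
    i = rng.integers(N)
    if rng.random() < p_offspring(pop[i], counts, N):
        child = pop[i]
        # Each genome bit mutates independently with probability P_MUT.
        for bit in range(L):
            if rng.random() < P_MUT:
                child ^= 1 << bit
        pop.append(child)
    # Death attempt at a constant rate, independent of type.
    j = rng.integers(len(pop))
    if rng.random() < P_KILL:
        pop.pop(j)
```

    Even this bare-bones version reproduces the qualitative picture the paper describes: long quiet epochs dominated by a few mutually supporting types, punctuated by abrupt reorganizations of the occupied genotypes.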

    Hierarchical Feature Learning

    The success of many tasks depends on good feature representation, which is often domain-specific and hand-crafted, requiring substantial human effort. Such feature representation is not general, i.e. unsuitable for even the same task across multiple domains, let alone different tasks.

    To address these issues, a multilayered convergent neural architecture is presented for learning from repeating spatially and temporally coincident patterns in data at multiple levels of abstraction. The bottom-up weights in each layer are learned to encode a hierarchy of overcomplete and sparse feature dictionaries from space- and time-varying sensory data. Two algorithms for learning feature hierarchies are investigated: recursive layer-by-layer spherical clustering and sparse coding. The model scales to full-sized high-dimensional input data and to an arbitrary number of layers, thereby having the capability to capture features at any level of abstraction. The model learns features that correspond to objects in higher layers and object parts in lower layers.

    Learning features invariant to arbitrary transformations in the data is a requirement for any effective and efficient representation system, biological or artificial. Each layer in the proposed network is composed of simple and complex sublayers motivated by the layered organization of the primary visual cortex. When exposed to natural videos, the model develops simple- and complex-cell-like receptive field properties. The model can predict by learning lateral connections among the simple-sublayer neurons. A topographic map of their spatial features emerges by minimizing wiring length simultaneously with feature learning.

    The model is general-purpose, unsupervised and online. Operations in each layer of the model can be implemented in parallelized hardware, making it very efficient for real-world applications.
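    As a rough illustration of the layer-by-layer idea, here is a minimal Python sketch of recursive spherical clustering with a simple thresholded sparse encoding between layers. The random stand-in data, dictionary sizes, and the particular encoding rule are assumptions made for the sketch; the thesis's full architecture (simple/complex sublayers, temporal coincidence, lateral connections) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def spherical_kmeans(X, k, iters=20):
    """Cluster unit-norm rows of X on the sphere (cosine similarity)."""
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
    D = X[rng.choice(len(X), k, replace=False)]      # init from data
    for _ in range(iters):
        assign = np.argmax(X @ D.T, axis=1)          # nearest centroid by cosine
        for j in range(k):
            members = X[assign == j]
            if len(members):                         # keep old centroid if empty
                c = members.sum(axis=0)
                D[j] = c / (np.linalg.norm(c) + 1e-8)
    return D

def encode(X, D, alpha=0.1):
    """Sparse code: rectified, thresholded cosine similarities."""
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
    return np.maximum(X @ D.T - alpha, 0.0)

# Layer 1: learn an overcomplete dictionary from raw input patches
# (random data here stands in for e.g. 8x8 image patches).
patches = rng.normal(size=(5000, 64))
D1 = spherical_kmeans(patches, k=128)                # 128 atoms > 64 dims
codes1 = encode(patches, D1)

# Layer 2: the same procedure, recursively, on layer-1 codes.
D2 = spherical_kmeans(codes1, k=256)
codes2 = encode(codes1, D2)
```

    The recursion is the point: each layer treats the sparse codes of the layer below as its input, so the same unsupervised operation yields progressively more abstract dictionaries.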

    Practical polynomial optimization through positivity certificates with and without denominators

    Positivity certificates, or Positivstellensätze, provide representations of polynomials positive on basic semialgebraic sets, i.e., sets defined by finitely many polynomial inequalities. The famous Putinar's Positivstellensatz states that every positive polynomial on a basic closed semialgebraic set S can be written as a linear weighted combination of the polynomials describing S, under a certain condition on S slightly stronger than compactness. When written in this way it becomes obvious that the polynomial is positive on S, and therefore this alternative description provides a certificate of positivity on S. Moreover, as the polynomial weights involved in Putinar's Positivstellensatz are sums of squares (SOS), such positivity certificates make it possible to design convex relaxations based on semidefinite programming to solve polynomial optimization problems (POPs) that arise in various real-life applications, e.g., in management of energy networks and machine learning, to name a few. Originally developed by Lasserre, the hierarchy of semidefinite relaxations based on Putinar's Positivstellensatz is called the Moment-SOS hierarchy. In this thesis, we provide polynomial optimization methods based on positivity certificates involving specific SOS weights, with and without denominators.
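    For orientation, the certificate at the heart of the hierarchy can be stated compactly (this is the standard formulation, not specific to this thesis):

```latex
% Putinar's Positivstellensatz. Let
%   S = { x in R^n : g_1(x) >= 0, ..., g_m(x) >= 0 }
% and suppose the quadratic module generated by g_1, ..., g_m is
% Archimedean (slightly stronger than compactness of S). Then every
% polynomial f with f > 0 on S admits the representation
\[
  f \;=\; \sigma_0 \;+\; \sum_{j=1}^{m} \sigma_j\, g_j,
  \qquad \sigma_0, \sigma_1, \dots, \sigma_m \in \Sigma[x],
\]
% where \Sigma[x] denotes the cone of sum-of-squares polynomials.
```

    Bounding the degrees of the SOS weights σ_j by 2d turns the search for such a certificate into a semidefinite program, and letting d grow yields the Moment-SOS hierarchy of lower bounds converging to the minimum of f over S.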

    Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience

    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the as yet unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data, and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.

    A Theory of Networks for Approximation and Learning

    Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known radial basis functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology-preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
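    A minimal numerical sketch of the GRBF idea follows: Gaussian units centered on fewer prototypes than examples, with output weights found by regularized least squares. The toy data, kernel width, and regularization value are illustrative assumptions, and the paper's extensions (such as optimizing the center positions themselves) are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_design(X, centers, sigma):
    """G[i, j] = exp(-||x_i - t_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

# Toy 1-D regression data.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=200)

# Fewer centers than examples: "generalized" RBF, not strict interpolation.
centers = np.linspace(-3, 3, 20).reshape(-1, 1)
sigma, lam = 0.5, 1e-2

# Regularized least squares for the output coefficients:
#   minimize ||G c - y||^2 + lam ||c||^2  =>  (G^T G + lam I) c = G^T y
G = gaussian_design(X, centers, sigma)
c = np.linalg.solve(G.T @ G + lam * np.eye(len(centers)), G.T @ y)

# Predict at new points by combining the radial units.
X_test = np.linspace(-3, 3, 7).reshape(-1, 1)
y_hat = gaussian_design(X_test, centers, sigma) @ c
```

    The centers play the role of the prototypes mentioned in the abstract: each is a stored exemplar, and prediction is an optimally weighted combination of similarities to them.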