
    Formal Modeling of Connectionism using Concurrency Theory, an Approach Based on Automata and Model Checking

    This paper illustrates a framework for applying formal methods techniques, which are symbolic in nature, to specifying and verifying neural networks, which are sub-symbolic in nature. The paper describes a communicating automata [Bowman & Gomez, 2006] model of neural networks. We also implement the model using timed automata [Alur & Dill, 1994] and then verify these models using the model checker Uppaal [Pettersson, 2000] in order to evaluate the performance of learning algorithms. The paper also discusses a number of broad issues in cognitive neuroscience, including the debate as to whether symbolic processing or connectionism is a suitable representation of cognitive systems, and the issue of integrating symbolic techniques, such as formal methods, with complex neural networks. We then argue that symbolic verification may give theoretically well-founded ways to evaluate and justify neural learning systems in both theoretical research and real-world applications.
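    The paper's actual verification runs in Uppaal on timed-automata models. As a loose, self-contained illustration of the underlying idea only — exhaustively exploring a learner's finite state space to check a convergence property — here is a minimal Python sketch. The dataset, weight bounds, and step budget are all invented for the example and are not taken from the paper.

```python
# Minimal sketch: bounded state-space exploration ("model checking" in
# spirit) of a discretized perceptron learner. This illustrates the idea of
# verifying a learning algorithm's convergence, not the paper's actual
# Uppaal/timed-automata models; all constants here are invented.

# Tiny linearly separable dataset: (inputs, label in {-1, +1}).
DATA = [((1, 1), 1), ((2, 1), 1), ((-1, -1), -1), ((-2, -1), -1)]
BOUND = 5  # weights clamped to [-BOUND, BOUND], so the state space is finite

def classify(w, x):
    return 1 if w[0] * x[0] + w[1] * x[1] > 0 else -1

def step(w):
    """One perceptron update on the first misclassified example.
    Returns the successor state, or None if w classifies everything."""
    for x, y in DATA:
        if classify(w, x) != y:
            return tuple(max(-BOUND, min(BOUND, wi + y * xi))
                         for wi, xi in zip(w, x))
    return None  # w is an accepting ("learning succeeded") state

def reaches_goal(w0, max_steps=50):
    """Bounded reachability check: does learning from w0 reach a correct
    classifier within max_steps transitions, without cycling?"""
    w, seen = w0, {w0}
    for _ in range(max_steps):
        nxt = step(w)
        if nxt is None:
            return True
        if nxt in seen:
            return False  # cycle detected: the learner oscillates
        seen.add(nxt)
        w = nxt
    return False

# Check the convergence property for every admissible initial state.
states = [(a, b) for a in range(-BOUND, BOUND + 1)
                 for b in range(-BOUND, BOUND + 1)]
print(all(reaches_goal(s) for s in states))  # True iff property holds everywhere
```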

    The Self-Organization of Speech Sounds

    The speech code is a vehicle of language: it defines a set of forms used by a community to carry information. Such a code is necessary to support the linguistic interactions that allow humans to communicate. How, then, may a speech code be formed prior to the existence of linguistic interactions? Moreover, the human speech code is discrete and compositional, shared by all the individuals of a community but different across communities, and phoneme inventories are characterized by statistical regularities. How can a speech code with these properties form? We approach these questions in this paper using the "methodology of the artificial": we build a society of artificial agents and detail a mechanism that shows the formation of a discrete speech code without pre-supposing the existence of linguistic capacities or of coordinated interactions. The mechanism is based on a low-level model of sensory-motor interactions. We show that the integration of certain very simple and non-language-specific neural devices leads to the formation of a speech code with properties similar to those of the human speech code. This result relies on the self-organizing properties of a generic coupling between perception and production within agents, and on the interactions between agents. The artificial system helps us develop better intuitions about how speech might have appeared, by showing how self-organization might have helped natural selection to find speech.
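    The paper specifies its mechanism at the level of neural maps; the following minimal Python sketch illustrates only the core perception-production coupling it describes. The agent count, prototype count, noise level, and learning rate are invented for the example and are not the paper's parameters.

```python
# Minimal sketch of the self-organization dynamic described above: agents
# hold continuous "vowel" prototypes, perturb one when speaking, and pull
# their nearest prototype toward what they hear. All constants below are
# illustrative assumptions, not values from the paper.
import random

N_AGENTS, N_PROTOS, ROUNDS = 10, 3, 20000
NOISE, LR = 0.05, 0.1

# Each agent starts with random prototypes in a 1-D acoustic space [0, 1].
agents = [[random.random() for _ in range(N_PROTOS)] for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    speaker, listener = random.sample(range(N_AGENTS), 2)
    # The speaker produces a noisy realization of one of its prototypes.
    sound = random.choice(agents[speaker]) + random.gauss(0, NOISE)
    # The listener shifts its closest prototype toward the perceived sound:
    # the generic perception-production coupling driving convergence.
    protos = agents[listener]
    i = min(range(N_PROTOS), key=lambda k: abs(protos[k] - sound))
    protos[i] += LR * (sound - protos[i])

# After many interactions the population's prototypes typically collapse
# onto a small set of shared, discrete targets: an emergent "speech code".
for a in agents[:3]:
    print([round(p, 2) for p in a])
```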

    The very same thing: Extending the object token concept to incorporate causal constraints on individual identity

    The contributions of feature recognition, object categorization, and recollection of episodic memories to the re-identification of a perceived object as the very same thing encountered in a previous perceptual episode are well understood in terms of both cognitive-behavioral phenomenology and neurofunctional implementation. Human beings do not, however, rely solely on features and context to re-identify individuals; in the presence of featural change and similarly-featured distractors, people routinely employ causal constraints to establish object identities. Based on available cognitive and neurofunctional data, the standard object-token based model of individual re-identification is extended to incorporate the construction of unobserved, and hence fictive, causal histories (FCHs) of observed objects by the pre-motor action planning system. Cognitive-behavioral and implementation-level predictions of this extended model, and methods for testing them, are outlined. It is suggested that functional deficits in the construction of FCHs are associated with clinical outcomes in both Autism Spectrum Disorders and later-stage Alzheimer's disease.

    Neural Ordinary Differential Equation Control of Dynamics on Graphs

    We study the ability of neural networks to calculate feedback control signals that steer trajectories of continuous-time non-linear dynamical systems on graphs, which we represent with neural ordinary differential equations (neural ODEs). To do so, we present a neural-ODE control (NODEC) framework and find that it can learn feedback control signals that drive graph dynamical systems into desired target states. While we use loss functions that do not constrain the control energy, our results show, in accordance with related work, that NODEC produces low-energy control signals. Finally, we evaluate the performance and versatility of NODEC against well-known feedback controllers and deep reinforcement learning. We use NODEC to generate feedback controls for systems of more than one thousand coupled, non-linear ODEs that represent epidemic processes and coupled oscillators.
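    A minimal sketch of the NODEC idea in PyTorch: a neural network produces a feedback control u(x, t), and the loss is backpropagated through the ODE integration to steer the terminal state toward a target. Plain Euler integration stands in for a proper adaptive ODE solver, and the toy dynamics, network sizes, and hyperparameters are assumptions for illustration, not the paper's setup.

```python
# Sketch of neural-ODE control: train a controller u(x, t) so that
# dx/dt = f(x) + u reaches a target state at time T. Euler integration
# and all constants below are illustrative assumptions.
import torch
import torch.nn as nn

N = 8                       # number of nodes in the toy graph system
A = torch.rand(N, N) * 0.1  # random coupling matrix (toy "graph")
x0 = torch.zeros(N)         # initial state
target = torch.ones(N)      # desired terminal state

def f(x):
    # Toy non-linear graph dynamics: coupled saturation plus decay.
    return torch.tanh(A @ x) - 0.5 * x

controller = nn.Sequential(nn.Linear(N + 1, 32), nn.Tanh(), nn.Linear(32, N))
opt = torch.optim.Adam(controller.parameters(), lr=1e-2)

T, steps = 1.0, 50
dt = T / steps

for epoch in range(200):
    x = x0
    for k in range(steps):
        t = torch.tensor([k * dt])
        u = controller(torch.cat([x, t]))  # feedback control signal
        x = x + dt * (f(x) + u)            # Euler step through the ODE
    loss = ((x - target) ** 2).mean()      # terminal-state loss only
    opt.zero_grad()
    loss.backward()                        # backprop through the integration
    opt.step()

print(f"final distance to target: {loss.item():.4f}")
```

    Note that, as in the abstract, nothing in this loss penalizes control energy; the observation in the paper is that the learned signals nevertheless tend to be low-energy.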

    Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review

    The paper characterizes classes of functions for which deep learning can be exponentially better than shallow learning. Deep convolutional networks are a special case of these conditions, though weight sharing is not the main reason for their exponential advantage.
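    For concreteness, a hedged example of the kind of function class at issue: hierarchically local compositional functions, where each constituent depends on only a few variables. The particular function below is invented for illustration; a deep network can mirror its binary-tree structure node by node, whereas a generic shallow approximator must treat it as an unstructured 8-variable function.

```python
# Invented example of a hierarchically local compositional function:
# a binary tree of 2-ary constituents. Deep networks can match this
# structure with a few units per tree node.
def h(a, b):
    return (a * b + a - b) ** 2 % 7  # arbitrary local 2-ary constituent

def f(x1, x2, x3, x4, x5, x6, x7, x8):
    # Each node of the tree depends on only two inputs.
    return h(h(h(x1, x2), h(x3, x4)), h(h(x5, x6), h(x7, x8)))

print(f(1, 2, 3, 4, 5, 6, 7, 8))
```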

    The Information Complexity of Learning Tasks, their Structure and their Distance

    We introduce an asymmetric distance in the space of learning tasks, and a framework to compute their complexity. These concepts are foundational for the practice of transfer learning, whereby a parametric model is pre-trained for one task and then fine-tuned for another. The framework we develop is non-asymptotic, captures the finite nature of the training dataset, and allows distinguishing learning from memorization. It encompasses, as special cases, classical notions from Kolmogorov complexity, Shannon information, and Fisher information. However, unlike some of those frameworks, it can be applied to large-scale models and real-world datasets. Our framework is the first to measure complexity in a way that accounts for the effect of the optimization scheme, which is critical in deep learning.
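    One ingredient the abstract names, Fisher information, can be illustrated with a short sketch. The snippet below estimates a diagonal empirical-Fisher proxy from squared per-sample gradients as a crude scalar "complexity" of what a trained model encodes about a task; the model and data are toy stand-ins and this is not the paper's actual framework.

```python
# Sketch: diagonal empirical Fisher information of a model's weights,
# one classical special case the abstract mentions. Toy model and data;
# invented for illustration, not the paper's construction.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(4, 3)          # toy classifier
X = torch.randn(100, 4)          # toy dataset
y = torch.randint(0, 3, (100,))

fisher = [torch.zeros_like(p) for p in model.parameters()]
for xi, yi in zip(X, y):
    model.zero_grad()
    loss = F.cross_entropy(model(xi.unsqueeze(0)), yi.unsqueeze(0))
    loss.backward()
    for fp, p in zip(fisher, model.parameters()):
        fp += p.grad ** 2        # accumulate squared per-sample gradients
fisher = [fp / len(X) for fp in fisher]

# Scalar summary: total Fisher mass, loosely how sensitively the task's
# loss depends on the weights the model has learned.
print(sum(fp.sum().item() for fp in fisher))
```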

    Learning in Artificial Neural Systems

    This paper presents an overview and analysis of learning in Artificial Neural Systems (ANSs). It begins with a general introduction to neural networks and connectionist approaches to information processing. The basis for learning in ANSs is then described and compared with classical machine learning. While similar in some ways, ANS learning deviates from tradition in its dependence on the modification of individual weights to bring about changes in a knowledge representation distributed across the connections of a network. This unique form of learning is analyzed from two aspects: the selection of an appropriate network architecture for representing the problem, and the choice of a suitable learning rule capable of reproducing the desired function within the given network. The various network architectures are classified and then identified with explicit restrictions on the types of functions they are capable of representing. The learning rules, i.e., algorithms that specify how the network weights are modified, are similarly taxonomized, and where possible, the limitations inherent to specific classes of rules are outlined.
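    As a minimal, self-contained example of the kind of learning rule such a taxonomy covers, here is the classical delta (Widrow-Hoff) rule, which nudges each weight in proportion to the input and the output error. The toy data and learning rate are invented for the example.

```python
# Minimal example of a weight-modification learning rule: the delta
# (Widrow-Hoff) rule on a single linear unit. Data and learning rate
# are invented toy choices.
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
t = np.array([0., 1., 1., 1.])                          # OR-like targets
w = np.zeros(2)
b = 0.0
lr = 0.1

for epoch in range(100):
    for x, target in zip(X, t):
        y = w @ x + b        # linear unit's output
        err = target - y     # error signal
        w += lr * err * x    # delta rule: dw = lr * err * x
        b += lr * err

print(w, b)  # weights converge toward the least-squares linear fit
```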