
    Automatic differentiation in machine learning: a survey

    Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "autodiff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings. Comment: 43 pages, 5 figures
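
    To make the core idea concrete, the following is a minimal sketch of forward-mode AD using dual numbers in Python; the Dual class and the derivative helper are illustrative names, not code from the survey. Each value carries its derivative alongside it, and arithmetic propagates both via the chain rule, so the result is exact rather than a finite-difference approximation.

        # Minimal forward-mode automatic differentiation via dual numbers.
        # A Dual carries a primal value and the derivative of that value
        # with respect to a chosen input; arithmetic propagates both by
        # the chain rule, so derivatives are exact, not approximated.
        import math

        class Dual:
            def __init__(self, value, deriv=0.0):
                self.value = value  # primal value
                self.deriv = deriv  # tangent (derivative) value

            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value + other.value, self.deriv + other.deriv)

            __radd__ = __add__

            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                # Product rule: (uv)' = u'v + uv'
                return Dual(self.value * other.value,
                            self.deriv * other.value + self.value * other.deriv)

            __rmul__ = __mul__

        def sin(x):
            # Chain rule for sin: d/dx sin(u) = cos(u) * u'
            if isinstance(x, Dual):
                return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)
            return math.sin(x)

        def derivative(f, x):
            # Seed the tangent with 1.0 to differentiate with respect to x.
            return f(Dual(x, 1.0)).deriv

        # d/dx [x * sin(x) + x] at x = 2.0; exact answer: sin(2) + 2*cos(2) + 1
        print(derivative(lambda x: x * sin(x) + x, 2.0))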

    Connectionist Inference Models

    The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of different representations and systems are outlined, and conclusions are drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.
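
    As one concrete illustration of the variable-binding schemes reviewed in this literature, the following is a minimal sketch of Smolensky-style tensor-product binding in Python with NumPy; the role and filler names are invented for the example, and orthonormal role vectors are assumed so that unbinding is exact.

        # Sketch of tensor-product variable binding: a filler vector is
        # bound to a role vector via their outer product, bindings are
        # superposed by addition, and a filler is recovered by projecting
        # the structure onto its role (exact when roles are orthonormal).
        import numpy as np

        rng = np.random.default_rng(0)

        def orthonormal_roles(n, dim):
            # Random orthonormal role vectors via QR decomposition.
            q, _ = np.linalg.qr(rng.standard_normal((dim, n)))
            return q.T  # shape (n, dim), rows orthonormal

        dim = 8
        agent, patient = orthonormal_roles(2, dim)
        john = rng.standard_normal(dim)  # filler "John"
        mary = rng.standard_normal(dim)  # filler "Mary"

        # Bind John to agent and Mary to patient; superpose the bindings.
        structure = np.outer(john, agent) + np.outer(mary, patient)

        # Unbind: project the structure onto the role vector.
        recovered = structure @ agent
        print(np.allclose(recovered, john))  # True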

    Dimensions of Neural-symbolic Integration - A Structured Survey

    Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular, the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to strive for applicable implementations and use cases. Recent work has covered a great variety of logics used in artificial intelligence and provides a multitude of techniques for dealing with them within the context of artificial neural networks. We present a comprehensive survey of the field of neural-symbolic integration, including a new classification of systems according to their architectures and abilities. Comment: 28 pages
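
    One classic technique in this field compiles propositional rules directly into network weights, in the spirit of KBANN-style rule insertion; the following Python sketch (all names invented) encodes the conjunctive rule C :- A, B as a single threshold unit over {0, 1} inputs.

        # Compiling a propositional rule into a threshold unit: the rule
        # C :- A, B (C holds if both A and B hold) becomes a unit whose
        # weights and bias implement conjunction over binary inputs.

        def threshold_unit(inputs, weights, bias):
            # Step activation: fires iff the weighted sum exceeds the bias.
            return 1 if sum(w * x for w, x in zip(weights, inputs)) > bias else 0

        def rule_as_unit(n_antecedents, w=1.0):
            # Conjunction of n antecedents: all must be active to fire.
            weights = [w] * n_antecedents
            bias = w * n_antecedents - w / 2  # between (n-1)*w and n*w
            return weights, bias

        weights, bias = rule_as_unit(2)
        for a in (0, 1):
            for b in (0, 1):
                print(f"A={a} B={b} -> C={threshold_unit((a, b), weights, bias)}")
        # Only A=1, B=1 yields C=1, matching the rule C :- A, B.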

    Different Approaches to Proof Systems

    The classical approach to proof complexity perceives proof systems as deterministic, uniform, surjective, polynomial-time computable functions that map strings to (propositional) tautologies. This approach has been intensively studied since the late 1970s and a lot of progress has been made. In recent years, research has begun investigating alternative notions of proof systems. There are interesting results stemming from dropping the uniformity requirement, allowing oracle access, using quantum computations, or employing probabilism. These lead to different notions of proof systems, for which we survey recent results in this paper.
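
    For reference, the classical notion alluded to here is the Cook-Reckhow definition of a propositional proof system, which can be stated as follows (a standard textbook formulation, not quoted from this paper):

        % Cook-Reckhow (1979): a proof system is a polynomial-time
        % computable surjection onto the set of tautologies.
        A function $f \colon \{0,1\}^{*} \to \mathrm{TAUT}$ is a
        \emph{propositional proof system} if $f$ is computable in
        polynomial time and surjective. A string $\pi$ with
        $f(\pi) = \varphi$ is an $f$-proof of the tautology $\varphi$,
        and $f$ is \emph{polynomially bounded} if there is a polynomial
        $p$ such that every $\varphi \in \mathrm{TAUT}$ has an $f$-proof
        $\pi$ with $|\pi| \le p(|\varphi|)$.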

    Mechanisms for the generation and regulation of sequential behaviour

    A critical aspect of much human behaviour is the generation and regulation of sequential activities. Such behaviour is seen both in naturalistic settings, such as routine action and language production, and in laboratory tasks, such as serial recall and many reaction time experiments. There are a variety of computational mechanisms that may support the generation and regulation of sequential behaviours, ranging from those underlying Turing machines to those employed by recurrent connectionist networks. This paper surveys a range of such mechanisms, together with a range of empirical phenomena related to human sequential behaviour. It is argued that the empirical phenomena pose difficulties for most sequencing mechanisms, but that converging evidence from behavioural flexibility, error data arising when the system is stressed or damaged following brain injury, and between-trial effects in reaction time tasks points to a hybrid symbolic activation-based mechanism for the generation and regulation of sequential behaviour. Some implications of this view for the nature of mental computation are highlighted.
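
    One widely cited activation-based sequencing mechanism of the kind surveyed here is competitive queuing: the most active item wins the competition, is produced, and is then suppressed so the next item can win. The following Python sketch is illustrative; the activation gradient and the noise term (standing in for stress or damage) are assumptions for the example, not the paper's model.

        # Sketch of competitive queuing: items start on a graded activation
        # gradient; at each step the most active item is selected and then
        # strongly inhibited. Noise on the activations produces the kinds
        # of ordering errors seen when the system is stressed or damaged.
        import random

        def competitive_queuing(items, noise=0.0, seed=0):
            rng = random.Random(seed)
            # Earlier items start with higher activation.
            activations = {item: len(items) - i for i, item in enumerate(items)}
            produced = []
            for _ in items:
                winner = max(activations,
                             key=lambda k: activations[k] + rng.gauss(0, noise))
                produced.append(winner)
                activations[winner] = float("-inf")  # suppress after selection
            return produced

        print(competitive_queuing(list("ABCDE")))             # in-order recall
        print(competitive_queuing(list("ABCDE"), noise=1.5))  # may show errors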