Application of decision trees and multivariate regression trees in design and optimization
Induction of decision trees and regression trees is a powerful technique not only for performing ordinary classification and regression analysis but also for discovering the often complex knowledge that describes the input-output behavior of a learning system in qualitative form.

In the area of classification (discriminant analysis), a new technique called IDea is presented for performing incremental learning with decision trees. It is demonstrated that IDea's incremental learning can greatly reduce the spatial complexity of a given set of training examples. Furthermore, it is shown that this reduction in complexity can also serve as an effective tool for improving the learning efficiency of other types of inductive learners, such as standard backpropagation neural networks.

In the area of regression analysis, a new methodology for performing multiobjective optimization has been developed. Specifically, we demonstrate that multiple-objective optimization through induction of multivariate regression trees is a powerful alternative to conventional vector optimization techniques. Furthermore, to investigate the effect of various types of splitting rules on the overall performance of the optimizing system, we present a tree partitioning algorithm that utilizes a number of techniques drawn from statistics and fuzzy logic. These include: two multivariate statistical approaches based on dispersion matrices; an information-theoretic measure of covariance complexity, typically used for obtaining multivariate linear models; two newly formulated fuzzy splitting rules based on Pearson's parametric and Kendall's nonparametric measures of association; Bellman and Zadeh's fuzzy decision-maximizing approach within an inductive framework; and, finally, the multidimensional extension of a widely used fuzzy entropy measure.
The advantages of this new approach to optimization are highlighted by presenting three examples, which respectively deal with the design of a three-bar truss, a beam, and an electric discharge machining (EDM) process.
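The core idea above, i.e. treating a tree with a vector-valued response as a multivariate regression tree whose leaves summarize trade-offs between objectives, can be sketched with modern tooling. This is not the paper's own algorithm (which uses custom dispersion-matrix and fuzzy splitting rules); it is a minimal stand-in using scikit-learn's multi-output tree and toy two-objective design data invented for illustration:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy design problem: two design variables, two competing objectives
# (think "weight" vs. "deflection" of a structure) -- illustrative data only.
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 1.0, size=(200, 2))           # design variables
y = np.column_stack([
    X[:, 0] + X[:, 1],                             # objective 1: "weight"
    1.0 / X[:, 0] + 1.0 / X[:, 1],                 # objective 2: "deflection"
])

# A single tree fit to a vector-valued response acts as a
# multivariate (multi-output) regression tree: each split reduces
# the pooled variance of BOTH objectives at once.
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)

# Each leaf summarizes a region of the design space by the mean of
# both objectives; leaves with good trade-offs flag promising designs.
pred = tree.predict([[0.5, 0.5]])
print(pred.shape)  # (1, 2): one prediction per objective
```

Inspecting the fitted tree's leaves (rather than a single scalarized objective) is what makes this a qualitative, region-based alternative to conventional vector optimization.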
Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop
The EMNLP 2018 workshop BlackboxNLP was dedicated to resources and techniques
specifically developed for analyzing and understanding the inner workings and
representations acquired by neural models of language. Approaches included:
systematic manipulation of input to neural networks and investigating the
impact on their performance, testing whether interpretable knowledge can be
decoded from intermediate representations acquired by neural networks,
proposing modifications to neural network architectures to make their knowledge
state or generated output more explainable, and examining the performance of
networks on simplified or formal languages. Here we review a number of
representative studies in each category.
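One of the categories above, testing whether interpretable knowledge can be decoded from intermediate representations, is usually operationalized as a "diagnostic classifier" or probe: a simple classifier trained on top of frozen hidden states. A minimal sketch, with synthetic activations standing in for a real network's (the planted signal in dimension 0 is an assumption made purely for the demo):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for hidden states of a trained network: 500 tokens, 64-dim.
# We pretend a linguistic property (e.g. noun vs. verb) is linearly
# encoded along one direction, plus noise.
labels = rng.integers(0, 2, size=500)
hidden = rng.normal(size=(500, 64))
hidden[:, 0] += 3.0 * labels        # the "encoded" property

H_tr, H_te, y_tr, y_te = train_test_split(hidden, labels, random_state=0)

# The probe: a linear classifier trained ON TOP of frozen representations.
# High held-out accuracy suggests the property is (linearly) decodable
# from the representation -- though not that the network uses it.
probe = LogisticRegression(max_iter=1000).fit(H_tr, y_tr)
acc = probe.score(H_te, y_te)
print(f"probe accuracy: {acc:.2f}")
```

The held-out split matters: a sufficiently powerful probe can memorize training activations, so only generalization accuracy is evidence of encoded knowledge.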
Generating by Understanding: Neural Visual Generation with Logical Symbol Groundings
Despite the great success of neural visual generative models in recent years,
integrating them with strong symbolic knowledge reasoning systems remains a
challenging task. The main challenges are two-fold: one is symbol assignment, i.e., binding latent factors of neural visual generators to meaningful symbols from knowledge reasoning systems; the other is rule learning, i.e., learning new
rules, which govern the generative process of the data, to augment the
knowledge reasoning systems. To deal with these symbol grounding problems, we
propose a neural-symbolic learning approach, Abductive Visual Generation
(AbdGen), for integrating logic programming systems with neural visual
generative models based on the abductive learning framework. To achieve
reliable and efficient symbol assignment, the quantized abduction method is
introduced for generating abduction proposals by the nearest-neighbor lookups
within semantic codebooks. To achieve precise rule learning, the contrastive
meta-abduction method is proposed to eliminate wrong rules with positive cases
and avoid less-informative rules with negative cases simultaneously.
Experimental results on various benchmark datasets show that compared to the
baselines, AbdGen requires significantly less instance-level labeling
information for symbol assignment. Furthermore, our approach can effectively
learn underlying logical generative rules from data, which is beyond the capability of existing approaches.
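The "quantized abduction" step described above rests on a standard building block: nearest-neighbor lookup of a latent vector in a semantic codebook, which turns a continuous latent factor into a discrete symbol proposal. A minimal sketch, with random vectors and hypothetical symbol names standing in for AbdGen's learned codebook:

```python
import numpy as np

rng = np.random.default_rng(0)

# Semantic codebook: one embedding per symbol (here 4 symbols, 8-dim
# codes). In the actual system these would be learned jointly with the
# generator; random vectors stand in here.
symbols = ["circle", "square", "red", "blue"]
codebook = rng.normal(size=(4, 8))

def quantize(latent):
    """Nearest-neighbor lookup: map a latent factor to the symbol
    whose codebook entry is closest in Euclidean distance."""
    d = np.linalg.norm(codebook - latent, axis=1)
    return symbols[int(np.argmin(d))]

# A latent vector near the "red" code is assigned the symbol "red",
# producing a discrete abduction proposal the reasoner can check.
latent = codebook[2] + 0.05 * rng.normal(size=8)
print(quantize(latent))
```

The discrete proposals produced this way are what the logic programming side can then accept or reject against background knowledge, which is the abductive part of the loop.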
Inductive Biases for Deep Learning of Higher-Level Cognition
A fascinating hypothesis is that human and animal intelligence could be
explained by a few principles (rather than an encyclopedic list of heuristics).
If that hypothesis were correct, we could more easily both understand our own
intelligence and build intelligent machines. Just like in physics, the
principles themselves would not be sufficient to predict the behavior of
complex systems like brains, and substantial computation might be needed to
simulate human-like intelligence. This hypothesis would suggest that studying
the kind of inductive biases that humans and animals exploit could help both
clarify these principles and provide inspiration for AI research and
neuroscience theories. Deep learning already exploits several key inductive
biases, and this work considers a larger list, focusing on those which concern
mostly higher-level and sequential conscious processing. The objective of
clarifying these particular principles is that they could potentially help us
build AI systems benefiting from humans' abilities in terms of flexible
out-of-distribution and systematic generalization, which is currently an area
where a large gap exists between state-of-the-art machine learning and human
intelligence.

Comment: This document contains a review of the author's research as part of the requirement of AG's predoctoral exam, an overview of the main contributions of the author's recent papers (co-authored with several other collaborators), as well as a vision of proposed future research.
Combinatorial Generalisation in Machine Vision
The human capacity for generalisation, i.e. the fact that we are able to successfully perform a familiar task in novel contexts, is one of the hallmarks of our intelligent behaviour. But what mechanisms enable this capacity, which is at once so impressive and yet comes so naturally to us? This is a question that has driven copious amounts of research in both Cognitive Science and Artificial Intelligence for almost a century, with some advocating the need for symbolic systems and others the benefits of distributed representations. In this thesis we explore which principles help AI systems generalise to novel combinations of previously observed elements (such as colour and shape) in the context of machine vision. We show that while approaches such as disentangled representation learning showed initial promise, they are fundamentally unable to solve this generalisation problem. In doing so we illustrate the need to perform severe tests of models in order to properly assess their limitations. We also see how such failures are robust across different datasets, different training modalities, and the internal representations of the models. We then show that a different type of system, one that attempts to learn object-centric representations, is capable of solving the generalisation challenges that previous models could not. We conclude by discussing the implications of these results for long-standing questions regarding the kinds of cognitive systems that are required to solve generalisation problems.
Connectionist Inference Models
The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of different representations and systems are outlined, and conclusions drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.
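The variable-binding problem mentioned above is a concrete technical question: how can a distributed vector representation store "which value fills which role"? One classic answer surveyed in this literature is Smolensky-style tensor-product binding: bind a role vector to a filler vector via their outer product, superpose the bindings, and unbind by projecting onto a role. A minimal sketch with invented role and filler names:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tensor-product binding: roles ("variables") and fillers ("values")
# are distributed vectors; a binding is their outer product, and a
# whole structure is the SUM of its bindings.
dim = 16
fillers = {"john": rng.normal(size=dim), "mary": rng.normal(size=dim)}

# Orthonormal role vectors (via QR) make exact unbinding possible.
q, _ = np.linalg.qr(rng.normal(size=(dim, 2)))
roles = {"agent": q[:, 0], "patient": q[:, 1]}

# Encode "john loves mary": agent=john, patient=mary, superposed.
structure = (np.outer(roles["agent"], fillers["john"])
             + np.outer(roles["patient"], fillers["mary"]))

# Unbind: project the structure onto a role to recover its filler.
recovered = roles["agent"] @ structure
print(np.allclose(recovered, fillers["john"]))  # True
```

With merely near-orthogonal (random high-dimensional) roles, unbinding becomes approximate, which is exactly the distributed-versus-localist trade-off the survey discusses.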
From specialists to generalists: inductive biases of deep learning for higher level cognition
Current neural networks achieve state-of-the-art results across a range of challenging problem domains.
Given enough data and computation, current neural networks can achieve human-level results on almost any task. In that sense, we have been able to train \textit{specialists} that can perform a particular task very well, whether it is the game of Go, playing Atari games, Rubik's cube manipulation, image captioning, or drawing images given captions. The next challenge for AI is to devise methods to train \textit{generalists} that, when exposed to multiple tasks during training, can quickly adapt to new, unknown tasks. Without any assumptions about the data-generating distribution, it may not be possible to achieve better generalization and adaptation to new (unknown) tasks.
A fascinating possibility is that human and animal intelligence could be explained by a few principles (rather than an encyclopedia). If that were the case, we could more easily both understand our own intelligence and build intelligent machines. Just like in physics, the principles themselves would not be sufficient to predict the behavior of complex systems like brains, and substantial computation might be needed to simulate human intelligence. In addition, we know that real brains incorporate some detailed task-specific a priori knowledge that could not fit in a short list of simple principles. So we think of that short list rather as explaining the ability of brains to learn and adapt efficiently to new environments, which is a great part of what we need for AI. If this simplicity-of-principles hypothesis were correct, it would suggest that studying the kind of inductive biases (another way to think about design principles and priors, in the case of learning systems) that humans and animals exploit could help both clarify these principles and provide inspiration for AI research.
Deep learning already exploits several key inductive biases, and my work considers a larger list, focusing on those which concern mostly higher-level cognitive processing. My work focuses on designing such models by incorporating strong but general assumptions (inductive biases) that enable high-level reasoning about the structure of the world. This research program is both ambitious and practical, yielding concrete algorithms as well as a cohesive vision for long-term research towards generalization in a complex and changing world.