Incremental learning algorithms and applications
Incremental learning refers to learning from streaming data, which arrive over time, with limited memory resources and, ideally, without sacrificing model accuracy. This setting fits many application scenarios where lifelong learning is relevant, e.g. due to changing environments, and it offers an elegant scheme for big data processing by means of its sequential treatment. In this contribution, we formalise the concept of incremental learning, discuss particular challenges which arise in this setting, and give an overview of popular approaches, their theoretical foundations, and applications which have emerged in recent years.
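As a minimal illustration of the limited-memory setting this abstract describes (a generic sketch, not code from the surveyed work), a streaming estimator can update its model one sample at a time in O(1) memory; Welford's algorithm for running mean and variance is the classic example:

```python
class RunningStats:
    """Incrementally tracks the mean and variance of a data stream in O(1)
    memory (Welford's algorithm): each sample is seen once, then discarded."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        # sample variance; defined once at least two samples have arrived
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0


stats = RunningStats()
for x in [1, 2, 3, 4, 5]:
    stats.update(x)
# stats.mean == 3.0, stats.variance() == 2.5 (sample variance)
```

The same one-pass pattern underlies more elaborate incremental learners: the model summary, not the data, is what persists in memory.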
A Defense of Pure Connectionism
Connectionism is an approach to neural-networks-based cognitive modeling that encompasses the recent deep learning movement in artificial intelligence. It came of age in the 1980s, with its roots in cybernetics and earlier attempts to model the brain as a system of simple parallel processors. Connectionist models center on statistical inference within neural networks with empirically learnable parameters, which can be represented as graphical models. More recent approaches focus on learning and inference within hierarchical generative models. Contra influential and ongoing critiques, I argue in this dissertation that the connectionist approach to cognitive science possesses in principle (and, as is becoming increasingly clear, in practice) the resources to model even the richest and most distinctively human cognitive capacities, such as abstract, conceptual thought and natural language comprehension and production.
Consonant with much previous philosophical work on connectionism, I argue that a core principle, that proximal representations in a vector space have similar semantic values, is the key to a successful connectionist account of the systematicity and productivity of thought, language, and other core cognitive phenomena. My work here differs from preceding work in philosophy in several respects: (1) I compare a wide variety of connectionist responses to the systematicity challenge and isolate two main strands that are both historically important and reflected in ongoing work today: (a) vector symbolic architectures and (b) (compositional) vector space semantic models; (2) I consider very recent applications of these approaches, including their deployment on large-scale machine learning tasks such as machine translation; (3) I argue, again on the basis mostly of recent developments, for a continuity in representation and processing across natural language, image processing and other domains; (4) I explicitly link broad, abstract features of connectionist representation to recent proposals in cognitive science similar in spirit, such as hierarchical Bayesian and free energy minimization approaches, and offer a single rebuttal of criticisms of these related paradigms; (5) I critique recent alternative proposals that argue for a hybrid Classical (i.e. serial symbolic)/statistical model of mind; (6) I argue that defending the most plausible form of a connectionist cognitive architecture requires rethinking certain distinctions that have figured prominently in the history of the philosophy of mind and language, such as that between word- and phrase-level semantic content, and between inference and association.
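The core principle invoked here, that proximity in a representation space stands in for similarity of semantic value, can be made concrete with a toy sketch. The three-dimensional vectors below are hand-made for illustration only (real connectionist models learn high-dimensional embeddings from data); cosine similarity is the standard proximity measure:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors: 1.0 for identical directions,
    near 0.0 for unrelated ones. Proximity here models semantic similarity."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy, hand-crafted 'embeddings' (illustrative, not trained vectors)
cat = [0.9, 0.8, 0.1]
dog = [0.85, 0.75, 0.2]
car = [0.1, 0.2, 0.9]

# 'cat' lies closer to 'dog' than to 'car' in this space,
# mirroring the intuitive semantic relations between the words.
```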
Neurocognitive Informatics Manifesto.
Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. In this position paper, examples of neurocognitive inspirations and promising directions in this area are given.
On the role of pre and post-processing in environmental data mining
The quality of discovered knowledge depends heavily on data quality. Unfortunately, real data tend to contain noise, uncertainty, errors, redundancies or even irrelevant information. The more complex the reality to be analyzed, the higher the risk of getting low-quality data. Knowledge Discovery from Databases (KDD) offers a global framework to prepare data in the right form to perform correct analyses. On the other hand, the quality of decisions taken upon KDD results depends not only on the quality of the results themselves, but on the capacity of the system to communicate those results in an understandable form. Environmental systems are particularly complex, and environmental users particularly require clarity in their results. In this paper, some details about how this can be achieved are provided, and the role of pre- and post-processing in the whole process of Knowledge Discovery in environmental systems is discussed.
Towards Comprehensive Foundations of Computational Intelligence
Abstract. Although computational intelligence (CI) covers a vast variety of different methods, it still lacks an integrative theory. Several proposals for CI foundations are discussed: computing and cognition as compression, meta-learning as search in the space of data models, (dis)similarity based methods providing a framework for such meta-learning, and a more general approach based on chains of transformations. Many useful transformations that extract information from features are discussed. Heterogeneous adaptive systems are presented as a particular example of transformation-based systems, and the goal of learning is redefined to facilitate the creation of simpler data models. The need to understand data structures leads to techniques for logical and prototype-based rule extraction, and to the generation of multiple alternative models, while the need to increase the predictive power of adaptive models leads to committees of competent models. Learning from partial observations is a natural extension towards reasoning based on perceptions, and an approach to intuitive solving of such problems is presented. Throughout the paper, neurocognitive inspirations are frequently used and are especially important in modeling the higher cognitive functions. Promising directions such as liquid and laminar computing are identified, and many open problems are presented.
A study of instance-based algorithms for supervised learning tasks: mathematical, empirical, and psychological evaluations
This dissertation introduces a framework for specifying instance-based algorithms that can solve supervised learning tasks. These algorithms input a sequence of instances and yield a partial concept description, which is represented by a set of stored instances and associated information. This description can be used to predict values for subsequently presented instances. The thesis of this framework is that extensional concept descriptions and lazy generalization strategies can support efficient supervised learning behavior. The instance-based learning framework consists of three components. The pre-processor component transforms an instance into a more palatable form for the performance component, which computes the instance's similarity with a set of stored instances and yields a prediction for its target value(s). The similarity and prediction functions thus impose generalizations on the stored instances to inductively derive predictions. The learning component assesses the accuracy of these predictions and updates partial concept descriptions to improve their predictive accuracy. This framework is evaluated in four ways. First, its generality is evaluated by mathematically determining the classes of symbolic concepts and numeric functions that can be closely approximated by IB1, a simple algorithm specified by this framework. Second, the framework is empirically evaluated for its ability to specify algorithms that improve IB1's learning efficiency. Significant efficiency improvements are obtained by instance-based algorithms that reduce storage requirements, tolerate noisy data, and learn domain-specific similarity functions. Alternative component definitions for these algorithms are empirically analyzed in a set of five high-level parameter studies. Third, the framework is evaluated for its ability to specify psychologically plausible process models for categorization tasks. Results from subject experiments indicate a positive correlation between a model's ability to utilize attribute correlation information and its ability to explain psychological phenomena. Finally, the framework is evaluated for its ability to explain and relate a dozen prominent instance-based learning systems. The survey shows that this framework requires only slight modifications to fit these highly diverse systems. Relationships with edited nearest neighbor algorithms, case-based reasoners, and artificial neural networks are also described.
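The lazy, extensional strategy described above can be sketched minimally: the learner's "concept description" is just the set of stored instances, and generalization happens only at prediction time via a similarity function. This is a generic nearest-neighbour sketch in the spirit of IB1; the dissertation's actual algorithm specifications may differ in detail:

```python
import math

class InstanceBasedLearner:
    """Sketch of an IB1-style learner: store instances verbatim (lazy
    generalization) and predict with the class of the nearest neighbour."""

    def __init__(self):
        self.stored = []  # the partial concept description: (features, label)

    def train(self, features, label):
        # No model is compiled; the instance itself is the representation.
        self.stored.append((features, label))

    def predict(self, features):
        # Similarity = negative Euclidean distance; nearest instance decides.
        _, label = min(self.stored,
                       key=lambda inst: math.dist(inst[0], features))
        return label


learner = InstanceBasedLearner()
learner.train((0.0, 0.0), "negative")
learner.train((10.0, 10.0), "positive")
prediction = learner.predict((1.0, 1.0))  # nearest stored instance is (0, 0)
```

Storage reduction, noise tolerance, and learned similarity functions, as evaluated in the dissertation, would each replace one component of this skeleton (the storage policy in `train` or the distance in `predict`).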
Modelling causality in law = Modélisation de la causalité en droit
L'intérêt en apprentissage machine pour étudier la causalité s'est considérablement accru ces dernières années. Cette approche est cependant encore peu répandue dans le domaine de l'intelligence artificielle (IA) et du droit. Elle devrait l'être. L'approche associative actuelle d'apprentissage machine révèle certaines limites que l'analyse causale peut surmonter. Cette thèse vise à découvrir si les modèles causaux peuvent être utilisés en IA et droit.
Nous procédons à une brève revue sur le raisonnement et la causalité en science et en droit. Traditionnellement, les cadres normatifs du raisonnement étaient la logique et la rationalité, mais la théorie duale démontre que la prise de décision humaine dépend de nombreux facteurs qui défient la rationalité. À ce titre, des statistiques et des probabilités étaient nécessaires pour améliorer la prédiction des résultats décisionnels. En droit, les cadres de causalité ont été définis par des décisions historiques, mais la plupart des modèles d'aujourd'hui de l'IA et droit n'impliquent pas d'analyse causale. Nous fournissons un bref résumé de ces modèles, puis appliquons le langage structurel de Judea Pearl et les définitions Halpern-Pearl de la causalité pour modéliser quelques décisions juridiques canadiennes qui impliquent la causalité.
Les résultats suggèrent qu'il est non seulement possible d'utiliser des modèles de causalité formels pour décrire les décisions juridiques, mais également utile car un schéma uniforme élimine l'ambiguïté. De plus, les cadres de causalité sont utiles pour promouvoir la responsabilisation et minimiser les biais.

The machine learning community's interest in causality has significantly increased in recent years.
This trend has not yet been made popular in AI & Law. It should be because the current
associative ML approach reveals certain limitations that causal analysis may overcome. This
research paper aims to discover whether formal causal frameworks can be used in AI & Law.
We proceed with a brief account of scholarship on reasoning and causality in science and in law.
Traditionally, normative frameworks for reasoning have been logic and rationality, but the dual
theory has shown that human decision-making depends on many factors that defy rationality. As
such, statistics and probability were called for to improve the prediction of decisional outcomes. In
law, causal frameworks have been defined by landmark decisions but most of the AI & Law
models today do not involve causal analysis. We provide a brief summary of these models and
then attempt to apply Judea Pearl's structural language and the Halpern-Pearl definitions of
actual causality to model a few Canadian legal decisions that involve causality.
Results suggest that it is not only possible to use formal causal models to describe legal decisions,
but also useful because a uniform schema eliminates ambiguity. Also, causal frameworks are
helpful in promoting accountability and minimizing biases.
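The kind of modelling the abstract describes can be sketched in miniature. The scenario and variable names below are hypothetical, and the toy check covers only the simplest (but-for/counterfactual) ingredient of the Halpern-Pearl definitions, using Pearl's device of an intervention that overrides a structural equation:

```python
def solve(model, exogenous, do=None):
    """Evaluate a tiny structural causal model: each endogenous variable is a
    function of previously computed values. `do` overrides variables,
    modelling an intervention do(X = x) that cuts X off from its causes."""
    values = dict(exogenous)
    do = do or {}
    for var, fn in model:  # assumes variables are listed in causal order
        values[var] = do[var] if var in do else fn(values)
    return values

# Hypothetical legal scenario: did the defendant's negligence cause the harm?
model = [
    ("negligence", lambda v: v["u_defendant"]),
    ("harm",       lambda v: v["negligence"] and not v["u_precaution"]),
]
context = {"u_defendant": True, "u_precaution": False}

actual  = solve(model, context)                            # the actual world
counter = solve(model, context, do={"negligence": False})  # counterfactual
but_for = actual["harm"] and not counter["harm"]           # but-for test holds
```

The uniform schema the results point to is visible even here: the dispute about causation reduces to two well-defined model evaluations rather than an informal narrative.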