7 research outputs found

    Bio-inspired analysis of deep learning on not-so-big data using data-prototypes

    Deep artificial neural networks are feed-forward architectures capable of very impressive performance in diverse domains. Indeed, stacking multiple layers allows a hierarchical composition of local functions, providing efficient, compact mappings. Compared to the brain, however, such architectures are closer to a single pipeline and require huge amounts of data, while concrete cases for either human or machine learning systems are often restricted to not-so-big data sets. Furthermore, interpretability of the obtained results is a key issue: since deep learning applications are increasingly present in society, it is important that the underlying processes be accessible and understandable to everyone. In order to address these challenges, in this contribution we analyze how considering prototypes in a rather generalized sense (with respect to the state of the art) makes it possible to work reasonably with small data sets while providing an interpretable view of the obtained results. Some mathematical interpretation of this proposal is discussed. Sensitivity to hyperparameters is a key issue for reproducible deep learning results, and is carefully considered in our methodology. The performance and limitations of the proposed setup are explored in detail, under different hyperparameter sets, in a way analogous to how biological experiments are conducted. We obtain a rather simple architecture, easy to explain, which, combined with a standard method, allows us to target both performance and interpretability.

    From computational neuroscience to computational learning science: modeling the brain of the learner and the context of the learning activity

    We share a new exploratory action known as Artificial Intelligence Devoted to Education (AIDE), launched with the support of Inria (Mnemosyne team) and the Nice INSPÉ of Côte d'Azur University (LINE laboratory), in connection with the Bordeaux NeuroCampus. It positions artificial intelligence in a somewhat original way ... not [only] as a disruptive tool, but as a formalism allowing us to model the human learner in problem-solving activities.

    Development of Deep Learning based Intelligent Approach for Credit Card Fraud Detection

    Credit card fraud (CCF) has long been a major concern of financial institutions and business partners, and it is also of global interest to researchers due to its growing prevalence. In order to predict and detect CCF, machine learning (ML) has proven to be one of the most promising techniques. However, class imbalance is one of the main and recurring challenges in CCF tasks, and it hinders model performance. To overcome this challenge, Deep Learning (DL) techniques are used by researchers. In this research work, an efficient CCF detection (CCFD) system is developed by proposing a hybrid model called Convolutional Neural Network with Recurrent Neural Network (CNN-RNN). In this model, the CNN acts as a feature extractor, extracting the valuable information from the CCF data, while long-term dependency features are learned by the RNN. The imbalance problem is addressed with the Synthetic Minority Over-sampling Technique (SMOTE). An experiment is conducted on the European dataset to validate the performance of the CNN-RNN model against the existing CNN and RNN models in terms of the major parameters. The results show that the CNN-RNN model achieved 95.83% precision, whereas the CNN achieved 93.63% and the RNN 88.50%.
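    The abstract above does not give implementation details, but the SMOTE step it mentions has a simple core idea: synthesize new minority-class points by interpolating between each minority sample and one of its nearest minority-class neighbours. The following is a minimal NumPy sketch of that idea only (the function name `smote` and its parameters are our own, and real work would typically use a library implementation such as imbalanced-learn's):

    ```python
    import numpy as np

    def smote(X_min, n_synthetic, k=5, rng=None):
        """Generate n_synthetic new minority samples by interpolating
        between a random minority point and one of its k nearest
        minority-class neighbours (the core SMOTE idea)."""
        rng = np.random.default_rng(rng)
        X_min = np.asarray(X_min, dtype=float)
        n = len(X_min)
        synth = []
        for _ in range(n_synthetic):
            i = rng.integers(n)
            # Euclidean distances from point i to every minority point
            d = np.linalg.norm(X_min - X_min[i], axis=1)
            # k nearest neighbours, skipping the point itself (index 0 after sort)
            nn = np.argsort(d)[1:k + 1]
            j = rng.choice(nn)
            gap = rng.random()  # interpolation factor in [0, 1)
            synth.append(X_min[i] + gap * (X_min[j] - X_min[i]))
        return np.array(synth)
    ```

    Because every synthetic point is a convex combination of two existing minority points, the oversampled set stays inside the region the minority class already occupies, rather than duplicating points exactly as naive oversampling would.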

    Creativity explained by Computational Cognitive Neuroscience

    Recently, models in Computational Cognitive Neuroscience (CCN) have gained renewed interest because they could help analyze current limitations in Artificial Intelligence (AI) and propose operational ways to address them. These limitations are related to difficulties in giving a semantic grounding to manipulated concepts, in coping with high dimensionality, and in managing uncertainty. In this paper, we describe the main principles and mechanisms of these models and explain that they can be directly transferred to Computational Creativity (CC), to propose operational mechanisms but also a better understanding of what creativity is.

    Approximate and Situated Causality in Deep Learning

    Other grants: ICREA Academia 2019, and the "AppPhil: Applied Philosophy for the Value-Design of Social Networks Apps" project, funded by Caixabank in Recercaixa2017. Causality is the most important topic in the history of Western science, and since the beginning of the statistical paradigm its meaning has been reconceptualized many times. Causality entered the realm of multi-causal and statistical scenarios some centuries ago. Despite widespread criticism, today's deep learning and machine learning advances are not weakening causality but are creating a new way of finding correlations between indirect factors. This process makes it possible for us to talk about approximate causality, as well as about situated causality.

    A global framework for a systemic view of brain modeling

    The brain is a complex system, due to the heterogeneity of its structure, the diversity of the functions in which it participates, and its reciprocal relationships with the body and the environment. A systemic description of the brain is presented here, as a contribution to developing a brain theory and as a general framework where specific models in computational neuroscience can be integrated and associated with global information flows and cognitive functions. In an enactive view, this framework integrates the fundamental organization of the brain in sensorimotor loops with the internal and external worlds, answering four fundamental questions (what, why, where and how). Our survival-oriented definition of behavior gives a prominent role to Pavlovian and instrumental conditioning, augmented during phylogeny by the specific contributions of other kinds of learning, related to semantic memory in the posterior cortex, episodic memory in the hippocampus and working memory in the frontal cortex. This framework highlights that responses can be prepared in different ways, from Pavlovian reflexes and habitual behavior to deliberation for goal-directed planning and reasoning, and explains that these different kinds of responses coexist, collaborate and compete for the control of behavior. It also emphasizes that cognition can be described as a dynamical system of interacting memories, some acting to provide information to others, to replace them when they are not efficient enough, or to help improve them. Describing the brain as an architecture of learning systems also has strong implications for Machine Learning. Our biologically informed view of Pavlovian and instrumental conditioning can be very valuable for revisiting classical Reinforcement Learning and providing a basis for truly autonomous learning.