27 research outputs found

    Generating functionals for computational intelligence: the Fisher information as an objective function for self-limiting Hebbian learning rules

    Get PDF
    Generating functionals may guide the evolution of a dynamical system and constitute a possible route for handling the complexity of neural networks relevant for computational intelligence. We propose and explore a new objective function which allows one to obtain plasticity rules for the afferent synaptic weights. The adaptation rules are Hebbian and self-limiting, and result from the minimization of the Fisher information with respect to the synaptic flux. We perform a series of simulations examining the behavior of the new learning rules in various circumstances. The vector of synaptic weights aligns with the principal direction of input activities whenever one is present. When there are two or more principal directions, a linear discrimination is performed; directions with bimodal firing-rate distributions, characterized by a negative excess kurtosis, are preferred. We find robust performance, and full homeostatic adaptation of the synaptic weights results as a by-product of the synaptic flux minimization. This self-limiting behavior allows for stable online learning of arbitrary duration. The neuron acquires new information when the statistics of the input activities are changed at a certain point of the simulation, while showing, however, a distinct resilience against unlearning previously acquired knowledge. Learning is fast when starting with randomly drawn synaptic weights, and substantially slower when the synaptic weights are already fully adapted.
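
    The flux rule itself is not spelled out in this abstract; as a hedged stand-in, the sketch below uses Oja's rule, a classic self-limiting Hebbian update, to illustrate the qualitative behavior described above: the weight vector aligns with the principal direction of the input while remaining bounded. All parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in for the paper's flux rule: Oja's self-limiting Hebbian update.
        # The Hebbian term eta*y*x is counteracted by the decay term -eta*y^2*w,
        # which bounds the weights without any explicit normalization step.
        N, T, eta = 10, 50_000, 1e-3
        C = np.diag([4.0] + [1.0] * (N - 1))   # input covariance with one dominant direction
        L = np.linalg.cholesky(C)
        w = rng.normal(size=N)

        for _ in range(T):
            x = L @ rng.normal(size=N)         # zero-mean input sample
            y = w @ x                          # linear rate-neuron output
            w += eta * y * (x - y * w)         # Hebbian growth minus self-limiting decay

        print("alignment with principal axis:", abs(w[0]) / np.linalg.norm(w))

    Over enough samples the alignment approaches one while the weight norm stays close to unity, mirroring the stable online learning described above.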

    Complementary approaches to synaptic plasticity: from objective functions to biophysics

    Get PDF
    Different approaches are possible when it comes to modeling the brain. Given the brain's biological nature, models can be constructed out of the chemical and biological building blocks known to be at play in it, formulating a given mechanism in terms of the basic interactions underlying it. On the other hand, the functions of the brain can be described in a more general or macroscopic way, in terms of desirable goals. These goals may include reducing metabolic costs, being stable or robust, or being computationally efficient. Synaptic plasticity, that is, the study of how the connections between neurons evolve in time, is no exception to this. In the present work we formulate, and study the properties of, synaptic plasticity models, employing two complementary approaches: a top-down approach, deriving a learning rule from a guiding principle for rate-encoding neurons, and a bottom-up approach, where a simple yet biophysical rule for time-dependent plasticity is constructed. We begin this thesis with a general overview, in Chapter 1, of the properties of neurons and their connections, clarifying the notation and the jargon of the field. These will be our building blocks and will also determine the constraints we need to respect when formulating our models. We discuss the present challenges of computational neuroscience, as well as the role of physicists in this line of research. In Chapters 2 and 3, we develop and study a local online Hebbian self-limiting synaptic plasticity rule, employing the aforementioned top-down approach. First, in Chapter 2, we formulate the stationarity principle of statistical learning in terms of the Fisher information of the output probability distribution with respect to the synaptic weights. To ensure that the learning rules are formulated in terms of information locally available to a synapse, we employ the local-synapse extension of the one-dimensional Fisher information. Once the objective function has been defined, we derive an online synaptic plasticity rule via stochastic gradient descent. In order to test the computational capabilities of a neuron evolving according to this rule (combined with a preexisting intrinsic plasticity rule), we perform a series of numerical experiments, training the neuron with different input distributions. We observe that, for input distributions closely resembling a multivariate normal distribution, the neuron robustly selects the first principal component of the distribution, otherwise showing a strong preference for directions of large negative excess kurtosis. In Chapter 3 we study the robustness of the learning rule derived in Chapter 2 with respect to variations in the neural model's transfer function. In particular, we find an equivalent cubic form of the rule which, given its functional simplicity, permits analytical computation of the attractors (stationary solutions) of the learning procedure as a function of the statistical moments of the input distribution. In this way, we manage to explain the numerical findings of Chapter 2 analytically and to formulate a prediction: if the neuron is selective to non-Gaussian input directions, it should be suitable for applications to independent component analysis. We close this part by showing how, indeed, a neuron operating under these rules can learn the independent components in the nonlinear bars problem. A simple biophysical model for spike-timing-dependent plasticity (STDP) is developed in Chapter 4.
The model is formulated in terms of two decaying traces present in the synapse, namely the fraction of activated NMDA receptors and the calcium concentration, which serve as clocks measuring the timing of pre- and postsynaptic spikes. While constructed in terms of the key biological elements thought to be involved in the process, we have kept the functional dependencies of the variables as simple as possible to allow for analytic tractability. Despite its simplicity, the model is able to reproduce several experimental results, including the typical pairwise STDP curve and triplet results, in both hippocampal cultures and layer 2/3 cortical neurons. Thanks to the model's functional simplicity, we are able to compute these results analytically, establishing a direct and transparent connection between the model's internal parameters and the qualitative features of the results. Finally, in order to make a connection to synaptic plasticity for rate-encoding neural models, we train the synapse with uncorrelated Poisson pre- and postsynaptic spike trains and compute the expected synaptic weight change as a function of the frequencies of these spike trains. Interestingly, a Hebbian (in the rate-encoding sense of the word) BCM-like behavior is recovered in this setup for hippocampal neurons, while dominating depression seems unavoidable for parameter configurations reproducing experimentally observed triplet nonlinearities in layer 2/3 cortical neurons. Potentiation can, however, be recovered in these neurons when correlations between pre- and postsynaptic spikes are present. We end this chapter by discussing the relation to existing experimental results, leaving open questions and predictions for future experiments. A set of summary cards of the models employed, together with listings of the relevant variables and parameters, is presented at the end of the thesis, for easy access and permanent reference for the reader.
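
    The exact functional forms of the Chapter 4 model are not given in this abstract; the sketch below shows a generic two-trace scheme in the same spirit, where two exponentially decaying traces (standing in for the NMDA-receptor fraction and the calcium concentration) act as clocks for pre- and postsynaptic spikes. Time constants and amplitudes are invented for illustration.

        import numpy as np

        DT = 1.0                        # integration step (ms)
        TAU_NMDA, TAU_CA = 40.0, 20.0   # decay constants of the two traces (assumed)
        A_PLUS, A_MINUS = 0.010, 0.012  # potentiation/depression amplitudes (assumed)

        def final_weight(pre_spikes, post_spikes, T=200, w=0.5):
            """Evolve one synapse under a generic two-trace STDP scheme."""
            nmda = ca = 0.0
            for t in range(T):
                nmda *= np.exp(-DT / TAU_NMDA)   # NMDA-receptor trace decays
                ca *= np.exp(-DT / TAU_CA)       # calcium trace decays
                if t in pre_spikes:
                    nmda += 1.0                  # presynaptic spike resets the NMDA clock
                    w -= A_MINUS * ca            # depression read off the calcium trace
                if t in post_spikes:
                    ca += 1.0                    # postsynaptic spike resets the calcium clock
                    w += A_PLUS * nmda           # potentiation read off the NMDA trace
            return w

        # Pre-before-post pairing (+10 ms) potentiates; the reverse order depresses:
        print(final_weight(pre_spikes={50}, post_spikes={60}))
        print(final_weight(pre_spikes={60}, post_spikes={50}))

    Reading each weight change off the opposing trace factorizes the pairwise STDP curve into independent potentiation and depression branches, the kind of structural simplicity that makes models of this family analytically tractable.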

    E-I balance emerges naturally from continuous Hebbian learning in autonomous neural networks.

    Get PDF
    Spontaneous brain activity is characterized in part by a balanced asynchronous chaotic state. Cortical recordings show that the excitatory (E) and inhibitory (I) drives in the E-I balanced state are substantially larger than the overall net input. We show that such a state arises naturally in fully adapting networks which are deterministic, autonomously active, and not subject to stochastic external or internal driving. Temporary imbalances between excitatory and inhibitory inputs lead to large but short-lived activity bursts that stabilize irregular dynamics. We simulate autonomous networks of rate-encoding neurons for which all synaptic weights are plastic and subject to a Hebbian plasticity rule, the flux rule, which can be derived from the stationarity principle of statistical learning. Moreover, the average firing rate is regulated individually via a standard homeostatic adaptation of the bias of each neuron's nonlinear input-output function. Networks with and without short-term plasticity are considered. E-I balance may arise only when the mean excitatory and inhibitory weights are themselves balanced, modulo the overall activity level. We show that synaptic weight balance, which has hitherto been taken as given, arises naturally in autonomous neural networks when the self-limiting Hebbian synaptic plasticity rule considered here is continuously active.
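
    The balance statistic at the heart of this result can be illustrated without the full plastic network: with zero-mean (balanced) synaptic weights, the separate excitatory and inhibitory drives to each neuron are large while their sum stays small. A minimal sketch, with the flux rule and homeostatic bias adaptation omitted and all numbers illustrative:

        import numpy as np

        rng = np.random.default_rng(1)
        N = 500
        W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # balanced weights: zero mean
        y = 1.0 / (1.0 + np.exp(-rng.normal(size=N)))       # firing rates in (0, 1)

        drive_E = np.clip(W, 0.0, None) @ y   # summed excitatory input per neuron
        drive_I = np.clip(W, None, 0.0) @ y   # summed inhibitory input per neuron
        net = drive_E + drive_I               # net input: E and I largely cancel

        print("mean E drive:", drive_E.mean())
        print("mean I drive:", drive_I.mean())
        print("mean |net|  :", np.abs(net).mean())

    The paper's contribution is to show that continuous Hebbian plasticity drives the weights toward this balanced regime, rather than assuming the balance from the start.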

    Addressing fairness in artificial intelligence for medical imaging

    Get PDF
    A plethora of work has shown that AI systems can be systematically and unfairly biased against certain populations in multiple scenarios. The field of medical imaging, where AI systems are increasingly being adopted, is no exception. Here we discuss the meaning of fairness in this area and comment on the potential sources of bias, as well as the strategies available to mitigate them. Finally, we analyze the current state of the field, identifying strengths and highlighting gaps, challenges, and opportunities that lie ahead.

    Bridging physiological and perceptual views of autism by means of sampling-based Bayesian inference

    Get PDF
    Theories for autism spectrum disorder (ASD) have been formulated at different levels, ranging from physiological observations to perceptual and behavioral descriptions. Understanding the physiological underpinnings of perceptual traits in ASD remains a significant challenge in the field. Here we show how a recurrent neural circuit model that was optimized to perform sampling-based inference and displays characteristic features of cortical dynamics can help bridge this gap. The model was able to establish a mechanistic link between two descriptive levels for ASD: a physiological level, in terms of inhibitory dysfunction, neural variability, and oscillations, and a perceptual level, in terms of hypopriors in Bayesian computations. We took two parallel paths, inducing hypopriors in the probabilistic model and an inhibitory dysfunction in the network model, which led to consistent results in terms of the represented posteriors, providing support for the view that both descriptions might constitute two sides of the same coin.
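
    The "hypoprior" manipulation has a simple closed form in the conjugate Gaussian case, which the sketch below illustrates (the actual study uses a recurrent network model trained for sampling-based inference; the numbers here are purely illustrative):

        def gaussian_posterior(mu0, var0, x, var_lik):
            """Conjugate Gaussian posterior mean/variance for one observation x."""
            var_post = 1.0 / (1.0 / var0 + 1.0 / var_lik)
            mu_post = var_post * (mu0 / var0 + x / var_lik)
            return mu_post, var_post

        x, var_lik = 2.0, 1.0
        print("typical prior (var 1):  ", gaussian_posterior(0.0, 1.0, x, var_lik))
        print("hypoprior     (var 10): ", gaussian_posterior(0.0, 10.0, x, var_lik))
        # The broadened (hypo)prior pulls the posterior toward the sensory
        # evidence, the perceptual-level signature referred to above.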

    Unsupervised bias discovery in medical image segmentation

    Full text link
    It has recently been shown that deep learning models for anatomical segmentation in medical images can exhibit biases against certain sub-populations defined in terms of protected attributes like sex or ethnicity. In this context, auditing the fairness of deep segmentation models becomes crucial. However, such an audit process generally requires access to ground-truth segmentation masks for the target population, which may not always be available, especially when going from development to deployment. Here we propose a new method to anticipate model biases in biomedical image segmentation in the absence of ground-truth annotations. Our unsupervised bias discovery method leverages the reverse classification accuracy framework to estimate segmentation quality. Through numerical experiments in synthetic and realistic scenarios, we show how our method is able to successfully anticipate fairness issues in the absence of ground-truth labels, constituting a novel and valuable tool in this field. (Accepted for publication at FAIMI 2023, Fairness of AI in Medical Imaging, at MICCAI.)
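
    Reverse classification accuracy (RCA) is named in this abstract but not spelled out; the sketch below conveys the idea under strong simplifications: a "reverse" segmenter is fit on the unlabeled test case and its predicted mask, then evaluated on a reference set that does have ground truth, with the best Dice there serving as a proxy for the test prediction's quality. The threshold-based reverse model is a toy stand-in for the registration- or classifier-based reverse models used in the RCA literature.

        import numpy as np

        def dice(a, b):
            """Dice overlap between two binary masks."""
            inter = np.logical_and(a, b).sum()
            return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

        def rca_quality(test_img, test_pred, ref_imgs, ref_gts):
            """Proxy quality score for test_pred, with no test ground truth."""
            # Toy reverse model: an intensity threshold learned from the test
            # case and its predicted mask, applied to the labeled reference set.
            thr = 0.5 * (test_img[test_pred].mean() + test_img[~test_pred].mean())
            return max(dice(img > thr, gt) for img, gt in zip(ref_imgs, ref_gts))

        # Synthetic check: bright discs on dark background, with a labeled reference set.
        rng = np.random.default_rng(4)
        yy, xx = np.mgrid[:64, :64]
        disc = lambda cy, cx, r: (yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2
        image = lambda m: m * 1.0 + 0.2 * rng.normal(size=m.shape)

        refs = [(image(m), m) for m in (disc(32, 32, 15), disc(20, 40, 10))]
        test_gt = disc(30, 30, 12)
        test_img = image(test_gt)
        good_pred, bad_pred = test_gt, np.roll(test_gt, 20, axis=0)

        print(rca_quality(test_img, good_pred, *zip(*refs)))  # high: accurate mask
        print(rca_quality(test_img, bad_pred, *zip(*refs)))   # lower: flags a poor mask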

    Biases in regression problems caused by data imbalance in terms of protected attributes

    Get PDF
    In this work we study the effect on the performance of regression models caused by training-data imbalance in terms of protected attributes. These attributes, such as a person's gender or skin color, are intrinsic characteristics of the data that may or may not bear a direct relation to the problem being solved. Results obtained through experiments on both synthetic and real data show that the error on a given population increases when that population is underrepresented in the training dataset. In both cases studied, we found that the error over the whole population was minimal when the training set was balanced in terms of the protected attribute in question. This study is the first step of a line of work that seeks to extend this analysis to other datasets, models, and problems, and then to mitigate this issue by incorporating penalty terms that discourage better performance on one subset of the population at the expense of another.
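
    A minimal runnable version of the kind of experiment described above, with an invented data-generating process: two subgroups follow different input-output relations, and the test error of the underrepresented group grows as its share of the training set shrinks.

        import numpy as np

        rng = np.random.default_rng(2)

        def make_group(n, slope):
            """Synthetic subgroup with its own linear input-output relation."""
            x = rng.normal(size=(n, 1))
            y = slope * x[:, 0] + 0.1 * rng.normal(size=n)
            return x, y

        for frac_b in (0.5, 0.2, 0.05):
            n = 2000
            xa, ya = make_group(int(n * (1 - frac_b)), slope=1.0)   # group A
            xb, yb = make_group(int(n * frac_b), slope=-1.0)        # group B
            X, Y = np.vstack([xa, xb]), np.concatenate([ya, yb])
            w = np.linalg.lstsq(X, Y, rcond=None)[0]                # least-squares fit
            xt, yt = make_group(1000, slope=-1.0)                   # group-B test set
            mse_b = np.mean((xt @ w - yt) ** 2)
            print(f"group-B training share {frac_b:.2f} -> group-B test MSE {mse_b:.2f}")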

    Data imbalance in terms of protected attributes: an analysis of its impact on a linear classifier

    Get PDF
    In this work we study the impact of imbalance in the data used to train a linear classifier, focusing the analysis on protected attributes. These attributes, such as gender, ethnic group, or age, are not the classifier's target class; rather, they are demographic characteristics that may or may not be part of the problem to be solved. Results obtained through simple synthetic experiments show that the accuracy on a given population deteriorates when that population is underrepresented in the training dataset. In every case, the classifier's performance over the whole population is maximal when the training dataset is balanced with respect to protected attributes. These conclusions are the first step of a line of work that aims to show how this issue can be mitigated by incorporating penalty terms that discourage gains in accuracy on one subset of the population at the expense of another.
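
    A companion sketch for the classification case (synthetic, with all distributions invented for illustration): a linear classifier's accuracy on a subgroup degrades when that subgroup is underrepresented during training.

        import numpy as np

        rng = np.random.default_rng(3)

        def sample(n, shift):
            """Two classes whose means carry a group-specific shift."""
            y = rng.integers(0, 2, n) * 2 - 1                 # labels in {-1, +1}
            x = y[:, None] + shift + rng.normal(size=(n, 2))  # class means at shift +/- 1
            return x, y

        def fit_linear(X, Y):
            """Least-squares linear classifier with an intercept term."""
            Xb = np.hstack([X, np.ones((len(X), 1))])
            return np.linalg.lstsq(Xb, Y, rcond=None)[0]

        def accuracy(w, X, Y):
            Xb = np.hstack([X, np.ones((len(X), 1))])
            return np.mean(np.sign(Xb @ w) == Y)

        for frac_b in (0.5, 0.1):
            na, nb = int(2000 * (1 - frac_b)), int(2000 * frac_b)
            xa, ya = sample(na, shift=+1.0)                   # group A
            xb, yb = sample(nb, shift=-1.0)                   # group B
            w = fit_linear(np.vstack([xa, xb]), np.concatenate([ya, yb]))
            xt, yt = sample(2000, shift=-1.0)                 # group-B test set
            print(f"group-B training share {frac_b:.2f} -> "
                  f"group-B accuracy {accuracy(w, xt, yt):.3f}")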