
Modeling olfactory processing and insights on optimal learning in constrained neural networks: learning from the anatomy of the Drosophila mushroom body.

Animals adapt their systems to optimise for several competing goals at the same time. Ideally, they reach an equilibrium in which the outcome for any one goal cannot improve without making another worse, akin to Pareto optimality (Mock 2011). These goals include maintaining the stability and robustness of their systems and improving performance in a given computational task, reflected in memory capacity and the ability to make more rewarding decisions. Many species can form associative memories: they learn to contextualise sensory stimuli as good, bad, or neutral when the stimuli are followed shortly by a salient outcome, and they bias their future behaviour to approach or avoid these cues. In this work I focus on modelling associative learning in the mushroom body of the fruit fly, its centre of olfactory associative learning. Flies can learn to associate an odor (a sensory experience) with an appetitive or aversive outcome. They do so by modifying the connections between the mushroom body's intrinsic neurons, the Kenyon cells (KCs), and their downstream mushroom body output neurons (MBONs). The activity of the MBONs biases the fly's motor behaviour towards approaching or avoiding an odor (Aso et al. 2014).

Although many studies have uncovered the molecular mechanisms and the neurons underpinning associative learning in different species, two specific questions remain unanswered. (a) Why do neurons of the same type, in the same circuit within the same animal, vary in their intrinsic properties? It is unknown how such variability ultimately affects the animal's performance in a computational task: previous studies of inter-neuronal variability focused on its effect on circuit stability and examined variability across animals rather than within an individual circuit (Marder and Goaillard 2006; Golowasch et al. 2002; Schulz, Goaillard, and Marder 2006; Schulz, Goaillard, and Marder 2007). Could the observed inter-neuronal variability result from an optimisation process that enhances the circuit's computational performance, for example its memory capacity? Or did it simply arise at random? (b) Learning in the cerebellum, and in cerebellum-like structures in other animals such as the fruit fly mushroom body, proceeds by long-term depression (weakening) of the synapses between its intrinsic neurons, which encode the sensory input, and the downstream neurons that guide the animal's motor behaviour (Ito 1989). As in (a), I ask whether this learning rule has been conserved across species because it optimises some computational aspect of learning.

In this three-chapter thesis, I present a computational model of associative learning in the fruit fly mushroom body that uses realistic odor input statistics and imposes constraints observed experimentally in the real mushroom body (e.g. the level of KC sparse coding, and the level of KC sparse coding when the KCs' inhibitory inputs are silenced). In Chapter 2, I address the first question, the first aim of this thesis, and show that random variability between the KCs in their intrinsic parameters impairs the fly's memory performance, as the sketch below illustrates.
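To make the setup concrete, here is a minimal numerical sketch of this kind of model. It is not code from the thesis: the connectivity statistics, the lognormal threshold distribution, and the quantile-matching compensation rule are illustrative assumptions (only the PN and KC counts roughly follow fly anatomy).

```python
import numpy as np

rng = np.random.default_rng(0)
n_pn, n_kc = 50, 2000               # projection neurons and Kenyon cells
coding_level = 0.05                 # target: ~5% of KCs respond to a given odor

# Random, sparse PN -> KC expansion, as in the fly mushroom body.
J = (rng.random((n_kc, n_pn)) < 0.1) * rng.random((n_kc, n_pn))
odors = rng.random((100, n_pn))     # a bank of synthetic odors
drive = odors @ J.T                 # feedforward input to each KC, per odor

# Uncompensated circuit: per-KC spiking thresholds vary at random.
theta_rand = rng.lognormal(mean=0.3, sigma=0.5, size=n_kc)

# Compensated circuit: each KC tunes its own threshold until it fires for
# the target fraction of odors (one activity-dependent possibility; the
# thesis presents four compensatory rules).
theta_comp = np.quantile(drive, 1 - coding_level, axis=0)

for name, theta in [("random", theta_rand), ("compensated", theta_comp)]:
    per_kc = (drive > theta).mean(axis=0)   # fraction of odors each KC fires to
    print("%-11s spread of per-KC response rates: %.3f" % (name, per_kc.std()))
```

With random thresholds the per-KC response rates are widely dispersed (some KCs respond to almost every odor, many to none), whereas the compensatory rule pins every KC to the target coding level; it is this equalisation that keeps the population code discriminable.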
I find that random inter-KC variability results in high variability among the neurons in their sparsity: very few neurons are specifically active for some odors while the vast majority are activated by all incoming odors, which reduces the fly's ability to distinguish between odors and to identify them as 'good' (rewarded) or 'bad' (punished). However, I show that compensatory variability mechanisms rescue the memory performance. I present four models (activity-independent and activity-dependent rules) for how this compensatory variability could take place in real neurons. Finally, I show that data from the newly released fly connectome reveal compensatory variability in the KCs that agrees with my models' predictions.

In Chapter 3, I address the second question of this thesis and show that, under some conditions, learning by depression can be more optimal than learning by potentiation. I show that if the fly's decision-making policy integrates the information from the MBONs in a divisive-normalisation-like manner (divisive normalisation is explained further in Chapter 3), then learning by depression leads to higher memory performance. I also suggest a biologically plausible implementation of this normalising decision policy using a winner-take-all (WTA) circuit model. I predict that, in a WTA circuit integrating the MBON outputs, the fly's memory performance will be higher under learning by depression than under potentiation if the noise in the MBON responses is multiplicative in nature, that is, if the trial-to-trial noise in the MBON responses grows with the MBONs' firing rates. The toy comparison below illustrates this intuition.
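To see why multiplicative noise favours depression, here is a toy numerical comparison (an illustrative sketch under the stated assumptions, not the thesis model): both plasticity rules create the same decision margin between an approach MBON and an avoidance MBON, but depression does so while lowering overall firing rates, and when the noise scales with the rate this also lowers the noise.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_correct(m_app, m_avoid, cv=0.2, trials=100_000):
    """P(avoiding a punished odor) under a divisive-normalisation-like policy.

    Noise is multiplicative: each MBON's trial-to-trial s.d. is cv * rate.
    """
    r_app = m_app * (1.0 + cv * rng.standard_normal(trials))      # approach MBON
    r_avoid = m_avoid * (1.0 + cv * rng.standard_normal(trials))  # avoidance MBON
    p_avoid = r_avoid / (r_app + r_avoid)    # normalised avoidance drive
    return np.mean(p_avoid > 0.5)            # WTA: avoid if it wins the comparison

base, dw = 1.0, 0.5   # baseline MBON rate and an equal-sized weight change
print("depression  :", p_correct(m_app=base - dw, m_avoid=base))
print("potentiation:", p_correct(m_app=base, m_avoid=base + dw))
# Both conditions have the same decision margin (0.5), but depression reaches
# it at lower firing rates, so the multiplicative noise is smaller and the
# probability of a correct avoidance is higher.
```

Under additive (rate-independent) noise the two conditions would perform identically here, which is why the prediction hinges on the noise being multiplicative.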