4 research outputs found

    Parallel scalable simulations of biological neural networks using TensorFlow: A beginner's guide

    Biological neural networks are often modeled as systems of coupled, nonlinear, ordinary or partial differential equations. The number of differential equations used to model a network increases with the size of the network and the level of detail used to model individual neurons and synapses. As one scales up the size of the simulation, it becomes essential to utilize powerful computing platforms. While many tools exist that solve these equations numerically, they are often platform-specific. Further, there is a high barrier to entry for developing flexible, platform-independent, general-purpose code that supports hardware acceleration on modern computing architectures such as GPUs/TPUs and distributed platforms. TensorFlow is a Python-based open-source package designed for machine learning algorithms. However, it is also a scalable environment for a variety of computations, including solving differential equations using iterative algorithms such as Runge-Kutta methods. In this article and the accompanying tutorials, we present a simple exposition of numerical methods to solve ordinary differential equations using Python and TensorFlow. The tutorials consist of a series of Python notebooks that, over the course of five sessions, lead novice programmers from writing programs that integrate simple one-dimensional ordinary differential equations using Python to solving a large system (thousands of differential equations) of coupled conductance-based neurons using a highly parallelized and scalable framework. Embedded in the tutorial is a physiologically realistic implementation of a network in the insect olfactory system. This system, consisting of multiple neuron and synapse types, can serve as a template to simulate other networks.
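    A minimal sketch of the kind of solver the tutorials build toward, assuming TensorFlow 2.x; the function and variable names here (rk4_step, dVdt, E, tau) are illustrative choices, not taken from the tutorial notebooks.

        import tensorflow as tf

        def rk4_step(f, y, t, dt):
            """Advance the state y from t to t + dt with one classical Runge-Kutta (RK4) step."""
            k1 = f(t, y)
            k2 = f(t + dt / 2.0, y + dt * k1 / 2.0)
            k3 = f(t + dt / 2.0, y + dt * k2 / 2.0)
            k4 = f(t + dt, y + dt * k3)
            return y + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

        # Illustrative right-hand side: 1000 uncoupled leaky integrators dV/dt = -(V - E)/tau,
        # evaluated in parallel as a single tensor operation.
        E, tau = -65.0, 10.0

        def dVdt(t, V):
            return -(V - E) / tau

        V = tf.zeros([1000], dtype=tf.float32)   # initial membrane potentials (mV)
        dt = 0.01                                # time step (ms)
        for step in range(1000):
            V = rk4_step(dVdt, V, step * dt, dt)

    Because the state is a single tensor, the same loop scales from one equation to thousands without code changes, and the tensor operations can be dispatched to a GPU or TPU.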

    Deciphering value learning rules underlying in fruit-flies using a model-driven approach

    Navigating a dynamic and uncertain world requires an animal to make choices. Animals can therefore benefit from adapting their behavior to past experiences, but the exact nature of the computations performed and their neural implementations are currently unclear. Extensive prior knowledge about fruit flies (D. melanogaster) provides a unique opportunity to explore the mechanistic basis of cognitive factors underlying decision-making. However, to do this, we require a large number of choice trajectories from single flies. We therefore expand and calibrate a Y-maze olfactory choice assay to run 16 flies in parallel, allowing us to build and test better models using behavioral perturbation methods such as choice engineering. We take two complementary approaches to explore the learning rules that the fly may use: a model-fitting approach and a novel de-novo learning-rule synthesis approach. First, we fit increasingly complex reinforcement learning rules to explain behavior. We find that accounting for perseverance/habits better explains and predicts individual choice outcomes. Next, we develop a flexible framework using small neural networks to infer learning rules and predict choices. We find that small neural networks with fewer than 5 neurons, trained to estimate odor values, can predict decisions across flies more accurately than the best reinforcement learning models. We analyze the behavior of these networks to reveal underlying dynamics that again point to perseverance behavior. We successfully reproduce most of our observations across different behavioral setups. Our results suggest that habit-forming tendencies beyond naive reward-seeking may influence flies’ choices.
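    A rough, hypothetical sketch of a value-learning model with a perseverance term, of the general kind compared in this work; the update rule, the parameters (alpha, beta, kappa), and the simulate_choices helper are assumptions made for illustration, not the fitted models from the study.

        import numpy as np

        def simulate_choices(reward_fn, alpha=0.2, beta=3.0, kappa=1.0, n_trials=200, seed=0):
            """Simulate two-alternative choices driven by learned values plus a habit trace."""
            rng = np.random.default_rng(seed)
            Q = np.zeros(2)      # learned odor values
            H = np.zeros(2)      # perseverance / habit trace
            choices = []
            for t in range(n_trials):
                logits = beta * Q + kappa * H
                p_right = 1.0 / (1.0 + np.exp(logits[0] - logits[1]))
                c = int(rng.random() < p_right)      # softmax choice between the two odors
                r = reward_fn(t, c)                  # environment supplies the outcome
                Q[c] += alpha * (r - Q[c])           # Rescorla-Wagner-style value update
                H = 0.9 * H
                H[c] += 1.0                          # recent choices accumulate into a habit
                choices.append(c)
            return choices

        # Example environment: option 1 rewarded 70% of the time, option 0 rewarded 30%.
        rng_env = np.random.default_rng(1)
        choices = simulate_choices(lambda t, c: float(rng_env.random() < (0.7 if c == 1 else 0.3)))

    Setting kappa to zero recovers a purely value-driven learner, so the habit trace is the ingredient that lets models of this form capture perseverative choice sequences.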

    Invariant neural representations of fluctuating odor inputs

    Steady odor streams are typically encoded as robust spatiotemporal spike trains by olfactory networks. This suggests a one-to-one mapping between the stimulus (an odor mixed in a steady stream of air) and its representation (a spatiotemporal pattern of spikes in a population of neurons) in the brain. Such a one-to-one mapping between an odor and a spatiotemporal pattern is unlikely to be accurate since natural odor stimuli change unpredictably over time. Odors arrive riding upon chaotically pulsed plumes of air and show unpredictable variations in concentration and in the composition of odorant molecules. These temporal changes often vary over time scales that are similar to the time scales of neural events thought to play a role in odor recognition. In the absence of such temporal variations, animals are known to inject intermittency while sampling the odor, suggesting that intermittent inputs might be a ‘feature’, not a ‘bug’. Here, we attempt to find the neural invariants of stable olfactory percepts using a computational model of the locust antennal lobe, the insect equivalent of the olfactory bulb in mammals. We show that when time-varying odor inputs intermittently perturb subsets of neurons in the antennal lobe network, the activity of the network reverberates in a manner that depends on both the nature of the inputs it receives and the structure of the neuronal sub-network that these inputs stimulate. We demonstrate that it is possible to decipher the structure of the perturbed sub-network by examining transient synchrony in the activity of the neurons. The ability to reconstruct the sub-network structure is vastly improved when odor inputs arrive or are sampled in an intermittent manner. Thus, the structure of the stimulated sub-network itself serves as a unique invariant code that represents the odor. Recent studies have shown that the response of individual projection neurons in the antennal lobe to a particular odor can be approximated using an odor-specific response kernel convolved with the temporal profile of the odor input. The parameters defining this kernel remain invariant to temporal changes in the input profile. Our simulations show that this invariance is inherited from the network structure.
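    A small illustration, under assumed functional forms, of the kernel description mentioned above: a fixed odor-specific response kernel convolved with a fluctuating input profile. The difference-of-exponentials kernel and its time constants are placeholders, not parameters from the locust antennal lobe model.

        import numpy as np

        dt = 1e-3                                    # 1 ms time step
        t = np.arange(0.0, 2.0, dt)                  # 2 s of simulated input

        # Intermittent odor input: brief pulses of varying concentration.
        odor = np.zeros_like(t)
        for onset, amp in [(0.2, 1.0), (0.6, 0.4), (1.1, 0.8), (1.5, 0.6)]:
            odor[(t >= onset) & (t < onset + 0.05)] = amp

        # Odor-specific response kernel (a difference of exponentials, chosen for
        # illustration). Its parameters stay fixed even as the input profile changes,
        # which is the invariance discussed above.
        tau_rise, tau_decay = 0.02, 0.15
        k_t = np.arange(0.0, 1.0, dt)
        kernel = np.exp(-k_t / tau_decay) - np.exp(-k_t / tau_rise)

        # Approximate projection-neuron response: kernel convolved with the input profile.
        rate = np.convolve(odor, kernel, mode="full")[: len(t)] * dt

    Changing the pulse onsets or amplitudes changes the predicted response but not the kernel itself, which is the sense in which the kernel parameters act as an invariant of the odor.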

    Enumerating and Discovering Highly Discriminative Tasks for Probing Diverse Foraging Strategies

    Foraging, an indispensable behavior for survival, consists of long sequences of searches, encounters, and decisions. To forage successfully, animals are thought to leverage the statistical regularities and dynamical rules of their habitats to maximize long-term utility. Since animals encounter different habitats that demand different decision rules, it is important to infer the behavioral strategies that are most relevant to a particular species. One approach is to observe an animal in its natural habitat, but this comes with the technical and conceptual challenges of recording and manipulating behavior in naturalistic settings. To circumvent this without compromising the richness of environmental features that evoke an animal’s foraging strategy, we sought to manipulate a vast set of such features in a controlled lab setting. To this end, we designed a two-choice foraging task with complex contingencies in reward delivery (controlled by up to 13 past decisions). By enumerating different reward-delivery rules, we simulated half a million different task conditions, each resembling a slightly different environment and varying in its putative relevance to the animal. As a proof of principle, we tested these tasks on fruit flies. We selected tasks that best discriminate between two classes of strategies: one that requires “one-shot memory” (like Boolean logic), and one that does not. Even though flies’ decisions are highly stochastic from trial to trial, we identify flies that use one-shot memory to perform well, a finding that cannot be explained by rare sampling events from any “memoryless” strategy. This finding suggests that flies can exploit short-timescale decision rules in a manner that differs from longer-timescale adaptive behaviors mediated by synaptic plasticity. Our framework is agnostic to specific model systems and can flexibly perform inferences in different hypothesized strategy spaces. This will allow us to compare foraging strategies across species and to study their dependence on the underlying environmental structure.
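    A rough sketch of the enumeration idea with assumed details: reward rules are encoded as Boolean lookup tables over the last k choices (k = 3 here to keep the example small; the study used contingencies on up to 13 past decisions), and candidate tasks are scored by how strongly they separate a memoryless policy from one that uses a single trial of memory. The helper names and both example policies are illustrative, not those used in the study.

        import itertools
        import numpy as np

        k = 3
        histories = list(itertools.product([0, 1], repeat=k))    # all length-k choice histories

        def enumerate_rules():
            """Yield reward rules: each maps a choice history to the rewarded side."""
            for bits in itertools.product([0, 1], repeat=len(histories)):
                yield dict(zip(histories, bits))

        def reward_rate(rule, policy, n_trials=2000, seed=1):
            """Average reward earned by a choice policy(history, rng) under a given rule."""
            rng = np.random.default_rng(seed)
            hist = (0,) * k
            total = 0
            for _ in range(n_trials):
                c = policy(hist, rng)
                total += int(c == rule[hist])
                hist = hist[1:] + (c,)
            return total / n_trials

        memoryless = lambda hist, rng: int(rng.random() < 0.5)   # ignores history entirely
        one_shot = lambda hist, rng: 1 - hist[-1]                # uses only the previous choice

        # Rank a subset of candidate tasks by how strongly they separate the two strategies.
        gaps = [(reward_rate(r, one_shot) - reward_rate(r, memoryless), r)
                for r in itertools.islice(enumerate_rules(), 64)]
        best_gap, best_rule = max(gaps, key=lambda x: x[0])

    Tasks with a large performance gap between the two policy classes are the discriminative ones: observing an animal perform well on such a task is evidence against a purely memoryless strategy.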