3 research outputs found
Reconstruction of Virtual Neural Circuits in an Insect Brain
The reconstruction of large-scale nervous systems represents a major scientific and engineering challenge in current neuroscience research that needs to be resolved in order to understand the emergent properties of such systems. We focus on insect nervous systems because they represent a good compromise between architectural simplicity and the ability to generate a rich behavioral repertoire. In insects, several sensory maps have been reconstructed so far. We provide an overview of this work, including our reconstruction of population activity in the primary olfactory network, the antennal lobe. Our reconstruction approach, which also provides functional connectivity data, will be refined and extended to allow the building of larger-scale neural circuits, up to entire insect brains, from sensory input to motor output.
A study of state-dependent multisensory integration in odor-source searching (匂い源探索における状態依存的な複数感覚統合に関する研究)
Degree type: Doctorate by coursework (課程博士). Examination committee: (Chief examiner) Professor 神崎 亮平, University of Tokyo; Professor 下山 勲, University of Tokyo; Professor 竹内 昌治, University of Tokyo; Project Lecturer 安藤 規泰, University of Tokyo; Lecturer 木下 充代, SOKENDAI (The Graduate University for Advanced Studies). University of Tokyo (東京大学)
Biomimetic models of visual navigation - active sensing for embodied intelligence
Insects have developed small-scale search behaviours to pursue navigation-relevant stimuli more effectively. These often resemble a variation of zig-zagging, steering periodically to the left and right and thereby increasing sampling. In this context we investigate the role of a homologous insect brain structure, the Lateral Accessory Lobe (LAL), which has been described as a pre-motor centre but has so far received limited attention. Following a synthesis of the literature on the LAL, we developed a steering framework which proposes that, with lateralised stimuli as input, the LAL can initiate zig-zagging behaviour if the input is weak, and therefore unreliable, and targeted steering behaviours if the input is strong, and therefore reliable.

Based on this framework we model a Spiking Neural Network (SNN), investigating a sensory-modulated Central Pattern Generator (CPG) as a possible neural mechanism enabling adaptive search behaviours. We explored the parameter space of the model to discover both the range of possible behaviours and which parameter combinations lead to the previously described behaviour. We found that no single parameter combination accounts for the majority of observed behaviours, indicating that the behaviour is not confined to a narrow region of parameter space, and that changing the computational noise levels does not lead to a breakdown of the behaviour. We conclude that this neural architecture robustly generates an adaptable zig-zagging behaviour. Additionally, we developed a more comprehensive network to explore the functions of known neuron types with regard to motor control.

To investigate how this steering framework might work for view-based navigation, we examined how lateralised sensory input can be used for snapshot navigation. We used a 3D reconstruction of a LiDAR-scanned field site ("Antworld") to generate realistic visual stimuli. Instead of using the entire panorama, we subdivided it into two fields of view for snapshot generation and the later image comparisons. The difference in image familiarity between the two sides was used to initiate a steering response towards the more familiar direction. We found that a larger field of view, alongside non-forward-facing memories, generated the most correct steering responses towards the snapshot direction. This demonstrates that the LAL-inspired steering framework can be functional for a complex sensorimotor task that had not previously been implicated in LAL functionality.

Finally, we modelled how bilateral sensory information and an SNN model of the LAL behave in a snapshot-navigation setup using Antworld. We compared the original snapshot-navigation model, which uses a panoramic field of view, with several combinations of the Core-Network and bilateral vision models: a bilateral view, a bilateral view with the SNN, a panoramic view with the SNN, and other standard movement behaviours. We confirmed the findings of preliminary work in an abstract setup, which had shown that a bilateral view combined with an SNN performs best at recovering and approaching navigation-relevant locations. Introducing models based on the steering framework into this visually complex environment also improved the performance of agents performing snapshot navigation.
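The weak-versus-strong steering rule at the heart of this framework can be illustrated with a short sketch. The Python snippet below is a minimal illustration under stated assumptions, not the thesis's implementation: the thesis realises the oscillation with a sensory-modulated CPG in a spiking network, which is replaced here by a plain sine wave, and the threshold, frequency, and gain values are arbitrary placeholders.

```python
import numpy as np

def lal_steering(left, right, t, threshold=0.2, zigzag_freq=0.5, gain=1.0):
    # `left`/`right`: lateralised sensory intensities in [0, 1]; `t`: time (s).
    # Returns a turning command in [-1, 1] (negative = left, positive = right).
    # All parameter values are illustrative assumptions, not thesis values.
    total = left + right
    if total < threshold:
        # Weak, unreliable input: fall back on internally generated
        # zig-zagging. A sine wave stands in for the spiking CPG here.
        return np.sin(2.0 * np.pi * zigzag_freq * t)
    # Strong, reliable input: targeted steering towards the stronger side.
    return gain * (right - left) / total

# Example: an agent tracking a stimulus that fades halfway through,
# switching from targeted steering to zig-zag search.
for step in range(10):
    t = step * 0.1
    left, right = (0.6, 0.3) if step < 5 else (0.02, 0.01)
    print(f"t={t:.1f}s turn={lal_steering(left, right, t):+.2f}")
```

Folding the two regimes into a single function reflects the framework's core claim that the same pre-motor structure can produce both search and targeted steering, switched only by input reliability.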
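Likewise, the bilateral familiarity comparison used for snapshot navigation can be sketched in a few lines. Here familiarity is approximated as a negative mean squared pixel difference between the current view and a stored snapshot; the abstract does not specify the thesis's familiarity measure, and the panorama layout, field-of-view width, and gain below are illustrative assumptions.

```python
import numpy as np

def familiarity(view, snapshot):
    # Negative mean squared pixel difference: values closer to zero mean
    # the current view is more similar to the stored snapshot.
    return -np.mean((view.astype(float) - snapshot.astype(float)) ** 2)

def bilateral_steering(panorama, snap_left, snap_right, fov=90, gain=0.01):
    # `panorama`: (height, 360) grayscale array, one column per degree,
    # with column 0 at the agent's heading (hypothetical layout). Each
    # lateral field of view covers `fov` degrees to one side.
    left_view = panorama[:, -fov:]    # the fov degrees left of heading
    right_view = panorama[:, :fov]    # the fov degrees right of heading
    # Subtract the two familiarities; a positive result means the right
    # field of view is more familiar, so the agent turns right.
    return gain * (familiarity(right_view, snap_right)
                   - familiarity(left_view, snap_left))

# Example with random imagery standing in for Antworld renderings.
rng = np.random.default_rng(0)
pano = rng.random((64, 360))
snap_l, snap_r = pano[:, -90:].copy(), pano[:, :90].copy()
# Prints 0.0 here, since both snapshots were taken at this exact pose.
print(bilateral_steering(pano, snap_l, snap_r))
```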