406 research outputs found

    On the path integration system of insects: there and back again

    Navigation is an essential capability of animate organisms and robots. Insects are of particular interest because they master a variety of navigation competencies, solving challenging problems with limited resources and thereby providing inspiration for robot navigation. Ants, bees, and other insects are able to return to their nest using a navigation strategy known as path integration. During path integration, the animal maintains a running estimate of the distance and direction to its nest as it travels. This estimate, known as the 'home vector', enables the animal to return to its nest. Path integration is also the technique sea navigators historically used to cross the open seas. To perform path integration, both sailors and insects need access to two pieces of information: their direction and their speed of motion over time. Neurons encoding heading and speed have been found to converge on a highly conserved region of the insect brain, the central complex. It is, therefore, believed that the central complex is key to the computations underlying path integration. However, several questions remain about the exact structure of the neuronal circuit that tracks the animal's heading, how it differs between insect species, and how speed and direction are integrated into a home vector and maintained in memory. In this thesis, I combined behavioural, anatomical, and physiological data with computational modelling and agent simulations to tackle these questions. Analysis of the internal compass circuit of two insect species with highly divergent ecologies, the fruit fly Drosophila melanogaster and the desert locust Schistocerca gregaria, revealed that despite 400 million years of evolutionary divergence, both species share a fundamentally common internal compass circuit that keeps track of the animal's heading.
However, subtle differences in neuronal morphology result in distinct circuit dynamics adapted to the ecology of each species, providing insights into how neural circuits evolved to accommodate species-specific behaviours. Fast-moving insects must update their home-vector memory continuously as they move, yet they can retain it for several hours. This conjunction of fast updating and long persistence does not map directly onto current short-, mid-, and long-term memory accounts. An extensive literature review revealed a lack of available memory models that could support the requirements of the home-vector memory. A comparison of existing behavioural data with the homing behaviour of simulated robot agents showed that the prevalent hypothesis, which posits that the neural substrate of the path integration memory is a bump attractor network, is contradicted by behavioural evidence. An investigation of the type of memory utilised during path integration revealed that cold-induced anaesthesia disrupts the ability of ants to return to their nest but does not eliminate their ability to move in the correct homing direction. Using computational modelling and simulated agents, I argue that the best explanation for this phenomenon is not two separate memories differently affected by temperature but a shared memory that encodes both direction and distance. The results presented in this thesis shed further light on the labyrinth that researchers of animal navigation have been exploring in their attempts to unravel a few more rounds of Ariadne's thread back to its origin. The findings provide valuable insights into the path integration system of insects and inspiration for future memory research, for advancing path integration techniques in robotics, and for developing novel neuromorphic solutions to computational problems.
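The home-vector computation described in the abstract can be sketched in a few lines. This is an illustrative simplification (continuous 2-D vector summation from heading and speed samples), not the neural model developed in the thesis; the function name and example path are hypothetical.

```python
import numpy as np

def integrate_path(headings, speeds, dt=1.0):
    """Accumulate a home vector from sampled headings (radians) and speeds.

    The outbound displacement is summed step by step; the home vector is its
    negative, pointing from the agent back to the nest.
    """
    dx = np.sum(np.cos(headings) * speeds * dt)
    dy = np.sum(np.sin(headings) * speeds * dt)
    home = np.array([-dx, -dy])
    distance = np.hypot(home[0], home[1])
    direction = np.arctan2(home[1], home[0])
    return home, distance, direction

# Example outbound trip: 3 unit steps east, then 4 unit steps north.
headings = np.array([0.0, 0.0, 0.0] + [np.pi / 2] * 4)
speeds = np.ones(7)
home, dist, ang = integrate_path(headings, speeds)
# The nest now lies 5 units away, to the south-west of the agent.
```

Real insects are thought to maintain such an estimate in neural activity rather than Cartesian coordinates, which is precisely where the memory questions raised in the thesis arise.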

    Discovery and characterization of a specific inhibitor of serine-threonine kinase cyclin-dependent kinase-like 5 (CDKL5) demonstrates role in hippocampal CA1 physiology

    Pathological loss-of-function mutations in cyclin-dependent kinase-like 5 (CDKL5) cause CDKL5 deficiency disorder (CDD), a rare and severe neurodevelopmental disorder associated with severe and medically refractory early-life epilepsy and with motor, cognitive, visual, and autonomic disturbances in the absence of any structural brain pathology. Analysis of genetic variants in CDD has indicated that CDKL5 kinase function is central to disease pathology. CDKL5 encodes a serine-threonine kinase with significant homology to GSK3β, which has also been linked to synaptic function. Further, Cdkl5 knock-out rodents have increased GSK3β activity and often increased long-term potentiation (LTP). Thus, development of a specific CDKL5 inhibitor must carefully exclude cross-talk with GSK3β activity. We synthesized and characterized specific, high-affinity inhibitors of CDKL5 that show no detectable activity against GSK3β. These compounds are highly water-soluble, but their blood-brain barrier penetration is low. In rat hippocampal brain slices, acute inhibition of CDKL5 selectively reduces the postsynaptic function of AMPA-type glutamate receptors in a dose-dependent manner, and it reduces hippocampal LTP. These studies provide new tools and insights into the role of CDKL5 as a newly appreciated key kinase necessary for synaptic plasticity. Comparisons with rodent knock-out studies suggest that compensatory changes have limited the understanding of the roles of CDKL5 in synaptic physiology, plasticity, and human neuropathology.

    Emergent mechanisms for long timescales depend on training curriculum and affect performance in memory tasks

    Recurrent neural networks (RNNs) in the brain and in silico excel at solving tasks with intricate temporal dependencies. Long timescales required for solving such tasks can arise from properties of individual neurons (single-neuron timescale τ, e.g., the membrane time constant in biological neurons) or from recurrent interactions among them (network-mediated timescale). However, the contribution of each mechanism to optimally solving memory-dependent tasks remains poorly understood. Here, we train RNNs to solve N-parity and N-delayed match-to-sample tasks with increasing memory requirements controlled by N, simultaneously optimizing the recurrent weights and the τs. We find that for both tasks RNNs develop longer timescales with increasing N but, depending on the learning objective, they use different mechanisms. Two distinct curricula define the learning objectives: sequential learning of a single N (single-head) or simultaneous learning of multiple Ns (multi-head). Single-head networks increase their τ with N and are able to solve tasks for large N, but they suffer from catastrophic forgetting. In contrast, multi-head networks, which are explicitly required to hold multiple concurrent memories, keep τ constant and develop longer timescales through recurrent connectivity. Moreover, we show that the multi-head curriculum increases training speed and network stability against ablations and perturbations, and allows RNNs to generalize better to tasks beyond their training regime. This curriculum also significantly improves the training of GRUs and LSTMs on large-N tasks. Our results suggest that adapting timescales to task requirements via recurrent interactions allows learning of more complex objectives and improves RNN performance.
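The single-neuron timescale mechanism can be illustrated with the common leaky RNN discretisation, in which each unit's τ sets its forgetting rate via α = dt/τ. This is a generic sketch with arbitrarily chosen sizes and values, not the trained networks from the study (where the τs are optimized jointly with the weights):

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_in = 16, 1
dt = 1.0

# Per-unit single-neuron timescales tau; here fixed, in the study trainable.
tau = np.full(n_units, 5.0)
W_rec = rng.normal(scale=1.0 / np.sqrt(n_units), size=(n_units, n_units))
W_in = rng.normal(size=(n_units, n_in))

def step(h, x):
    """Leaky RNN update: alpha = dt / tau sets how quickly each unit forgets.

    Small alpha (large tau) gives a long single-neuron timescale; network-
    mediated timescales arise instead from the recurrent weights W_rec.
    """
    alpha = dt / tau
    return (1.0 - alpha) * h + alpha * np.tanh(W_rec @ h + W_in @ x)

h = np.zeros(n_units)
for _ in range(50):
    h = step(h, np.array([1.0]))   # drive with a constant input bit
```

The study's two mechanisms map directly onto this update: single-head networks lengthen timescales by growing tau, whereas multi-head networks keep tau fixed and shape W_rec instead.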

    Microcircuit structures of inhibitory connectivity in the rat parahippocampal gyrus

    Local microcircuits in the brain mediate complex computations through the interplay of excitatory and inhibitory neurons. Fast-spiking parvalbumin-positive basket cells are generally assumed to mediate a non-selective "blanket of inhibition". This view has recently been challenged by reports of structured inhibitory connectivity, but its precise organization and relevance remain unresolved. In this thesis, I present the results of our studies examining the properties of fast-spiking parvalbumin basket cells in the superficial medial entorhinal cortex and presubiculum of the rat. Characterizing these interneurons in the dorsal and ventral medial entorhinal cortex, we found that, although morphologically and physiologically similar, basket cells are more likely to be connected to principal cells in the dorsal than in the ventral region. This difference correlates with changes in grid-cell physiology. Our findings further indicated that inhibitory connectivity is essential for local computation in the presubiculum. Interestingly, local inhibition in this region is markedly sparser than in the medial entorhinal cortex, suggesting a different microcircuit organizational principle. To study this difference, we analyzed the properties of fast-spiking basket cells in the presubiculum and found a characteristic, spatially organized connectivity principle: principal cells are inhibited through a predominant reciprocal motif, facilitated by the polarized axons of presubicular fast-spiking basket cells. Our network simulations showed that such polarized inhibition can improve the head-direction tuning of principal cells. Overall, our results show that inhibitory connectivity is organized differently in the medial entorhinal cortex and the presubiculum, likely reflecting the functional requirements of each local microcircuit.
    In conclusion, I hypothesize that a deviation from a "blanket of inhibition" towards region-specific, tailored inhibition can provide solutions to distinct computational problems.

    Water and Brain Function: Effects of Hydration Status on Neurostimulation and Neurorecording

    Introduction: Transcranial magnetic stimulation (TMS) and electroencephalography (EEG) are used to study normal neurophysiology and to diagnose and treat clinical neuropsychiatric conditions, but they can produce variable results or fail. Both techniques depend on electrical volume conduction, and thus on brain volumes. Hydration status can affect brain volumes and functions (including cognition), but its effects on these techniques are unknown. We aimed to characterize the effects of hydration on TMS, EEG, and cognitive tasks. Methods: EEG and electromyography (EMG) were recorded during single-pulse TMS, paired-pulse TMS, and cognitive tasks from 32 human participants on dehydrated (12-hour fast/thirst) and rehydrated (1 L of water ingested orally within 1 hour) testing days. Hydration status was confirmed with urinalysis. Motor evoked potential (MEP), event-related potential (ERP), and network analyses were performed to examine responses at the levels of muscle, brain, and higher-order functioning. Results: Rehydration decreased the motor threshold (increased excitability) and shifted the motor hotspot. Significant effects on TMS measures occurred even though stimulation was re-localized and re-dosed to these new parameters. Rehydration increased short-interval intracortical facilitation (SICF) of the MEP, the magnitudes of specific TMS-evoked potential (TEP) peaks in inhibitory protocols, and specific ERP peak magnitudes and reaction time during the cognitive task. Rehydration amplified nodal inhibition around the stimulation site in inhibitory paired-pulse networks and strengthened nodes outside the stimulation site in excitatory and cortical silent period (CSP) networks. Cognitive performance was not improved by rehydration, although similar performance was achieved with generally weaker network activity. Discussion: The results highlight differences between mild dehydration and rehydration. The rehydrated brain was easier to stimulate with TMS and produced larger responses to external and internal stimuli. This is explainable by the known physiology of body water dynamics, which encompasses macroscopic and microscopic volume changes.
Rehydration can shift 3D cortical positioning, decrease the scalp-cortex distance (bringing the cortex closer to the stimulator and recording electrodes), and cause astrocyte-swelling-induced glutamate release. Conclusions: Previously unaccounted-for variables such as osmolarity and astrocyte and brain volumes likely affect neurostimulation and neurorecording. Controlling for and carefully manipulating hydration may reduce variability and improve therapeutic outcomes of neurostimulation. Dehydration is common and produces less excitable circuits. Rehydration should offer a mechanism to macroscopically bring target cortical areas closer to an externally applied neurostimulation device, recruiting greater volumes of tissue, and to microscopically favor excitability in the stimulated circuits.

    Multi-modal and multi-model interrogation of large-scale functional brain networks

    Existing whole-brain models are generally tailored to a particular data modality (e.g., fMRI or MEG/EEG). We propose that, despite the differing aspects of neural activity each modality captures, they originate from shared network dynamics. Building on the universal principles of self-organising delay-coupled nonlinear systems, we aim to link distinct features of brain activity, captured across modalities, to the dynamics unfolding on a macroscopic structural connectome. To jointly predict connectivity, spatiotemporal, and transient features of distinct signal modalities, we consider two large-scale models, the Stuart-Landau (SL) and Wilson-Cowan (WC) models, which generate short-lived 40 Hz oscillations with varying levels of realism. To this end, we measure features of functional connectivity and metastable oscillatory modes (MOMs) in fMRI and MEG signals and compare them against simulated data. We show that both models can represent MEG functional connectivity (FC) and functional connectivity dynamics (FCD) and can generate MOMs to a comparable degree. This is achieved by adjusting the global coupling and mean conduction time delay and, in the WC model, through the inclusion of a balance between excitation and inhibition. For both models, the omission of delays dramatically decreased performance. For fMRI, the SL model performed worse for FCD and MOMs, highlighting the importance of balanced dynamics for the emergence of spatiotemporal and transient patterns of ultra-slow dynamics. Notably, optimal working points varied across modalities, and no model was able to achieve a correlation with empirical FC higher than 0.4 across modalities for the same set of parameters. Nonetheless, both displayed the emergence of FC patterns that extended beyond the constraints of the anatomical structure.
Finally, we show that both models can generate MOMs with empirical-like properties such as size (the number of brain regions engaging in a mode) and duration (the continuous time interval during which a mode appears). Our results demonstrate the emergence of static and dynamic properties of neural activity at different timescales from networks of delay-coupled oscillators at 40 Hz. Given the higher dependence of simulated FC on the underlying structural connectivity, we suggest that mesoscale heterogeneities in neural circuitry may be critical for the emergence of parallel cross-modal functional networks and should be accounted for in future modelling endeavours.
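The model class in question can be conveyed by a minimal Euler simulation of delay-coupled Stuart-Landau oscillators on a toy connectome. All parameter values here (node count, coupling, delay, damping) are arbitrary illustrations, not the fitted working points reported in the work, and the noise term that drives transient oscillations in the full models is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5                            # toy "connectome" of 5 regions
omega = 2.0 * np.pi * 40.0       # intrinsic 40 Hz angular frequency
a = -5.0                         # a < 0: damped (short-lived) oscillations
K = 50.0                         # global coupling strength
dt = 1e-4                        # integration step (s)
delay_steps = int(0.005 / dt)    # 5 ms mean conduction delay

C = rng.random((n, n))
np.fill_diagonal(C, 0.0)
C /= C.sum(axis=1, keepdims=True)   # row-normalised structural weights

steps = 2000
z = np.zeros((steps + delay_steps, n), dtype=complex)
z[:delay_steps] = 0.1               # constant initial history

# Euler integration of dz/dt = (a + i*omega) z - |z|^2 z + K (C z(t - d) - z)
for t in range(delay_steps, steps + delay_steps - 1):
    coupling = K * (C @ z[t - delay_steps] - z[t])
    dz = (a + 1j * omega) * z[t] - np.abs(z[t]) ** 2 * z[t] + coupling
    z[t + 1] = z[t] + dt * dz

x = z[delay_steps:].real            # simulated signal per region
```

The abstract's key manipulations correspond to sweeping K and the delay: setting delay_steps to zero removes the conduction delays whose omission degraded model performance.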

    Acetylcholine in the Interpositus Cerebellar Nuclei

    The interpositus cerebellar nuclei are important for motor control and coordinate movements across multiple muscle groups, including the limbs, face, and neck. The interpositus nuclei receive dense cholinergic inputs from the pedunculopontine tegmental nucleus (PPN), one of the main sources of cerebellar acetylcholine, but the role of this cholinergic neuromodulatory input is not completely understood. The work presented in this thesis found that activating cholinergic receptors in vitro had mixed effects on the electrophysiological properties of cells in the interpositus cerebellar nuclei. The intrinsic membrane, action potential, and firing properties of cells were recorded at baseline and after application of the cholinergic agonist carbachol. Post hoc, principal component analysis and k-means cluster analysis were employed to group the cells into two putative groups based on their different baseline electrophysiological features; these were likely two types of projection neurons of the interpositus nuclei. The aim of the other work presented in this thesis was to study the role of cholinergic signalling in motor control. These experiments found that pharmacologically inhibiting muscarinic receptor signalling in vivo using antagonists impaired performance on the beam walking task, in which animals were fully trained to make the skilled paw placements required to cross a narrow beam. Conversely, blocking signalling via nicotinic receptors improved beam walking performance. Similarly, chemogenetic inhibition of the PPN projection to the interpositus cerebellar nuclei also improved motor performance on the beam. Finally, a pilot study using transgenic rats (ChAT-Cre, to selectively target cholinergic projections) found that chemogenetic inhibition of the cholinergic projection from the PPN to the cerebellar nuclei improved motor performance; however, chemogenetic cholinergic inhibition at the start of beam walking training impaired motor learning. In conclusion, this thesis has presented evidence supporting a role of cholinergic signalling in the cerebellar nuclei in modulating motor performance, motor learning, and consummatory behaviours.
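The cell-grouping analysis described above (principal component analysis followed by k-means clustering on baseline electrophysiological features) can be sketched as follows. The feature names and synthetic values are hypothetical stand-ins for the recorded data, chosen only to show the pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Hypothetical baseline features per cell: input resistance (MOhm),
# spike half-width (ms), firing rate (Hz); values are synthetic.
type_a = rng.normal([150.0, 0.3, 40.0], [20.0, 0.05, 8.0], size=(30, 3))
type_b = rng.normal([300.0, 0.8, 10.0], [40.0, 0.10, 3.0], size=(30, 3))
features = np.vstack([type_a, type_b])

# Standardize, project onto the leading principal components, then cluster.
scaled = StandardScaler().fit_transform(features)
scores = PCA(n_components=2).fit_transform(scaled)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
# Two putative cell classes emerge from the baseline features alone.
```

Standardizing before PCA matters here because the features live on very different scales; without it, the component with the largest variance (input resistance) would dominate the projection.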

    Advancing Methods and Applicability of Simulation-Based Inference in Neuroscience

    The use of computer simulations as models of real-world phenomena plays an increasingly important role in science and engineering. Such models allow us to build hypotheses about the processes underlying a phenomenon and to test them, e.g., by simulating synthetic data from the model and comparing it to observed data. A key challenge in this approach is to find those model configurations that reproduce the observed data. Bayesian statistical inference provides a principled way to address this challenge, allowing us to infer multiple suitable model configurations and quantify uncertainty. However, classical Bayesian inference methods typically require access to the model's likelihood function and thus cannot be applied to many commonly used scientific simulators. With the increase in available computational resources and the advent of neural network-based machine learning methods, an alternative approach has recently emerged: simulation-based inference (SBI). SBI enables Bayesian parameter inference but only requires access to simulations from the model. Several SBI methods have been developed and applied to individual inference problems in various fields, including computational neuroscience. Yet, many problems in these fields remain beyond the reach of current SBI methods. In addition, while there are many new SBI methods, there are no general guidelines for applying them to new inference problems, hindering their adoption by practitioners. In this thesis, I want to address these challenges by (a) advancing SBI methods for two particular problems in computational neuroscience and (b) improving the general applicability of SBI methods through accessible guidelines and software tools. In my first project, I focus on the use of SBI in cognitive neuroscience by developing an SBI method designed explicitly for computational models used in decision-making research. 
By building on recent advances in probabilistic machine learning, this new method is substantially more efficient than previous methods, allowing researchers to perform SBI on a broader range of decision-making models. In a second project, I turn to computational connectomics and show how SBI can help to discover connectivity rules underlying the complex connectivity patterns between neurons in the sensory cortex of the rat. As a third contribution, I help establish a software package that facilitates access to current SBI methods, and I present an overview of the workflow required to apply SBI to new inference problems. Taken together, this thesis enriches the arsenal of SBI methods available for models of decision-making, demonstrates the potential of SBI for applications in computational connectomics, and bridges the gap between SBI method development and applicability, fostering scientific discovery in computational neuroscience and beyond.
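The core likelihood-free idea behind SBI can be conveyed with the simplest such method, rejection ABC: draw parameters from the prior, simulate, and keep the draws whose simulations land near the observation. The thesis develops far more efficient neural-network-based estimators, so this toy Gaussian example is only a conceptual sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulator(theta):
    """Black-box simulator: mean of 10 noisy Gaussian observations around
    theta. We pretend its likelihood is unavailable and only draw samples."""
    return rng.normal(theta[:, None], 1.0, size=(theta.size, 10)).mean(axis=1)

x_o = 1.5                                      # the "observed" data summary
theta = rng.uniform(-5.0, 5.0, size=100_000)   # draws from a uniform prior
x = simulator(theta)                           # one simulation per draw

# Rejection ABC: keep parameters whose simulations land near the observation.
accepted = theta[np.abs(x - x_o) < 0.1]
posterior_mean = accepted.mean()
# The accepted draws approximate the posterior p(theta | x_o).
```

Rejection ABC wastes most simulations, which is exactly the inefficiency that the neural density estimation methods discussed in the thesis are designed to overcome.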

    Distinctive properties of biological neural networks and recent advances in bottom-up approaches toward a better biologically plausible neural network

    Although it may appear infeasible and impractical, building artificial intelligence (AI) using a bottom-up approach based on an understanding of neuroscience is a straightforward strategy. The lack of a generalized governing principle for biological neural networks (BNNs) forces us to address this problem by converting piecemeal information on the diverse features of neurons, synapses, and neural circuits into AI. In this review, we describe recent attempts to build biologically plausible neural networks, either by following neuroscientifically similar strategies of neural network optimization or by implanting the outcomes of such optimization, such as the properties of single computational units and the characteristics of the network architecture. In addition, we propose a formalism for the relationship between the set of objectives that neural networks attempt to achieve and the neural network classes categorized by how closely their architectural features resemble those of BNNs. This formalism is expected to define the potential roles of top-down and bottom-up approaches in building a biologically plausible neural network and to offer a map that helps navigate the gap between neuroscience and AI engineering.

    Computational roles of cortico-cerebellar loops in temporal credit assignment

    Animal survival depends on behavioural adaptation to the environment, which is thought to be enabled by plasticity in neural circuits. However, the laws that govern neural plasticity are unclear. From a functional perspective, it is desirable to correctly identify, or assign "credit" to, the neurons or synapses responsible for a task decision and the subsequent performance. In biological circuits, the intricate, non-linear interactions within neural networks make appropriately assigning credit to neurons highly challenging. In the temporal domain, this is known as the temporal credit assignment (TCA) problem. This thesis considers the role of the cerebellum, a powerful subcortical structure with strong error-guided plasticity rules, as a solution to TCA in the brain. In particular, I use artificial neural networks as a means to model and understand the mechanisms by which the cerebellum can support learning in the neocortex via the cortico-cerebellar loop. I introduce two distinct but compatible computational models of cortico-cerebellar interaction. The first model asserts that the cerebellum provides the neocortex with predictive feedback, modelled in the form of error gradients with respect to its current activity. This predictive feedback enables better credit assignment in the neocortex and effectively removes the lock between feedforward and feedback processing in cortical networks. The model captures observed long-term deficits associated with cerebellar dysfunction, namely cerebellar dysmetria, in both the motor and non-motor domains. Predictions are also made with respect to the alignment of cortico-cerebellar activity during learning and the optimal task conditions for cerebellar contribution. The second model also addresses the role of the cerebellum in learning, but considers its ability to instantaneously drive the cortex towards desired task dynamics.
Unlike the first model, this model does not assume that any local cortical plasticity takes place; task-directed learning can effectively be outsourced to the cerebellum. The model captures recent optogenetic studies in mice which show the cerebellum to be a necessary component for the maintenance of desired cortical dynamics and the ensuing behaviour. I also show that this driving input can eventually be used as a teaching signal for the cortical circuit, thereby conceptually unifying the two models. Overall, this thesis explores the computational role of the cerebellum and cortico-cerebellar loops in task acquisition and maintenance in the brain.
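The first model's idea, a cerebellar module that learns to predict the cortical error gradient and supplies it as immediate feedback, can be caricatured in a scalar toy problem in the spirit of synthetic gradients. Everything here (the linear "cortex", the linear gradient predictor, the learning rates) is a hypothetical minimal sketch, not the thesis's network model:

```python
import numpy as np

rng = np.random.default_rng(4)

# A scalar "cortical" pathway y = w * x must learn the task target y* = x,
# i.e. the ideal weight is w = 1.
w = 0.0
# A "cerebellar" module learns to predict the error gradient dL/dy from the
# cortical output y, standing in for delayed feedback.
a, b = 0.0, 0.0                      # linear predictor: g_hat = a * y + b

lr_w, lr_c = 0.01, 0.1               # cortex learns slowly, cerebellum quickly
for _ in range(5000):
    x = rng.uniform(0.5, 1.5)
    y = w * x
    g_true = y - x                   # true gradient of 0.5 * (y - x)**2 w.r.t. y
    g_hat = a * y + b                # cerebellar prediction of that gradient
    w -= lr_w * g_hat * x            # cortex updates immediately from the prediction
    err = g_hat - g_true             # the cerebellum later sees the true gradient...
    a -= lr_c * err * y              # ...and improves its prediction
    b -= lr_c * err

# With an accurate gradient predictor, w converges towards the task solution w = 1.
```

The point of the sketch is the decoupling: the cortical update never waits for the true gradient, mirroring how predictive cerebellar feedback could remove the lock between feedforward and feedback processing.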
