
    An integrated modelling framework for neural circuits with multiple neuromodulators

    Neuromodulators are endogenous neurochemicals that regulate the biophysical and biochemical processes controlling brain function and behaviour, and they are often the targets of neuropharmacological drugs. Neuromodulator effects are generally complex, partly owing to broad innervation, co-release of neuromodulators, complex intra- and extrasynaptic mechanisms, the existence of multiple receptor subtypes and high interconnectivity within the brain. In this work, we propose an efficient yet sufficiently realistic computational neural modelling framework to study some of these complex behaviours. Specifically, we propose a novel dynamical neural circuit model that integrates effective neuromodulator-induced currents based on various experimental data (e.g. electrophysiology, neuropharmacology and voltammetry). The model can incorporate multiple interacting brain regions, including neuromodulator sources, simulates efficiently, and is easily extendable to large-scale brain models, e.g. for neuroimaging purposes. As an example, we model a network of mutually interacting neural populations in the lateral hypothalamus, dorsal raphe nucleus and locus coeruleus, which are major sources of the neuromodulators orexin/hypocretin, serotonin and norepinephrine/noradrenaline, respectively, and which play significant roles in regulating many physiological functions. We demonstrate that such a model can predict the systemic effects of popular antidepressants (e.g. reuptake inhibitors), neuromodulator antagonists, or their combinations. Finally, we developed user-friendly graphical user interface software for model simulation and visualization, for use in both fundamental science and pharmacological studies.
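    To make the modelling idea concrete, the following is a minimal sketch of a firing-rate network in which slow, effective neuromodulator-induced currents couple three populations standing in for the lateral hypothalamus, dorsal raphe nucleus and locus coeruleus. The equations, time constants, transfer function and coupling matrix are illustrative assumptions, not the values or model form used in the paper.

```python
# Minimal sketch (assumed form, not the paper's model): fast firing-rate
# dynamics driven by slow neuromodulator-induced currents that track
# presynaptic activity. All parameter values below are made up.
import numpy as np

POPS = ["LH (orexin)", "DRN (serotonin)", "LC (norepinephrine)"]

def f(I, gain=0.5, thresh=1.0, r_max=50.0):
    """Sigmoidal current-to-rate transfer function (assumed shape)."""
    return r_max / (1.0 + np.exp(-gain * (I - thresh)))

# w[i, j]: strength of the neuromodulator-induced current onto population i
# driven by population j (signs and magnitudes chosen for illustration only).
w = np.array([[ 0.0, -0.4, -0.3],
              [ 0.6,  0.0,  0.2],
              [ 0.5,  0.3,  0.0]])

dt, T = 0.1, 2000.0            # time step and duration (ms)
tau_r, tau_m = 10.0, 500.0     # fast rate vs. slow neuromodulator dynamics
I_ext = np.array([2.0, 0.5, 0.5])   # constant external drive

r = np.zeros(3)                # population firing rates
I_mod = np.zeros((3, 3))       # neuromodulator-induced currents I_mod[i, j]

for _ in range(int(T / dt)):
    # Slow effective currents relax toward w[i, j] * r[j].
    I_mod += dt / tau_m * (-I_mod + w * r[None, :])
    # Fast rate dynamics driven by external plus summed modulatory currents.
    r += dt / tau_r * (-r + f(I_ext + I_mod.sum(axis=1)))

for name, rate in zip(POPS, r):
    print(f"{name}: steady-state rate {rate:.2f} Hz")
```

    In a sketch of this kind, a systemic drug effect such as a reuptake inhibitor could be crudely mimicked by slowing the decay of the corresponding modulatory current (e.g. increasing tau_m for one transmitter), and an antagonist by scaling down the relevant column of w; whether this matches the paper's parameterization is not shown here.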

    Improving SNR and reducing training time of classifiers in large datasets via kernel averaging

    Kernel methods are of growing importance in neuroscience research. As an elegant extension of linear methods, they can model complex non-linear relationships. However, since the kernel matrix grows with data size, training classifiers on large datasets is computationally demanding. Here, a technique developed for linear classifiers is extended to kernel methods: in linearly separable data, replacing sets of instances by their averages improves the signal-to-noise ratio (SNR) and reduces data size. In kernel methods, data that is linearly non-separable in input space becomes linearly separable in the high-dimensional feature space in which kernel methods implicitly operate. It is shown that a classifier can be trained efficiently on instances averaged in feature space by averaging entries in the kernel matrix. Using artificial and publicly available data, kernel averaging is shown to improve classification performance substantially and to reduce training time, even for non-linearly separable data.
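    The link between instance averaging and kernel-matrix averaging follows from the fact that the inner product of two group means in feature space equals the mean of the pairwise kernel entries between the groups. Below is a minimal sketch of this idea on toy data; the group size, RBF kernel, and scikit-learn SVM are illustrative choices, not necessarily the paper's setup.

```python
# Minimal sketch of kernel averaging: train on instances averaged in
# feature space by averaging blocks of the precomputed kernel matrix.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_circles(n_samples=400, noise=0.1, factor=0.4, random_state=0)

def average_kernel(K, groups):
    """Entry (i, j) of the reduced matrix is the mean of K[a, b] over
    a in groups[i], b in groups[j], which equals the inner product of
    the two group means in feature space."""
    n = len(groups)
    K_avg = np.empty((n, n))
    for i, gi in enumerate(groups):
        for j, gj in enumerate(groups):
            K_avg[i, j] = K[np.ix_(gi, gj)].mean()
    return K_avg

# Group instances within each class (averaging across classes would
# destroy the signal); here, 5 instances per group (assumed group size).
rng = np.random.default_rng(0)
groups, labels = [], []
for c in (0, 1):
    idx = rng.permutation(np.where(y == c)[0])
    for chunk in np.split(idx, len(idx) // 5):
        groups.append(chunk)
        labels.append(c)

K = rbf_kernel(X, gamma=2.0)          # full kernel matrix on raw instances
K_avg = average_kernel(K, groups)     # reduced kernel on averaged instances

clf = SVC(kernel="precomputed").fit(K_avg, labels)
print("training accuracy on averaged instances:", clf.score(K_avg, labels))
```

    Classifying held-out data would require the kernel between test-group means and training-group means, obtained by the same block-averaging applied to the rectangular test-versus-train kernel matrix.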