    Bitter taste stimuli induce differential neural codes in mouse brain.

    A growing literature suggests that taste stimuli commonly classified as "bitter" induce heterogeneous neural and perceptual responses. Here, the central processing of bitter stimuli was studied in mice with genetically controlled bitter taste profiles. Using these mice removed genetic heterogeneity as a factor influencing gustatory neural codes for bitter stimuli. Electrophysiological activity (spikes) was recorded from single neurons in the nucleus tractus solitarius during oral delivery of taste solutions (26 total), including concentration series of the bitter tastants quinine, denatonium benzoate, cycloheximide, and sucrose octaacetate (SOA), presented to the whole mouth for 5 s. Seventy-nine neurons were sampled; in many cases multiple cells (2 to 5) were recorded from the same mouse. Results showed that bitter stimuli induced variable gustatory activity. For example, although some neurons responded robustly to quinine and cycloheximide, others displayed concentration-dependent activity (p < 0.05) to quinine but not cycloheximide. In several mice, differential activity to bitter stimuli was observed across multiple neurons recorded from a single animal. Across all cells, quinine and denatonium induced correlated spatial responses that differed (p < 0.05) from those to cycloheximide and SOA. Modeling spatiotemporal neural ensemble activity revealed that responses to quinine/denatonium and cycloheximide/SOA diverged only during an early period of the taste response, at least 1 s wide. Our findings highlight how temporal features of sensory processing contribute to differences among bitter taste codes and build on data suggesting heterogeneity among "bitter" stimuli, data that challenge a strict monoguesia model for the bitter quality.
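    To illustrate one analysis the abstract describes, the across-neuron ("spatial") response profiles of two tastants can be compared by correlating their firing-rate vectors. The sketch below uses hypothetical Poisson-sampled rates and invented variable names; the paper's actual recordings and analysis pipeline are not reproduced here.

    ```python
    import numpy as np

    # Hypothetical firing-rate matrix: rows = neurons, columns = tastants.
    # Real values would come from spike counts during the 5 s taste delivery.
    rng = np.random.default_rng(0)
    rates = rng.poisson(lam=10, size=(79, 4)).astype(float)
    tastants = ["quinine", "denatonium", "cycloheximide", "SOA"]

    def spatial_correlation(rates, i, j):
        """Pearson correlation of the across-neuron response profiles
        for tastants i and j (a common 'spatial code' comparison)."""
        return np.corrcoef(rates[:, i], rates[:, j])[0, 1]

    for a in range(len(tastants)):
        for b in range(a + 1, len(tastants)):
            r = spatial_correlation(rates, a, b)
            print(f"{tastants[a]} vs {tastants[b]}: r = {r:+.2f}")
    ```

    On real data, a high correlation for the quinine/denatonium pair and a lower one against cycloheximide/SOA would be the spatial-code signature the abstract reports.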

    Analysis of Neural Networks in Terms of Domain Functions

    Despite their success story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a mysterious "black box." Although much research has already been done to "open the box," there is a notable gap in the published work on the analysis of neural networks. So far, mainly sensitivity analysis and rule extraction methods have been used to analyze neural networks. However, these can only be applied in a limited subset of the problem domains where neural network solutions are encountered. In this paper we propose a more widely applicable method which, for a given problem domain, involves identifying basic functions with which users in that domain are already familiar, and describing trained neural networks, or parts thereof, in terms of those basic functions. This provides a comprehensible description of the neural network's function and, depending on the chosen basic functions, may also provide insight into the neural network's inner "reasoning." It could further be used to optimize neural network systems. An analysis in terms of basic functions may even make clear how to (re)construct a superior system using those basic functions, thus using the neural network as a construction advisor.
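    A minimal sketch of this style of analysis, assuming one has a trained network's output function and a hand-picked set of domain-familiar basic functions (all names below are illustrative stand-ins, not the paper's method): project the network's response onto the basic functions by least squares and inspect the coefficients and residual.

    ```python
    import numpy as np

    # Stand-in for a trained network's 1-D output function f(x);
    # in practice this would be model.predict(x) or similar.
    def net(x):
        return np.tanh(2.0 * x) + 0.3 * x**2

    # Basic functions the domain's users are assumed to know already.
    basis = {
        "x":    lambda x: x,
        "x^2":  lambda x: x**2,
        "tanh": lambda x: np.tanh(2.0 * x),
    }

    x = np.linspace(-2, 2, 200)
    A = np.column_stack([f(x) for f in basis.values()])
    coef, *_ = np.linalg.lstsq(A, net(x), rcond=None)

    for name, c in zip(basis, coef):
        print(f"{name}: {c:+.3f}")
    # A small residual means the network is well described by these
    # basic functions; a large one means the chosen basis is incomplete.
    print("residual norm:", np.linalg.norm(A @ coef - net(x)))
    ```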

    A Multi-Hidden Recurrent Neural Network with a Modified Grey Wolf Optimizer

    Identifying university students' weaknesses results in better learning and can function as an early warning system that enables students to improve. However, the satisfaction level with existing systems is not promising, and new, dynamic hybrid systems are needed to implement such an early-warning mechanism. Here, a hybrid system (a modified Recurrent Neural Network with an adapted Grey Wolf Optimizer) is used to forecast students' outcomes. The proposed system would improve instruction by faculty and enhance students' learning experiences. The results show that the modified recurrent neural network with an adapted Grey Wolf Optimizer achieves the best accuracy compared with other models.
    Comment: 34 pages, published in PLoS ONE
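    The abstract does not spell out the modification, so below is a hedged sketch of the standard Grey Wolf Optimizer (Mirjalili et al., 2014) minimizing a stand-in loss. In the paper's setting the loss would be the recurrent network's forecasting error as a function of its weight vector, and the adaptations would alter this update rule.

    ```python
    import numpy as np

    def gwo(loss, dim, n_wolves=20, iters=100, lb=-1.0, ub=1.0, seed=0):
        """Plain Grey Wolf Optimizer; the paper's modifications and
        RNN coupling are not reproduced here."""
        rng = np.random.default_rng(seed)
        X = rng.uniform(lb, ub, (n_wolves, dim))  # wolf positions
        for t in range(iters):
            fit = np.apply_along_axis(loss, 1, X)
            order = np.argsort(fit)
            alpha, beta, delta = X[order[:3]]   # three best wolves
            a = 2.0 - 2.0 * t / iters           # linearly decreasing coefficient
            for i in range(n_wolves):
                new = np.zeros(dim)
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(dim), rng.random(dim)
                    A = 2 * a * r1 - a
                    C = 2 * r2
                    D = np.abs(C * leader - X[i])
                    new += leader - A * D       # step toward each leader
                X[i] = np.clip(new / 3.0, lb, ub)
        fit = np.apply_along_axis(loss, 1, X)
        return X[np.argmin(fit)], fit.min()

    # Toy usage: a sphere function stands in for an RNN training loss.
    best, val = gwo(lambda w: np.sum(w**2), dim=5)
    print(best, val)
    ```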