
    Determining how stable network oscillations arise from neuronal and synaptic mechanisms

    Get PDF
    Many animal behaviors involve the generation of rhythmic patterns and movements. These rhythmic patterns are commonly mediated by neural networks that produce an oscillatory activity pattern in which different neurons maintain a relative phase relationship. This thesis examines the relationships between the cellular and synaptic properties that give rise to stable activity, in the form of phase maintenance, across different frequencies in a well-suited model system, the pyloric network of the crab Cancer borealis. The pyloric network has endogenously oscillating ‘pacemaker’ neurons that inhibit ‘follower’ neurons, which in turn feed back onto the pacemaker neurons. The focus of this thesis was to determine how phase maintenance is achieved in an oscillatory network, examining the idea that phase maintenance arises from the intrinsic properties of isolated neurons, from the dynamics of their synaptic connections, or from both. A combination of pharmacological and electrophysiological techniques was used to show how identified membrane properties and short-term synaptic plasticity are involved in phase maintenance over a range of biologically relevant oscillation frequencies.

    To examine whether network stability is due to the characteristic stable activity of the identified pyloric neuron types, the hypothesis that phase maintenance is an inherent property of synaptically isolated individual neurons in the pyloric network was tested first. A set of parameters (the frequency-dependent activity profile) was determined to define the response of each isolated pyloric neuron to sinusoidal input at different frequencies. The parameters that define the activity profile are burst onset phase, burst end phase, resonance frequency, and intra-burst spike frequency. Each pyloric neuron type was found to possess a unique activity profile, indicating that the individual neuron types are tuned to produce a particular activity pattern at different frequencies depending on their role in the network. To elucidate the biophysical properties underlying the frequency-dependent activity profiles of the neurons, the hyperpolarization-activated current (Ih) was measured and found to possess frequency-dependent properties, implying that Ih has a different influence on the activity phase of pyloric neurons at different frequencies. Additionally, the Ih contribution to the burst onset phase was found to depend on the neuron type: in the pacemaker group neurons (PD) it had no influence on the burst onset phase at any frequency, whereas in follower neurons it acted to advance the onset phase in one neuron type (LP) and, paradoxically, to delay it in a different neuron type (PY). The results from this part of the study provided evidence that stability is due in part to intrinsic neuronal properties, but that these intrinsic properties do not fully explain network stability.

    To address the contribution of pyloric synapses to network stability, the mechanisms by which synapses promote phase maintenance were investigated. An artificial synapse that mimicked the feedforward PD to LP synapse was used so that the synaptic parameters could be varied in a controlled manner to examine the influence of the properties of this synapse on the postsynaptic LP neuron. It was found that a static synapse with parameters (such as strength and peak phase) fixed across frequencies cannot result in a constant activity phase in the LP neuron. However, if the synaptic strength decreases and the peak phase is delayed as a function of frequency, the LP neuron can maintain a constant activity phase across a large range of frequencies. These dynamic changes in the strength and peak phase of the PD to LP synapse are consistent with the short-term plasticity properties previously reported for this synapse. In the pyloric network, the follower neuron LP provides the sole transmitter-mediated feedback to the pacemaker neurons. To understand the role of this synapse in network stability, it was blocked and replaced by an artificial synapse using the dynamic clamp technique. Different parameters of the artificial synapse, including strength, peak phase, duration, and onset phase, were found to affect the pyloric cycle period; the most effective of these were the synaptic duration and onset phase.

    Overall, this study demonstrated that both the intrinsic properties of individual neurons and the dynamic properties of the synapses are essential for producing stable activity phases in this oscillatory network. The insight obtained from this thesis provides a general understanding of the contribution of intrinsic properties to neuronal activity phase and of how short-term synaptic dynamics can act to promote phase maintenance in oscillatory networks.
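    The sketch below (Python, illustrative only and not the thesis analysis code) shows how the first two activity-profile parameters named above, burst onset phase and burst end phase, can be computed from recorded event times: each is the latency from the start of the pacemaker cycle, normalized by that cycle's period. The event times in the example are hypothetical.

```python
# Illustrative sketch: burst onset/end phases from cycle and burst event times.
import numpy as np

def burst_phases(cycle_starts, burst_onsets, burst_ends):
    """Return (onset phase, end phase) per cycle, each expressed in [0, 1)."""
    periods = np.diff(cycle_starts)
    onset_phase = (burst_onsets[:-1] - cycle_starts[:-1]) / periods
    end_phase = (burst_ends[:-1] - cycle_starts[:-1]) / periods
    return onset_phase, end_phase

# Hypothetical times (seconds) for three pyloric cycles at roughly 1 Hz.
cycle_starts = np.array([0.00, 1.00, 2.05, 3.02])
burst_onsets = np.array([0.40, 1.41, 2.46, 3.43])
burst_ends   = np.array([0.65, 1.66, 2.72, 3.68])
print(burst_phases(cycle_starts, burst_onsets, burst_ends))
```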

    Forecasting Automobile Demand Via Artificial Neural Networks & Neuro-Fuzzy Systems

    Get PDF
    The objective of this research is to obtain an accurate forecasting model for the demand for automobiles in Iran's domestic market. The model is constructed using production data for vehicles manufactured from 2006 to 2016 by Iranian car makers. The increasing demand for transportation and automobiles in Iran necessitated an accurate forecasting model for car manufacturing companies so that future demand can be met. Demand is deduced as a function of the historical data; the monthly gold, rubber, and iron ore prices, along with the monthly commodity metals price index and the stock index of Iran, are used as model inputs. Artificial neural networks (ANN) and adaptive neuro-fuzzy inference systems (ANFIS) have been utilized in many fields, such as energy consumption and load forecasting. The performances of the methodologies are investigated to obtain the most accurate forecasting model in terms of the forecast Mean Absolute Percentage Error (MAPE). It was concluded that the feedforward multi-layer perceptron network trained with back-propagation and the Levenberg-Marquardt learning algorithm provides forecasts with the lowest MAPE (5.85%) among the tested models. Further development of the ANN based on more data is recommended to enhance the model and obtain more accurate networks and, subsequently, improved forecasts.
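    A minimal sketch of the forecasting-and-evaluation loop described above is given below. It is illustrative only: the file and column names are hypothetical, and scikit-learn's MLPRegressor with the L-BFGS solver is used as a stand-in because a Levenberg-Marquardt trainer is not available in that library.

```python
# Illustrative sketch: feedforward MLP demand forecast scored with MAPE.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("monthly_demand.csv")  # hypothetical monthly dataset
X = df[["gold", "rubber", "iron_ore", "metals_index", "stock_index"]].values
y = df["vehicle_demand"].values

# Keep chronological order when splitting a time series.
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)

model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
mape = np.mean(np.abs((y_test - pred) / y_test)) * 100  # Mean Absolute Percentage Error
print(f"MAPE: {mape:.2f}%")
```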

    Short-term crash risk prediction considering proactive, reactive, and driver behavior factors

    Get PDF
    Providing a safe and efficient transportation system is the primary goal of transportation engineering and planning. Highway crashes are among the most significant challenges to achieving this goal: they exact a significant societal toll reflected in numerous fatalities, personal injuries, property damage, and traffic congestion. To that end, much attention has been given to predictive models of crash occurrence and severity. Most of these models are reactive: they use data about crashes that have occurred in the past to identify significant crash factors, crash hot-spots, and crash-prone roadway locations, and to analyze and select the most effective countermeasures for reducing the number and severity of crashes. More recently, advancements have been made in developing proactive crash risk models that assess short-term crash risks in near-real time. Such models could be applied as part of traffic management strategies to prevent and mitigate crashes. Driver behavior is found to be the leading cause of highway crashes; nevertheless, due to data unavailability, few studies have explored and quantified the role of driver behavior in crashes. The Strategic Highway Research Program Naturalistic Driving Study (SHRP 2 NDS) offers an unprecedented opportunity to perform an in-depth analysis of the impacts of driver behavior on crash events. The research presented in this dissertation is divided into three parts, corresponding to the research objectives.

    The first part investigates the application of advanced data modeling methods for proactive crash risk analysis. Several proactive models for segment-level crash risk and severity assessment are developed and tested, considering the proactive data available to most transportation agencies in real time at a regional network scale. The data include roadway geometry characteristics, traffic flow characteristics, and weather condition data. The analysis methods include Random-effect Bayesian Logistic Regression, Random Forest, Gradient Boosting Machine, K-Nearest Neighbor, Gaussian Naive Bayes (GNB), and Multi-layer Feedforward Deep Neural Network (MLFDNN). The random oversampling technique is applied to deal with the problem of data imbalance associated with the injury severity analysis. The model training and testing are completed using a dataset containing records of 10,155 crashes that occurred on two interstate highways in New Jersey over a period of two years.

    The second part of the study analyzes the potential improvement in the prediction abilities of the proposed models by adding reactive data (such as vehicle characteristics and driver characteristics) to the analysis. Commonly, reactive data are only available (known) after a crash occurs. In the proposed research, the crash analysis is performed by classifying crashes into multiple groupings (instead of a single group), constructed based on the age of drivers and vehicles, to account for the impact of reactive data on driver injury severity outcomes. The results of the second part of the study show that while the simultaneous use of reactive and proactive data can improve the prediction performance of the models, the absolute crash probability values must be further improved for operational crash risk prediction. To this end, in the third part of the study, the Naturalistic Driving Study data are used to calibrate the crash risk models, including the driver behavior risk factors. The findings show a significant improvement in crash prediction accuracy with the inclusion of driver behavior risk factors, which confirms driver behavior to be the most critical risk factor affecting crash likelihood and the associated injury severity.
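    One of the ingredients listed above, random oversampling combined with a tree-ensemble classifier for imbalanced injury-severity classes, can be sketched as follows. The sketch is illustrative only; the file and feature names are hypothetical and do not come from the dissertation.

```python
# Illustrative sketch: oversample minority severity classes, then classify.
import pandas as pd
from imblearn.over_sampling import RandomOverSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("crash_records.csv")  # hypothetical crash dataset
X = df[["aadt", "lane_width", "speed", "occupancy", "precipitation"]]
y = df["injury_severity"]              # imbalanced class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Balance the training set only, so the test set keeps its natural distribution.
X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te)))
```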

    Emergent Bio-Functional Similarities in a Cortical-Spike-Train-Decoding Spiking Neural Network Facilitate Predictions of Neural Computation

    Full text link
    Despite its better bio-plausibility, the goal-driven spiking neural network (SNN) has not achieved applicable performance for classifying biological spike trains and has shown little bio-functional similarity compared to traditional artificial neural networks. In this study, we proposed the motorSRNN, a recurrent SNN topologically inspired by the neural motor circuit of primates. By employing the motorSRNN in decoding spike trains from the primary motor cortex of monkeys, we achieved a good balance between classification accuracy and energy consumption. The motorSRNN communicated with the input by capturing and cultivating more cosine tuning, an essential property of neurons in the motor cortex, and maintained its stability during training. Such training-induced cultivation and persistence of cosine tuning were also observed in our monkeys. Moreover, the motorSRNN produced additional bio-functional similarities at the single-neuron, population, and circuit levels, demonstrating biological authenticity. Furthermore, ablation studies on the motorSRNN suggested that long-term stable feedback synapses contribute to the training-induced cultivation observed in the motor cortex. Beyond these novel findings and predictions, we offer a new framework for building authentic models of neural computation.
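    The cosine tuning referred to above is commonly quantified by fitting a neuron's firing rate against movement direction as r(θ) = b0 + b1·cos(θ − θ_pref), which is linear in [1, cos θ, sin θ] and can be solved by least squares. The sketch below is a generic illustration of that fit on synthetic data, not code from the paper.

```python
# Illustrative sketch: least-squares fit of a cosine tuning curve.
import numpy as np

def fit_cosine_tuning(theta, rates):
    """Return (baseline, modulation depth, preferred direction in radians)."""
    A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    b0, bc, bs = np.linalg.lstsq(A, rates, rcond=None)[0]
    return b0, np.hypot(bc, bs), np.arctan2(bs, bc)

# Synthetic neuron tuned to 60 degrees, sampled at 8 reach directions with noise.
theta = np.deg2rad(np.arange(0, 360, 45))
rates = 20 + 15 * np.cos(theta - np.deg2rad(60))
rates += np.random.default_rng(0).normal(0, 1, theta.size)
print(fit_cosine_tuning(theta, rates))
```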

    Study of power plant, carbon capture and transport network through dynamic modelling and simulation

    Get PDF
    The unfavourable role of CO₂ in stimulating climate change has generated concerns as CO₂ levels in the atmosphere continue to increase. As a result, it has been recommended that coal-fired power plants, which are major CO₂ emitters, should be operated with a carbon capture and storage (CCS) system to reduce CO₂ emission levels from the plant. Studies of the CCS chain have been limited, apart from a few high-profile projects, and the majority of previous studies focused on individual components of the CCS chain, which is insufficient to understand how the components interact dynamically during operation. In this thesis, a model-based study of the CCS chain, including coal-fired subcritical power plant, post-combustion CO₂ capture (PCC), and pipeline transport components, is presented. The component models of the CCS chain are dynamic and were derived from first principles. A separate model involving only the drum-boiler of a typical coal-fired subcritical power plant was also developed using neural networks.

    The power plant model was validated at steady-state conditions for different load levels (70-100%). Analysis with the power plant model shows that load changes by ramping cause less disturbance than step changes. The rate-based PCC model obtained from Lawal et al. (2010) was used in this thesis and was subsequently simplified to reduce the CPU time requirement. The CPU time was reduced by about 60% after simplification, and the predictions had less than 5% relative difference compared to the detailed model. The results show that the numerous non-linear algebraic equations and external property calls are the reason for the high CPU time requirement of the detailed PCC model. The pipeline model is distributed and includes the elevation profile and heat transfer with the environment; it was used to assess the planned Yorkshire and Humber CO₂ pipeline network. Analysis with the CCS chain model indicates that the actual change in CO₂ flowrate entering the pipeline transport system in response to small load changes (about 10%) is very small (<5%). It is therefore concluded that small changes in load will have minimal impact on the transport component of the CCS chain when the capture plant is PCC.
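    To illustrate the kind of first-principles dynamic component such a chain model is built from, the sketch below integrates a lumped CO₂ mass balance for a single pipeline segment responding to a ramped change in capture-plant outflow. All parameter values and the simple proportional outflow law are assumptions for illustration, not the thesis models.

```python
# Illustrative sketch: lumped dynamic mass balance for one pipeline segment.
import numpy as np
from scipy.integrate import solve_ivp

def inlet_flow(t):
    """Capture-plant CO2 flow [kg/s]: ~10% load ramp between t=600 s and t=1200 s."""
    return 100.0 + 10.0 * np.clip((t - 600.0) / 600.0, 0.0, 1.0)

def pipeline_mass_balance(t, m, outflow_coeff=0.002):
    """dm/dt = inflow - outflow; outflow taken as proportional to stored mass."""
    return inlet_flow(t) - outflow_coeff * m[0]

sol = solve_ivp(pipeline_mass_balance, (0.0, 3600.0), y0=[50000.0], max_step=10.0)
print(f"Final outlet flow: {0.002 * sol.y[0, -1]:.1f} kg/s")
```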

    Remembering Forward: Neural Correlates of Memory and Prediction in Human Motor Adaptation

    Get PDF
    We used functional MR imaging (fMRI), a robotic manipulandum, and systems identification techniques to examine neural correlates of predictive compensation for spring-like loads during goal-directed wrist movements in neurologically intact humans. Although the load changed unpredictably from one trial to the next, subjects nevertheless used sensorimotor memories from recent movements to predict and compensate for upcoming loads. Prediction enabled subjects to adapt performance so that the task was accomplished with minimum effort. Population analyses of functional images revealed a distributed, bilateral network of cortical and subcortical activity supporting predictive load compensation during visual target capture. Cortical regions, including prefrontal, parietal, and hippocampal cortices, exhibited trial-by-trial fluctuations in BOLD signal consistent with the storage and recall of sensorimotor memories or “states” important for spatial working memory. Bilateral activations in associative regions of the striatum demonstrated temporal correlation with the magnitude of kinematic performance error (a signal that could drive reward-optimizing reinforcement learning and the prospective scaling of previously learned motor programs). BOLD signal correlations with load prediction were observed in the cerebellar cortex and red nuclei (consistent with the idea that these structures generate adaptive fusimotor signals facilitating cancellation of expected proprioceptive feedback, as required for conditional feedback adjustments to ongoing motor commands and feedback error learning). Analysis of single-subject images revealed that predictive activity was at least as likely to be observed in more than one of these neural systems as in just one. We conclude, therefore, that motor adaptation is mediated by predictive compensations supported by multiple, distributed cortical and subcortical structures.
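    The trial-by-trial use of sensorimotor memories described above is often formalized, in systems-identification terms, as a simple linear state-space model in which the predicted load is updated by the previous trial's error. The sketch below simulates such a model; the parameter values and update rule are illustrative assumptions, not the authors' analysis.

```python
# Illustrative sketch: single-state trial-by-trial adaptation model.
import numpy as np

def simulate_adaptation(loads, a=0.8, b=0.3):
    """p[n+1] = a*p[n] + b*(load[n] - p[n]); returns the prediction on each trial."""
    predictions = np.zeros(len(loads) + 1)
    for n, load in enumerate(loads):
        error = load - predictions[n]            # proxy for kinematic performance error
        predictions[n + 1] = a * predictions[n] + b * error
    return predictions[:-1]

rng = np.random.default_rng(1)
loads = rng.choice([0.0, 2.0, 4.0], size=20)     # loads change unpredictably per trial
print(np.round(simulate_adaptation(loads), 2))
```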

    An investigation of the cortical learning algorithm

    Get PDF
    Pattern recognition and machine learning have revolutionized countless industries and applications, from biometric security to modern industrial assembly lines. These fields continue to accelerate as faster, more efficient processing hardware becomes commercially available. Despite this accelerated growth, computers are still unable to learn, reason, and perform rudimentary tasks that humans and animals find routine. Animals are able to move fluidly, understand their environment, and maximize their chances of survival through adaptation; animals demonstrate intelligence. A primary argument in this thesis is that we have not yet achieved a level of intelligence similar to that of humans and animals in the pattern recognition and machine learning fields, not due to a lack of computational power but, rather, due to a lack of understanding of how the cortical structures of the mammalian brain interact and operate. This thesis describes a cortical learning algorithm (CLA) that models how the cortical structures in the mammalian neocortex operate. Furthermore, a high-level understanding of how the cortical structures in the mammalian brain interact, store semantic patterns, and auto-recall these patterns for future predictions is discussed. Finally, we demonstrate that the algorithm can build and maintain a model of its environment and provide feedback for actions and/or classification in a fashion similar to our understanding of cortical operation.
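    The pattern storage and auto-recall described above can be illustrated, in the spirit of CLA/HTM-style models, with sparse distributed representations whose recall is driven by the overlap of active bits. The sketch below is a generic illustration of that idea, not the thesis implementation.

```python
# Illustrative sketch: sparse distributed representations and overlap-based recall.
import numpy as np

def encode(active_bits, size=2048):
    """Sparse binary SDR with the given active bit indices."""
    sdr = np.zeros(size, dtype=np.uint8)
    sdr[list(active_bits)] = 1
    return sdr

def best_match(stored, query):
    """Return the stored pattern name with the largest overlap (shared active bits)."""
    return max(stored, key=lambda name: int(np.sum(stored[name] & query)))

rng = np.random.default_rng(0)
stored = {f"pattern_{i}": encode(rng.choice(2048, 40, replace=False)) for i in range(5)}

noisy = stored["pattern_3"].copy()
noisy[rng.choice(2048, 10, replace=False)] = 1   # corrupt with extra active bits
print(best_match(stored, noisy))                 # expected: pattern_3
```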