
    A statistical model for in vivo neuronal dynamics

    Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model, as well as simplified neuron models such as the class of integrate-and-fire models, relate the input current to the membrane potential of the neuron. These types of models have been extensively fitted to in vitro data, where the input current is controlled. They are, however, of little use when it comes to characterizing intracellular in vivo recordings, since the input to the neuron is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process where the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as the spiking history. We first show that the model has a rich dynamical repertoire, since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptation, as well as arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. Finally, we show that this model can be used to characterize and therefore precisely compare various intracellular in vivo recordings from different animals and experimental conditions. Comment: 31 pages, 10 figures
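
To make the model class concrete, a minimal simulation sketch follows. It assumes a squared-exponential covariance for the subthreshold Gaussian process, an exponential link for the spike intensity and an exponential recovery of the spike-history term; the abstract does not specify the covariance, nonlinearity or history dependence, so every parameter value and functional form below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Time grid (seconds).
dt = 1e-3
t = np.arange(0.0, 1.0, dt)

# Subthreshold membrane potential drawn from a Gaussian process.
# Assumption: a squared-exponential covariance; the model admits arbitrary
# autocovariance functions, so this particular kernel is purely illustrative.
sigma_v, ell, v_rest = 4.0, 0.05, -65.0                   # mV, s, mV
K = sigma_v**2 * np.exp(-0.5 * (t[:, None] - t[None, :])**2 / ell**2)
V = v_rest + rng.multivariate_normal(np.zeros(len(t)), K + 1e-6 * np.eye(len(t)))

# Spike emission intensity: nonlinear in V and suppressed by recent spiking.
# Assumption: exponential link and exponential recovery of the history term.
beta, v_th, tau_ref, lam0 = 0.5, -58.0, 0.02, 30.0        # 1/mV, mV, s, Hz
spikes = np.zeros_like(t, dtype=bool)
last_spike = -np.inf
for i, ti in enumerate(t):
    history = 1.0 - np.exp(-(ti - last_spike) / tau_ref)  # ~0 right after a spike
    lam = lam0 * np.exp(beta * (V[i] - v_th)) * history   # intensity in Hz
    if rng.random() < 1.0 - np.exp(-lam * dt):            # Bernoulli step of a Poisson process
        spikes[i] = True
        last_spike = ti

print(f"{spikes.sum()} spikes in {t[-1] + dt:.1f} s")
```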

    Voltage sag estimation in sparsely monitored power systems based on deep learning and system area mapping

    This paper proposes a voltage sag estimation approach based on a deep convolutional neural network. The proposed approach estimates the sag magnitude at unmonitored buses regardless of the system operating conditions and the fault location and characteristics. The concept of system area mapping is also introduced via the use of a bus matrix, which maps different patches in the input matrix to various areas in the power system network. In this way, relevant features are extracted at various local areas in the power system and used in the analysis for higher-level feature extraction, before being fed into a fully connected multiple-layer neural network for sag classification. The approach has been tested on the IEEE 68-bus test network, and it has been demonstrated that the various sag categories can be identified accurately regardless of the operating condition under which the sags occur.
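
As a rough illustration of the system-area-mapping idea, the sketch below builds a small bus matrix and passes it through a convolutional feature extractor followed by a fully connected classifier. The 8x8 layout, layer sizes and four sag categories are assumptions made for the example, not the architecture from the paper, and PyTorch is used purely for convenience.

```python
import torch
import torch.nn as nn

# Assumption: the "bus matrix" arranges per-bus voltage magnitudes into a 2-D grid
# so that spatially adjacent patches correspond to electrical areas of the network
# (here an 8x8 layout standing in for part of the IEEE 68-bus system).
n_rows, n_cols, n_sag_classes = 8, 8, 4

class SagClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers extract features from local "areas" of the grid ...
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # ... before a fully connected network performs the sag classification.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_rows // 2) * (n_cols // 2), 64), nn.ReLU(),
            nn.Linear(64, n_sag_classes),
        )

    def forward(self, bus_matrix):          # bus_matrix: (batch, 1, n_rows, n_cols)
        return self.classifier(self.features(bus_matrix))

# Retained-voltage magnitudes from monitored buses are written into the grid;
# unmonitored positions stay at zero and are what the estimator must fill in.
x = torch.zeros(1, 1, n_rows, n_cols)
x[0, 0, 2, 3] = 0.45                         # e.g. 0.45 p.u. at a monitored bus
logits = SagClassifier()(x)
print(logits.shape)                          # torch.Size([1, 4])
```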

    Radiation Sensing: Design and Deployment of Sensors and Detectors

    Radiation detection is important in many fields, and it poses significant challenges for instrument designers. Radiation detection instruments, particularly for nuclear decommissioning and security applications, are required to operate in unknown environments and should detect and characterise radiation fields in real time. This book covers both theory and practice, presenting recent advances in radiation detection, with a particular focus on radiation detection instrument design, real-time data processing, radiation simulation and experimental work, robot design, control systems, task planning and radiation shielding.

    An improved algorithm for phase-based voltage dip classification

    In this thesis, a new phase-based algorithm is developed which overcomes the shortcomings of the Bollen algorithms. The new algorithm computes the dip type based on the difference in phase angle between the measured voltages.
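
A minimal sketch of the phase-based idea follows: fundamental-frequency phasors are estimated for the three measured voltages, and their phase-angle differences are computed. The phasor estimator, test waveforms and printed quantities are illustrative only; the thesis's actual classification rules are not reproduced here.

```python
import numpy as np

fs, f0 = 10_000, 50                       # sampling rate (Hz), system frequency (Hz)
t = np.arange(0, 0.04, 1 / fs)            # two cycles of data

def phasor(v, t, f0):
    """Fundamental-frequency phasor (complex amplitude) by correlation over whole cycles."""
    return 2 * np.mean(v * np.exp(-2j * np.pi * f0 * t))

# Hypothetical three-phase dip: phase A retained at 0.5 p.u., phases B and C unchanged.
va = 0.5 * np.cos(2 * np.pi * f0 * t)
vb = 1.0 * np.cos(2 * np.pi * f0 * t - 2 * np.pi / 3)
vc = 1.0 * np.cos(2 * np.pi * f0 * t + 2 * np.pi / 3)

phasors = [phasor(v, t, f0) for v in (va, vb, vc)]
angles = np.angle(phasors, deg=True)
diffs = np.diff(np.concatenate([angles, angles[:1]]))   # B-A, C-B, A-C angle differences
diffs = (diffs + 180) % 360 - 180                       # wrap to (-180, 180]
print("magnitudes (p.u.):", np.round(np.abs(phasors), 2))
print("phase-angle differences (deg):", np.round(diffs, 1))
```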

    Review on solving the inverse problem in EEG source analysis

    In this primer, we give a review of the inverse problem for EEG source localization. It is intended for researchers new to the field to gain insight into the state-of-the-art techniques used to find approximate solutions for the brain sources giving rise to a scalp potential recording. Furthermore, a review of the performance results of the different techniques is provided to compare these inverse solutions. The authors also include the results of a Monte Carlo analysis which they performed to compare four non-parametric algorithms and hence contribute to what is presently recorded in the literature. An extensive list of references to the work of other researchers is also provided.
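
As one concrete example of the non-parametric inverse solutions reviewed, the sketch below computes a regularised minimum-norm estimate on toy data. The random lead field stands in for one derived from a head model, and the dimensions and regularisation parameter are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions: 32 electrodes, 500 candidate dipoles with fixed orientations.
n_sensors, n_sources = 32, 500

# Assumption: a random matrix stands in for the lead field that would normally
# come from a head model (BEM/FEM); only the linear inverse step is illustrated.
L = rng.standard_normal((n_sensors, n_sources))

# Simulate one focal source plus sensor noise.
j_true = np.zeros(n_sources)
j_true[123] = 1.0
y = L @ j_true + 0.05 * rng.standard_normal(n_sensors)

# Regularised minimum-norm estimate: J_hat = L^T (L L^T + lambda*I)^{-1} y
lam = 1.0
J_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
print("estimated peak source index:", int(np.argmax(np.abs(J_hat))))
```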

    Power Quality

    Electrical power is becoming one of the most dominant factors in our society. Power generation, transmission, distribution and usage are undergoing significant changes that will affect the electrical quality and performance needs of our 21st century industry. One major aspect of electrical power is its quality and stability, or so-called Power Quality.

    The view on Power Quality has changed over the past few years. Power Quality is becoming a more important term in the academic world dealing with electrical power, and it is becoming more visible in all areas of commerce and industry, because of the ever-increasing automation of industry using sensitive electrical equipment on one hand and the dramatic change of our global electrical infrastructure on the other.

    For the past century, grid stability was maintained by a limited number of major generators with a large amount of rotational inertia, so the rate of change of phase angle is slow. Unfortunately, this no longer works with renewable energy sources such as wind turbines or PV modules adding their share to the grid. Although the basic idea of using renewable energies is great and will be our path into the next century, it comes with a curse for the power grid, as power-flow stability will suffer. It is not only the source side that is about to change; we have seen significant changes on the load side as well. Industry uses machines and electrical products such as AC drives or PLCs that are sensitive to the slightest change in power quality, and at home we use more and more electrical products with switching power supplies and are starting to plug in our electric cars to charge batteries. In addition, many of us have begun installing our own distributed generation systems on our rooftops using the latest solar panels.

    So we have looked for a way to address this severe impact on our distribution network. To match supply and demand, we are about to create a new, intelligent and self-healing electric power infrastructure: the Smart Grid. The basic idea is to maintain the necessary balance between generators and loads on a grid; in other words, to make sure we have a good grid balance at all times. But the key question that you should ask yourself is: does it also improve Power Quality? Probably not!

    Furthermore, the way Power Quality is measured is going to change. Traditionally, each country had its own Power Quality standards and defined its own power quality instrument requirements, but more and more international harmonization efforts can be seen, such as IEC 61000-4-30, an excellent standard that ensures that all compliant power quality instruments, regardless of manufacturer, produce comparable measurements, and that paves the way for measurement instruments that can also be used in volume applications and even directly embedded into sensitive loads. But work still has to be done. We still use Power Quality standards that were written decades ago and no longer match today's technology, such as flicker standards that use parameters defined by the behavior of 60-watt incandescent light bulbs, which are becoming extinct.

    Almost all experts are in agreement: although we will see an improvement in metering and control of the power flow, Power Quality will suffer. This book gives an overview of how power quality might impact our lives today and tomorrow, introduces new ways to monitor power quality, and informs us about interesting possibilities to mitigate power quality problems.

    Regardless of any enhancements of the power grid, “Power Quality is just compatibility”, as my good old friend and teacher Alex McEachern used to say. Power Quality will always remain an economic compromise between supply and load. The power available on the grid must be sufficiently clean for the loads to operate correctly, and the loads must be sufficiently strong to tolerate normal disturbances on the grid.

    Microarray Data Mining and Gene Regulatory Network Analysis

    The novel molecular biological technology, microarray, makes it feasible to obtain quantitative measurements of the expression of thousands of genes present in a biological sample simultaneously. Genome-wide expression data generated by this technology promise to uncover implicit, previously unknown biological knowledge. In this study, several problems in microarray data mining were investigated, including feature (gene) selection, identification of classifier genes, generation of a reference genetic interaction network for non-model organisms, and gene regulatory network (GRN) reconstruction using time-series gene expression data. The limitations of most existing computational models employed to infer GRNs lie in the fact that they suffer from either low accuracy or high computational complexity. To overcome such limitations, the following strategies were proposed to integrate bioinformatics data mining techniques with existing GRN inference algorithms, enabling the discovery of novel biological knowledge. An integrated statistical and machine learning (ISML) pipeline was developed for feature selection and classifier gene identification to address the curse of dimensionality as well as the huge search space. Using the selected classifier genes as seeds, a scale-up technique was applied to search through major databases of genetic interaction networks, metabolic pathways, etc. By curating relevant genes and blasting genomic sequences of non-model organisms against well-studied genetic model organisms, a reference gene regulatory network for less-studied organisms was built and used both as prior knowledge and for model validation in GRN reconstruction. Networks of gene interactions were inferred using a Dynamic Bayesian Network (DBN) approach and were analyzed to elucidate the dynamics caused by perturbations. The proposed pipelines were applied to investigate molecular mechanisms of chemical-induced reversible neurotoxicity.
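
To illustrate the kind of dynamic Bayesian network reconstruction described above, the sketch below scores candidate parent sets for each gene with a first-order linear-Gaussian model and BIC. The data, scoring function and parent-set limit are toy assumptions; the study's actual DBN learning and ISML feature-selection pipeline are considerably richer.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Toy time-series expression data: T time points x G genes, with one planted
# regulatory edge (gene 0 at time t influences gene 2 at time t+1).
T, G = 50, 5
X = rng.standard_normal((T, G))
X[1:, 2] += 0.8 * X[:-1, 0]

def bic_score(child, parents, X):
    """BIC of regressing gene `child` at t+1 on `parents` at t (linear-Gaussian DBN)."""
    y = X[1:, child]
    A = np.column_stack([X[:-1, parents], np.ones(len(y))]) if parents else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    n, k = len(y), A.shape[1]
    return n * np.log(rss / n) + k * np.log(n)

# For each gene, keep the parent set (up to two parents) with the lowest BIC.
for child in range(G):
    best = min(
        (ps for r in range(3) for ps in combinations(range(G), r)),
        key=lambda ps: bic_score(child, list(ps), X),
    )
    if best:
        print(f"gene {child} <- parents {best}")
```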

    Implementation of Gaussian process models for non-linear system identification

    This thesis is concerned with investigating the use of Gaussian Process (GP) models for the identification of nonlinear dynamic systems. The Gaussian Process model is a non-parametric approach to system identification where the model of the underlying system is to be identified through the application of Bayesian analysis to empirical data. The GP modelling approach has been proposed as an alternative to more conventional methods of system identification due to a number of attractive features. In particular, the Bayesian probabilistic framework employed by the GP model has been shown to have potential in tackling the problems found in the optimisation of complex nonlinear models such as those based on multiple model or neural network structures. Furthermore, due to this probabilistic framework, the predictions made by the GP model are probability distributions composed of mean and variance components. This is in contrast to more conventional methods where a predictive point estimate is typically the output of the model. This additional variance component of the model output has been shown to be of potential use in model-predictive or adaptive control implementations. A further property that is of potential interest to those working on system identification problems is that the GP model has been shown to be particularly effective in identifying models from sparse datasets. Therefore, the GP model has been proposed for the identification of models in off-equilibrium regions of operating space, where more established methods might struggle due to a lack of data. The majority of the existing research into modelling with GPs has concentrated on detailing the mathematical methodology and theoretical possibilities of the approach. Furthermore, much of this research has focused on the application of the method to statistics and machine learning problems. This thesis investigates the use of the GP model for identifying nonlinear dynamic systems from an engineering perspective. In particular, it is the implementation aspects of the GP model that are the main focus of this work. Due to its non-parametric nature, the GP model may also be considered a ‘black-box’ method as the identification process relies almost exclusively on empirical data, and not on prior knowledge of the system. As a result, the methods used to collect and process this data are of great importance, and the experimental design and data pre-processing aspects of the system identification procedure are investigated in detail. Nevertheless, in the research presented here, the inclusion of prior system knowledge into the overall modelling procedure is shown to be an invaluable asset in improving the overall performance of the GP model. In previous research, the computational implementation of the GP modelling approach has been shown to become problematic for applications where the size of the training dataset is large (i.e. one thousand or more points). This is due to the requirement in the GP modelling approach for repeated inversion of a covariance matrix whose size is dictated by the number of points included in the training dataset. Therefore, in order to maintain the computational viability of the approach, a number of different strategies have been proposed to lessen the computational burden. Many of these methods seek to make the covariance matrix sparse through the selection of a subset of existing training data.
However, instead of operating on an existing training dataset, in this thesis an alternative approach is proposed where the training dataset is specifically designed to be as small as possible whilst still containing as much information as possible. In order to achieve this goal of improving the ‘efficiency’ of the training dataset, the basis of the experimental design involves adopting a more deterministic approach to exciting the system, rather than the more common random excitation approach used for the identification of black-box models. This strategy is made possible through the active use of prior knowledge of the system. The implementation of the GP modelling approach has been demonstrated on a range of simulated and real-world examples. The simulated examples investigated include both static and dynamic systems. The GP model is then applied to two laboratory-scale nonlinear systems: a Coupled Tanks system where the volume of liquid in the second tank must be predicted, and a Heat Transfer system where the temperature of the airflow along a tube must be predicted. Further extensions to the GP model are also investigated, including the propagation of uncertainty from one prediction to the next, the application of sparse matrix methods, and the use of derivative observations. A feature of the application of the GP modelling approach to nonlinear system identification problems is the reliance on the squared exponential covariance function. In this thesis the benefits and limitations of this particular covariance function are made clear, and the use of alternative covariance functions and ‘mixed-model’ implementations is also discussed.
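
A minimal sketch of GP-based nonlinear system identification in NARX form is given below, using the squared exponential covariance discussed in the abstract. Hyperparameters are fixed by hand rather than optimised, and the simulated system, excitation signal and regressor choice are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# A simple nonlinear dynamic system to be identified in NARX form:
# y[k] = f(y[k-1], u[k-1]) + noise.
def simulate(u):
    y = np.zeros(len(u) + 1)
    for k in range(len(u)):
        y[k + 1] = 0.8 * y[k] - 0.2 * y[k] ** 2 + 0.5 * u[k] + 0.02 * rng.standard_normal()
    return y

u_train = rng.uniform(-1, 1, 200)             # random excitation (the thesis argues for
y_train = simulate(u_train)                   # designed, more deterministic excitation)
X = np.column_stack([y_train[:-1], u_train])  # regressors [y[k-1], u[k-1]]
t = y_train[1:]                               # targets y[k]

def sq_exp(A, B, ell=1.0, sf=1.0):
    """Squared exponential covariance between two sets of input points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

sn = 0.05                                     # assumed noise standard deviation
K = sq_exp(X, X) + sn**2 * np.eye(len(X))
alpha = np.linalg.solve(K, t)

# Predictive mean and variance at a new operating point.
x_star = np.array([[0.3, -0.5]])              # [y[k-1], u[k-1]]
k_star = sq_exp(x_star, X)
mean = k_star @ alpha
var = sq_exp(x_star, x_star) - k_star @ np.linalg.solve(K, k_star.T)
print(f"one-step-ahead prediction: {mean[0]:+.3f} +/- {np.sqrt(var[0, 0]):.3f}")
```

The predictive variance in the last two lines is the mean-and-variance output referred to in the abstract, which is what makes the GP attractive for model-predictive or adaptive control.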

    Power Distribution System Event Classification Using Fuzzy Logic

    This dissertation describes an online, non-intrusive classification system for identifying and reporting normal and abnormal power system events occurring on a distribution feeder based on their underlying cause, using signals acquired at the distribution substation. The event classification system extracts features from the acquired signals using signal processing and shape analysis techniques. It then analyzes the features and classifies events based on their cause using a fuzzy logic expert system classifier. The classification system also extracts and reports parameters to assist utilities in locating faulty components. A detailed illustration of the classifier design process is presented. The power distribution system event classification problem is shown to be a large-scale classification problem. The reasoning behind the choice of a fuzzy logic based hierarchical expert system classifier to solve this problem is explained in detail. The fuzzy logic based expert system classifier uses generic features, shape-based features and event-specific features extracted from the acquired signals. The design of feature extractors for each of these feature categories is explained. A new, fuzzy logic based, modified Dynamic Time Warping (DTW) algorithm was developed for extracting shape-based features. The design of event-specific feature extractors for capacitor problems, arcing and overcurrent events is discussed in detail. The fuzzy logic based hierarchical expert system classifier required a new fuzzy inference engine that could efficiently handle a large number of rules and rule chaining. A new fuzzy inference engine was designed for this purpose, and the design process is explained in detail. To avoid information overload, an intelligent reporting framework was developed that processes the raw classification information generated by the fuzzy classifier and reports events of interest in a timely and user-friendly manner. Finally, performance studies were carried out to validate the performance of the designed fuzzy logic based expert system classifier and the intelligent reporting system. The data needed to design and validate the classification system were obtained through the Distribution Fault Anticipation (DFA) data collection platform developed by the Power System Automation Laboratory (PSAL) at Texas A&M University, sponsored by the Electric Power Research Institute (EPRI) and multiple partner utilities.
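
To give a flavour of the fuzzy-logic classification step, the sketch below evaluates a few Mamdani-style rules over two generic features (rms drop magnitude and event duration). The membership functions, rules and class labels are invented for the example; the dissertation's hierarchical rule base, DTW-based shape features and inference engine are not reproduced.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

def classify(rms_drop_pct, duration_cycles):
    # Fuzzify two generic features of the acquired signal.
    drop_small = tri(rms_drop_pct, 0, 5, 15)
    drop_large = tri(rms_drop_pct, 10, 50, 100)
    dur_short = tri(duration_cycles, 0, 1, 3)
    dur_long = tri(duration_cycles, 2, 10, 60)

    # Rules: AND as min, aggregation over the rules of each class as max
    # (only one rule per class here, so the aggregation is trivial).
    scores = {
        "capacitor switching": min(drop_small, dur_short),
        "overcurrent fault": min(drop_large, dur_long),
        "arcing": min(drop_small, dur_long),
    }
    return max(scores, key=scores.get), scores

label, scores = classify(rms_drop_pct=35.0, duration_cycles=8.0)
print(label, {k: round(v, 2) for k, v in scores.items()})
```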