
    Understanding the Role of Dynamics in Brain Networks: Methods, Theory and Application

    The brain is inherently a dynamical system whose networks interact at multiple spatial and temporal scales. Understanding the functional role of these dynamic interactions is a fundamental question in neuroscience. In this research, we approach this question through the development of new methods for characterizing brain dynamics from real data and new theories for linking dynamics to function. We perform our study at two scales: macro (at the level of brain regions) and micro (at the level of individual neurons). In the first part of this dissertation, we develop methods to identify the underlying macro-scale dynamics that govern brain networks during states of health and disease in humans. First, we establish an optimization framework to actively probe connections in brain networks when the underlying network dynamics are changing over time. Then, we extend this framework to develop a data-driven approach for analyzing neurophysiological recordings without active stimulation, to describe the spatiotemporal structure of neural activity at different timescales. The overall goal is to detect how the dynamics of brain networks may change within and between particular cognitive states. We demonstrate the efficacy of this approach in characterizing spatiotemporal motifs of correlated neural activity during the transition from wakefulness to general anesthesia in functional magnetic resonance imaging (fMRI) data. Moreover, we show how such an approach can be utilized to construct an automatic classifier for detecting different levels of coma in electroencephalogram (EEG) data. In the second part, we study how ongoing function can constrain micro-scale dynamics in recurrent neural networks, with particular application to sensory systems. Specifically, we develop theoretical conditions in a linear recurrent network, in the presence of both disturbance and noise, for exact and stable recovery of dynamic sparse stimuli applied to the network. We show how network dynamics can affect decoding performance in such systems. Moreover, we formulate the problem of efficiently encoding an afferent input and its history in a nonlinear recurrent network. We show that a linear neural network architecture with a thresholding activation function emerges if we assume that neurons optimize their activity based on a particular cost function. Such an architecture can enable the production of lightweight, history-sensitive encoding schemes.
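    As a rough illustration of the sparse-recovery setting described in the second part, the sketch below recovers a sparse stimulus driving a stable linear recurrent network by l1-regularized regression on the dynamics residuals. The dimensions, noise levels, and the use of scikit-learn's Lasso are illustrative assumptions, not the dissertation's actual formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 40                                                   # number of neurons

# Stable linear recurrent weights (spectral radius < 1 via a scaled orthogonal matrix)
A = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]

# Sparse stimulus: only a few neurons receive external drive
u = np.zeros(n)
u[rng.choice(n, 3, replace=False)] = rng.standard_normal(3)

# Simulate x_{t+1} = A x_t + u + noise for a few steps
x = np.zeros(n)
states, next_states = [], []
for _ in range(5):
    x_next = A @ x + u + 0.01 * rng.standard_normal(n)
    states.append(x)
    next_states.append(x_next)
    x = x_next

# Recover u by l1-regularized regression on the residuals x_{t+1} - A x_t
residuals = np.concatenate([xn - A @ xs for xs, xn in zip(states, next_states)])
Phi = np.tile(np.eye(n), (len(states), 1))
u_hat = Lasso(alpha=0.01, fit_intercept=False).fit(Phi, residuals).coef_
print("recovered support:", np.flatnonzero(np.abs(u_hat) > 0.05))
print("true support:     ", np.flatnonzero(u))
```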

    Situational awareness in low-observable distribution grid - exploiting sparsity and multi-timescale data

    Doctor of Philosophy, Department of Electrical and Computer Engineering, Balasubramaniam Natarajan
    The power distribution grid is typically unobservable due to a lack of real-time measurements. While deploying more sensors can alleviate this issue, it also presents new challenges related to data aggregation and the underlying communication infrastructure. Limited real-time measurements hinder distribution system state estimation (DSSE). DSSE involves estimating the system states (i.e., voltage magnitude and voltage angle) from available measurements and system model information. To cope with the unobservability issue, sparsity-based DSSE approaches allow us to recover system state information from a small number of measurements, provided the states of the distribution system exhibit sparsity. However, these approaches perform poorly in the presence of outliers in measurements and errors in system model information. In this dissertation, we first develop robust formulations of sparsity-based DSSE to deal with uncertainties in the system model and measurement data in a low-observable distribution grid. We also combine the advantages of two sparsity-based DSSE approaches to estimate grid states with high fidelity in low-observability regions. In practical distribution systems, information from field sensors and meters is unevenly sampled at different time scales and can be lost during transmission. It is critical to effectively aggregate these information sources for DSSE as well as other tasks related to situational awareness. To address this challenge, the second part of this dissertation proposes a Bayesian framework for multi-timescale data aggregation and matrix completion-based state estimation. Specifically, the multi-scale time-series data aggregated from heterogeneous sources are reconciled using a multitask Gaussian process. The resulting consistent time series, along with confidence bounds on the imputations, are fed into a Bayesian matrix completion method augmented with linearized power-flow constraints for accurate state estimation in a low-observable distribution system. We also develop a computationally efficient recursive Gaussian process approach that is capable of handling batch-wise or real-time measurements while leveraging the network connectivity information of the grid. To further enhance scalability and accuracy, we develop neural network-based approaches (a latent neural ordinary differential equation approach, and a stochastic neural differential equation combined with a recurrent neural network) to aggregate irregular time-series data in the distribution grid. The stochastic neural differential equation with a recurrent neural network also allows us to quantify uncertainty in a holistic manner. Simulation results on different IEEE unbalanced test systems illustrate the high fidelity of the Bayesian and neural network-based methods in aggregating multi-timescale measurements. Lastly, we develop phase and outage awareness approaches for the power distribution grid. In this regard, we first design a graph signal processing approach that identifies phase labels in the presence of limited measurements and incorrect phase labeling. The second approach proposes a novel outage detector for identifying all outages in a reconfigurable distribution network. Simulation results on standard IEEE test systems reveal the potential of these methods to improve situational awareness.
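    A minimal sketch of the sparsity-based DSSE idea: with a linearized measurement model and far fewer measurements than states, l1 regularization can still recover the state vector when its deviation from a nominal profile is sparse. The matrix sizes, noise levels, the random stand-in for a power-flow Jacobian, and the use of scikit-learn's Lasso are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_states, n_meas = 120, 30        # states far outnumber real-time measurements

# Linearized measurement model z = H x + e (H is illustrative, not a real Jacobian)
H = rng.standard_normal((n_meas, n_states)) / np.sqrt(n_meas)

# Sparse deviation of the state vector from a nominal (e.g. flat-start) profile
x_true = np.zeros(n_states)
x_true[rng.choice(n_states, 6, replace=False)] = 0.05 * rng.standard_normal(6)
z = H @ x_true + 0.001 * rng.standard_normal(n_meas)

# Sparsity-based DSSE: l1-regularized least squares recovers the state
# deviations from far fewer measurements than states
x_hat = Lasso(alpha=1e-4, fit_intercept=False, max_iter=100000).fit(H, z).coef_
print("estimation error:", np.linalg.norm(x_hat - x_true))
```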

    Automated Debugging Methodology for FPGA-based Systems

    Electronic devices make up a vital part of our lives, from mobiles, laptops, and computers to home automation systems. Modern designs comprise billions of transistors. With this evolution, however, ensuring that devices fulfill the designer's expectations under variable conditions has become a great challenge, demanding considerable design time and effort. Whenever an error is encountered, the process is restarted. Hence, it is desirable to minimize the number of spins required to achieve an error-free product, as each spin results in a loss of time and effort. Software-based simulation is the main technique for verifying a design before fabrication. However, a few design errors (bugs) are likely to escape the simulation process and subsequently appear during the post-silicon phase. Finding such bugs is time-consuming due to the inherent limited visibility of the hardware. Instead of software simulation of the design in the pre-silicon phase, post-silicon techniques permit designers to verify the functionality of physical implementations of the design. The main benefit of this methodology is that the implemented design in the post-silicon phase runs many orders of magnitude faster than its pre-silicon counterpart, allowing designers to validate their design more exhaustively. This thesis presents five main contributions toward a fast and automated debugging solution for reconfigurable hardware. Throughout this research, we used an obstacle avoidance system for robotic vehicles as a use case to illustrate how to apply the proposed debugging solution in practical environments. The first contribution presents a debugging system capable of providing a lossless trace of debugging data, which permits cycle-accurate replay. This methodology ensures capturing permanent as well as intermittent errors in the implemented design. The contribution also describes a solution to enhance hardware observability: utilizing processor-configurable concentration networks, employing debug data compression to transmit the data more efficiently, and partially reconfiguring the debugging system at run-time to save the time required for design re-compilation and to preserve timing closure. The second contribution presents a solution for communication-centric designs; solutions for designs with multiple clock domains are also discussed. The third contribution presents a priority-based signal selection methodology to identify the signals that are most helpful during the debugging process. A connectivity generation tool is also presented, which can map the identified signals to the debugging system. The fourth contribution presents an automated error detection solution that can capture permanent as well as intermittent errors without continuous monitoring of the debugging data. The proposed solution works even for designs without a golden reference. The fifth contribution proposes the use of artificial intelligence for post-silicon debugging. We present a novel idea of using a recurrent neural network for debugging when a golden reference is available for training the network, and we extend the idea to designs where no golden reference is present.
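    The thesis's actual debug data compression scheme is not detailed in this abstract; the sketch below uses simple run-length encoding as a stand-in to show why a lossless encoding of cycle-accurate traces can cut off-chip bandwidth while still permitting exact replay. The function names and trace format are illustrative assumptions.

```python
from typing import List, Tuple

def compress_trace(samples: List[int]) -> List[Tuple[int, int]]:
    """Run-length encode a cycle-accurate debug trace: debug signals are
    often idle for long stretches, so storing (value, repeat_count) pairs
    reduces the bandwidth needed to stream the trace off-chip."""
    out: List[Tuple[int, int]] = []
    for v in samples:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)
        else:
            out.append((v, 1))
    return out

def decompress_trace(pairs: List[Tuple[int, int]]) -> List[int]:
    """Invert the encoding; it is lossless, so replay stays cycle-accurate."""
    return [v for v, n in pairs for _ in range(n)]

trace = [0, 0, 0, 5, 5, 1, 0, 0, 0, 0]
assert decompress_trace(compress_trace(trace)) == trace
```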

    Identification and Optimal Linear Tracking Control of ODU Autonomous Surface Vehicle

    Autonomous surface vehicles (ASVs) are used for diverse applications of civilian and military importance, such as military reconnaissance, sea patrol, bathymetry, environmental monitoring, and oceanographic research. These unmanned tasks can now be accomplished accurately by ASVs thanks to recent advancements in computing, sensing, and actuating systems; for this reason, researchers around the world have taken an interest in ASVs over the last decade. Due to the ever-changing surface of the water and stochastic disturbances such as wind and tidal currents, which greatly affect the path-following ability of ASVs, identifying an accurate model of the inherently nonlinear and stochastic ASV system, and then designing a viable controller for its planar motion using that model, is a challenging task. Prior work on planar motion control of ASVs is mainly based on theoretical modeling in which the nonlinear hydrodynamic terms are determined, while some work proposed nonlinear control techniques and was limited to simulation results. Moreover, the majority of that work addresses mono- or twin-hull ASVs with a single rudder. The ODU-ASV used in the present research is a twin-hull design with two DC trolling motors for path-following motion. A novel approach combining time-domain open-loop observer Kalman filter identification (OKID) with state-feedback optimal linear tracking control of the ODU-ASV is presented, in which a linear state-space model of the ODU-ASV is obtained from measured input and output data. The accuracy of the identified model is confirmed by validation results of model output data reconstruction and benchmark residual analysis. The OKID-identified model of the ODU-ASV is then utilized to design the proposed controller for its planar motion such that a predefined cost function is minimized, using state and control weighting matrices determined by a multi-objective genetic algorithm optimization technique. Validation results for the proposed controller using step inputs as well as sinusoidal and arc-like trajectories confirm the controller's performance. Moreover, real-time water trials were performed, and their results confirm the validity of the proposed controller for path-following motion of the ODU-ASV.
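    A minimal sketch of the optimal linear tracking step, assuming a hypothetical two-state identified model: in the thesis, OKID supplies the state-space matrices from measured data and a multi-objective genetic algorithm tunes the weighting matrices, whereas here both are fixed by hand for brevity.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical identified planar-motion model; in the thesis, OKID supplies
# A and B from measured input-output data of the ODU-ASV
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
B = np.array([[0.05],
              [0.10]])

# State and control weighting matrices; the thesis tunes these with a
# multi-objective genetic algorithm, fixed values are used here for brevity
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

# Discrete-time LQ gain: u = -K x minimizes the sum of x'Qx + u'Ru
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Simple error-feedback tracking of a step reference in the first state
# (a feedforward term would remove the remaining steady-state offset)
x = np.zeros((2, 1))
r = np.array([[1.0], [0.0]])
for _ in range(100):
    u = -K @ (x - r)
    x = A @ x + B @ u
print("state after 100 steps:", x.ravel())
```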

    Data driven techniques for modal decomposition and reduced-order modelling of fluids

    In this thesis, a number of data-driven techniques are proposed for the analysis and extraction of reduced-order models of fluid flows. Throughout the thesis, the emphasis is on the practicality and interpretability of data-driven feature-extraction techniques, to aid practitioners in flow control and estimation. The first contribution uses a graph-theoretic approach to analyse the similarity of modes extracted using data-driven modal decomposition algorithms, giving a more intuitive understanding of the degrees of freedom in the underlying system. The method extracts clusters of spatially and spectrally similar modes by post-processing the modes extracted using dynamic mode decomposition (DMD) and its variants. The second contribution proposes a method for extracting coherent structures, using snapshots of high-dimensional measurements, that can be mapped to a low-dimensional output of the system. Finding such coherent structures matters because, in the context of active flow control and estimation, the practitioner often has to rely on a limited number of measurable outputs to estimate the state of the flow; ensuring that the extracted flow features can be mapped to the measured outputs is therefore beneficial for estimating the state of the flow. The third contribution concentrates on using neural networks to exploit the nonlinear relationships among linearly extracted modal time series to find a reduced-order state, which can then be used for modelling the dynamics of the flow. The method utilises recurrent neural networks to find an encoding of a high-dimensional set of modal time series, and fully connected neural networks to find a mapping between the encoded state and the physically interpretable modal coefficients. As a result of this architecture, the significantly reduced-order representation maintains an automatically extracted relationship to a higher-dimensional, interpretable state.
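    For context, here is a compact sketch of exact DMD, the decomposition named above, applied to a synthetic travelling-wave dataset. The rank, the data, and the helper name are illustrative, not the thesis's implementation.

```python
import numpy as np

def dmd(X, r):
    """Exact dynamic mode decomposition of a snapshot matrix X (space x time):
    fit the linear operator that advances each snapshot to the next and
    return its leading eigenvalues and spatial modes."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]              # rank-r truncation
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s     # projected linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (X2 @ Vh.conj().T / s) @ W / eigvals    # exact DMD modes
    return eigvals, modes

# Synthetic data: two travelling waves with known frequencies
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 10, 100)
X = np.real(np.outer(np.exp(1j * x), np.exp(2j * t))
            + 0.5 * np.outer(np.exp(2j * x), np.exp(4j * t)))
eigvals, modes = dmd(X, r=4)
print("eigenvalue magnitudes (about 1 for pure oscillations):",
      np.abs(eigvals).round(3))
```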

    Real Time Dynamic State Estimation: Development and Application to Power System

    Since the state estimation algorithm was first proposed, considerable research interest has been shown in adapting and applying different versions of this algorithm to power transmission systems. Those applications include power system state estimation (PSSE) and short-term operational planning. At the transmission level, state estimation offers various applications, including process monitoring and security monitoring. Recently, distribution systems have experienced a much higher level of variability and complexity due to the large increase in the penetration of distributed energy resources (DER), such as distributed generation (DG), demand-responsive loads, and storage devices. The first step toward better situational awareness at the distribution level is to adapt the most developed real-time state estimation algorithms to distribution systems, i.e., distribution system state estimation (DSSE). DSSE plays an important role in the operation of distribution systems. Motivated by the increasing need for robust and accurate real-time state estimators, capable of capturing the dynamics of system states and suitable for large-scale distribution networks with a lack of sensors, this thesis introduces three state estimators based on a distributed approach. The first proposed estimator is the square root cubature Kalman filter (SCKF), an improved version of the cubature Kalman filter (CKF). The second is based on a combination of the particle filter (PF) and the SCKF, which yields a square root cubature particle filter (SCPF); this technique employs a PF with the proposal distribution provided by the SCKF. Additionally, a combination of the PF and the CKF, which yields a cubature particle filter (CPF), is proposed. Unlike the other types of filters, the PF is a non-Gaussian algorithm from which a true posterior distribution of the estimated states can be obtained. This permits the replacement of real measurements with pseudo-measurements and allows the calculation to be applied to large-scale networks with a high degree of nonlinearity. This research also provides a comparison between the above-mentioned algorithms and the latest algorithms available in the literature. To validate their robustness and accuracy, the proposed methods were tested and verified using a large range of customer loads with 50% uncertainty on a connected IEEE 123-bus system. Next, a forecast-aided state estimator is proposed. The forecast-aided state estimator is needed to increase the immunity of the state estimator against delays in and losses of the real measurements due to sensor malfunction or communication failure. Moreover, due to the lack of measurements in the electrical distribution system, pseudo-measurements are needed to ensure the observability of the state estimator. Therefore, a very short-term load forecasting algorithm that ensures observability and provides reliable backup data in case of sensor malfunction or communication failure is proposed. The proposed very short-term load forecasting is based on a wavelet recurrent neural network (WRNN). The historical data used to train the RNN are decomposed into low-frequency, low-high-frequency, and high-frequency components. The neural networks are trained using an extended Kalman filter (EKF) for the low-frequency component and a square root cubature Kalman filter (SCKF) for both the low-high-frequency and high-frequency components. To estimate the system states, a state estimation algorithm based on the SCKF is used. The results demonstrate the theoretical and practical advantages of the proposed methodology. Finally, in recent years several cyber-attacks have been recorded against sensitive monitoring systems, among them the automatic generation control (AGC) system, a fundamental control system used in all power networks to keep the network frequency at its desired value and to maintain tie-line power exchanges at their scheduled values. Motivated by the increasing need for robust and safe operation of AGCs, this thesis introduces an attack-resilient control scheme for the AGC system based on attack detection using real-time state estimation. The proposed approach requires redundancy of the sensors available at the transmission level in the power network and leverages recent results on attack detection using mixed integer linear programming (MILP). The proposed algorithm detects and identifies the sensors under attack in the presence of noise. The non-attacked sensors are then averaged and made available to the feedback controller. No assumptions about the nature of the attack signal are made. The proposed method is simulated using a large range of attack signals and uncertain sensor measurements. All the proposed algorithms were implemented in MATLAB to verify their theoretical expectations.
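    As a rough illustration of the cubature-filter machinery named above, here is one predict/update cycle of a plain CKF in Python; the square-root variant used in the thesis, which propagates Cholesky factors for numerical stability, is omitted for brevity, and the toy process and measurement models are assumptions.

```python
import numpy as np

def ckf_step(x, P, z, f, h, Q, R):
    """One predict/update cycle of the cubature Kalman filter: propagate
    2n cubature points through the process model f and measurement model h
    (third-degree spherical-radial rule) instead of linearizing them."""
    n = x.size
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # cubature points
    # --- predict ---
    Xp = np.array([f(x + S @ xi[:, i]) for i in range(2 * n)]).T
    x_pred = Xp.mean(axis=1)
    P_pred = (Xp - x_pred[:, None]) @ (Xp - x_pred[:, None]).T / (2 * n) + Q
    # --- update ---
    Sp = np.linalg.cholesky(P_pred)
    Xs = np.array([x_pred + Sp @ xi[:, i] for i in range(2 * n)]).T
    Zs = np.array([h(Xs[:, i]) for i in range(2 * n)]).T
    z_pred = Zs.mean(axis=1)
    Pzz = (Zs - z_pred[:, None]) @ (Zs - z_pred[:, None]).T / (2 * n) + R
    Pxz = (Xs - x_pred[:, None]) @ (Zs - z_pred[:, None]).T / (2 * n)
    K = Pxz @ np.linalg.inv(Pzz)
    return x_pred + K @ (z - z_pred), P_pred - K @ Pzz @ K.T

# Toy usage: a two-state linear process observed through its first state
f = lambda x: np.array([x[0] + 0.1 * x[1], 0.99 * x[1]])
h = lambda x: np.array([x[0]])
x, P = np.zeros(2), np.eye(2)
x, P = ckf_step(x, P, z=np.array([0.3]), f=f, h=h,
                Q=1e-3 * np.eye(2), R=1e-2 * np.eye(1))
print("updated state:", x)
```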

    Machine Learning for Physiological Time Series: Representing and Controlling Blood Glucose for Diabetes Management

    Type 1 diabetes is a chronic health condition affecting over one million patients in the US, in which blood glucose (sugar) levels are not well regulated by the body. Researchers have sought to use physiological data (e.g., blood glucose measurements) collected from wearable devices to manage this disease, either by forecasting future blood glucose levels for predictive alarms, or by automating insulin delivery for blood glucose management. However, the application of machine learning (ML) to these data is hampered by latent context, limited supervision, and complex temporal dependencies. To address these challenges, we develop and evaluate novel ML approaches in the context of i) representing physiological time series, particularly for forecasting blood glucose values, and ii) decision making for when and how much insulin to deliver. When learning representations, we leverage the structure of the physiological sequence as an implicit information stream. In particular, we a) incorporate latent context when predicting adverse events by jointly modeling patterns in the data and the context those patterns occurred under, b) propose novel types of self-supervision to handle limited data, and c) propose deep models that predict functions underlying trajectories to encode temporal dependencies. In the context of decision making, we use reinforcement learning (RL) for blood glucose management. Through the use of an FDA-approved simulator of the glucoregulatory system, we achieve strong performance using deep RL with and without human intervention. However, the success of RL typically depends on realistic simulators or experimental real-world deployment, neither of which is currently practical for problems in health. Thus, we propose techniques for leveraging imperfect simulators and observational data. Beyond diabetes, representing and managing physiological signals is an important problem. By adapting techniques to better leverage the structure inherent in the data, we can help overcome these challenges.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/163134/1/ifox_1.pd
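    To make the RL framing concrete, below is a deliberately crude tabular Q-learning loop on a one-state toy glucose model. The dissertation uses deep RL with an FDA-approved simulator, so every dynamic, constant, and reward in this sketch is a labeled assumption rather than the author's method.

```python
import numpy as np

# Crude one-state stand-in for the glucoregulatory simulator: glucose drifts
# upward unless insulin is dosed (the thesis uses an FDA-approved simulator,
# which this toy model does not attempt to match)
def step(glucose, insulin, rng):
    return glucose + 5.0 - 30.0 * insulin + rng.normal(0.0, 2.0)

def reward(glucose):
    return -abs(glucose - 110.0)      # penalize distance from a 110 mg/dL target

rng = np.random.default_rng(0)
doses = [0.0, 0.1, 0.2]               # discretized insulin actions (arbitrary units)
bins = np.arange(40, 400, 20)         # discretized glucose states
Q = np.zeros((len(bins) + 1, len(doses)))

for episode in range(2000):
    g = 180.0
    for _ in range(40):
        s = np.digitize(g, bins)
        # Epsilon-greedy action selection
        a = rng.integers(len(doses)) if rng.random() < 0.1 else int(Q[s].argmax())
        g_next = step(g, doses[a], rng)
        s_next = np.digitize(g_next, bins)
        # Tabular Q-learning update
        Q[s, a] += 0.1 * (reward(g_next) + 0.95 * Q[s_next].max() - Q[s, a])
        g = g_next

s = np.digitize(180.0, bins)
print("preferred dose at 180 mg/dL:", doses[int(Q[s].argmax())])
```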

    Sparsity through evolutionary pruning prevents neuronal networks from overfitting

    Get PDF
    Modern machine learning techniques take advantage of the exponentially rising computing power of new-generation processing units, and the number of parameters trained to solve complex tasks has therefore increased greatly over the last decades. However, in contrast to our brain, these networks still fail to develop general intelligence in the sense of being able to solve several complex tasks with a single network architecture. This could be because the brain is not a randomly initialized neural network that must be trained simply by investing a great deal of computation, but instead has a fixed hierarchical structure from birth. To make progress in decoding the structural basis of biological neural networks, we here chose a bottom-up approach, evolutionarily training small neural networks to perform a maze task. This simple maze task requires dynamic decision making with delayed rewards. We were able to show that random severance of connections during the evolutionary optimization led to better generalization performance compared to fully connected networks. We conclude that sparsity is a central property of neural networks and should be considered in modern machine learning approaches.
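    A minimal sketch of the evolutionary-pruning idea on a toy regression task standing in for the maze task: mutate weights across generations while randomly severing connections through a binary mask, then keep the fittest individuals. Network sizes, mutation rates, and the fitness function are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(params, mask, X):
    """Tiny two-layer network; `mask` zeroes out severed connections."""
    W1, W2 = params
    return np.tanh(X @ (W1 * mask)) @ W2

def fitness(params, mask, X, y):
    return -np.mean((forward(params, mask, X) - y) ** 2)

# Toy regression task standing in for the maze task of the paper
X = rng.standard_normal((200, 8))
y = np.sin(X[:, :1])

pop = [(0.5 * rng.standard_normal((8, 16)), 0.5 * rng.standard_normal((16, 1)))
       for _ in range(30)]
mask = np.ones((8, 16))

for gen in range(100):
    # Mutate each parent's weights to create offspring
    children = [(W1 + 0.05 * rng.standard_normal(W1.shape),
                 W2 + 0.05 * rng.standard_normal(W2.shape)) for W1, W2 in pop]
    # Occasionally sever a random input-to-hidden connection (pruning)
    if rng.random() < 0.5:
        mask[rng.integers(8), rng.integers(16)] = 0.0
    # Select the fittest individuals under the current sparsity mask
    pop = sorted(pop + children, key=lambda p: fitness(p, mask, X, y),
                 reverse=True)[:30]

print(f"connection density: {mask.mean():.2f}, "
      f"best fitness: {fitness(pop[0], mask, X, y):.4f}")
```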