
    Quantum Brain: A Recurrent Quantum Neural Network Model to Describe Eye Tracking of Moving Targets

    A theoretical quantum brain model is proposed using a nonlinear Schroedinger wave equation. The model proposes that a quantum process mediates the collective response of a neural lattice (the classical brain), and it is used to explain eye movements when tracking moving targets. Simulating the quantum brain model with a Recurrent Quantum Neural Network (RQNN), two very interesting phenomena are observed. First, as eye sensor data is processed in the classical brain, a wave packet is triggered in the quantum brain; this wave packet moves like a particle. Second, when the eye tracks a fixed target, this wave packet moves not continuously but in a discrete mode. This result is reminiscent of the saccadic movements of the eye, which consist of 'jumps' and 'rests'. However, such saccadic movement is intertwined with smooth-pursuit movements when the eye has to track a dynamic trajectory. In a sense, this is the first theoretical model to explain the experimentally observed eye movements in a static-scene situation. The resulting prediction is found to be very precise and efficient in comparison to classical objective modeling schemes such as the Kalman filter. (Comment: 7 pages, 7 figures; submitted to Physical Review Letters)
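    The abstract does not spell out the model equations, so the following is only a minimal sketch of the general idea: a wave packet governed by a cubic nonlinear Schroedinger equation, integrated with a split-step Fourier scheme, whose potential is shaped by the error between the packet's mean position and a noisy target reading. The grid, couplings, and the quadratic form of the potential are assumptions for illustration, not the authors' RQNN.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): a Gaussian wave packet
# evolving under a cubic nonlinear Schroedinger equation via a split-step
# Fourier scheme, with a potential shaped by the tracking error.
N, L = 512, 40.0
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dt, g, beta = 0.01, 1.0, 5.0              # time step, nonlinearity, error coupling (assumed)

psi = np.exp(-x**2).astype(complex)       # initial Gaussian wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

def step(psi, y_obs):
    """One split-step update driven by the noisy target position y_obs."""
    x_hat = np.sum(x * np.abs(psi)**2) * dx              # position estimate (mean of |psi|^2)
    V = beta * (x - y_obs)**2 / 2                        # error-shaped potential (assumed form)
    psi = psi * np.exp(-1j * dt * (V + g * np.abs(psi)**2))   # potential/nonlinear half-step
    psi = np.fft.ifft(np.exp(-1j * dt * k**2 / 2) * np.fft.fft(psi))  # kinetic half-step
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)          # renormalise
    return psi, x_hat

rng = np.random.default_rng(0)
for n in range(2000):
    target = 2.0 * np.sin(0.5 * n * dt)                  # moving target to track
    y = target + 0.2 * rng.normal()                      # noisy eye-sensor reading
    psi, estimate = step(psi, y)
```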

    The Neural Particle Filter

    The robust estimation of dynamically changing features, such as the position of prey, is one of the hallmarks of perception. On an abstract, algorithmic level, nonlinear Bayesian filtering, i.e. the estimation of temporally changing signals based on the history of observations, provides a mathematical framework for dynamic perception in real time. Since the general nonlinear filtering problem is analytically intractable, particle filters are considered among the most powerful approaches to approximating the solution numerically. Yet these algorithms predominantly rely on importance weights, and it remains an unresolved question how the brain could implement such an inference strategy with a neuronal population. Here, we propose the Neural Particle Filter (NPF), a weightless particle filter that can be interpreted as the neuronal dynamics of a recurrently connected neural network that receives feed-forward input from sensory neurons and represents the posterior probability distribution in terms of samples. Specifically, this algorithm bridges the gap between the computational task of online state estimation and an implementation that allows networks of neurons in the brain to perform nonlinear Bayesian filtering. The model not only captures the properties of temporal and multisensory integration according to Bayesian statistics, but also allows online learning with a maximum likelihood approach. With an example from multisensory integration, we demonstrate that the numerical performance of the model is adequate to account for both filtering and identification problems. Because of the weightless approach, our algorithm alleviates the 'curse of dimensionality' and thus outperforms conventional, weighted particle filters in higher dimensions for a limited number of particles.
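    As a rough illustration of the weight-less filtering idea described above (not the authors' NPF equations), the sketch below lets every particle follow the prior dynamics plus a feedback term proportional to its own prediction error, and reads out the posterior mean as the average of the equally weighted particles. The drift, observation function, gain, and noise levels are assumptions.

```python
import numpy as np

# Weight-less particle filter sketch: particles follow prior drift, diffusion,
# and an observation-error feedback term; no importance weights, no resampling.
rng = np.random.default_rng(0)

def f(x):                     # hidden-state drift (assumed)
    return -x + np.sin(x)

def h(x):                     # observation function (assumed)
    return x

dt, sig_x, sig_y, gain = 0.01, 0.5, 0.2, 2.0
n_particles, n_steps = 100, 5000

x_true = 0.0
particles = rng.normal(0.0, 1.0, n_particles)
estimates = np.empty(n_steps)

for t in range(n_steps):
    # simulate the true process and a noisy observation
    x_true += f(x_true) * dt + sig_x * np.sqrt(dt) * rng.normal()
    y = h(x_true) + sig_y * rng.normal()

    # every particle: prior drift + diffusion + observation feedback (no weights)
    particles += (f(particles) + gain * (y - h(particles))) * dt \
                 + sig_x * np.sqrt(dt) * rng.normal(size=n_particles)

    estimates[t] = particles.mean()   # posterior mean as the filter output
```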

    Canonical Cortical Circuits and the Duality of Bayesian Inference and Optimal Control

    The duality of sensory inference and motor control has been known since the 1960s and has recently been recognized as the commonality in the computations required for the posterior distributions in Bayesian inference and the value functions in optimal control. Meanwhile, an intriguing question about the brain is why the entire neocortex shares a canonical six-layer architecture while its posterior and anterior halves are engaged in sensory processing and motor control, respectively. Here we consider the hypothesis that the sensory and motor cortical circuits implement the dual computations for Bayesian inference and optimal control, or perceptual and value-based decision making, respectively. We first review the classic duality of inference and control in linear quadratic systems and then review the correspondence between dynamic Bayesian inference and optimal control. Based on the architecture of the canonical cortical circuit, we explore how different cortical neurons may represent variables and implement computations. (Comment: 13 pages, 3 figures)
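    A compact way to see the linear quadratic duality reviewed in the paper is to place the Kalman covariance recursion next to the LQR value recursion; under the substitutions A <-> A^T, C <-> B^T, and noise covariances <-> cost matrices, the two Riccati updates have the same form. The sketch below iterates both with arbitrary illustrative matrices; it is not taken from the paper.

```python
import numpy as np

# Dual Riccati recursions: Kalman filter error covariance vs. LQR cost-to-go.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])          # control input matrix
C = np.array([[1.0, 0.0]])            # observation matrix
Qw = 0.01 * np.eye(2)                 # process-noise covariance
Rv = 0.1 * np.eye(1)                  # observation-noise covariance
Qc = np.eye(2)                        # state cost
Rc = np.eye(1)                        # control cost

def riccati_filter(P):
    """One step of the prediction-form Kalman covariance recursion."""
    K = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + Rv)
    return A @ P @ A.T - K @ C @ P @ A.T + Qw

def riccati_control(S):
    """One step of the backward LQR value recursion."""
    L = np.linalg.inv(B.T @ S @ B + Rc) @ B.T @ S @ A
    return A.T @ S @ A - A.T @ S @ B @ L + Qc

P, S = np.eye(2), np.eye(2)
for _ in range(500):                  # iterate both recursions to steady state
    P, S = riccati_filter(P), riccati_control(S)
```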

    Discrete-time neural network based state observer with neural network based control formulation for a class of systems with unmatched uncertainties

    An observer is a dynamic system that estimates the state variables of another system from noisy measurements, either to estimate unmeasurable states or to improve the accuracy of the state measurements. The Modified State Observer (MSO) is a technique that uses a standard observer structure augmented with a neural network to estimate the system states as well as the system uncertainty. It has been used in orbit uncertainty estimation and atmospheric reentry uncertainty estimation problems to correctly estimate unmodeled system dynamics, and a form of the MSO has been used to control a nonlinear electrohydraulic system with parameter uncertainty using a simplified linear model. In this paper an extension of the MSO to discrete time is developed using Lyapunov stability theory. Discrete-time systems are found in all digital hardware implementations, such as a Martian rover, a quadcopter UAV, or a digital flight control system, and have the added benefit of reduced computation time compared to continuous-time formulations. The derived adaptive update law guarantees stability of the error dynamics and boundedness of the neural network weights. To validate the discrete-time MSO (DMSO), simulation studies are performed on a two-wheeled inverted pendulum (TWIP) robot, an unstable nonlinear system with unmatched uncertainties. Using a linear model with parameter uncertainties, the DMSO is shown to correctly estimate both the state of the system and the system uncertainty, providing state estimates that are orders of magnitude more accurate and obtained up to 10 times faster than with the Discrete Kalman Filter. The DMSO is implemented on an actual TWIP robot to further validate its performance and demonstrate its applicability to the discrete-time systems found in many aerospace applications. Additionally, a new form of neural network control is developed to compensate for the unmatched uncertainties in the TWIP system by using a state variable as a virtual control input. In all cases the neural network based control improves controller effectiveness, yielding the most effective controller, which performs on average 53.1% better than LQR control alone.
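    The thesis itself defines the DMSO structure and adaptive law; the sketch below is only a generic discrete-time observer augmented with a single-layer neural network that learns an unmodeled term, in the spirit of the description above. The plant, basis functions, gains, and the damping term in the weight update are assumptions, not the derived Lyapunov-based design.

```python
import numpy as np

# Generic discrete-time observer with a neural-network uncertainty estimate.
rng = np.random.default_rng(1)

A = np.array([[1.0, 0.02], [0.0, 1.0]])   # nominal linear model (assumed)
B = np.array([[0.0], [0.02]])
C = np.eye(2)                              # both states measured (with noise)
K = 0.4 * np.eye(2)                        # observer gain (assumed)
gamma, kappa = 0.05, 1e-3                  # learning rate, damping term (assumed)

def basis(xh, u):
    """Radial-basis features of the estimated state and input (assumed)."""
    z = np.concatenate([xh, u])
    centers = np.linspace(-1, 1, 5)
    return np.exp(-np.subtract.outer(z, centers) ** 2).ravel()

def delta(x):                              # "true" unmatched uncertainty (assumed)
    return np.array([0.0, 0.3 * np.sin(x[0])])

x = np.array([0.5, 0.0]); xh = np.zeros(2)
W = np.zeros(basis(xh, np.zeros(1)).size)  # NN output weights (one output channel)

for k in range(3000):
    u = np.array([-0.5 * x[0] - 0.8 * x[1]])            # simple stabilising input
    y = C @ x + 0.01 * rng.normal(size=2)                # noisy measurement
    phi = basis(xh, u)
    unc_hat = np.array([0.0, W @ phi])                   # NN uncertainty estimate
    e = y - C @ xh                                       # output estimation error
    xh = A @ xh + B @ u + unc_hat + K @ e                # observer: model + NN + injection
    W += gamma * e[1] * phi - kappa * np.abs(e[1]) * W   # damped gradient-style update
    x = A @ x + B @ u + delta(x) + 0.005 * rng.normal(size=2)   # true plant step
```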

    Glucose-Insulin regulator for type 1 diabetes using high order neural networks

    In this paper a glucose-insulin regulator for Type 1 Diabetes using artificial neural networks (ANN) is proposed. This is done using a discrete recurrent high order neural network to identify and control a nonlinear dynamical system that represents the behavior of the pancreas' beta-cells in a virtual patient. The ANN, which reproduces and identifies the system's dynamical behavior, is configured in series-parallel form and trained on line using the extended Kalman filter algorithm to achieve fast convergence of the identification in silico. The control objective is to regulate the glucose-insulin level under different glucose inputs and is based on a nonlinear neural block control law. A safety block is included between the control output signal and the virtual patient with type 1 diabetes mellitus. Simulations cover a period of three days, and results are compared during the overnight fasting period in Open-Loop (OL) versus Closed-Loop (CL). Tests in Semi-Closed-Loop (SCL) mode add feedforward information to the control algorithm. We conclude that the controller is able to drive the glucose to target during overnight periods and that the feedforward term is necessary to control the postprandial period.
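    To make the identification scheme concrete, the sketch below trains a small recurrent high-order neural network in series-parallel configuration with an extended Kalman filter, the training approach named in the abstract. The surrogate plant standing in for the virtual patient, the high-order regressor, and the EKF tuning values are illustrative assumptions.

```python
import numpy as np

# On-line identification with a recurrent high-order neural network (RHONN)
# whose weights are trained by an extended Kalman filter (series-parallel form).
rng = np.random.default_rng(2)

def plant(x, u):                    # surrogate nonlinear plant (assumed)
    return 0.9 * x - 0.2 * x**2 + 0.5 * u

def z(x, u):                        # high-order terms of state and input (assumed)
    s = np.tanh(x)
    return np.array([s, s**2, s * u, u, 1.0])

n = 5
W = np.zeros(n)                      # RHONN weights
P = 10.0 * np.eye(n)                 # EKF weight covariance
Q, R = 1e-4 * np.eye(n), 1e-2        # EKF noise covariances (tuning assumptions)

x = 0.1
for k in range(2000):
    u = np.sin(0.01 * k)                     # exciting input signal
    H = z(x, u)                              # output Jacobian w.r.t. the weights
    x_hat = W @ H                            # series-parallel one-step prediction
    x_next = plant(x, u) + 0.01 * rng.normal()
    e = x_next - x_hat                       # identification error
    # EKF weight update
    S = H @ P @ H + R
    Kk = (P @ H) / S
    W = W + Kk * e
    P = P - np.outer(Kk, H @ P) + Q
    x = x_next
```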

    Training Recurrent Neural Networks With the Levenberg-Marquardt Algorithm for Optimal Control of a Grid-Connected Converter

    This paper investigates how to train a recurrent neural network (RNN) using the Levenberg-Marquardt (LM) algorithm and how to implement optimal control of a grid-connected converter (GCC) using an RNN. To train an RNN successfully and efficiently with the LM algorithm, a new forward accumulation through time (FATT) algorithm is proposed to calculate the Jacobian matrix required by LM, and the paper explores how to incorporate FATT into the LM algorithm. The results show that the combination of the LM and FATT algorithms trains RNNs better than the conventional backpropagation through time algorithm. The paper also presents an analytical study of the optimal control of GCCs, including theoretically ideal optimal and suboptimal controllers. To overcome the inapplicability of the ideal optimal GCC controller under practical conditions, a new RNN controller with an improved input structure is proposed to approximate it. The performance of the ideal optimal controller and a well-trained RNN controller was compared in close-to-real-life power converter switching environments, demonstrating that the proposed RNN controller can achieve close to ideal optimal control performance even under low sampling rates. The excellent performance of the proposed RNN controller under challenging and distorted system conditions further indicates the feasibility of using an RNN to approximate optimal control in practical applications.
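    The sketch below illustrates the forward-accumulation idea for a scalar RNN: sensitivities of the hidden state with respect to each parameter are propagated forward alongside the state, so the trajectory Jacobian required by Levenberg-Marquardt is assembled in a single pass, followed by a damped LM update. The tiny model, reference trajectory, and damping schedule are assumptions, not the paper's GCC setup.

```python
import numpy as np

# Forward accumulation through time (FATT) for a scalar RNN h_{k+1} = tanh(a*h_k + b*u_k),
# output y_k = c*h_k, followed by Levenberg-Marquardt updates of theta = (a, b, c).

def fatt_jacobian(theta, u, y_ref):
    a, b, c = theta
    T = len(u)
    h, dh = 0.0, np.zeros(3)                 # hidden state and d h / d theta
    err = np.empty(T)
    J = np.empty((T, 3))                     # Jacobian of the error trajectory
    for k in range(T):
        err[k] = c * h - y_ref[k]
        J[k] = [c * dh[0], c * dh[1], c * dh[2] + h]   # d err_k / d theta
        pre = a * h + b * u[k]
        dpre = np.array([h + a * dh[0], u[k] + a * dh[1], a * dh[2]])
        h = np.tanh(pre)
        dh = (1.0 - h**2) * dpre             # forward sensitivity recursion
    return err, J

rng = np.random.default_rng(3)
u = rng.normal(size=200)
y_ref = np.sin(0.1 * np.arange(200))         # reference trajectory (assumed)
theta, mu = np.array([0.3, 0.3, 0.3]), 1e-2

for it in range(50):                         # Levenberg-Marquardt iterations
    e, J = fatt_jacobian(theta, u, y_ref)
    step = np.linalg.solve(J.T @ J + mu * np.eye(3), J.T @ e)
    theta_new = theta - step
    e_new, _ = fatt_jacobian(theta_new, u, y_ref)
    if e_new @ e_new < e @ e:                # accept step, relax damping
        theta, mu = theta_new, max(mu * 0.5, 1e-8)
    else:                                    # reject step, increase damping
        mu *= 2.0
```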

    Dynamic Data Assimilation

    Data assimilation is a process of fusing data with a model for the singular purpose of estimating unknown variables. It can be used, for example, to predict the evolution of the atmosphere at a given point and time. This book examines data assimilation methods including Kalman filtering, artificial intelligence, neural networks, machine learning, and cognitive computing.
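    As a minimal example of the Kalman-filter flavor of data assimilation mentioned above, the sketch below runs a forecast/analysis cycle that fuses a linear model forecast with a stream of noisy observations; the model, covariances, and observation operator are illustrative assumptions.

```python
import numpy as np

# One forecast/analysis data-assimilation cycle per observation, Kalman style.
A = np.array([[1.0, 0.1], [0.0, 0.95]])   # linear forecast model (assumed)
H = np.array([[1.0, 0.0]])                # observe only the first component
Q = 0.01 * np.eye(2)                      # model-error covariance
R = np.array([[0.04]])                    # observation-error covariance

x, P = np.zeros(2), np.eye(2)             # analysis state and covariance
for y_obs in [0.3, 0.5, 0.4, 0.6]:        # stream of observations (illustrative)
    # forecast step: propagate state and uncertainty with the model
    x, P = A @ x, A @ P @ A.T + Q
    # analysis step: Kalman gain weighs forecast against observation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([y_obs]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
```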

    Vision-Based Lane-Changing Behavior Detection Using Deep Residual Neural Network

    Accurate lane localization and lane change detection are crucial in advanced driver assistance systems and autonomous driving systems for safer and more efficient trajectory planning. Conventional localization devices such as the Global Positioning System provide only road-level resolution for car navigation, which is insufficient for lane-level decision making. The state-of-the-art technique for lane localization is to use Light Detection and Ranging (LiDAR) sensors to correct the global localization error and achieve centimeter-level accuracy, but real-time implementation and widespread adoption of LiDAR are still limited by its computational burden and current cost. As a cost-effective alternative, vision-based lane change detection has been highly regarded for affordable autonomous vehicles to support lane-level localization. A deep learning-based computer vision system is developed to detect lane change behavior using images captured by a front-view camera mounted on the vehicle and data from the inertial measurement unit during highway driving. Testing results on real-world driving data show that the proposed method is robust, runs in real time, and achieves around 87% lane change detection accuracy. Compared to the average human reaction to visual stimuli, the proposed computer vision system works 9 times faster, which makes it capable of helping make life-saving decisions in time.
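    The paper's exact architecture is not given in the abstract, so the sketch below is only one plausible reading: a ResNet-18 backbone encodes the front-camera frame, its features are concatenated with an IMU feature vector, and a small head classifies the maneuver. The class labels, IMU dimensionality, and fusion scheme are assumptions (PyTorch and torchvision assumed available).

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Deep-residual lane-change classifier sketch: ResNet-18 image features fused
# with an IMU feature vector before a small classification head.
class LaneChangeNet(nn.Module):
    def __init__(self, imu_dim: int = 6, n_classes: int = 3):
        super().__init__()
        backbone = resnet18(weights=None)          # residual image encoder (no pretrained weights)
        backbone.fc = nn.Identity()                # keep the 512-d feature vector
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512 + imu_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),             # e.g. keep / change-left / change-right (assumed)
        )

    def forward(self, image: torch.Tensor, imu: torch.Tensor) -> torch.Tensor:
        feat = self.backbone(image)                # (batch, 512)
        return self.head(torch.cat([feat, imu], dim=1))

# one forward pass on dummy data
model = LaneChangeNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 6))
print(logits.shape)                                # torch.Size([2, 3])
```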