
    Pulsatile electrical stimulation of auditory nerve fibres: a modelling approach

    A stochastic leaky integrate-and-fire nerve model with a dynamical threshold (LIFDT) has previously been derived for the neural response to sinusoidal electrical stimulation. The LIFDT model incorporates both refractory and accommodation effects in the interpulse interactions. In this thesis, this phenomenological nerve model is extended to the neural response to pulsatile electrical stimulation, which is widely used in cochlear implants because it reduces inter-channel interference. Neurophysiological data from adult guinea pigs were fitted to the LIFDT model. First, the parameters were constrained by input/output (I/O) curve analysis. The data showed strong accommodation effects. I/O functions for each pulse were plotted from the physiological data, and fitting them constrained four of the LIFDT model's parameters; the remaining five were "optimised by eye". Although the LIFDT model is built with a stimulus-dependent threshold, its response to short-duration biphasic pulsatile stimuli exhibits only weak accommodation effects. To avoid the complication of full optimisation, an analytical approximation of the LIFDT model was then derived for pulsatile electrical stimulation; it improves computational efficiency and shows how the model's parameters affect the accommodation effects. Theoretical predictions indicate that the LIFDT model cannot capture the strong accommodation effects in the neurophysiological data because of structural limitations. As an alternative, a Markov renewal process model was used to track the pulse-train response. Both stationary and non-stationary Markov renewal process models were fitted to the neurophysiological data, and both can translate conventional PST histograms into conditional probabilities that are directly related to the interpulse intervals. The consistent results from the two models provide a qualitative analysis of the accommodation characteristics.
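    The LIFDT mechanism described above, a leaky membrane whose firing threshold jumps after each spike and then relaxes back, can be sketched in a few lines. This is a minimal illustrative simulation, not the thesis's fitted model; all parameter values (time constants, threshold increment, noise level, pulse amplitude) are assumptions chosen only to show the mechanism.

```python
import numpy as np

def lifdt_response(pulses, dt=1e-5, tau_m=1e-3, tau_th=5e-3,
                   v_th0=1.0, d_th=0.5, noise=0.05, seed=0):
    """Stochastic leaky integrate-and-fire neuron with a dynamical
    threshold (LIFDT), driven by a sampled stimulus current.
    Returns spike times in seconds."""
    rng = np.random.default_rng(seed)
    v, th = 0.0, v_th0
    spikes = []
    for i, current in enumerate(pulses):
        # leaky membrane integration plus Gaussian current noise
        v += dt / tau_m * (current - v) + noise * np.sqrt(dt) * rng.standard_normal()
        # threshold relaxes back toward its resting value
        th += dt / tau_th * (v_th0 - th)
        if v >= th:
            spikes.append(i * dt)
            v = 0.0       # reset the membrane
            th += d_th    # raise the threshold: refractoriness/accommodation
    return spikes

# biphasic pulse train: one 50 us + 50 us pulse every 1 ms, 100 pulses total
dt, n = 1e-5, 10000
I = np.zeros(n)
for start in range(0, n, 100):
    I[start:start + 5] = 25.0        # cathodic phase
    I[start + 5:start + 10] = -25.0  # anodic phase
spike_times = lifdt_response(I, dt=dt)
print(len(spike_times))
```

    Because each spike raises the threshold, the unit skips several subsequent pulses until the threshold has decayed back, which is the kind of interpulse interaction the model is meant to capture.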

    A Closed-Loop Bidirectional Brain-Machine Interface System For Freely Behaving Animals

    A brain-machine interface (BMI) creates an artificial pathway between the brain and the external world. BMI research and applications have received enormous attention from the scientific community as well as the public over the past decade. However, most BMI research relies on experiments with tethered or sedated animals and rack-mount equipment, which significantly restricts the experimental methods and paradigms. Moreover, most research to date has focused on neural signal recording or decoding in an open-loop manner. Although a closed-loop, wireless BMI is critical to the success of an extensive range of neuroscience research, it is an approach yet to be widely used, with electronics design being one of the major bottlenecks. The key goal of this research is to address the design challenges of a closed-loop, bidirectional BMI by providing innovative solutions from the neuron-electronics interface up to the system level. Circuit design innovations are proposed in the neural recording front-end, the neural feature extraction module, and the neural stimulator. Practical design issues of the bidirectional neural interface, the closed-loop controller, and the overall system integration are carefully studied and discussed. To the best of our knowledge, this work presents the first reported portable system providing all the hardware required for a closed-loop sensorimotor neural interface, the first wireless sensory encoding experiment conducted in freely swimming animals, and the first bidirectional study of hippocampal field potentials in freely behaving animals from sedation to sleep. This thesis gives a comprehensive survey of bidirectional BMI designs, reviews the key design trade-offs in neural recorders and stimulators, and summarizes the neural features and mechanisms required for successful closed-loop operation. The circuit and system design details are presented together with bench testing and animal experimental results. The methods, circuit techniques, system topology, and experimental paradigms proposed in this work can be used in a wide range of neurophysiology research and neuroprosthetic development, especially in experiments with freely behaving animals.

    A low phase noise ring oscillator phase-locked loop for wireless applications

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 129). This thesis describes the circuit-level design of a 900 MHz ΣΔ ring-oscillator-based phase-locked loop in a 0.35 μm technology. Multiple phase noise theories are considered, giving insight into low-phase-noise voltage-controlled oscillator design. The circuit utilizes a fully symmetric differential voltage-controlled oscillator with cascode current-starved inverters to reduce current noise. A compact multi-modulus prescaler is presented, based on modified true single-phase clock flip-flops with integrated logic. A fully differential charge pump with switched-capacitor common-mode feedback is used in conjunction with a nonlinear phase-frequency detector for accelerated acquisition time. By Colin Weltin-Wu. M.Eng.

    Nonlinear Dynamics of Neural Circuits


    An Analog VLSI Deep Machine Learning Implementation

    Machine learning systems provide automated data processing and see a wide range of applications. Direct processing of raw high-dimensional data such as images and video by machine learning systems is impractical, both because of prohibitive power consumption and because of the “curse of dimensionality,” which makes learning tasks exponentially more difficult as the dimension increases. Deep machine learning (DML) mimics the hierarchical presentation of information in the human brain to achieve robust automated feature extraction, reducing the dimension of such data. However, the computational complexity of DML systems limits large-scale implementations on standard digital computers. Custom analog signal processing (ASP) can yield much higher energy efficiency than digital signal processing (DSP), offering a means of overcoming these limitations. The purpose of this work is to develop an analog implementation of a DML system. First, an analog memory is proposed as an essential component of the learning system. It uses charge trapped on a floating gate to store an analog value in a non-volatile way. The memory is compatible with a standard digital CMOS process and allows randomly accessible, bidirectional updates without the need for an on-chip charge pump or high-voltage switch. Second, architecture and circuits are developed to realize an online k-means clustering algorithm in analog signal processing, achieving automatic recognition of underlying data patterns and online extraction of their statistical parameters. This unsupervised learning system constitutes the computation node in the deep machine learning hierarchy. Third, a 3-layer, 7-node analog deep machine learning engine is designed, featuring online unsupervised trainability and non-volatile floating-gate analog storage. It utilizes a massively parallel, reconfigurable, current-mode analog architecture to realize efficient computation, and algorithm-level feedback is leveraged to provide robustness to circuit imperfections in the analog signal processing. At a processing speed of 8300 input vectors per second, it achieves a peak energy efficiency of 1×10¹² operations per second per watt. In addition, an ultra-low-power tunable bump circuit is presented to provide similarity measures in analog signal processing. It incorporates a novel wide-input-range tunable pseudo-differential transconductor, and demonstrates tunability of bump center, width, and height with a power consumption significantly lower than previous works.
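    The online k-means node described above can be illustrated with a short software sketch. This is a generic sequential k-means update (winner-take-all selection plus a running-mean learning rate), not the chip's actual circuit implementation; the clustering data are synthetic.

```python
import numpy as np

def online_kmeans(stream, k=3):
    """Sequential (online) k-means: each incoming vector updates only its
    nearest centroid with a 1/n learning rate, so no data are stored --
    the kind of local update an analog computation node can realise."""
    it = iter(stream)
    # seed the centroids from the first k samples
    centroids = np.array([next(it) for _ in range(k)], dtype=float)
    counts = np.ones(k)
    for x in it:
        d = np.linalg.norm(centroids - x, axis=1)
        j = int(np.argmin(d))                            # winner-take-all
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]   # running-mean update
    return centroids

# three synthetic 2-D clusters centred at (0,0), (1,1), (2,2)
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(m, 0.1, size=(200, 2)) for m in (0.0, 1.0, 2.0)])
rng.shuffle(data)
centroids = online_kmeans(data)
print(centroids)
```

    The 1/n learning rate makes each centroid the exact running mean of the samples it has won so far, which is why the update needs no stored history.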

    Stochastic resonance and finite resolution in a network of leaky integrate-and-fire neurons.

    This thesis is a study of stochastic resonance (SR) in a discrete implementation of a leaky integrate-and-fire (LIF) neuron network. The aim was to determine whether SR can be realised in limited-precision discrete systems implemented on digital hardware. How neuronal modelling connects with SR is discussed. Analysis techniques for noisy spike trains are described, ranging from rate coding and statistical measures to signal processing measures such as the power spectrum and signal-to-noise ratio (SNR). The main problem in computing spike train power spectra is obtaining equi-spaced sample amplitudes given the short duration of spikes relative to their frequency. Three different methods of computing the SNR of a spike train from its power spectrum are described; the main difficulty is separating the power at the frequencies of interest from the noise power, since the spike train encodes both noise and the signal of interest. Two models of the LIF neuron were developed, one continuous and one discrete, and the results compared. The discrete model allows the precision of the simulation values to be varied, enabling investigation of the effect of limited precision on SR. The main difference between the two models lies in the evolution of the membrane potential: when both models are allowed to decay from a high start value in the absence of input, the discrete model does not completely discharge, while the continuous model discharges to almost zero. The results of simulating the discrete model on an FPGA and the continuous model on a PC show that SR can be realised in discrete, low-resolution digital systems. SR was found to be sensitive to the precision of the values in the simulations. For a single neuron, SR increases between 10-bit and 12-bit resolution, after which it saturates. For a feed-forward network with multiple input neurons and one output neuron, SR is stronger with more than 6 input neurons and saturates at a higher resolution. We conclude that stochastic resonance can manifest in discrete systems, though to a lesser extent than in continuous systems.
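    The basic SR effect studied here, a subthreshold periodic signal made detectable by an intermediate amount of noise, can be demonstrated with a simple threshold-crossing unit. This sketch is not the thesis's LIF network or FPGA implementation; the crude SNR estimate (signal-bin power over neighbouring-bin background) and all parameter values are illustrative assumptions.

```python
import numpy as np

def sr_snr(noise_sd, amp=0.8, v_th=1.0, f=10.0, dt=1e-3, T=50.0, seed=0):
    """Output SNR of a threshold-crossing unit driven by a subthreshold
    sinusoid (amp < v_th) plus Gaussian noise: power in the spectral bin
    at the stimulus frequency divided by the mean power of nearby bins."""
    rng = np.random.default_rng(seed)
    t = np.arange(0, T, dt)
    x = amp * np.sin(2 * np.pi * f * t) + noise_sd * rng.standard_normal(t.size)
    events = (x >= v_th).astype(float)   # binary event (spike) train
    if events.sum() == 0:
        return 0.0                       # no events at all: SNR defined as zero
    spec = np.abs(np.fft.rfft(events - events.mean())) ** 2
    freqs = np.fft.rfftfreq(t.size, dt)
    k = int(np.argmin(np.abs(freqs - f)))
    background = np.r_[spec[k - 20:k - 2], spec[k + 3:k + 21]].mean()
    return spec[k] / background

# too little noise: almost no events; too much: events everywhere;
# an intermediate level maximises the SNR -- stochastic resonance
for sd in (0.05, 0.4, 3.0):
    print(sd, sr_snr(sd))
```

    Quantising the membrane values in such a simulation to a given bit width is then a direct way to probe the precision dependence the thesis investigates.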

    Dynamics and precursor signs for phase transitions in neural systems

    This thesis investigates neural state transitions associated with sleep, seizure and anaesthesia. The aim is to address the question: how does a brain traverse the critical threshold between distinct cortical states, both healthy and pathological? Specifically, we are interested in sub-threshold neural behaviour immediately prior to a state transition. We use theoretical neural modelling (single spiking neurons, a network of these, and a mean-field continuum limit) and in vitro experiments to address this question. Dynamically realistic equations of motion for thalamic relay, reticular nuclei, cortical pyramidal and cortical interneuron populations in different vigilance states are developed, based on the Izhikevich spiking neuron model. A network of cortical neurons is assembled to examine the behaviour of the gamma-producing cortical network and its transition to lower frequencies under the effect of anaesthesia. A three-neuron model of the thalamocortical loop for sleep spindles is then presented. Numerical simulations of these networks confirm spiking consistent with reported in vivo measurements, and provide supporting evidence for precursor indicators of imminent phase transition at the occurrence of individual spindles. To complement the spiking neuron networks, we study the Wilson–Cowan neural mass equations describing homogeneous cortical columns and a 1D spatial cluster of such columns. The abstract representation of cortical tissue by a pair of coupled integro-differential equations permits thorough linear stability, phase-plane and bifurcation analyses. This model shows a rich set of spatial and temporal bifurcations marking the boundary to state transitions: saddle-node, Hopf, Turing, and mixed Hopf–Turing. Close to a state transition, white-noise-induced subthreshold fluctuations show clear signs of critical slowing down, with prolongation and strengthening of autocorrelations in both time and space, irrespective of bifurcation type. Attempts at in vitro capture of these predicted leading indicators form the last part of the thesis. We recorded local field potentials (LFPs) from cortical and hippocampal slices of mouse brain. State transition is marked by the emergence and cessation of spontaneous seizure-like events (SLEs) induced by bathing the slices in an artificial cerebrospinal fluid containing no magnesium ions. Phase-plane analysis of the LFP time-series suggests that distinct bifurcation classes can be responsible for the state change to seizure. Increased variance and growth of spectral power at low frequencies (f < 15 Hz) were observed in LFP recordings prior to the initiation of some SLEs. In addition, we demonstrated prolongation of electrically evoked potentials in cortical tissue as the slice was driven towards a seizing regime. The results offer the possibility of capturing leading temporal indicators prior to seizure generation, with potential consequences for understanding epileptogenesis. Guided by dynamical systems theory, this thesis captures evidence for precursor signs of phase transitions in neural systems using mathematical and computer-based modelling as well as in vitro experiments.
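    The Wilson–Cowan neural mass equations referred to above take, in their space-free form, the shape integrated below: tau dE/dt = -E + S(w_ee E - w_ei I + P) and tau dI/dt = -I + S(w_ie E - w_ii I). The coupling weights, sigmoid and drive values here are illustrative assumptions, not the parameter set used in the thesis; the sketch only shows the kind of drive-dependent switching between low- and high-activity states that underlies such transitions.

```python
import numpy as np

def wilson_cowan(P, T=200.0, dt=0.01, noise=0.0, seed=0):
    """Euler-Maruyama integration of space-free Wilson-Cowan equations
    for excitatory (E) and inhibitory (I) population firing rates,
    with external drive P to the excitatory population.
    Returns the E(t) trace."""
    rng = np.random.default_rng(seed)
    S = lambda x: 1.0 / (1.0 + np.exp(-x))   # sigmoid firing-rate function
    w_ee, w_ei, w_ie, w_ii, tau = 12.0, 10.0, 10.0, 2.0, 10.0
    E = I = 0.1
    trace = []
    for _ in range(int(T / dt)):
        dE = (-E + S(w_ee * E - w_ei * I + P)) / tau
        dI = (-I + S(w_ie * E - w_ii * I)) / tau
        E += dE * dt + noise * np.sqrt(dt) * rng.standard_normal()
        I += dI * dt
        trace.append(E)
    return np.array(trace)

# strong inhibitory drive settles low; positive drive settles high
low, high = wilson_cowan(-5.0)[-1], wilson_cowan(2.0)[-1]
print(low, high)
```

    Adding a small `noise` term and examining the autocorrelation of E(t) as P approaches the switching value is the numerical analogue of the critical-slowing-down analysis described in the text.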

    Resource-Constrained Acquisition Circuits for Next Generation Neural Interfaces

    The development of neural interfaces allowing the acquisition of signals from the cortex of the brain has seen increasing interest, both in academic research and in the commercial space, due to their ability to aid people with various medical conditions, such as spinal cord injuries, and their potential to allow more seamless interactions between people and machines. While it has already been demonstrated that neural implants can allow tetraplegic patients to control robotic arms, thus to an extent restoring some motor function, the current state of the art often involves heavy table-top instruments connected by wires passing through the patient's skull, making such applications impractical and chronically infeasible. These limitations are driving the development of the next generation of neural interfaces, which will overcome these issues by being minimal in size and completely wireless, paving the way to chronic application. Their development, however, faces challenges in numerous aspects of engineering due to the constraints imposed by their minimal size, the amount of power available, and the materials that can be utilised. The aim of this work is to explore some of those challenges and to investigate novel circuit techniques that allow the implementation of analogue acquisition front-ends under these constraints. This is facilitated by first giving an overview of recording electrodes and their electrical characterisation in terms of impedance profile and added noise, which can be used to guide the design of analogue front-ends. Continuous-time (CT) acquisition is then investigated as a promising signal digitisation technique and an alternative to more conventional methods. This is complemented by a description of practical implementations of a CT analogue-to-digital converter (ADC), including a novel technique of clockless stochastic chopping aimed at suppressing the flicker noise that commonly affects the acquisition of low-frequency signals. A compact design is presented, implementing a 450 nW, 5.5-bit-ENOB CT ADC occupying an area of 0.0288 mm² in a 0.18 μm CMOS technology, making it, to the best of our knowledge, the smallest design presented in the literature. As completely wireless neural implants rely on power delivered through wireless links, their supply voltage is often subject to large high-frequency variations as well as voltage uncertainty, making it necessary to design reference circuits and voltage regulators that provide a stable reference voltage and supply within the constrained space afforded to them. The resulting challenges are explored, and practical implementations of a reference circuit and voltage regulator are presented: two designs in a 0.35 μm CMOS technology, showing measured PSRRs of ≈60 dB and ≈53 dB at DC, worst-case PSRRs of ≈42 dB and ≈33 dB, and less than 1% standard deviation in the 1.2 V output reference voltage while consuming ≈7 μW. Finally, ΣΔ modulators are investigated for their suitability in neural signal acquisition chains, their properties are explained, and a practical implementation of a ΣΔ DC-coupled neural acquisition circuit is presented: a 10 kHz, 40 dB SNDR ΣΔ analogue front-end implemented in a 0.18 μm CMOS technology, occupying a compact area of 0.044 mm² per channel while consuming 31.1 μW per channel.
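    The noise-shaping principle behind the ΣΔ front-end described in the last part can be illustrated with a behavioural model of a first-order modulator. This is a generic textbook loop, not the reported DC-coupled circuit; the oversampling ratio and the decimation filter are arbitrary choices for the demonstration.

```python
import numpy as np

def sigma_delta(x):
    """First-order sigma-delta modulator: the integrator accumulates the
    error between the input and the fed-back 1-bit quantiser output,
    pushing quantisation noise to high frequencies (noise shaping).
    x: oversampled input in [-1, 1]; returns the +/-1 bitstream."""
    u, bits = 0.0, []
    for s in x:
        y = 1.0 if u >= 0 else -1.0   # 1-bit quantiser
        u += s - y                    # integrate the feedback error
        bits.append(y)
    return np.array(bits)

# heavily oversampled low-frequency tone; a moving-average low-pass
# (the digital decimation filter) recovers the input from the bitstream
n = np.arange(4096)
x = 0.5 * np.sin(2 * np.pi * n / 512)
bits = sigma_delta(x)
rec = np.convolve(bits, np.ones(64) / 64, mode="same")
err = np.max(np.abs(rec - x)[100:-100])   # residual after filtering
print(err)
```

    The single-bit output is why such modulators suit compact per-channel front-ends: resolution is traded for oversampling rate rather than for analog component matching.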