
    Towards Next Generation Neural Interfaces: Optimizing Power, Bandwidth and Data Quality

    In this paper, we review the state-of-the-art in neural interface recording architectures. From this review we identify schemes that expose the trade-off between data quality (lossiness), computation (i.e. power and area requirements) and the number of channels. These trade-offs are then extended by treating the front-end amplifier bandwidth as an additional variable. We therefore explore the possibility of band-limiting the spectral content of recorded neural signals (to save power) and investigate the effect this has on subsequent processing (spike detection accuracy). We identify the spike detection method most robust to such signals, optimize its threshold levels and modify it to exploit this strategy.
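    As a point of reference for the kind of spike detection the paper optimizes, the sketch below implements a common median-based amplitude-threshold detector. The function name, the 4×MAD/0.6745 noise estimate and the refractory handling are illustrative assumptions, not the paper's optimized scheme.

```python
import numpy as np

def detect_spikes(signal, fs, thr_factor=4.0, refractory_ms=1.0):
    """Median-based absolute-threshold spike detector (illustrative sketch).

    The 4 * median(|x|) / 0.6745 noise estimate is common practice; the paper
    optimizes the threshold level for band-limited signals, which is not
    reproduced here.
    """
    noise_sigma = np.median(np.abs(signal)) / 0.6745      # robust noise estimate
    threshold = thr_factor * noise_sigma
    crossings = np.flatnonzero(np.abs(signal) > threshold)

    # Enforce a refractory period so each spike is counted once.
    refractory = int(refractory_ms * 1e-3 * fs)
    spike_idx, last = [], -refractory
    for idx in crossings:
        if idx - last >= refractory:
            spike_idx.append(idx)
            last = idx
    return np.asarray(spike_idx), threshold
```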

    Resource efficient on-node spike sorting

    Current implantable brain-machine interfaces record multi-neuron activity using multi-channel micro-electrode arrays. With the rapid increase in recording capability have come more stringent constraints on implantable system power consumption and size. This is even more the case with the growing demand for wireless systems that monitor ever more channels while overcoming the communication bottleneck of transmitting raw data over transcutaneous bio-telemetries. For systems observing unit activity, real-time spike sorting within the implantable device offers a unique solution to this problem. However, achieving such data compression prior to transmission via an on-node spike sorting system poses several challenges. The inherent complexity of the spike sorting problem, arising from factors such as signal variability, local field potentials, and background and multi-unit activity, has required computationally intensive algorithms (e.g. PCA, wavelet transforms, superparamagnetic clustering). Spike sorting has therefore traditionally been performed off-line, usually on workstations. Owing to their complexity and poor scalability, these algorithms cannot simply be translated into resource-efficient hardware. Conversely, although there have been several attempts at implantable hardware, an implementation matching off-line accuracy within the power and area budgets required for future BMIs has yet to be proposed. Within this context, this research aims to fill the gaps in the design of a resource-efficient implantable real-time spike sorter that achieves performance comparable to off-line methods. The research covered in this thesis targets:
    1) Identifying and quantifying the trade-offs between subsequent signal-processing performance and hardware resource utilisation for the parameters associated with the analogue front-end. Following the development of a behavioural model of the analogue front-end and an optimisation tool, the sensitivity of the spike sorting accuracy to different front-end parameters is quantified.
    2) Identifying and quantifying the trade-offs associated with a two-stage hybrid solution to real-time on-node spike sorting. The initial part of the work considers template matching only, while the second part considers these parameters from the perspective of the whole system, including detection, sorting and off-line training (template building). A set of minimum requirements is established to ensure robust, accurate and resource-efficient operation.
    3) Developing new feature extraction and spike sorting algorithms for highly scalable systems. Based on the waveform dynamics of the observed action potentials, a derivative-based feature extraction method and a spike sorting algorithm are proposed (sketched below). These are compared with the most commonly used spike sorting methods under varying noise levels on realistic datasets to confirm their merits. The latter is implemented and demonstrated in real time on an MCU-based platform.
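    The sketch below illustrates one way a derivative-based feature extractor of the kind described in item 3 could look: extrema of the first and second differences of each aligned waveform form a low-dimensional feature vector. The exact features, the function name and the clustering step hinted at in the comment are assumptions for illustration, not the thesis's algorithm.

```python
import numpy as np

def derivative_features(waveforms):
    """Derivative-based features from aligned spike waveforms (illustrative).

    waveforms: array of shape (n_spikes, n_samples). The chosen features
    (extrema of the first and second differences) are a plausible reading of
    a 'derivative-based' descriptor, not the thesis's exact definition.
    """
    d1 = np.diff(waveforms, n=1, axis=1)   # first difference  ~ slope
    d2 = np.diff(waveforms, n=2, axis=1)   # second difference ~ curvature
    return np.column_stack([
        d1.max(axis=1), d1.min(axis=1),
        d2.max(axis=1), d2.min(axis=1),
    ])

# A generic off-line clustering step over the features could then be, e.g.:
#   from sklearn.cluster import KMeans
#   labels = KMeans(n_clusters=3, n_init=10).fit_predict(derivative_features(w))
```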

    Delta rhythms as a substrate for holographic processing in sleep and wakefulness

    We initially considered the theoretical properties and benefits of so-called holographic processing in a specific type of computational problem implied by theories of synaptic rescaling processes in the biological wake-sleep cycle. This raised two fundamental questions that we attempted to answer with an experimental in vitro electrophysiological approach. We developed a comprehensive experimental paradigm based on a pharmacological model of the wake-sleep-associated delta rhythm, measured with a Utah micro-electrode array at the interface between primary and associational areas of the rodent neocortex. We first verified that our in vitro delta rhythm model possessed two key features found in both in vivo rodent and human studies of synaptic rescaling processes in sleep: first, prior local synaptic potentiation in wake leads to increased local delta power in subsequent sleep; second, neural firing patterns observed prior to sleep are reactivated during sleep. By reproducing these findings we confirmed that our model is arguably an adequate medium for further study of the putative sleep-related synaptic rescaling process. In addition, we found important differences between neural units that reactivated or deactivated during delta: differences in cell type based on unit spike shape, in prior firing rate, and in prior spike-train-to-local-field-potential coherence. Taken together, these results suggested a mechanistic chain of explanation for the two observed properties and set the neurobiological framework for further, more computationally driven analysis. Using the above experimental and theoretical substrate, we developed a new method of analysis of micro-electrode array data. The method is a generalization to the electromagnetic case of a well-known technique for processing acoustic microphone array data. It allowed calculation of the instantaneous spatial energy flow and dissipation in the neocortical areas under the array, and of the spatial energy source density, in analogy to the well-known current source density analysis. We then refocused our investigation on the two theoretical questions we hoped to answer experimentally: whether the state of the neocortex during a delta rhythm can be described by ergodic statistics, which we determined by analysing the spectral properties of energy dissipation as a signature of the state of the dynamical system; and, more exploratively, what spatiotemporal interactions occur across and along neocortical layers and areas during a delta rhythm, as implied by energy flow patterns. We found that the in vitro rodent neocortex does not conform to ergodic statistics during a pharmacologically driven delta or gamma rhythm. We also found a delta-period-locked pattern of energy flow across and along layers and areas, which doubled the processing cycle relative to the fundamental delta rhythm, tentatively suggesting a reciprocal, two-stage information processing hierarchy similar to a stochastic Helmholtz machine with a wake-sleep training algorithm. Further, the complex-valued energy flow might suggest an improvement to the Helmholtz machine concept by generalizing the complex-valued weights of the stochastic network to higher-dimensional multi-vectors of a geometric algebra with a metric particularly suited to holographic processes. Finally, preliminary attempts were made to implement and characterize the above network dynamics in silico. We found that a qubit-valued network does not allow fully holographic processes, but tentatively suggest that an ebit-valued network may display two key properties of general holographic processing.
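    As context for the "energy source density" analogy above, the sketch below shows the classical current source density estimate it refers to: a second spatial difference of the laminar LFP. The 400 µm spacing and the conductivity value are placeholder assumptions; the thesis's electromagnetic energy-flow method itself is not reproduced.

```python
import numpy as np

def current_source_density(lfp, spacing_um=400.0, sigma=0.3):
    """Classical second-spatial-difference CSD estimate (illustrative).

    lfp: array of shape (n_channels, n_samples), channels ordered by depth.
    spacing_um: inter-contact spacing (400 um is a placeholder); sigma: assumed
    tissue conductivity in S/m. This is the standard CSD that the thesis's
    energy source density is analogous to, not the thesis's own method.
    """
    h = spacing_um * 1e-6                                   # spacing in metres
    # CSD ~ -sigma * d^2(phi)/dz^2, via a central difference over depth.
    return -sigma * (lfp[2:] - 2.0 * lfp[1:-1] + lfp[:-2]) / h**2
```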

    Untersuchung von Verarbeitungsalgorithmen zur automatischen Auswertung neuronaler Signale aus Multielektroden-Arrays

    Multi-electrode arrays (MEAs) make it possible to contact many cells simultaneously and record their electrical activity. For further analysis, the recorded signals must be decomposed into their individual components, a process known as spike sorting. This work presents and investigates approaches for fully automated spike sorting, including adaptive methods that optimally filter the recorded cell signals and automatically decompose them into their individual components.

    Integration eines Neuro-Sensors in ein Messsystem sowie Untersuchungen zur Unit-Separation

    As part of a dissertation at the University of Rostock, a novel hybrid CMOS (Complementary Metal Oxide Semiconductor) sensor chip was applied to the extracellular analysis of electrically active biological cells. In addition to a MEA (Multi Electrode Array), the chip carries several types of FET (Field Effect Transistor)-based sensors for recording different cell parameters and monitoring substance-dependent cell reactions in vitro. Alongside the commissioning of this sensor, external hardware for data acquisition and digital signal processing algorithms for signal conditioning were designed and realised; the system comprises the sensor chip itself, including a cell culture area, the external acquisition hardware and the signal conditioning algorithms. The result is a Cell Monitor System (CMS) for semi-automated data acquisition, created to increase the efficiency of the sensor chip's use.

    Essays in Risk Management and Asset Pricing with High Frequency Option Panels

    The thesis investigates the information gains from high-frequency equity option data, with applications in risk management and empirical asset pricing. Chapter 1 provides the background and motivation of the thesis and outlines the key contributions. Chapter 2 describes the high-frequency equity option data in detail. Chapter 3 reviews the theoretical treatments of the Recovery Theorem. In Chapter 4, I derive formulas for extracting risk-neutral central moments from option prices. In Chapter 5, I specify a perturbation theory on the recovered discount factor, pricing kernel and physical probability density. In Chapter 6, a fast and fully identified sequential programming algorithm is built to apply the Recovery Theorem in practice with noisy market data. I document new empirical evidence on the recovered physical probability distributions and empirical pricing kernels extracted from both index and single-name equity options. Finally, I build a left-tail index from the recovered physical probability densities for S&P 500 index options and show that it can be used as an indicator of market downside risk. In Chapter 7, I introduce higher-dimensional option-implied average correlations and provide procedures for estimating them from high-frequency option data. In Chapter 8, I construct a market average correlation factor by sorting stocks according to their risk exposures to the option-implied average correlations. I find that (a) the market average correlation factor largely enhances the fit of existing risk-adjusted asset pricing models, and (b) it yields persistent positive risk premiums in cross-sectional stock returns that cannot be explained by other existing risk factors or firm characteristic variables. Chapter 9 concludes the thesis.
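    One standard route to the risk-neutral quantities discussed in Chapters 4-6 is the Breeden-Litzenberger relation, q(K) = e^{rT} ∂²C/∂K², followed by numerical integration for the central moments. The sketch below is that textbook construction on an evenly spaced strike grid; it is not claimed to match the formulas or the sequential programming algorithm derived in the thesis.

```python
import numpy as np

def risk_neutral_density(strikes, call_prices, r, T):
    """Breeden-Litzenberger density q(K) = exp(rT) * d2C/dK2 (illustrative).

    strikes: evenly spaced, sorted strike grid; call_prices: call mid-prices
    on that grid; r: risk-free rate; T: maturity in years.
    """
    dK = strikes[1] - strikes[0]
    d2C_dK2 = np.gradient(np.gradient(call_prices, dK), dK)
    return np.exp(r * T) * d2C_dK2

def risk_neutral_central_moments(strikes, density, orders=(2, 3, 4)):
    """Central moments of the risk-neutral density by trapezoidal integration."""
    dK = strikes[1] - strikes[0]
    mass = np.trapz(density, dx=dK)                 # should be close to 1
    mean = np.trapz(strikes * density, dx=dK) / mass
    return [np.trapz((strikes - mean) ** k * density, dx=dK) / mass
            for k in orders]
```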

    Wavelet Theory

    The wavelet is a powerful mathematical tool that plays an important role in science and technology. This book looks at some of the most creative and popular applications of wavelets, including biomedical signal processing, image processing, communication signal processing, the Internet of Things (IoT), acoustical signal processing, financial market data analysis, energy and power management, and COVID-19 pandemic measurements and calculations. The editor's personal interests lie in applying the wavelet transform to identify time-domain changes in signals and their corresponding frequency components, and in improving power amplifier behavior.
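    As a minimal worked example of the time-domain change detection mentioned by the editor, the sketch below computes one level of the Haar discrete wavelet transform; large detail coefficients localize abrupt changes. Real applications would typically use deeper decompositions and smoother wavelets (e.g. via the PyWavelets package); the Haar choice here is purely illustrative.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform (illustrative).

    Returns (approximation, detail) coefficients; spikes in the detail
    coefficients mark abrupt time-domain changes in the signal.
    """
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                       # pad to an even length
        x = np.append(x, x[-1])
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)   # low-pass (trend)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)   # high-pass (changes)
    return approx, detail
```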

    Optimisation of flow chemistry: tools and algorithms

    The coupling of flow chemistry with automated laboratory equipment has become increasingly common and is used to support the efficient manufacture of chemicals. A variety of reactors and analytical techniques have been used in such configurations for investigating and optimising the processing conditions of different reactions. However, the integrated reactors used thus far have been constrained to single-phase mixing, greatly limiting the scope of reactions for such studies. This thesis presents the development and integration of a millilitre-scale CSTR, the fReactor, that is able to process multiphase flows, thus broadening the range of reactions that can be investigated in this way. Following a thorough review of the literature covering the uses of flow chemistry and lab-scale reactor technology, insights are given on the design of a temperature-controlled version of the fReactor with an accuracy of ±0.3 °C, capable of cutting waiting times by 44% compared to the previous reactor. A demonstration of its use is provided in which the product of a multiphase reaction is analysed automatically under different reaction conditions according to a sampling plan. Metamodeling and cross-validation techniques are applied to these results, and single- and multi-objective optimisations are carried out over the response surface models of different metrics to illustrate the trade-offs between them. The use of such techniques reduced the error incurred by common least-squares polynomial fitting by over 12%. Additionally, the fReactor is demonstrated as a tool for synchrotron X-ray diffraction by successfully assessing the change in polymorph caused by solvent switching, this being the first synchrotron experiment using this sort of device. The remainder of the thesis focuses on applying the same metamodeling and cross-validation techniques to the optimisation of the design of a miniaturised continuous oscillatory baffled reactor; rather than physical experimentation, however, these techniques are used in conjunction with computational fluid dynamics. This reactor shows a better residence time distribution than its CSTR counterparts. Notably, baffle offsetting in a plate design of the reactor is identified as a key parameter for achieving a narrow residence time distribution and good mixing. Under this configuration it is possible to reduce the RTD variance by 45% and increase the mixing efficiency by 60% compared to the best-performing opposing-baffles geometry.
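    The sketch below illustrates the kind of metamodeling and cross-validation workflow described above: a least-squares quadratic response surface is compared against a Gaussian-process (kriging-style) surrogate by cross-validated error. The placeholder inputs (temperature, residence time, stoichiometry) and the specific kernel are assumptions; the thesis's actual models and metrics are not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Placeholder design of experiments: 30 runs over 3 scaled reaction conditions
# (e.g. temperature, residence time, stoichiometry) and a measured response.
rng = np.random.default_rng(0)
X = rng.random((30, 3))
y = rng.random(30)

# Common least-squares quadratic response surface.
quadratic = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())

# Kriging-style metamodel; one plausible surrogate, not necessarily the thesis's.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)

# Cross-validated comparison of the two surrogates.
for name, model in [("quadratic", quadratic), ("gaussian process", gp)]:
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: cross-validated MAE = {mae:.3f}")
```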

    Non-linear dimensionality reduction on extracellular waveforms reveals cell type diversity in premotor cortex

    Cortical circuits are thought to contain a large number of cell types that coordinate to produce behavior. Current in vivo methods rely on clustering of specified features of extracellular waveforms to identify putative cell types, but these capture only a small amount of variation. Here, we develop a new method (WaveMAP) that combines non-linear dimensionality reduction with graph clustering to identify putative cell types. We apply WaveMAP to extracellular waveforms recorded from dorsal premotor cortex of macaque monkeys performing a decision-making task. Using WaveMAP, we robustly establish eight waveform clusters and show that these clusters recapitulate previously identified narrow- and broad-spiking types while revealing previously unknown diversity within these subtypes. The eight clusters exhibited distinct laminar distributions, characteristic firing rate patterns, and decision-related dynamics. Such insights were weaker when using feature-based approaches. WaveMAP therefore provides a more nuanced understanding of the dynamics of cell types in cortical circuits. https://elifesciences.org/articles/67490
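    A rough sketch of a WaveMAP-style pipeline is given below: normalized waveforms are embedded with UMAP and a neighbourhood graph of the embedding is partitioned with Louvain community detection. The parameter values, the use of a k-nearest-neighbour graph on the embedding, and the umap-learn/networkx dependencies are assumptions of this sketch rather than the published implementation.

```python
import numpy as np
import umap                                    # umap-learn package
import networkx as nx
from sklearn.neighbors import kneighbors_graph

def wavemap_like(waveforms, n_neighbors=20, seed=0):
    """Non-linear embedding plus graph clustering of spike waveforms (sketch).

    waveforms: array of shape (n_spikes, n_samples). Requires umap-learn and
    networkx >= 2.8 (for louvain_communities). Not the published WaveMAP code.
    """
    # Amplitude-normalize so waveform shape, not size, drives the embedding.
    norm = waveforms / np.abs(waveforms).max(axis=1, keepdims=True)
    embedding = umap.UMAP(n_neighbors=n_neighbors,
                          random_state=seed).fit_transform(norm)

    # k-nearest-neighbour graph in the embedded space, then Louvain communities.
    adjacency = kneighbors_graph(embedding, n_neighbors=n_neighbors)
    graph = nx.from_scipy_sparse_array(adjacency)
    communities = nx.community.louvain_communities(graph, seed=seed)

    labels = np.empty(len(waveforms), dtype=int)
    for cluster_id, members in enumerate(communities):
        labels[list(members)] = cluster_id
    return embedding, labels
```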

    Low-dimensional representations of neural time-series data with applications to peripheral nerve decoding

    Bioelectronic medicines, implanted devices that influence physiological states through peripheral neuromodulation, hold promise as a new way of treating conditions as diverse as rheumatism and diabetes. We explore ways of creating nerve-based feedback so that implanted systems can act in a dynamically adapting closed loop. In a first empirical component, we carried out decoding studies on in vivo recordings of cat and rat bladder afferents. In a low-resolution dataset, we used information theory to select informative frequency bands of the neural activity, which we then related to bladder pressure. In a second, high-resolution dataset, we analysed the population code for bladder pressure, again using information theory, and proposed an informed decoding approach that promises enhanced robustness and automatic re-calibration by creating a low-dimensional population vector. Coming from the different direction of more general time-series analysis, we embedded a set of peripheral nerve recordings in a space of main firing characteristics via dimensionality reduction in a high-dimensional feature space, and automatically proposed single, efficiently implementable estimators for each identified characteristic. For bioelectronic medicines, this feature-based pre-processing enables online signal characterisation of low-resolution data where spike sorting is impossible but simple power measures discard informative structure. Analyses were based on surrogate data from a self-developed and flexibly adaptable computer model that we made publicly available. The wider utility of two feature-based analysis methods developed in this work was demonstrated on a variety of datasets from across science and industry. (1) Our feature-based generation of interpretable low-dimensional embeddings for unknown time-series datasets answers a need to simplify and harvest the growing body of sequential data that characterises modern science. (2) We propose an additional, supervised pipeline to tailor feature subsets to collections of classification problems. On a literature-standard library of time-series classification tasks, we distilled 22 generically useful estimators and made them easily accessible.
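    The sketch below illustrates the first decoding step described above in a generic form: band-limited power features computed in sliding windows and ranked by mutual information against a slowly varying target such as bladder pressure. The band definitions, window length and function names are hypothetical, and this is not the papers' exact pipeline.

```python
import numpy as np
from scipy.signal import welch
from sklearn.feature_selection import mutual_info_regression

def band_power_features(neural, fs, bands, win_s=1.0):
    """Per-window band powers; bands is a list of (lo, hi) tuples in Hz."""
    win = int(win_s * fs)
    n_win = len(neural) // win
    feats = np.zeros((n_win, len(bands)))
    for w in range(n_win):
        f, pxx = welch(neural[w * win:(w + 1) * win], fs=fs,
                       nperseg=min(win, 256))
        for b, (lo, hi) in enumerate(bands):
            feats[w, b] = pxx[(f >= lo) & (f < hi)].sum()
    return feats

# Ranking bands by mutual information with a window-averaged pressure trace:
#   mi = mutual_info_regression(feats, pressure_per_window)
#   most_informative = np.argsort(mi)[::-1]
```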