
    Computationally efficient algorithms and implementations of adaptive deep brain stimulation systems for Parkinson's disease

    Clinical deep brain stimulation (DBS) is a tool used to mitigate pharmacologically intractable neurodegenerative diseases such as Parkinson's disease (PD), tremor and dystonia. Present implementations of DBS deliver continuous, high-frequency voltage or current pulses, which brings limitations including stimulation-induced side effects and shortened pacemaker battery life. Adaptive DBS (aDBS) can overcome a number of these limitations by delivering stimulation only when it is needed. This thesis presents work undertaken to investigate, propose and develop novel algorithms and implementations of systems for adapting DBS. It proposes four system implementations that could facilitate DBS adaptation, either as closed-loop DBS or as spatial adaptation. The first method used dynamic detection to track changes in local field potentials (LFPs) that can be indicative of PD symptoms. This work included synthesising a validation dataset, mainly with autoregressive moving average (ARMA) models, to evaluate a subset of PD detection algorithms for accuracy-complexity trade-offs. The algorithms comprised feature extraction (FE), dimensionality reduction (DR) and dynamic pattern classification stages. The combination with the best trade-off between accuracy and complexity consisted of the discrete wavelet transform (DWT) for FE, the maximum ratio method (MRM) for DR and k-nearest neighbours (k-NN) for classification; the MRM is a novel DR method inspired by Fisher's separability criterion. The best combination achieved an F1-score of 97.9%, a choice probability of 99.86% and a classification accuracy of 99.29%, with an estimated microchip area of 0.84 mm² in a 90 nm CMOS process. The second implementation developed the first known PD detection and monitoring processor, based on complementary detection: a hardware-efficient approach in which a combination of weak classifiers yields a classifier with higher consistency and confidence than any of the individual classifiers in the configuration. The PD detection processor, using the same processing stages as the first implementation, was validated on an FPGA platform. Mapped onto a 45 nm CMOS process, the best implementation achieved a dynamic power of 2.26 μW per channel and an area of 0.2384 mm² per channel, with a mean Matthews correlation coefficient (MCC) of 0.6162, an F1-score of 91.38% and a mean classification accuracy of 91.91%. The third implementation proposed a framework for adapting DBS based on a critic-actor control approach, which models the relationship between a trained clinician (the critic) and a neuromodulation system (the actor). The critic was implemented and validated using machine learning models, and the actor was implemented as a fuzzy controller; therapy is modulated according to state estimates obtained from the machine learning models. PD suppression was achieved in seven of nine test cases. The final implementation introduces spatial adaptation for aDBS. Spatial adaptation adjusts to variation in lead position and/or stimulation focus, as poor stimulation focus has been reported to reduce the therapeutic benefit of DBS. The implementation proposes dynamic current steering as a power-efficient approach to multipolar, multisite current steering, with a particular focus on the output stage of the dynamic current steering system. The output stage uses dynamic current sources to implement push-pull current sources interfaced to 16 electrodes so as to enable current steering. Its performance was demonstrated with a 3.3 V supply driving biphasic current pulses of up to 0.5 mA through the electrodes. A preliminary design of the circuit was implemented in 0.18 μm CMOS technology.
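
As an illustration of the kind of detection pipeline described above, the sketch below wires together a DWT feature-extraction stage, a Fisher-criterion feature ranking (used here as a simplified stand-in for the thesis's MRM stage) and a k-NN classifier. Window length, wavelet choice and all parameter values are assumptions for demonstration, not the thesis implementation.

# Sketch of a DWT -> feature-ranking -> k-NN pipeline for LFP windows.
# The Fisher-score ranking is a stand-in for the MRM; all constants are illustrative.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def dwt_features(window, wavelet="db4", level=4):
    """Feature extraction: energy of each DWT sub-band of one LFP window."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

def fisher_scores(X, y):
    """Rank features by Fisher's separability criterion (simplified DR stage)."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return num / den

def fit_detector(windows, labels, n_keep=4, k=5):
    """windows: (n_windows, n_samples) LFP segments; labels: 1 = PD biomarker present."""
    y = np.asarray(labels)
    X = np.vstack([dwt_features(w) for w in windows])
    keep = np.argsort(fisher_scores(X, y))[-n_keep:]      # keep the most separable features
    clf = KNeighborsClassifier(n_neighbors=k).fit(X[:, keep], y)
    return clf, keep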

    High Fidelity Bioelectric Modelling of the Implanted Cochlea

    Cochlear implants are medical devices that can restore sound perception in individuals with sensorineural hearing loss (SHL). Since their inception, improvements in performance have largely been driven by advances in signal processing, but progress has plateaued for almost a decade. This suggests that there is a bottleneck at the electrode-tissue interface, which is responsible for enacting the biophysical changes that govern neuronal recruitment. Understanding this interface is difficult because the cochlea is small, intricate, and difficult to access. As such, researchers have turned to modelling techniques to provide new insights. The state of the art involves calculating the electric field using a volume conduction model of the implanted cochlea and coupling it with a neural excitation model to predict the response. However, many models are unable to predict patient outcomes consistently. This thesis aims to improve the reliability of these models by creating high-fidelity reconstructions of the inner ear and critically assessing the validity of the underlying and hitherto untested assumptions. Regarding boundary conditions, the evidence suggests that the unmodelled monopolar return path should be accounted for, perhaps by applying a voltage offset at a boundary surface. Regarding vasculature, the models show that large modiolar vessels like the vein of the scala tympani have a strong local effect near the stimulating electrode. Finally, it appears that the oft-cited quasi-static assumption is not valid due to the high permittivity of neural tissue. It is hoped that the study improves the trustworthiness of all bioelectric models of the cochlea, either by validating the claims of existing models or by prompting improvements in future work. Developing our understanding of the underlying physics will pave the way for advancing future electrode array designs as well as patient-specific simulations, ultimately improving the quality of life for those with SHL.
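
The doubt cast on the quasi-static assumption can be made concrete with a quick order-of-magnitude check: the assumption holds only while the displacement current is small relative to the conduction current, i.e. while the ratio omega * epsilon / sigma stays well below one. The tissue parameters in the sketch below are illustrative placeholders, not values taken from this work.

import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def displacement_to_conduction_ratio(freq_hz, rel_permittivity, conductivity_s_per_m):
    """omega * epsilon / sigma; quasi-static modelling assumes this is << 1."""
    omega = 2 * math.pi * freq_hz
    return omega * rel_permittivity * EPS0 / conductivity_s_per_m

# Neural tissue can show very large low-frequency relative permittivity (assumed 1e6 here),
# so the ratio approaches or exceeds unity within the spectral content of a CI pulse.
for f_hz in (1e3, 1e4, 1e5):
    r = displacement_to_conduction_ratio(f_hz, rel_permittivity=1e6, conductivity_s_per_m=0.3)
    print(f"{f_hz:8.0f} Hz  ratio = {r:.2f}")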

    A Low-Power DSP Architecture for a Fully Implantable Cochlear Implant System-on-a-Chip.

    The National Science Foundation Wireless Integrated Microsystems (WIMS) Engineering Research Center at the University of Michigan developed systems-on-a-chip that achieve biomedical-implant and environmental-monitoring functionality at low-milliwatt power consumption in a 1-2 cm³ volume. The focus of this work is implantable electronics for cochlear implants (CIs), surgically implanted devices that utilize existing nerve connections between the brain and inner ear in cases where the sensory hair cells in the cochlea have degraded. In the absence of functioning hair cells, a CI processes sound information and stimulates the underlying nerve cells with currents from implanted electrodes, enabling the patient to understand speech. As the brain of the WIMS CI, the WIMS microcontroller unit (MCU) delivers the communication, signal processing, and storage capabilities required to satisfy the aggressive goals set forth. The 16-bit MCU implements a custom instruction set architecture focused on power-efficient execution, providing separate data and address register windows, multi-word arithmetic, eight addressing modes, and interrupt and subroutine support. Along with 32 KB of on-chip SRAM, a low-power 512-byte scratchpad memory is utilized by the WIMS custom compiler to obtain an average of 18% energy savings across benchmarks. A synthesizable dynamic frequency scaling circuit allows the chip to select a precision on-chip LC or ring oscillator and to perform clock scaling to minimize power dissipation; it provides glitch-free, software-controlled frequency shifting in 100 ns and dissipates only 480 μW. A highly flexible and expandable 16-channel Continuous Interleaved Sampling digital signal processor (DSP) is included as an MCU peripheral, with modes to process data, stimulate through electrodes, and allow experimental stimulation or processing. The entire WIMS MCU occupies 9.18 mm² and consumes only 1.79 mW from 1.2 V in DSP mode, the lowest reported consumption for a cochlear DSP. Design methodologies were analyzed and a new top-down design flow is presented that encourages hardware-software co-design as well as cross-domain verification early in the design process. An O(n) technique for energy-per-instruction estimation, both pre- and post-silicon, achieves less than 4% error across benchmarks. This dissertation advances low-power system design while providing an improvement in hearing-recovery devices. Ph.D., Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/91488/1/emarsman_1.pd
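
For context, the Continuous Interleaved Sampling strategy that the 16-channel DSP implements can be sketched in a few lines: a band-pass filter bank, envelope extraction, amplitude compression, and per-electrode outputs that are stimulated one at a time. The sample rate, band edges, filter orders and compression map below are illustrative assumptions, not the WIMS design.

import numpy as np
from scipy.signal import butter, lfilter

FS = 16_000                      # audio sample rate (assumed)
N_CH = 16                        # one channel per electrode
EDGES = np.logspace(np.log10(250), np.log10(7000), N_CH + 1)   # band edges in Hz

def cis_envelopes(audio):
    """Return an (N_CH, n_samples) array of compressed channel envelopes."""
    env_b, env_a = butter(2, 200 / (FS / 2), btype="low")        # envelope smoothing filter
    channels = []
    for lo, hi in zip(EDGES[:-1], EDGES[1:]):
        b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        band = lfilter(b, a, audio)                              # band-pass analysis filter
        env = lfilter(env_b, env_a, np.abs(band))                # rectify and low-pass
        channels.append(np.log1p(50 * np.clip(env, 0, None)))    # simple loudness compression
    return np.vstack(channels)

# In CIS the channels are then sampled in sequence at each stimulation frame,
# so only one electrode carries current at any instant (interleaving).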

    Real-time neural signal processing and low-power hardware co-design for wireless implantable brain machine interfaces

    Intracortical Brain-Machine Interfaces (iBMIs) have advanced significantly over the past two decades, demonstrating their utility in applications including neuroprosthetic control and communication. To increase the information transfer rate and improve the devices' robustness and longevity, iBMI technology aims to increase channel counts to access more neural data, while reducing invasiveness through miniaturisation and the avoidance of percutaneous connectors (wired implants). However, as the number of channels increases, the raw data bandwidth required for wireless transmission becomes prohibitive, requiring efficient on-implant processing to reduce the data volume through compression or feature extraction. The fundamental aim of this research is to develop methods for high-performance neural spike processing co-designed with low-power hardware that is scalable for real-time wireless BMI applications. The specific original contributions are as follows. Firstly, a new method has been developed for hardware-efficient spike detection, which achieves state-of-the-art detection performance while significantly reducing hardware complexity. Secondly, a novel thresholding mechanism for spike detection has been introduced: by incorporating firing-rate information as a key determinant of the spike detection threshold, the adaptiveness of spike detection is improved. This allows detection to withstand the signal degradation caused by scar-tissue growth around the recording site, ensuring enduringly stable detection results; long-term decoding performance is also notably improved as a consequence. Thirdly, the relationship between spike detection performance and neural decoding accuracy has been shown to be nonlinear, offering an opportunity to further reduce transmission bandwidth by at least 30% with only minor degradation in decoding performance. In summary, this thesis presents a journey toward designing ultra-hardware-efficient spike detection algorithms and applying them to reduce the data bandwidth and improve neural decoding performance. The software-hardware co-design approach is essential for the next generation of wireless brain-machine interfaces with increased channel counts and a highly constrained hardware budget.
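
A hedged sketch of the firing-rate-aware thresholding idea is given below: a robust noise estimate sets the initial threshold, and the threshold is periodically nudged so that the detected firing rate tracks an expected rate, which keeps detection stable as the signal degrades. The specific update rule, constants and refractory period are assumptions for illustration rather than the algorithm developed in the thesis.

import numpy as np

def detect_spikes(x, fs, target_rate_hz=20.0, alpha=0.05, k=4.0):
    """Return spike sample indices for a single-channel recording x sampled at fs Hz."""
    fs = int(fs)
    noise = np.median(np.abs(x)) / 0.6745          # robust noise estimate
    thr = k * noise                                 # initial amplitude threshold
    refractory = int(1e-3 * fs)                     # 1 ms refractory period
    spikes, last = [], -refractory
    for i, v in enumerate(x):
        if abs(v) > thr and i - last >= refractory:
            spikes.append(i)
            last = i
        # once per second, nudge the threshold so the detected rate tracks the target rate
        if i % fs == fs - 1:
            rate = sum(1 for s in spikes if s > i - fs)
            thr *= 1 + alpha * np.tanh((rate - target_rate_hz) / target_rate_hz)
    return np.array(spikes)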

    Effects of errorless learning on the acquisition of velopharyngeal movement control

    Session 1pSC - Speech Communication: Cross-Linguistic Studies of Speech Sound Learning of the Languages of Hong Kong (Poster Session). The implicit motor learning literature suggests a benefit for learning if errors are minimized during practice. This study investigated whether the same principle holds for learning velopharyngeal movement control. Normal-speaking participants learned to produce hypernasal speech in either an errorless learning condition (in which the possibility for errors was limited) or an errorful learning condition (in which the possibility for errors was not limited). The nasality level of the participants' speech was measured with a nasometer and reflected by nasalance scores (in %). Errorless learners practiced producing hypernasal speech with a threshold nasalance score of 10% at the beginning, which gradually increased to a threshold of 50% at the end. The same set of threshold targets was presented to errorful learners, but in reverse order. Errors were defined by the proportion of speech with a nasalance score below the threshold. The results showed that, relative to errorful learners, errorless learners displayed fewer errors (17.7% vs. 50.7%) and a higher mean nasalance score (46.7% vs. 31.3%) during the acquisition phase. Furthermore, errorless learners outperformed errorful learners in both retention and novel transfer tests. Acknowledgment: Supported by The University of Hong Kong Strategic Research Theme for Sciences of Learning. © 2012 Acoustical Society of America.
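
The error measure used above (the proportion of speech falling below the current nasalance threshold) can be written out directly; the frame values and the five-step schedules in the sketch below are illustrative assumptions, not the study's stimuli.

import numpy as np

def error_proportion(nasalance_frames_pct, threshold_pct):
    """Fraction of frames whose nasalance (in %) falls below the target threshold."""
    frames = np.asarray(nasalance_frames_pct, dtype=float)
    return float(np.mean(frames < threshold_pct))

errorless_schedule = np.linspace(10, 50, 5)    # thresholds rise from 10% to 50%
errorful_schedule = errorless_schedule[::-1]   # the same targets in reverse order

print(error_proportion([12, 18, 25, 31, 44], threshold_pct=errorless_schedule[0]))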

    Advanced sensors technology survey

    This project assesses the state of the art in advanced or 'smart' sensor technology for NASA Life Sciences research applications, with an emphasis on sensors with potential applications on Space Station Freedom (SSF). The objectives are: (1) to conduct literature reviews on relevant advanced sensor technology; (2) to interview scientists and engineers in industry, academia, and government who are knowledgeable on this topic; (3) to provide viewpoints and opinions regarding the potential applications of this technology on the SSF; and (4) to provide summary charts of relevant technologies and the centers where these technologies are being developed.

    Adaptive extreme edge computing for wearable devices

    Wearable devices are a fast-growing technology with an impact on personal healthcare for both society and the economy. Because sensors are becoming widespread in pervasive and distributed networks, power consumption, processing speed, and system adaptation are vital for future smart wearable devices. Efforts to envision and forecast how to bring computation to the edge in smart sensors have already begun, with the aspiration of providing adaptive extreme edge computing. Here, we provide a holistic view of hardware and theoretical solutions for smart wearable devices that can guide research in this pervasive computing era. We propose various solutions for biologically plausible models for continual learning in neuromorphic computing technologies for wearable sensors. To envision this concept, we provide a systematic outline of prospective low-power, low-latency scenarios for wearable sensors on neuromorphic platforms. We then describe the potential of neuromorphic processors that exploit complementary metal-oxide-semiconductor (CMOS) and emerging memory technologies (e.g. memristive devices). Furthermore, we evaluate the requirements for edge computing within wearable devices in terms of footprint, power consumption, latency, and data size. Finally, we investigate the challenges beyond neuromorphic computing hardware, algorithms, and devices that could impede the enhancement of adaptive edge computing in smart wearable devices.

    The use of acoustic cues in phonetic perception: Effects of spectral degradation, limited bandwidth and background noise

    Hearing impairment, cochlear implantation, background noise and other auditory degradations result in the loss or distortion of sound information thought to be critical to speech perception. In many cases, listeners can still identify speech sounds despite degradations, but understanding of how this is accomplished is incomplete. Experiments presented here tested the hypothesis that listeners would utilize acoustic-phonetic cues differently if one or more cues were degraded by hearing impairment or simulated hearing impairment. Results supported this hypothesis for various listening conditions that are directly relevant for clinical populations. Analysis included mixed-effects logistic modeling of the contributions of individual acoustic cues for various contrasts. Listeners with cochlear implants (CIs) or normal-hearing (NH) listeners in CI simulations showed increased use of acoustic cues in the temporal domain and decreased use of cues in the spectral domain for the tense/lax vowel contrast and the word-final fricative voicing contrast. For the word-initial stop voicing contrast, NH listeners made less use of voice-onset time and greater use of voice pitch in conditions that simulated high-frequency hearing impairment and/or masking noise; the influence of these cues was further modulated by consonant place of articulation. A pair of experiments measured phonetic context effects for the "s/sh" contrast, replicating previously observed effects for NH listeners and generalizing them to CI listeners as well, despite known deficiencies in spectral resolution for CI listeners. For NH listeners in CI simulations, these context effects were absent or negligible. Audio-visual delivery of this experiment revealed an enhanced influence of visual lip-rounding cues for CI listeners and NH listeners in CI simulations. Additionally, CI listeners demonstrated that visual cues to gender influence phonetic perception in a manner consistent with gender-related voice acoustics. All of these results suggest that listeners are able to accommodate challenging listening situations by capitalizing on the natural (multimodal) covariance in speech signals. These results also imply that there are potential differences in speech perception between NH listeners and listeners with hearing impairment that would be overlooked by traditional word recognition or consonant confusion-matrix analysis.
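
The cue-weighting analysis described above can be illustrated with a simplified logistic model in which the size of each cue's coefficient reflects how heavily listeners rely on that cue; the data frame, cue names and the omission of per-listener random effects (which the mixed-effects analysis would include) are assumptions for illustration only.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial table: one row per response along a tense/lax vowel continuum.
trials = pd.DataFrame({
    "resp_tense":   [0, 1, 1, 0, 1, 0, 1, 0, 1, 0],       # 1 = listener chose the tense vowel
    "duration_ms":  [90, 160, 150, 95, 170, 100, 120, 120, 105, 140],
    "spectral_cue": [0.2, 0.8, 0.3, 0.6, 0.9, 0.1, 0.5, 0.5, 0.7, 0.4],
})

model = smf.logit("resp_tense ~ duration_ms + spectral_cue", data=trials).fit(disp=False)
print(model.params)   # a larger coefficient magnitude indicates heavier reliance on that cue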