3 research outputs found

    Online Sensor Drift Compensation for E-Nose Systems Using Domain Adaptation and Extreme Learning Machine

    Sensor drift is a common issue in E-Nose systems, and various drift compensation methods have achieved fruitful results in recent years. Although accuracy in recognizing diverse gases under drift conditions has been largely enhanced, few of these methods consider online processing scenarios. In this paper, we focus on building an online drift compensation model by transforming two domain-adaptation-based methods into their online learning versions, which allows the recognition models to adapt to changes in sensor responses in a time-efficient manner without losing high accuracy. Experimental results under three different settings confirm that the proposed methods save substantial processing time compared with their offline versions and outperform other drift compensation methods in recognition accuracy.
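    The online-learning machinery this abstract refers to is, in the standard extreme learning machine literature, a recursive least-squares update of the output weights (the OS-ELM recursion). A minimal sketch of that generic recursion follows; the layer sizes, random sigmoid hidden layer, and regularization constant are illustrative assumptions, and this shows the general technique, not the paper's domain-adaptation variants:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def hidden(X, W, b):
        """Random-feature hidden layer with a sigmoid activation."""
        return 1.0 / (1.0 + np.exp(-(X @ W + b)))

    class OSELM:
        """Online sequential ELM: batch-initialize, then update per chunk."""
        def __init__(self, n_in, n_hidden, n_out):
            self.W = rng.standard_normal((n_in, n_hidden))  # fixed random input weights
            self.b = rng.standard_normal(n_hidden)
            self.P = None      # inverse correlation matrix of hidden features
            self.beta = None   # output weights (the only trained parameters)

        def init_fit(self, X, T):
            """Initial batch least-squares phase on a small seed set."""
            H = hidden(X, self.W, self.b)
            self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
            self.beta = self.P @ H.T @ T

        def update(self, X, T):
            """Recursive least-squares update for a new chunk of samples,
            so the model tracks changing sensor responses over time."""
            H = hidden(X, self.W, self.b)
            K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
            self.P = self.P - self.P @ H.T @ K @ H @ self.P
            self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

        def predict(self, X):
            return hidden(X, self.W, self.b) @ self.beta
    ```

    The time saving claimed over offline retraining comes from this recursion: each chunk costs only small matrix operations, and the final weights match what a full batch re-fit on all data seen so far would produce.
    
    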

    A Neuromorphic Machine Learning Framework based on the Growth Transform Dynamical System

    As computation increasingly moves from the cloud to the source of data collection, there is a growing demand for specialized machine learning algorithms that can perform learning and inference at the edge in energy- and resource-constrained environments. In this regard, we can take inspiration from small biological systems like insect brains that exhibit high energy efficiency within a small form factor, and show superior cognitive performance using fewer, coarser neural operations (action potentials, or spikes) than the high-precision floating-point operations used in deep learning platforms. Attempts at bridging this gap using neuromorphic hardware have produced silicon brains that are orders of magnitude less efficient in both energy dissipation and performance. This is because neuromorphic machine learning (ML) algorithms are traditionally built bottom-up, starting with neuron models that mimic the responses of biological neurons and connecting them together to form a network. Neural responses and weight parameters are therefore not optimized w.r.t. any system objective, and it is not evident how individual spikes and the associated population dynamics relate to a network objective. On the other hand, conventional ML algorithms follow a top-down synthesis approach, starting from a system objective (which usually only models task efficiency) and reducing the problem to the model of a non-spiking neuron with non-local updates and little or no control over the population dynamics. I propose that a reconciliation of the two approaches may be key to designing scalable spiking neural networks that optimize for both energy and task efficiency under realistic physical constraints, while enabling spike-based encoding and learning based on local updates in an energy-based framework like traditional ML models.
To this end, I first present a neuron model implementing a mapping based on polynomial growth transforms, which allows for independent control over spike forms and transient firing statistics. I show how spike responses are generated as a result of constraint violation while minimizing a physically plausible energy functional involving a continuous-valued neural variable that represents the local power dissipation in a neuron. I then show how the framework can be extended to coupled neurons in a network by remapping the synaptic interactions of a standard spiking network. I show how the network can be designed to perform a limited amount of learning in an energy-efficient manner, even without synaptic adaptation, through appropriate choices of network structure and parameters: through spiking SVMs that learn to allocate switching energy to neurons that are more important for classification, and through spiking associative memory networks that learn to modulate their responses based on global activity. Lastly, I describe a backpropagation-less learning framework for synaptic adaptation in which weight parameters are optimized w.r.t. a network-level loss function that represents spiking activity across the network, but which produces updates that are local. I show how the approach can be used for unsupervised and supervised learning such that minimizing a training error is equivalent to minimizing the network-level spiking activity. I build upon this framework to introduce end-to-end spiking neural network (SNN) architectures and demonstrate their applicability for energy- and resource-efficient learning using a benchmark dataset.
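The polynomial growth transform named above is, in its classic Baum-Eagon form, a multiplicative update that monotonically increases a polynomial with nonnegative coefficients over a probability simplex. A minimal sketch of that underlying update (with an illustrative quadratic objective; this is the generic optimizer, not the author's neuron model or network):

```python
import numpy as np

def growth_transform_step(p, grad):
    """One Baum-Eagon growth-transform update on the probability simplex.
    For a polynomial f with nonnegative coefficients, p_i <- p_i * df/dp_i
    followed by renormalization never decreases f(p)."""
    w = p * grad
    return w / w.sum()

# Illustrative objective: maximize f(p) = p1^2 + 2*p2^2 + 3*p3^2
# over the simplex; the maximum (value 3) sits at the vertex (0, 0, 1).
f = lambda p: p[0]**2 + 2 * p[1]**2 + 3 * p[2]**2
grad = lambda p: np.array([2 * p[0], 4 * p[1], 6 * p[2]])

p = np.array([0.4, 0.35, 0.25])
vals = []
for _ in range(200):
    vals.append(f(p))
    p = growth_transform_step(p, grad(p))
```

The appeal of this update in an energy-based setting is that it needs only local multiplicative operations and a shared normalization, yet guarantees monotone progress on the objective, which is the property the abstract leverages to tie spiking dynamics to a network-level energy functional.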