31 research outputs found

    Optimized Implementation of Neuromorphic HATS Algorithm on FPGA

    In this paper, we present the first optimized hardware implementation of the state-of-the-art neuromorphic Histogram of Averaged Time Surfaces (HATS) algorithm for event-based object classification on an FPGA, targeting asynchronous time-based image sensors (ATIS). Our implementation achieves a latency of 3.3 ms on N-CARS dataset samples and processes 2.94 Mevts/s. Speed-up is achieved through parallelism in the design, and additional Processing Elements can be instantiated. The Xilinx Zynq-7000 SoC is used as the development platform. The trade-off between average absolute error and resource utilization for the fixed-precision implementation is analyzed and presented. The proposed FPGA implementation is ~32x more power efficient than the software implementation.
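
    As background for readers unfamiliar with HATS, the sketch below illustrates the underlying descriptor (per-event local time surfaces with exponential decay, averaged over spatial cells) in plain Python. It is a minimal sketch of the published HATS idea, not the paper's FPGA pipeline; the parameter names (rho, tau, cell) and the single-polarity simplification are assumptions made here for brevity.

        import numpy as np

        # Minimal sketch of the HATS descriptor: for each event, build a local
        # time surface with exponential decay over a spatial neighbourhood,
        # then average the surfaces over the events falling in each cell.
        # Polarity is ignored here; HATS keeps separate histograms per polarity.
        def hats_features(events, width, height, rho=3, tau=1e6, cell=10):
            """events: iterable of (x, y, t, p) rows, t in microseconds."""
            k = 2 * rho + 1
            n_cx, n_cy = width // cell, height // cell
            sums = np.zeros((n_cy, n_cx, k, k))         # accumulated surfaces per cell
            counts = np.zeros((n_cy, n_cx))              # events seen per cell
            last_t = np.full((height, width), -np.inf)   # per-pixel timestamp memory

            for x, y, t, p in events:
                x, y = int(x), int(y)
                surf = np.zeros((k, k))
                for dy in range(-rho, rho + 1):
                    for dx in range(-rho, rho + 1):
                        xx, yy = x + dx, y + dy
                        if 0 <= xx < width and 0 <= yy < height:
                            # decay of time since the last event at this neighbour
                            surf[dy + rho, dx + rho] = np.exp(-(t - last_t[yy, xx]) / tau)
                cx, cy = min(x // cell, n_cx - 1), min(y // cell, n_cy - 1)
                sums[cy, cx] += surf
                counts[cy, cx] += 1
                last_t[y, x] = t

            counts = np.maximum(counts, 1)
            return (sums / counts[..., None, None]).reshape(-1)  # flattened descriptor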

    Exploiting Nanoelectronic Properties of Memory Chips for Prevention of IC Counterfeiting

    This study presents a methodology for anti-counterfeiting of Non-Volatile Memory (NVM) chips. In particular, we experimentally demonstrate a generalized methodology for detecting (i) Integrated Circuit (IC) origin, (ii) recycled or used NVM chips, and (iii) used locations (addresses) within a chip. Our methodology inspects latency and variability signatures of Commercial-Off-The-Shelf (COTS) NVM chips. The proposed technique requires only low-cycle (~100) pre-conditioning and utilizes Machine Learning (ML) algorithms. We observe different trends in the evolution of latency (sector erase or page write) with cycling across NVM technologies and vendors. The ML-assisted approach detects the IC manufacturer with 95.1% accuracy on a prepared test dataset covering 3 NVM technologies and 6 manufacturers (9 chip types).
    Comment: 5 pages, 5 figures, accepted in IEEE NANO 202
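
    The abstract does not spell out the exact ML pipeline, so the sketch below only illustrates the general idea of classifying chips from latency signatures. The synthetic data, the summary features, and the RandomForestClassifier are placeholders assumed here for illustration, not the authors' setup.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        # Placeholder data: each row is a latency trace (e.g. sector-erase times)
        # for one chip over ~100 pre-conditioning cycles; labels are vendors.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(180, 100))      # 180 chips x 100 cycles (synthetic)
        y = rng.integers(0, 6, size=180)     # 6 manufacturer labels (synthetic)

        # Simple summary features: mean, spread, and drift of each latency trace.
        feats = np.column_stack([
            X.mean(axis=1),
            X.std(axis=1),
            X[:, -10:].mean(axis=1) - X[:, :10].mean(axis=1),
        ])

        X_tr, X_te, y_tr, y_te = train_test_split(feats, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))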

    A survey and perspective on neuromorphic continual learning systems

    With the advent of low-power neuromorphic computing systems, new possibilities have emerged for deployment in sectors such as healthcare and transport that require intelligent autonomous applications. These applications need reliable low-power solutions that can sequentially adapt to new relevant data without loss of prior learning. Neuromorphic systems are inherently inspired by biological neural networks and have the potential to offer an efficient route to continual learning. With increasing attention in this area, we present the first comprehensive review of state-of-the-art neuromorphic continual learning (NCL) paradigms. The significance of our study is multi-fold. We summarize recent progress and propose a plausible roadmap for developing end-to-end NCL systems. We also attempt to identify the gap between research and the real-world deployment of NCL systems in multiple applications. We do so by assessing recent contributions in neuromorphic continual learning at multiple levels: applications, algorithms, architectures, and hardware. We discuss the relevance of NCL systems and draw out application-specific requisites. We analyze the biological underpinnings that are used to achieve high-level performance. At the hardware level, we assess the ability of current neuromorphic platforms and emerging nano-device-based architectures to support these algorithms under several constraints. Further, we propose refinements to continual learning metrics so that they can be applied to NCL systems. Finally, the review identifies gaps and possible solutions that have not yet been addressed for deploying application-specific NCL systems in real-life scenarios.

    A novel multimodal dynamic fusion network for disfluency detection in spoken utterances

    Disfluency, though originating in human spoken utterances, is primarily studied as a unimodal, text-based Natural Language Processing (NLP) task. In this paper, we propose a novel multimodal architecture for disfluency detection from individual utterances, based on early fusion and self-attention-based interaction between the text and acoustic modalities. Our architecture leverages a multimodal dynamic fusion network that adds minimal parameters over an existing text encoder commonly used in prior art, in order to exploit the prosodic and acoustic cues hidden in speech. Through experiments, we show that our proposed model achieves state-of-the-art results on the widely used English Switchboard corpus for disfluency detection and outperforms prior unimodal and multimodal systems in the literature by a significant margin. In addition, we conduct a thorough qualitative analysis and show that, unlike text-only systems, which suffer from spurious correlations in the data, our system overcomes this problem through additional cues from speech signals. We make all our code publicly available on GitHub.
    Comment: Submitted to ICASSP 2023. arXiv admin note: text overlap with arXiv:2203.1679
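
    As a hedged illustration of the early-fusion idea described above, the sketch below projects text and acoustic token sequences into a shared space, concatenates them along the sequence axis, and applies self-attention before utterance-level classification. All dimensions, module choices, and the mean pooling are assumptions made here, not the authors' implementation.

        import torch
        import torch.nn as nn

        class EarlyFusionDisfluency(nn.Module):
            """Illustrative early-fusion model; not the paper's architecture."""
            def __init__(self, d_text=768, d_audio=80, d_model=256, n_heads=4, n_labels=2):
                super().__init__()
                self.text_proj = nn.Linear(d_text, d_model)
                self.audio_proj = nn.Linear(d_audio, d_model)
                self.fusion = nn.TransformerEncoder(
                    nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
                    num_layers=2,
                )
                self.classifier = nn.Linear(d_model, n_labels)

            def forward(self, text_emb, audio_feats):
                # Early fusion: map both modalities to a shared space, then let
                # self-attention mix text tokens and acoustic frames jointly.
                fused = torch.cat(
                    [self.text_proj(text_emb), self.audio_proj(audio_feats)], dim=1
                )
                fused = self.fusion(fused)
                return self.classifier(fused.mean(dim=1))  # utterance-level logits

        # Shape check with random tensors: batch of 2, 20 text tokens, 50 frames.
        model = EarlyFusionDisfluency()
        logits = model(torch.randn(2, 20, 768), torch.randn(2, 50, 80))
        print(logits.shape)  # torch.Size([2, 2])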