
    The Department of Electrical and Computer Engineering Newsletter

    Get PDF
    Summer 2017: News and notes for University of Dayton's Department of Electrical and Computer Engineering.

    Machine Learning for Cyberattack Detection

    Get PDF
    Machine learning has been rapidly adopted in cybersecurity, growing from near absence to use in over half of commercially deployed cybersecurity techniques. Machine learning is advancing at a rapid rate, and the application of new learning techniques to cybersecurity has not yet been investigated. Current technology trends have led to an abundance of household items containing microprocessors, all connected within a private network. Network intrusion detection is therefore essential for keeping these networks secure. However, network intrusion detection can be extremely taxing on battery-operated devices. This work presents a cyberattack detection system based on a multilayer perceptron (MLP) neural network algorithm. To show that this system can operate at low power, the algorithm was executed on two commercially available minicomputers: the Raspberry Pi 3 and the Asus Tinker Board. An analysis of accuracy, power, energy, and timing was performed to study the tradeoffs necessary when executing these algorithms at low power. Our results show that these low-power implementations are feasible, and that a scan rate of more than 226,000 packets per second can be achieved, with greater than 99% accuracy, by a system that requires approximately 5 W to operate.
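    As a concrete illustration of the approach described above, the following is a minimal sketch of an MLP-based intrusion detector in Python using scikit-learn. The feature matrix, labels, and network topology are hypothetical stand-ins; the paper's exact architecture and training setup are not reproduced here.

        # Minimal MLP intrusion-detection sketch (hypothetical features/labels).
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)

        # Stand-in for per-packet/flow features (e.g., duration, byte counts, flags).
        X = rng.random((1000, 16))
        y = rng.integers(0, 2, size=1000)   # 0 = benign, 1 = attack (synthetic labels)

        scaler = StandardScaler().fit(X)
        clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0)
        clf.fit(scaler.transform(X), y)

        # On an edge device, inference reduces to a few small matrix multiplies,
        # which is what makes high packet-scan rates at roughly 5 W plausible.
        print(clf.predict(scaler.transform(X[:5])))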

    Hybrid Deep Learning Techniques for Securing Bioluminescent Interfaces in Internet of Bio Nano Things

    Get PDF
    The Internet of bio-nano things (IoBNT) is an emerging paradigm employing nanoscale (~1–100 nm) biological transceivers to collect in vivo signaling information from the human body and communicate it to healthcare providers over the Internet. Bio-nano things (BNTs) offer external actuation of in-body molecular communication (MC) for targeted drug delivery to otherwise inaccessible parts of human tissue. BNTs are interconnected using chemical diffusion channels, forming an in vivo bio-nano network that is connected to an external ex vivo environment such as the Internet through bio-cyber interfaces. Bio-luminescent bio-cyber interfacing (BBI) has proven promising for realizing IoBNT systems owing to its non-obtrusive and low-cost implementation. BBI security, however, is a key concern during practical implementation, since Internet connectivity exposes the interfaces to external threat vectors, and accurate classification of anomalous BBI traffic patterns is required to offer mitigation. However, parameter complexity and intricate underlying correlations among BBI traffic characteristics limit the use of existing machine-learning (ML) based anomaly detection methods, which typically require hand-crafted feature design. To this end, the present work investigates the use of deep learning (DL) algorithms, which allow dynamic and scalable feature engineering, to discriminate between normal and anomalous BBI traffic. During extensive validation using singular and multi-dimensional models on the generated dataset, our hybrid convolutional and recurrent ensemble (CNN + LSTM) reported an accuracy of ~93.51%, outperforming other deep and shallow structures. Furthermore, employing a hybrid DL network allowed automated extraction of spatial as well as temporal features in BBI data, eliminating the manual selection and crafting of input features for accurate prediction. Finally, we recommend deployment primitives for the extracted optimal classifier in conventional intrusion detection systems as well as in evolving non-von Neumann architectures for real-time anomaly detection.
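    The hybrid CNN + LSTM ensemble can be sketched in a few lines of Keras. The window shape, layer sizes, and training configuration below are assumptions for illustration, not the authors' reported model.

        # Hybrid CNN + LSTM anomaly classifier sketch (hypothetical configuration).
        import tensorflow as tf

        TIMESTEPS, FEATURES = 50, 8   # assumed BBI traffic window: 50 steps x 8 features

        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
            # Convolutional front end extracts local (spatial) patterns per window.
            tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
            tf.keras.layers.MaxPooling1D(pool_size=2),
            # Recurrent back end models temporal structure across the window.
            tf.keras.layers.LSTM(32),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # normal vs. anomalous
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        model.summary()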

    Quantized Non-Volatile Nanomagnetic Synapse based Autoencoder for Efficient Unsupervised Network Anomaly Detection

    Full text link
    In the autoencoder-based anomaly detection paradigm, implementing the autoencoder in edge devices capable of learning in real time is exceedingly challenging due to limited hardware, energy, and computational resources. We show that these limitations can be addressed by designing an autoencoder with low-resolution non-volatile memory-based synapses and employing an effective quantized neural network learning algorithm. We propose a ferromagnetic racetrack with engineered notches hosting a magnetic domain wall (DW) as the autoencoder synapse, where the limited-state (5-state) synaptic weights are manipulated by spin-orbit torque (SOT) current pulses. The anomaly detection performance of the proposed autoencoder model is evaluated on the NSL-KDD dataset. Training that is aware of the limited resolution and of DW device stochasticity is performed, yielding anomaly detection performance comparable to that of an autoencoder with floating-point precision weights. While the limited number of quantized states and the inherent stochasticity of DW synaptic weights in nanoscale devices are known to degrade performance, our hardware-aware training algorithm is shown to leverage these imperfect device characteristics to improve anomaly detection accuracy (90.98%) over that obtained with floating-point trained weights. Furthermore, our DW-based approach demonstrates a reduction of at least three orders of magnitude in weight updates during training compared to the floating-point approach, implying substantial energy savings. This work could stimulate the development of extremely energy-efficient non-volatile multi-state synapse-based processors that can perform real-time training and inference on the edge with unsupervised data.
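    The hardware-aware training idea can be illustrated with a toy straight-through-estimator update over 5-state weights. The probabilistic commit below is a crude stand-in for DW stochasticity, not the paper's device model.

        # Toy 5-state weight quantization with a straight-through estimator.
        import numpy as np

        LEVELS = np.linspace(-1.0, 1.0, 5)   # five notch-defined DW synapse states

        def quantize(w):
            """Snap each weight to the nearest of the 5 allowed synaptic states."""
            idx = np.abs(w[..., None] - LEVELS).argmin(axis=-1)
            return LEVELS[idx]

        rng = np.random.default_rng(1)
        w_fp = rng.normal(0, 0.5, size=(8, 4))    # shadow full-precision weights
        grad = rng.normal(0, 0.1, size=(8, 4))    # stand-in gradient from backprop

        # Straight-through estimator: the forward pass uses quantized weights,
        # while the gradient updates the full-precision shadow copy.
        w_q = quantize(w_fp)
        w_fp -= 0.1 * grad

        # Device stochasticity: an update only commits with some probability,
        # loosely mimicking stochastic DW motion under SOT current pulses.
        commit = rng.random(w_fp.shape) < 0.8
        w_q_next = np.where(commit, quantize(w_fp), w_q)
        print(w_q_next)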

    Adaptive Biomimetic Neuronal Circuit System Based on Myelin Sheath Function

    Get PDF
    Brain-inspired neuromorphic computing architectures are receiving significant attention in the consumer electronics field owing to their low power consumption, high computational capacity, and strong adaptability; highly biomimetic circuit design is at the core of neuromorphic network research. Myelin sheaths are crucial cellular components for building stable circuits in biological neurons, capable of adaptively adjusting the conduction speed of neural signals. However, current research on neuronal circuits relies on simplified mathematical models and overlooks the adaptive functionality of myelin sheaths. Based on the dynamic mechanism of myelination, this paper utilizes physical devices such as memristors and voltage-controlled variable capacitors to simulate the physiological functions of myelin sheaths and other organelles. Furthermore, an adaptive biomimetic neuronal circuit system (ABNCS) is constructed by connecting the various devices according to the physiological structure of neurons. PSpice simulations show that the ABNCS can adjust its parameters autonomously as the number of action potentials (APs) increases, which modifies the neuron's activation criteria and firing rate. The PSpice simulations were further validated through circuit experiments. Implementing myelin sheath functions in the neuronal circuit improves adaptability and reduces power consumption and, when combined with artificial synapses to construct neural networks, can form more stable neural circuits. Peer reviewed. Published in final form at https://doi.org/10.1109/TCE.2024.3356563.
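    A behavioral Python sketch (not a PSpice netlist) conveys the adaptive principle: each action potential nudges the conduction delay and activation threshold, loosely mimicking activity-dependent myelination. All constants here are illustrative, not the paper's circuit parameters.

        # Behavioral sketch of an adaptive neuron: firing history tunes its
        # conduction delay and threshold (illustrative constants only).
        dt, tau, v_rest = 1e-3, 20e-3, 0.0
        v, threshold = v_rest, 1.0
        delay = 5e-3          # conduction delay in seconds
        spike_count = 0

        for step in range(2000):
            drive = 1.5                                # effective input (arbitrary units)
            v += (dt / tau) * (v_rest - v + drive)     # leaky integrate-and-fire dynamics
            if v >= threshold:
                v = v_rest
                spike_count += 1
                # "Myelination": each AP slightly shortens the conduction delay
                # and shifts the activation criterion, as the ABNCS does autonomously.
                delay = max(1e-3, delay * 0.99)
                threshold = max(0.8, threshold * 0.999)

        print(f"spikes={spike_count}, delay={delay*1e3:.2f} ms, threshold={threshold:.3f}")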

    X-TIME: An in-memory engine for accelerating machine learning on tabular data with CAMs

    Full text link
    Structured, or tabular, data is the most common data format in data science. While deep learning models have proven formidable at learning from unstructured data such as images or speech, they are less accurate than simpler approaches when learning from tabular data. In contrast, modern tree-based machine learning (ML) models shine at extracting relevant information from structured data. An essential requirement in data science is to reduce model inference latency in cases where, for example, models are used in a closed loop with simulation to accelerate scientific discovery. However, the hardware acceleration community has mostly focused on deep neural networks and largely ignored other forms of machine learning. Previous work has described the use of an analog content addressable memory (CAM) component for efficiently mapping random forests. In this work, we focus on an overall analog-digital architecture implementing a novel increased-precision analog CAM and a programmable network-on-chip, allowing inference of state-of-the-art tree-based ML models such as XGBoost and CatBoost. Results evaluated for a single chip in 16 nm technology show 119x lower latency and 9740x higher throughput compared with a state-of-the-art GPU, with a 19 W peak power consumption.
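    The underlying CAM mapping can be illustrated in a few lines: each root-to-leaf path of a tree becomes one CAM row of per-feature [low, high) ranges, and inference is a parallel match of the input against every row. The tree and thresholds below are invented for illustration.

        # Toy model of tree inference as parallel CAM range matching.
        import numpy as np

        LOW, HIGH = 0, 1

        # 3 leaves over 2 features; each row holds one [lo, hi) range per feature.
        cam_rows = np.array([
            [[-np.inf, 0.5], [-np.inf, np.inf]],   # f0 < 0.5             -> class 0
            [[0.5, np.inf], [-np.inf, 0.3]],       # f0 >= 0.5, f1 < 0.3  -> class 1
            [[0.5, np.inf], [0.3, np.inf]],        # f0 >= 0.5, f1 >= 0.3 -> class 0
        ])
        labels = np.array([0, 1, 0])

        def cam_lookup(x):
            """All rows are compared in parallel; exactly one leaf range matches."""
            match = ((cam_rows[:, :, LOW] <= x) & (x < cam_rows[:, :, HIGH])).all(axis=1)
            return labels[match.argmax()]

        print(cam_lookup(np.array([0.7, 0.2])))   # -> 1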

    GraphR: Accelerating Graph Processing Using ReRAM

    Full text link
    This paper presents GRAPHR, the first ReRAM-based graph processing accelerator. GRAPHR follows the principle of near-data processing and explores the opportunity of performing massively parallel analog operations with low hardware and energy cost. Analog computation is suitable for graph processing because: 1) the algorithms are iterative and can inherently tolerate imprecision; 2) both probability calculations (e.g., PageRank and collaborative filtering) and typical graph algorithms involving integers (e.g., BFS/SSSP) are resilient to errors. The key insight of GRAPHR is that if a vertex program of a graph algorithm can be expressed as sparse matrix-vector multiplication (SpMV), it can be performed efficiently by a ReRAM crossbar. We show that this assumption is generally true for a large set of graph algorithms. GRAPHR is a novel accelerator architecture consisting of two components: memory ReRAM and graph engines (GEs). The core graph computations are performed in sparse matrix format in the GEs (ReRAM crossbars). Vector/matrix-based graph computation is not new, but ReRAM offers the unique opportunity to realize massive parallelism with unprecedented energy efficiency and low hardware cost. With small subgraphs processed by GEs, the gain from performing parallel operations outweighs the waste due to sparsity. Experimental results show that GRAPHR achieves a 16.01x (up to 132.67x) speedup and a 33.82x energy saving on geometric mean compared to a CPU baseline system. Compared to GPU, GRAPHR achieves 1.69x to 2.19x speedup and consumes 4.77x to 8.91x less energy. GRAPHR gains a speedup of 1.16x to 4.12x, and is 3.67x to 10.96x more energy efficient, compared to a PIM-based architecture. (Accepted to HPCA 2018.)
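    The SpMV formulation GRAPHR exploits is easy to demonstrate in software: one PageRank iteration is a sparse matrix-vector product, which is the operation a ReRAM crossbar evaluates in the analog domain. The toy graph below is synthetic.

        # One PageRank iteration as SpMV, the kernel a ReRAM crossbar accelerates.
        import numpy as np
        from scipy.sparse import csr_matrix

        # Column-stochastic transition matrix of a 4-vertex graph (edges invented).
        edges = [(0, 1), (0, 2), (1, 2), (2, 0), (3, 2)]
        n = 4
        M = np.zeros((n, n))
        for src, dst in edges:
            M[dst, src] = 1.0
        M /= np.maximum(M.sum(axis=0), 1)      # normalize each column by out-degree
        M = csr_matrix(M)

        d, rank = 0.85, np.full(n, 1.0 / n)
        for _ in range(20):
            rank = (1 - d) / n + d * (M @ rank)   # SpMV: what each GE computes per subgraph

        print(rank.round(3))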

    Semiconductor Memory Devices for Hardware-Driven Neuromorphic Systems

    Get PDF
    This book aims to convey the most recent progress in hardware-driven neuromorphic systems based on semiconductor memory technologies. Machine learning systems, and the various types of artificial neural networks that realize the learning process, have mainly focused on software technologies. Tremendous advances have been made, particularly in the areas of data inference and recognition, in which humans hold great superiority over conventional computers. To mimic our way of thinking more effectively in hardware, more synapse-like components should be developed in terms of integration density, completeness in realizing biological synaptic behaviors, and, most importantly, energy-efficient operation. For closer resemblance to the biological nervous system, future developments ought to take power consumption into account and foster revolutions at the device level, which can be realized through memory technologies. This book consists of seven articles reporting the most recent research findings on neuromorphic systems, highlighting various memory devices and architectures. Synaptic devices and their behaviors, many-core neuromorphic platforms in close relation with memory, and novel materials enabling low-power synaptic operations based on memory devices are studied, along with evaluations and applications. Some of these can be realized in practice owing to their high compatibility with the Si processing and structures of contemporary semiconductor memory technologies in production, which offers the prospect of neuromorphic chips for mass production.

    Long-Term Memory for Cognitive Architectures: A Hardware Approach Using Resistive Devices

    Get PDF
    A cognitive agent capable of reliably performing complex tasks over a long time will acquire a large store of knowledge. To interact with changing circumstances, the agent will need to quickly search and retrieve knowledge relevant to its current context. Real-time knowledge search and cognitive processing like this is a challenge for conventional computers, which are not optimised for such tasks. This thesis describes a new content-addressable memory, based on resistive devices, that can perform massively parallel knowledge search in the memory array. The fundamental circuit block that supports this capability is a memory cell that closely couples comparison logic with non-volatile storage. By using resistive devices instead of transistors in both the comparison circuit and the storage elements, this cell improves area density by over an order of magnitude compared to state-of-the-art CMOS implementations. The resulting memory does not need power to maintain stored information and is therefore well suited to cognitive agents with large long-term memories. The memory incorporates activation circuits, which bias the knowledge retrieval process according to past memory access patterns. This is achieved by approximating the widely used base-level activation function, using resistive devices to store, maintain, and compare activation values. By distributing an instance of this circuit to every row in memory, the activations of all memory objects can be updated in parallel. A test using the word sense disambiguation task shows that this circuit-based activation model incurs only a small loss in accuracy compared to exact base-level calculations. A variation of spreading activation can also be achieved in-memory. Memory objects are encoded with high-dimensional vectors that create associations between correlated representations. By storing these high-dimensional vectors in the new content-addressable memory, activation can be spread to related objects during search operations. The new memory is scalable, power- and area-efficient, and performs operations in parallel that are infeasible in real time for a sequential processor with a conventional memory hierarchy. Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 201
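    The base-level activation function that the activation circuits approximate is the standard recency/frequency form B_i = ln(sum_j t_j^(-d)), where t_j is the time since the j-th past access of object i and d is a decay parameter (d = 0.5 is the common default). A small Python sketch:

        # Base-level activation: recency- and frequency-weighted retrieval bias.
        import math

        def base_level_activation(access_times, now, d=0.5):
            """Activation of one memory object from its past access times."""
            return math.log(sum((now - t) ** (-d) for t in access_times))

        # An object accessed often and recently out-activates a stale one, biasing
        # retrieval just as the per-row circuits do in parallel across the array.
        recent = base_level_activation([90.0, 95.0, 99.0], now=100.0)
        stale = base_level_activation([1.0, 2.0], now=100.0)
        print(f"recent={recent:.2f}, stale={stale:.2f}")   # recent > stale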