
    Hardware Considerations for Signal Processing Systems: A Step Toward the Unconventional.

    As we progress into the future, signal processing algorithms are becoming more computationally intensive and power hungry, while the desire for mobile products and low-power devices is also increasing. An integrated ASIC solution is one of the primary ways chip developers can improve performance and add functionality while keeping the power budget low. This work discusses ASIC hardware for both conventional and unconventional signal processing systems, and how integration, error resilience, emerging devices, and new algorithms can be leveraged by signal processing systems to further improve performance and enable new applications. Specifically, this work presents three case studies: 1) a conventional and highly parallel mixed-signal cross-correlator ASIC for a weather satellite performing real-time synthetic aperture imaging, 2) an unconventional native stochastic computing architecture enabled by memristors, and 3) two unconventional sparse neural network ASICs for feature extraction and object classification. As improvements from technology scaling alone slow down, and the demand for energy-efficient mobile electronics increases, such optimization techniques at the device, circuit, and system level will become more critical to advance signal processing capabilities in the future.
    PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/116685/1/knagphil_1.pd
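    The second case study relies on stochastic computing, where a value is encoded as the probability of a 1 in a random bitstream, so that arithmetic reduces to simple logic gates. The sketch below is a generic illustration of that idea, not the dissertation's memristor-based architecture; the function names and stream length are arbitrary choices.

```python
import numpy as np

def to_bitstream(p, length, rng):
    """Encode a probability p in [0, 1] as a random bitstream with P(bit = 1) = p."""
    return rng.random(length) < p

def sc_multiply(p_a, p_b, length=4096, seed=0):
    """Stochastic-computing multiply: ANDing two independent bitstreams gives a
    stream whose density of ones estimates the product p_a * p_b."""
    rng = np.random.default_rng(seed)
    a = to_bitstream(p_a, length, rng)
    b = to_bitstream(p_b, length, rng)
    return np.mean(a & b)

print(sc_multiply(0.6, 0.5))   # roughly 0.30; longer streams give tighter estimates
```

    Longer bitstreams trade latency for accuracy, which is the usual design knob in stochastic computing hardware.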

    The prospects of quantum computing in computational molecular biology

    Quantum computers can in principle solve certain problems exponentially more quickly than their classical counterparts. We have not yet reached the advent of useful quantum computation, but when we do, it will affect nearly all scientific disciplines. In this review, we examine how current quantum algorithms could revolutionize computational biology and bioinformatics. There are potential benefits across the entire field, from the ability to process vast amounts of information and run machine learning algorithms far more efficiently, to algorithms for quantum simulation that are poised to improve computational calculations in drug discovery, to quantum algorithms for optimization that may advance fields from protein structure prediction to network analysis. However, these exciting prospects are susceptible to "hype", and it is also important to recognize the caveats and challenges in this new technology. Our aim is to introduce the promise and limitations of emerging quantum computing technologies in the areas of computational molecular biology and bioinformatics.
    Comment: 23 pages, 3 figures

    Robust learning algorithms for spiking and rate-based neural networks

    Inspired by the remarkable properties of the human brain, the fields of machine learning, computational neuroscience and neuromorphic engineering have achieved significant synergistic progress in the last decade. Powerful neural network models rooted in machine learning have been proposed as models for neuroscience and for applications in neuromorphic engineering. However, the aspect of robustness is often neglected in these models. Both biological and engineered substrates show diverse imperfections that deteriorate the performance of computation models or even prohibit their implementation. This thesis describes three projects aimed at implementing robust learning with local plasticity rules in neural networks. First, we demonstrate the advantages of neuromorphic computation in a pilot study on a prototype chip, quantifying the speed and energy consumption of the system compared to a software simulation and showing how on-chip learning contributes to the robustness of learning. Second, we present an implementation of spike-based Bayesian inference on accelerated neuromorphic hardware. The model copes, via learning, with the disruptive effects of the imperfect substrate and benefits from the acceleration. Finally, we present a robust model of deep reinforcement learning using local learning rules, showing how backpropagation combined with neuromodulation could be implemented in a biologically plausible framework. The results contribute to the pursuit of robust and powerful learning networks for biological and neuromorphic substrates.
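    The third project combines local plasticity with a global neuromodulatory signal. As a generic illustration of such a three-factor rule, not the thesis's specific model or hyperparameters, the sketch below updates each weight from purely local pre- and postsynaptic activity plus a single scalar reward signal.

```python
import numpy as np

def reward_modulated_hebbian_step(w, pre, post, reward, baseline, lr=1e-3):
    """Three-factor update: dw = lr * (reward - baseline) * outer(post, pre).
    Everything is local to the synapse except the scalar neuromodulatory reward."""
    return w + lr * (reward - baseline) * np.outer(post, pre)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(3, 5))    # 5 presynaptic -> 3 postsynaptic units
pre = rng.random(5)                        # presynaptic rates
post = np.tanh(w @ pre)                    # postsynaptic rates
w = reward_modulated_hebbian_step(w, pre, post, reward=1.0, baseline=0.5)
```

    Because the only non-local quantity is the scalar reward, a rule of this form is compatible with both biological circuits and neuromorphic hardware with on-chip plasticity.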

    Efficient emotion recognition using hyperdimensional computing with combinatorial channel encoding and cellular automata

    In this paper, a hardware-optimized approach to emotion recognition based on the efficient brain-inspired hyperdimensional computing (HDC) paradigm is proposed. Emotion recognition provides valuable information for human-computer interaction; however, the large number of input channels (>200) and modalities (>3) involved makes it significantly expensive from a memory perspective. To address this, methods for memory reduction and optimization are proposed, including a novel approach that takes advantage of the combinatorial nature of the encoding process, and an elementary cellular automaton. HDC with early sensor fusion is implemented alongside the proposed techniques, achieving two-class multi-modal classification accuracies of >76% for valence and >73% for arousal on the multi-modal AMIGOS and DEAP datasets, almost always better than the state of the art. The required vector storage is seamlessly reduced by 98% and the frequency of vector requests by at least 1/5. The results demonstrate the potential of efficient hyperdimensional computing for low-power, multi-channeled emotion recognition tasks.
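    In HDC, each channel and quantized sensor value is represented by a long quasi-random hypervector, and a record is formed by binding and bundling those vectors; the memory cost comes from storing one hypervector per channel. The sketch below is a minimal, generic illustration of these operations in which channel hypervectors are regenerated on demand by iterating an elementary cellular automaton from a single stored seed; the dimensionality, rule 90, and the encoding are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

D = 1024  # hypervector dimensionality (illustrative)
rng = np.random.default_rng(0)

def ca_step(state, rule=90):
    """One step of an elementary cellular automaton with wrap-around boundaries."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    neighborhood = 4 * left + 2 * state + right                  # values 0..7
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    return table[neighborhood]

def channel_hypervector(seed_hv, channel_index):
    """Regenerate a channel's hypervector by iterating the CA from one stored
    seed, instead of storing a separate D-bit vector for every channel."""
    hv = seed_hv.copy()
    for _ in range(channel_index + 1):
        hv = ca_step(hv)
    return hv

def bind(a, b):        # XOR binding of binary hypervectors
    return a ^ b

def bundle(hvs):       # bitwise majority vote across hypervectors
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.uint8)

def similarity(a, b):  # 1 - normalized Hamming distance
    return 1.0 - np.mean(a != b)

seed_hv = rng.integers(0, 2, D, dtype=np.uint8)           # the only stored channel vector
value_hvs = rng.integers(0, 2, (4, D), dtype=np.uint8)    # hypervectors for quantized levels
channels = [0, 1, 2]
record = bundle([bind(channel_hypervector(seed_hv, c), value_hvs[c]) for c in channels])
print(similarity(record, bind(channel_hypervector(seed_hv, 0), value_hvs[0])))
```

    Regenerating channel vectors this way trades a small amount of computation for a large reduction in stored vectors, which is the flavor of memory saving the paper reports.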

    Energy and Area Efficient Machine Learning Architectures using Spin-Based Neurons

    Recently, spintronic devices with low energy barrier nanomagnets, such as spin orbit torque magnetic tunnel junctions (SOT-MTJs) and embedded magnetoresistive random access memory (MRAM) devices, are being leveraged as natural building blocks to provide probabilistic sigmoidal activation functions for restricted Boltzmann machines (RBMs). In this dissertation research, we use the Probabilistic Inference Network Simulator (PIN-Sim) to realize a circuit-level implementation of deep belief networks (DBNs) using memristive crossbars as weighted connections and embedded MRAM-based neurons as activation functions. Herein, a probabilistic interpolation recoder (PIR) circuit is developed for DBNs with probabilistic spin logic (p-bit)-based neurons to interpolate the probabilistic output of the neurons in the last hidden layer, which represent different output classes. Moreover, the impact of reducing the Magnetic Tunnel Junction's (MTJ's) energy barrier is assessed and optimized for the resulting stochasticity present in the learning system. In p-bit based DBNs, defects such as variation of the nanomagnet thickness can undermine functionality by decreasing the fluctuation speed of the p-bit realized using a nanomagnet. A method is developed and refined to control the fluctuation frequency of a p-bit device's output by employing a feedback mechanism, which can alleviate this process variation sensitivity of p-bit based DBNs. This compact, low-complexity method, realized as a self-compensating circuit, can alleviate the influence of process variation in fabrication and practical implementation. Furthermore, this research presents an innovative image recognition technique for the MNIST dataset based on p-bit-based DBNs and TSK rule-based fuzzy systems. The proposed DBN-fuzzy system is introduced to benefit from the low energy and area consumption of p-bit-based DBNs and the high accuracy of TSK rule-based fuzzy systems. This system first identifies the top candidates using the p-bit-based DBN; the fuzzy system is then employed to obtain the top-1 recognition result from those candidates. Simulation results show that the DBN-fuzzy network not only has lower energy and area consumption than larger DBN topologies but also achieves higher accuracy.
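    A p-bit is commonly described, at the behavioral level, as a binary stochastic unit whose output fluctuates between -1 and +1 with a probability set by a tanh (equivalently, sigmoid) of its input. The sketch below is a software-level illustration of that behavior used as an RBM/DBN activation; it models neither the MRAM device physics nor PIN-Sim, and the layer sizes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_bit(inputs):
    """Behavioral p-bit model: m = sgn(uniform(-1, 1) + tanh(I)), so the output is a
    binary random variable with P(m = +1) = (1 + tanh(I)) / 2 (a sigmoidal activation)."""
    return np.sign(rng.uniform(-1.0, 1.0, size=np.shape(inputs)) + np.tanh(inputs))

def rbm_hidden_sample(visible, weights, bias):
    """Sample one hidden layer of an RBM using p-bit activations in place of the
    usual sigmoid-Bernoulli sampling."""
    return p_bit(visible @ weights + bias)

v = rng.integers(0, 2, size=4) * 2 - 1     # visible units in {-1, +1}
W = rng.normal(scale=0.5, size=(4, 3))     # 4 visible -> 3 hidden units
b = np.zeros(3)
h = rbm_hidden_sample(v, W, b)             # one stochastic hidden-layer sample
```

    In this behavioral picture, a lower energy barrier corresponds to faster fluctuation of the random term, which is the property the dissertation assesses and then stabilizes against process variation with feedback.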

    Deep learning : enhancing the security of software-defined networks

    Software-defined networking (SDN) is a communication paradigm that promotes network flexibility and programmability by separating the control plane from the data plane. SDN consolidates the logic of network devices into a single entity known as the controller. SDN raises significant security challenges related to its architecture and associated characteristics such as programmability and centralisation. Notably, security flaws pose a risk to controller integrity, confidentiality and availability. The SDN model introduces separation of the forwarding and control planes. It detaches the control logic from switching and routing devices, forming a central plane or network controller that facilitates communications between applications and devices. The architecture enhances network resilience, simplifies management procedures and supports network policy enforcement. However, it is vulnerable to new attack vectors that can target the controller.
    Current security solutions rely on traditional measures such as firewalls or intrusion detection systems (IDS). An IDS can use two different approaches: signature-based or anomaly-based detection. The signature-based approach is incapable of detecting zero-day attacks, while anomaly-based detection has high false-positive and false-negative alarm rates. Such detection inaccuracies may have significant consequences, particularly for threats that target the controller. Thus, improving the accuracy of the IDS will enhance controller security and, subsequently, SDN security. A centralised network entity that controls the entire network is a primary target for intruders. The controller is located at a central point between the applications and the data plane and has two interfaces for plane communications, known as northbound and southbound, respectively. Communications between the controller, the application and data planes are prone to various types of attacks, such as eavesdropping and tampering. The controller software is vulnerable to attacks such as buffer and stack overflow, which enable remote code execution that can result in attackers taking control of the entire network. Additionally, traditional network attacks become more destructive in this setting, since a compromised controller affects the entire network.
    This thesis introduces a threat detection approach aimed at improving the accuracy and efficiency of the IDS, which is essential for controller security. To evaluate the effectiveness of the proposed framework, an empirical study of SDN controller security was conducted to identify, formalise and quantify security concerns related to SDN architecture. The study explored the threats related to SDN architecture, specifically threats originating from the existence of the control plane. The framework comprises two stages, involving the use of deep learning (DL) algorithms and clustering algorithms, respectively. DL algorithms were used to reduce the dimensionality of inputs, which were forwarded to clustering algorithms in the second stage. Features were compressed to a single value, simplifying and improving the performance of the clustering algorithm. Rather than using the output of the neural network, the framework presented a unique technique for dimensionality reduction that used a single value, the reconstruction error, for the entire input record. The use of a DL algorithm in the pre-training stage contributed to solving the problem of dimensionality related to k-means clustering. Using unsupervised algorithms facilitated the discovery of new attacks.
    Further, this study compares generative energy-based models (restricted Boltzmann machines) with non-probabilistic models (autoencoders), implementing both in TensorFlow across four scenarios. Simulation results were statistically analysed using a confusion matrix, which was evaluated and compared with similar related works. The proposed framework, which was adapted from existing similar approaches, resulted in promising outcomes and may provide a robust prospect for deployment in modern threat detection systems in SDN. The framework was implemented using TensorFlow and was benchmarked against the KDD99 dataset. Simulation results showed that the use of the DL algorithm to reduce dimensionality significantly improved detection accuracy and reduced false-positive and false-negative alarm rates. Extensive simulation studies on benchmark tasks demonstrated that the proposed framework consistently outperforms all competing approaches. This improvement is a further step towards the development of a reliable IDS to enhance the security of SDN controllers.
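    The two-stage pipeline (compress each record to its autoencoder reconstruction error, then cluster those one-dimensional values with k-means) can be sketched as follows. This is a minimal illustration assuming a small fully connected autoencoder and synthetic data standing in for preprocessed KDD99 records; the layer sizes, epochs, and cluster interpretation are placeholder choices, not the thesis's exact configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

# Placeholder data: rows are flow records with n_features numeric attributes
# (in the thesis, features derived from the KDD99 dataset).
rng = np.random.default_rng(0)
n_features = 41
X = rng.random((1000, n_features)).astype("float32")

# Stage 1: train an autoencoder and compress each record to a single value,
# its reconstruction error.
autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(n_features,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(n_features, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=10, batch_size=64, verbose=0)

reconstruction_error = np.mean((X - autoencoder.predict(X, verbose=0)) ** 2, axis=1)

# Stage 2: cluster the one-dimensional reconstruction errors with k-means.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    reconstruction_error.reshape(-1, 1)
)
```

    Records falling in the higher-error cluster would be treated as anomalous, so previously unseen attacks can surface without labelled examples.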

    Can deep-sub-micron device noise be used as the basis for probabilistic neural computation?

    This thesis explores the potential of probabilistic neural architectures for computation with future nanoscale Metal-Oxide-Semiconductor Field Effect Transistors (MOSFETs). In particular, the performance of a Continuous Restricted Boltzmann Machine (CRBM) implemented with generated noise of Random Telegraph Signal (RTS) and 1/f form has been studied with reference to the 'typical' Gaussian implementation. In this study, a time-domain RTS-based noise analysis capability has been developed for future nanoscale MOSFETs to represent the effect of nanoscale MOSFET noise on circuit implementation, in particular the synaptic analogue multiplier, which is subsequently used to implement the stochastic behaviour of the CRBM. The results of this thesis indicate little degradation in performance from that of the typical Gaussian CRBM. Through simulation experiments, the CRBM with nanoscale MOSFET noise shows the ability to reconstruct training data, although it takes longer to converge to equilibrium. The results in this thesis do not prove that nanoscale MOSFET noise can be exploited in all contexts and with all data for probabilistic computation. However, they indicate, for the first time, that nanoscale MOSFET noise has the potential to be used for hardware implementation of probabilistic neural computation. This thesis thus introduces a methodology for a form of technology downstreaming and highlights the potential of probabilistic architectures for computation with future nanoscale MOSFETs.
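    Random telegraph signal noise switches between two discrete levels with random dwell times, in contrast to the continuous Gaussian noise usually assumed in CRBM units. The sketch below is an illustrative two-level RTS generator with geometric dwell times and a toy example of injecting it into a saturating activation; the amplitude, switching probability, and tanh nonlinearity are placeholder choices, not the thesis's device-calibrated noise model or CRBM circuit.

```python
import numpy as np

def rts_noise(n_samples, amplitude=1.0, p_switch=0.01, seed=0):
    """Two-level random telegraph signal: the state toggles with probability
    p_switch per sample, giving geometric (discrete-exponential) dwell times,
    and the output alternates between +amplitude/2 and -amplitude/2."""
    rng = np.random.default_rng(seed)
    toggles = rng.random(n_samples) < p_switch
    state = np.cumsum(toggles) % 2            # 0/1 occupation track
    return amplitude * (state - 0.5)

# Toy use: drive a saturating unit with RTS noise instead of Gaussian noise,
# loosely in the spirit of a CRBM-style stochastic neuron.
noise = rts_noise(10_000, amplitude=0.5, p_switch=0.02)
activations = np.tanh(1.2 + noise[:100])      # noisy activations for a fixed input
```

    Superposing several such generators with different switching rates is one common way to approximate the 1/f noise also considered in the thesis.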