
    Spiking Neural Networks for Computational Intelligence: An Overview

    Deep neural networks with rate-based neurons have exhibited tremendous progress in the last decade. However, the same level of progress has not been observed in research on spiking neural networks (SNNs), despite their capability to handle temporal data, their energy efficiency, and their low latency. This could be because the benchmarking techniques for SNNs are based on the methods used for evaluating deep neural networks, which do not provide a clear evaluation of the capabilities of SNNs. In particular, benchmarking SNN approaches with regard to energy efficiency and latency requires realization in suitable hardware, which imposes additional temporal and resource constraints upon ongoing projects. This review aims to provide an overview of the current real-world applications of SNNs and to identify steps to accelerate research involving SNNs in the future.

    Empirical Comparison of Distributed Source Localization Methods for Single-Trial Detection of Movement Preparation

    The development of technologies for the treatment of movement disorders, such as stroke, is still of particular interest in brain-computer interface (BCI) research. In this context, source localization methods (SLMs), which reconstruct the cerebral origin of brain activity measured outside the head, e.g., via electroencephalography (EEG), can add valuable insight into the current state and progress of the treatment. However, in BCIs, SLMs have often been considered solely as advanced signal processing methods that are compared against other methods based on classification performance alone. This approach does not guarantee physiologically meaningful results. We present an empirical comparison of three established distributed SLMs with the aim of using one for single-trial movement prediction. The SLMs wMNE, sLORETA, and dSPM were applied to data acquired from eight subjects performing voluntary arm movements. Besides classification performance as a quality measure, a distance metric was used to assess the physiological plausibility of the methods. For the distance metric, which is usually measured to the source position of maximum activity, we further propose a cluster-based variant that is better suited to the single-trial case, in which several sources are likely and the actual maximum is unknown. The two metrics showed different results. The classification performance revealed no significant differences across subjects, indicating that all three methods are equally well suited for single-trial movement prediction. On the other hand, we obtained significant differences in the distance measure, favoring wMNE even after correcting the distance by the number of reconstructed clusters. Furthermore, the distance results were inconsistent with the traditional maximum-based method, indicating that for wMNE the point of maximum source activity often did not coincide with the nearest activation cluster. In summary, the presented comparison may help users select an appropriate SLM and understand the implications of that selection. The proposed methodology pays attention to the particular properties of distributed SLMs and can serve as a framework for further comparisons.
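
    The abstract does not give the exact formulation of the cluster-based distance, but a minimal sketch of the idea might look as follows, assuming dipole positions are given in millimetres and that strongly active dipoles are grouped with a generic density-based clustering step; the threshold, clustering method, and parameter values are illustrative assumptions, not the authors' procedure.

```python
# Hypothetical sketch of a cluster-based distance metric for distributed
# source reconstructions. Threshold, clustering choice, and parameters are
# assumptions for illustration, not the paper's exact method.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_distance(source_pos, source_amp, reference_pos,
                     amp_quantile=0.95, eps_mm=15.0):
    """Distance from a reference position to the nearest cluster of
    strongly active dipoles, instead of to the single maximum."""
    # Keep only the most active dipoles (top 5% by absolute amplitude).
    active = np.abs(source_amp) >= np.quantile(np.abs(source_amp), amp_quantile)
    active_pos = source_pos[active]

    # Group active dipoles into spatial clusters (positions assumed in mm).
    labels = DBSCAN(eps=eps_mm, min_samples=3).fit_predict(active_pos)
    centroids = [active_pos[labels == k].mean(axis=0)
                 for k in set(labels) if k != -1]
    if not centroids:
        return np.inf, 0

    # Distance to the nearest cluster centroid; the cluster count can be
    # used to penalize reconstructions that scatter activity widely.
    dists = np.linalg.norm(np.asarray(centroids) - reference_pos, axis=1)
    return float(dists.min()), len(centroids)
```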

    Power efficient machine learning-based hardware architectures for biomedical applications

    The future of critical health diagnosis will involve intelligent and smart devices that are low-cost, wearable, and lightweight, requiring low-power, energy-efficient hardware platforms. Various machine learning models, such as deep learning architectures, have been employed to design intelligent healthcare systems. However, deploying these sophisticated and intelligent devices in real-time embedded systems with limited hardware resources and power budgets is complex due to the high computational power required to achieve a high accuracy rate. As a result, a significant gap has emerged between the advancement of computing technology and the associated device technologies for healthcare applications. In this work, power-efficient machine learning-based digital hardware design techniques are introduced to realize a compact design solution while maintaining optimal prediction accuracy. Two hardware design approaches, DeepSAC and SABiNN, are proposed and analyzed: DeepSAC is a shift-accumulator-based technique, whereas SABiNN is a 2's complement-based binarized digital hardware technique. Neural network models such as feedforward networks, convolutional neural networks, residual networks, and other popular machine learning and deep neural networks are selected to benchmark the proposed model architectures. Various deep compression techniques, such as pruning, n-bit (n = 8, 16) integer quantization, and binarization of hyper-parameters, are also employed. These techniques significantly reduced power consumption by 5x and model size by 13x, and improved model latency. For efficient use of these models, especially in biomedical applications, a sleep apnea (SA) detection device for adults was developed to detect SA events in real time. The input to the system consists of two physiological signals, an ECG signal from chest movement and an SpO2 measurement from a pulse oximeter, used to predict the occurrence of SA episodes. In the training phase, actual patient data is used, and the network model is converted into the proposed hardware models to achieve medically backed accuracy. After achieving an acceptable accuracy of 88 percent, all parameters are extracted for inference on the edge. In the inference phase, the extracted parameters were validated on reconfigurable hardware for model precision and power consumption before being translated onto silicon. This research implements the final model on CMOS platforms using 130 nm and 180 nm commercial CMOS processes. Includes bibliographical references.
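
    As a rough illustration of the compression steps named above (n-bit integer quantization and binarization of trained parameters), a minimal sketch might look like the following; this is a generic illustration, not the DeepSAC or SABiNN hardware mapping itself, and the function names are assumptions.

```python
# Generic sketch of n-bit integer quantization and sign binarization of
# trained weights, as illustration of the compression steps listed above.
import numpy as np

def quantize_int(weights, n_bits=8):
    """Symmetric n-bit integer quantization (e.g., n = 8 or 16)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale  # approximate dequantization: q * scale

def binarize(weights):
    """Binarize weights to {-1, +1}, as used in binarized digital designs."""
    return np.where(weights >= 0, 1, -1).astype(np.int8)

# Example usage on random weights standing in for a trained layer.
w = np.random.randn(4, 4).astype(np.float32)
q8, s = quantize_int(w, n_bits=8)
wb = binarize(w)
```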

    Proceedings of the 19th Sound and Music Computing Conference

    Proceedings of the 19th Sound and Music Computing Conference - June 5-12, 2022 - Saint-Étienne (France). https://smc22.grame.f

    Bioinspired metaheuristic algorithms for global optimization

    This paper presents a concise comparison study of newly developed bioinspired algorithms for global optimization problems. Three different metaheuristic techniques, namely Accelerated Particle Swarm Optimization (APSO), the Firefly Algorithm (FA), and the Grey Wolf Optimizer (GWO), are investigated and implemented in the Matlab environment. These methods are compared on four unimodal and multimodal nonlinear functions in order to find global optimum values. Computational results indicate that GWO outperforms the other intelligent techniques, and that all of the aforementioned algorithms can be successfully used for the optimization of continuous functions.
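
    As an illustration of one of the compared techniques, a minimal Grey Wolf Optimizer sketch on the sphere test function is given below. It follows the commonly published GWO update rules; the population size, bounds, and iteration count are illustrative and not the settings used in the paper (which was implemented in Matlab).

```python
# Minimal Grey Wolf Optimizer (GWO) sketch on the sphere function.
# Standard update rules; all settings are illustrative.
import numpy as np

def sphere(x):
    return np.sum(x ** 2)

def gwo(obj, dim=10, n_wolves=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(obj, 1, wolves)
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]   # three best wolves lead the pack
        a = 2.0 - 2.0 * t / iters                # decreases linearly from 2 toward 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0  # average of the three pulls
            wolves[i] = np.clip(new_pos, lb, ub)
    best = wolves[np.argmin(np.apply_along_axis(obj, 1, wolves))]
    return best, obj(best)

best_x, best_f = gwo(sphere)
```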

    Virginia Commonwealth University Courses

    Listing of courses for the 2022-2023 year

    Virginia Commonwealth University Courses

    Listing of courses for the 2021-2022 year

    Experimental Evaluation of Growing and Pruning Hyper Basis Function Neural Networks Trained with Extended Information Filter

    In this paper we test the Extended Information Filter (EIF) for sequential training of Hyper Basis Function neural networks with growing and pruning ability (HBF-GP). The HBF neuron allows different scaling of the input dimensions, providing better generalization when dealing with complex nonlinear problems in engineering practice. The main intuition behind the HBF is a generalization of the Gaussian type of neuron that applies a Mahalanobis-like distance as the distance metric between an input training sample and the prototype vector. We exploit the concept of a neuron’s significance and allow growing and pruning of HBF neurons during the sequential learning process. From an engineer’s perspective, the EIF is attractive for training neural networks because it allows a designer to start with only scarce initial knowledge of the system/problem. An extensive experimental study shows that an HBF neural network trained with the EIF achieves the same prediction error and compactness of network topology as one trained with the EKF, but without the need to know the initial state uncertainty, which is its main advantage over the EKF.
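
    A minimal sketch of the HBF neuron described above might look as follows, assuming a diagonal per-dimension scaling of the Mahalanobis-like distance; the diagonal restriction and the function names are illustrative assumptions, not the paper's exact formulation or training procedure.

```python
# Sketch of a Hyper Basis Function (HBF) neuron: a Gaussian-type unit whose
# distance to its prototype is scaled per input dimension (a Mahalanobis-like
# distance with an assumed diagonal scaling matrix).
import numpy as np

def hbf_activation(x, prototype, inv_scales):
    """phi(x) = exp(-(x - c)^T S (x - c)) with diagonal S = diag(inv_scales)."""
    d = x - prototype
    return np.exp(-np.sum(inv_scales * d * d))

def hbf_network(x, prototypes, inv_scales, weights, bias=0.0):
    """Network output as a weighted sum of HBF neuron activations."""
    phis = np.array([hbf_activation(x, c, s)
                     for c, s in zip(prototypes, inv_scales)])
    return weights @ phis + bias
```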