18 research outputs found

    Predicting voluntary movements from motor cortical activity with neuromorphic hardware

    This document is the Accepted Manuscript version of the following article: A. Lungu, A. Riehle, M. P. Nawrot and M. Schmuker, "Predicting voluntary movements from motor cortical activity with neuromorphic hardware," in IBM Journal of Research and Development, vol. 61, no. 2/3, pp. 5:1-5:12, March-May 2017. The version of record is available online at doi: 10.1147/JRD.2017.2656063. © 2017 by International Business Machines Corporation.
    Neurons in the mammalian motor cortices encode physical parameters of voluntary movements during the planning and execution of a motor task. Brain-machine interfaces can decode limb movements from the activity of these neurons in real time. The long-term goal is to control prosthetic devices in severely paralyzed patients or to restore communication when the ability to speak or make gestures is lost. Here, we implemented a spiking neural network that decodes movement intentions from individual neuronal activity recorded in the motor cortex of a monkey. The network runs on neuromorphic hardware and performs its computations in a purely spike-based fashion. It incorporates an insect-brain-inspired, three-layer architecture with 176 neurons. Cortical signals are filtered using lateral inhibition, and the network is trained in a supervised fashion to predict two opposing directions of the monkey's arm-reaching movement before the movement is carried out. Our network operates on the actual spikes emitted by motor cortical neurons, without the need to construct intermediate non-spiking representations. Using a pseudo-population of 12 manually selected neurons, it reliably predicts the movement direction on unseen data with an accuracy of 89.32% after only 100 training trials. Our results provide a proof of concept for the first-time use of a neuromorphic device for decoding movement intentions.
    Peer reviewed
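The lateral-inhibition filtering mentioned in the abstract can be sketched in rate-based form. This is a simplified illustration, not the paper's spike-based implementation: the subtractive rule and the `inhibition` strength are assumptions made for the example.

```python
import numpy as np

def lateral_inhibition(rates, inhibition=0.8):
    """Subtractive lateral inhibition: each channel is suppressed in
    proportion to the mean activity of the other channels, which
    sharpens contrast across the population."""
    rates = np.asarray(rates, dtype=float)
    n = rates.size
    others_mean = (rates.sum() - rates) / (n - 1)  # mean of the other channels
    return np.maximum(rates - inhibition * others_mean, 0.0)  # rates stay non-negative

# The strongest channel survives; weaker channels are suppressed to zero.
print(lateral_inhibition([10.0, 12.0, 40.0]))
```

The effect is a soft winner-take-all: only channels well above the population average keep a nonzero rate, which is one way to denoise a small neuronal population before classification.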

    Soil phosphorus and potassium solubilization in an experiment with field crops in the Great Brăila Island

    Along with nitrogen, phosphorus and potassium are the most important nutrient elements for plants: they are found in all plant organs, are components of the substances needed for vital processes, and play important roles in many biochemical reactions. Only small fractions of the total soil phosphorus and potassium contents are accessible for plant nutrition. The influence of soil reaction (pH), humus content, and total element forms on phosphorus and potassium solubilization in ammonium acetate lactate solution at pH 3.7, down to a soil depth of 50 cm, was studied in an agro-chemical experiment carried out on six farms of the Great Brăila Island, with seven field crops fertilized in different ways with nitrogen, phosphorus, sulphur, and, for only one of the crops and in small quantities, potassium. Phosphorus contents available to plants were also computed, because in neutral to slightly alkaline soils they are not the same as the contents analytically determined in the extractant used. Phosphorus and potassium solubilization degrees were very significantly influenced by soil reaction and, in the case of potassium, by the organic matter content as well. Because of the neutral to slightly alkaline soil reaction, the phosphorus soluble in the ammonium acetate lactate solution and the phosphorus available to plants, obtained by computation, were influenced differently. Under the diverse fertilization systems, the effects registered under each crop were very significant for phosphorus and less so for potassium.

    Green Accounting in Romania - a Vision to European Integration

    The paper debates solutions, points of view, and a common language for Green Accounting. The main purposes of our research are the following: 1. define the object of Green Accounting; 2. delimit its scope; 3. examine theory and specific practices; 4. address disclosure and financial analysis; 5. present the Romanian experience in Green Accounting. How should Green Accounting be defined? Is Green Accounting a part of Environmental Accounting? How can the balance between business interests and environmental protection be ensured? Are environmental goals based on Total Quality Management? How to design for the environment? These are some of the questions discussed in this paper.

    Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification

    Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep convolutional neural networks (CNNs) can be converted into accurate spiking equivalents. Those networks did not include certain common operations such as max-pooling, softmax, batch normalization, and Inception modules. This paper presents spiking equivalents of these operations, thereby allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10, and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations, whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. Using LeNet for MNIST and BinaryNet for CIFAR-10 as examples, we show that, with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs, in particular when deployed on power-efficient neuromorphic spiking-neuron chips for use in embedded applications.
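The conversion principle rests on the observation that the firing rate of a non-leaky integrate-and-fire neuron driven by a constant input approximates a ReLU activation. A minimal sketch of that correspondence (the threshold, step count, and reset-by-subtraction choice here are illustrative assumptions, not the paper's full normalization scheme):

```python
def if_neuron_rate(activation, n_steps=1000, threshold=1.0):
    """Simulate a non-leaky integrate-and-fire neuron driven by a
    constant input and return its empirical firing rate (spikes/step)."""
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        v += activation          # constant input current each step
        if v >= threshold:
            v -= threshold       # reset by subtraction keeps residual charge
            spikes += 1
    return spikes / n_steps

# The firing rate tracks max(activation, 0), saturating at 1 spike/step.
for a in (0.0, 0.25, 0.5, -0.3):
    print(f"activation={a:+.2f} -> rate={if_neuron_rate(a):.3f}")
```

Negative inputs never reach threshold, reproducing the ReLU cutoff; this rate-code equivalence is what lets trained CNN weights be reused directly in the spiking network.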

    NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps

    Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving many state-of-the-art (SOA) visual processing tasks. Even though graphics processing units (GPUs) are most often used in training and deploying CNNs, their power efficiency is less than 10 GOp/s/W for single-frame runtime inference. We propose a flexible and efficient CNN accelerator architecture called NullHop that implements SOA CNNs useful for low-power and low-latency application scenarios. NullHop exploits the sparsity of neuron activations in CNNs to accelerate the computation and reduce memory requirements. The flexible architecture allows high utilization of available computing resources across kernel sizes ranging from 1x1 to 7x7. NullHop can process up to 128 input and 128 output feature maps per layer in a single pass. We implemented the proposed architecture on a Xilinx Zynq FPGA platform and present results showing how our implementation reduces external memory transfers and compute time in five different CNNs, ranging from small ones up to the widely known large VGG16 and VGG19 CNNs. Post-synthesis simulations using Mentor Modelsim in a 28 nm process with a clock frequency of 500 MHz show that the VGG19 network achieves over 450 GOp/s. By exploiting sparsity, NullHop achieves an efficiency of 368%, maintains over 98% utilization of the MAC units, and achieves a power efficiency of over 3 TOp/s/W in a core area of 6.3 mm^2. As further proof of NullHop's usability, we interfaced its FPGA implementation with a neuromorphic event camera for real-time interactive demonstrations.
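The benefit of exploiting activation sparsity can be illustrated with a toy multiply-accumulate (MAC) count on a ReLU feature map. The map size and the 3x3 kernel below are hypothetical choices for the example; the real accelerator additionally compresses the sparse maps in memory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy post-ReLU feature map: roughly half the activations are exactly zero.
fmap = np.maximum(rng.normal(size=(16, 16, 8)), 0.0)

total = fmap.size
nonzero = np.count_nonzero(fmap)
sparsity = 1.0 - nonzero / total
print(f"activation sparsity: {sparsity:.1%}")

# With a 3x3 kernel, every input pixel contributes 9 MACs to the output.
# A zero-skipping datapath performs MACs only for nonzero inputs, so its
# work scales with the nonzero count rather than the full map size.
dense_macs = total * 9
skipped_macs = nonzero * 9
print(f"MACs: dense={dense_macs}, zero-skipping={skipped_macs}")
```

Since ReLU outputs of a zero-mean input are zero about half the time, zero-skipping roughly halves the MAC count in this toy case, which is the mechanism behind the reported >100% effective efficiency.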

    Incremental Learning of Hand Symbols Using Event-Based Cameras

    Conventional cameras create redundant output, especially when the frame rate is high. Dynamic vision sensors (DVSs), on the other hand, generate asynchronous and sparse brightness-change events only when an object in the field of view is in motion. Such event-based output can be processed as a 1D time sequence, or it can be converted to 2D frames that resemble conventional camera frames. Frames created, e.g., by accumulating a fixed number of events can be used as input for conventional deep learning algorithms, thus upgrading existing computer vision pipelines through low-power, low-redundancy sensors. This paper describes a hand symbol recognition system that can quickly be trained to incrementally learn new symbols recorded with an event-based camera, without forgetting previously learned classes. By using the iCaRL incremental learning algorithm, we show that we can learn up to 16 new symbols, using only 4,000 samples for each symbol, and achieve a final symbol accuracy of over 80%. The system achieves a latency of under 0.5 s, and training requires 3 minutes for 5 epochs on an NVIDIA 1080Ti GPU.
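Accumulating a fixed number of events into a frame amounts to a 2D histogram over event coordinates. A minimal sketch, in which the `(x, y, polarity)` tuple format, the 128x128 resolution, and the 2000-event budget are assumptions for illustration:

```python
import numpy as np

def events_to_frame(events, shape=(128, 128), n_events=2000):
    """Accumulate the first n_events (x, y, polarity) events into a 2D
    frame; ON events add +1 and OFF events add -1 at their pixel."""
    frame = np.zeros(shape, dtype=np.int32)
    for x, y, pol in events[:n_events]:   # fixed event count, not fixed time
        frame[y, x] += 1 if pol else -1
    return frame

# Synthetic event stream: a moving vertical edge at column 10.
rng = np.random.default_rng(1)
events = [(10, int(y), 1) for y in rng.integers(0, 128, size=3000)]
frame = events_to_frame(events)
print(frame[:, 10].sum())  # every accumulated event lands in column 10
```

Because the frame closes after a fixed event count rather than a fixed exposure time, fast motion yields high frame rates and a static scene yields none, which is the low-redundancy property the abstract describes.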

    Siamese Networks for Few-Shot Learning on Edge Embedded Devices

    Edge artificial intelligence hardware mainly targets inference networks that have been pretrained on massive datasets. The field of few-shot learning looks for methods that allow a network to produce high accuracy even when only a few samples of each class are available. Siamese networks can be used to tackle few-shot learning problems and are unique in that they do not require retraining on samples of the new classes. They are therefore suitable for edge hardware accelerators, which often do not include on-chip training capabilities. This work describes improvements to a baseline Siamese network and benchmarking of the improved network on edge platforms. The modifications to the baseline network include multi-resolution kernels, a hybrid training process, and a different embedding similarity computation method. This network shows an average accuracy improvement of up to 22% across 4 datasets in a 5-way, 1-shot classification task. Benchmarking results using three edge computing platforms (NVIDIA Jetson Nano, Coral Edge TPU, and a custom convolutional neural network accelerator) show that a Siamese classifier can run on these devices at frame rates suitable for real-time performance, between 3 frames per second (FPS) on the Jetson Nano and 60 FPS on the Edge TPU. By increasing weight sparsity during training, the frame rate of a network with 25% weight sparsity increases by 10 FPS, with only a 1% drop in accuracy.
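The retraining-free property comes from classifying by embedding similarity alone: a new class needs only a stored support embedding, not a weight update. A minimal 5-way, 1-shot sketch using random stand-in embeddings (a real system would produce these with the trained Siamese encoder; L2 distance is one of several possible similarity measures):

```python
import numpy as np

def one_shot_classify(query_emb, support_embs):
    """5-way 1-shot: assign the query to the class whose single support
    embedding is nearest in L2 distance. Adding a class means adding a
    row to support_embs -- no retraining involved."""
    dists = np.linalg.norm(support_embs - query_emb, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(2)
support = rng.normal(size=(5, 64))                # one embedding per novel class
query = support[3] + 0.05 * rng.normal(size=64)   # noisy view of class 3
print(one_shot_classify(query, support))          # -> 3
```

On an edge accelerator, only the encoder's forward pass runs on-chip; the distance comparison above is cheap enough to run on the host CPU.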

    Live demonstration: Convolutional neural network driven by dynamic vision sensor playing RoShamBo

    This demonstration presents a convolutional neural network (CNN) playing "RoShamBo" ("rock-paper-scissors") against human opponents in real time. The network is driven by Dynamic and Active-pixel Vision Sensor (DAVIS) events, acquired by accumulating events into fixed-event-number frames.

    Fast event-driven incremental learning of hand symbols

    This paper describes a hand symbol recognition system that can quickly be trained to incrementally learn to recognize new symbols using about 100 times less data and time than conventional training. It is driven by frames from a Dynamic Vision Sensor (DVS) event camera. Conventional cameras have very redundant output, especially at high frame rates. Dynamic vision sensors output sparse and asynchronous brightness-change events that occur when an object or the camera is moving. Images consisting of a fixed number of events from a DVS drive recognition and incremental learning of new hand symbols in the context of a RoShamBo (rock-paper-scissors) demonstration. Conventional training on the original RoShamBo dataset requires about 12.5 h of compute time on a desktop GPU using the 2.5 million images in the base dataset. Novel symbols that a user shows to the system for a few tens of seconds can be learned on the fly using the iCaRL incremental learning algorithm with 3 minutes of training time on a desktop GPU, while preserving recognition accuracy on previously trained symbols. Our system runs a residual network with 32 layers and maintains an overall accuracy of 88.4% after 100 epochs (77% after 5 epochs) after 4 incremental training stages. Each stage adds 2 novel symbols to the base 4 symbols. The paper also reports an inexpensive robot hand used for live demonstrations of the base RoShamBo game.
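iCaRL classifies with a nearest-mean-of-exemplars rule: each class is represented by the mean feature vector of its stored exemplars, and a sample is assigned to the nearest class mean. A sketch on synthetic features (the feature dimension, exemplar counts, and class separations below are illustrative assumptions):

```python
import numpy as np

def nearest_mean_classify(feature, class_exemplars):
    """iCaRL-style nearest-mean-of-exemplars rule: represent each class
    by the mean of its stored exemplar features, then classify a sample
    by its nearest class mean (L2 distance)."""
    means = [np.mean(ex, axis=0) for ex in class_exemplars]
    dists = [np.linalg.norm(feature - m) for m in means]
    return int(np.argmin(dists))

rng = np.random.default_rng(3)
# Two previously learned classes plus one newly added class, with a small
# exemplar set kept per class -- adding a class only adds an exemplar set.
exemplars = [rng.normal(loc=c, size=(10, 32)) for c in (0.0, 5.0, -5.0)]
sample = rng.normal(loc=5.0, size=32)  # feature drawn near class 1's mean
print(nearest_mean_classify(sample, exemplars))  # -> 1
```

Keeping only a bounded exemplar set per class is what lets new symbols be added in minutes while old classes remain represented, mitigating catastrophic forgetting.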

    Cryptocurrencies’ Impact on Accounting: Bibliometric Review

    This bibliometric study explores the cryptocurrency accounting (CA) literature and the connections between authors, institutions, and countries, in a field where cryptocurrency activity involves transactions that must be legally recognized in accounting, ensure accuracy and reliability for auditing, and adhere to tax compliance. The design involves the selection of data from the Web of Science Core Collection (WoS) and Scopus, published between 2007 and 2023. The technique helps identify influential publications, collaboration networks, thematic clusters, and trends in research on CA using the tools VOSviewer, Biblioshiny, and MS Excel. The originality of the study lies in its dual role as a support for accounting professionals and academics in developing innovative solutions for the challenges posed by crypto technology across core accounting areas: financial and managerial accounting, taxation, and auditing. The findings offer insights into the themes mentioned, and even though collaboration between authors is not yet well developed, the innovation and public recognition of the subject could raise researchers' interest. A limitation of the dataset is that it does not cover relevant publications outside the retrieval window, 9-11 May 2024. This review may need periodic updates because the CA landscape is constantly changing.