216 research outputs found

    Advanced Computing and Related Applications Leveraging Brain-inspired Spiking Neural Networks

    Full text link
    Amid the rapid evolution of next-generation brain-inspired artificial intelligence and an increasingly complex electromagnetic environment, the highly bionic character and anti-interference performance of spiking neural networks show great potential for computational speed, real-time information processing, and spatio-temporal data processing. The spiking neural network is one of the cores of brain-like artificial intelligence: it realizes brain-like computing by simulating the structure and information-transfer mode of biological neural networks. This paper summarizes the strengths, weaknesses, and applicability of five neuronal models and analyzes the characteristics of five network topologies; it then reviews spiking neural network algorithms, summarizing, from the perspectives of unsupervised and supervised learning, the unsupervised learning algorithms based on synaptic plasticity rules and four types of supervised learning algorithms; finally, it reviews brain-like neuromorphic chips under development at home and abroad. Through these systematic summaries, the paper is intended to provide learning concepts and research orientations for peers who are new to the field of spiking neural networks.
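As background for the neuronal models such a survey compares, here is a minimal sketch of the leaky integrate-and-fire (LIF) neuron, one of the most commonly compared spiking models; the function name and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Minimal leaky integrate-and-fire (LIF) neuron; illustrative parameters.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_thresh=1.0):
    """Integrate an input-current trace and return spike times (step indices)."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input.
        v += dt / tau * (v_rest - v) + dt * i_in
        if v >= v_thresh:      # threshold crossing emits a spike
            spikes.append(t)
            v = v_reset        # reset after firing
    return spikes

# Constant suprathreshold drive produces regular spiking; zero drive is silent.
regular = simulate_lif([0.06] * 100)
silent = simulate_lif([0.0] * 100)
```

With these parameters the membrane potential converges toward 1.2, so it repeatedly crosses the threshold of 1.0; halving the drive would keep it subthreshold forever, which is the qualitative behavior that distinguishes LIF from richer models such as Izhikevich or Hodgkin-Huxley.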

    The Morse Code Room: Applicability of the Chinese Room Argument to Spiking Neural Networks

    Get PDF
    The Chinese room argument (CRA) was first stated in 1980. Since then computer technologies have improved, and today spiking neural networks (SNNs) are "arguably the only viable option if one wants to understand how the brain computes" (Tavanaei et al. 2019: 47). SNNs differ in various important respects from the digital computers the CRA was directed against. The objective of the present work is to explore whether the CRA applies to SNNs. In the first chapter I discuss computationalism and the Chinese room argument and give a brief overview of spiking neural networks. The second chapter is concerned with five important differences between SNNs and digital computers: (1) massive parallelism, (2) subsymbolic computation, (3) machine learning, (4) analogue representation, and (5) temporal encoding. I conclude that, apart from minor limitations, the Chinese room argument can be applied to spiking neural networks.
    Contents: 1 Introduction; 2 Theoretical background; 2.I Strong AI: Computationalism; 2.II The Chinese room argument; 2.III Spiking neural networks; 3 Applicability to spiking neural networks; 3.I Massive parallelism; 3.II Subsymbolic computation; 3.III Machine learning; 3.IV Analogue representation; 3.V Temporal encoding; 3.VI The Morse code room and its replies; 3.VII Some more general considerations regarding hardware and software; 4 Conclusion

    Runtime Construction of Large-Scale Spiking Neuronal Network Models on GPU Devices

    Get PDF
    Simulation speed matters for neuroscientific research: this includes not only how quickly the simulated model time of a large-scale spiking neuronal network progresses but also how long it takes to instantiate the network model in computer memory. On the hardware side, acceleration via highly parallel GPUs is being increasingly utilized. On the software side, code generation approaches ensure highly optimized code at the expense of repeated code regeneration and recompilation after modifications to the network model. Aiming for greater flexibility with respect to iterative model changes, here we propose a new method for creating network connections interactively, dynamically, and directly in GPU memory through a set of commonly used high-level connection rules. We validate the simulation performance with both consumer and data center GPUs on two neuroscientifically relevant models: a cortical microcircuit of about 77,000 leaky integrate-and-fire neuron models and 300 million static synapses, and a two-population network recurrently connected using a variety of connection rules. With our proposed ad hoc network instantiation, both network construction and simulation times are comparable to or shorter than those obtained with other state-of-the-art simulation technologies while still meeting the flexibility demands of explorative network modeling.
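The high-level connection rules mentioned above can be illustrated with a CPU-side sketch of one common rule, pairwise-Bernoulli (fixed-probability) connectivity; the function name, signature, and flat coordinate-list output are assumptions for illustration, not the paper's API. The same per-pair independence is what makes such rules easy to evaluate in parallel on a GPU.

```python
import numpy as np

def connect_fixed_probability(n_source, n_target, p, rng=None,
                              allow_autapses=True):
    """Pairwise-Bernoulli rule: each (source, target) pair is connected
    independently with probability p. Returns (sources, targets) index
    arrays, a flat format a GPU kernel could fill in parallel."""
    rng = np.random.default_rng(rng)
    # One independent Bernoulli draw per candidate pair.
    mask = rng.random((n_source, n_target)) < p
    if not allow_autapses and n_source == n_target:
        np.fill_diagonal(mask, False)  # forbid self-connections
    sources, targets = np.nonzero(mask)
    return sources, targets

# Two populations of 1000 neurons, 10% connection probability:
# expect roughly 100,000 synapses.
src, tgt = connect_fixed_probability(1000, 1000, 0.1, rng=42)
```

Other rules from the same family (fixed in-degree, fixed out-degree, one-to-one) differ only in how the mask or index arrays are generated.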

    Close to the metal: Towards a material political economy of the epistemology of computation

    Get PDF
    This paper investigates the role of the materiality of computation in two domains: blockchain technologies and artificial intelligence (AI). Although historically designed as parallel computing accelerators for image rendering and videogames, graphics processing units (GPUs) have been instrumental in the explosion of both cryptoasset mining and machine learning models. The political economy associated with video games and Bitcoin and Ethereum mining drove staggering growth in performance and energy efficiency, and this, in turn, fostered a change in the epistemological understanding of AI: from rules-based or symbolic AI towards the matrix multiplications underpinning connectionism, machine learning, and neural nets. Combining a material political economy of markets with a material epistemology of science, the article shows that there is no clear-cut division between software and hardware, between instructions and tools, and between frameworks of thought and the material and economic conditions of possibility of thought itself. As the microchip shortage and the growing geopolitical relevance of the hardware and semiconductor supply chain come to the fore, the paper invites social scientists to engage more closely with the materialities and hardware architectures of 'virtual' algorithms and software.

    Design and Implementation of FPGA-based Hardware Accelerator for Bayesian Confidence Propagation Neural Network

    Get PDF
    The Bayesian confidence propagation neural network (BCPNN) has been widely used in neural computation and machine learning. However, current implementations of BCPNN are not computationally efficient enough, especially in the update of synaptic state variables. This thesis proposes a hardware accelerator for the training and inference process of BCPNN. The hardware design employs several techniques, including a hybrid update mechanism, a customized LUT-based design for exponential operations, and an optimized design that maximizes parallelism. The proposed hardware accelerator is implemented on an FPGA device. The results show that the computing speed of the accelerator exceeds that of its CPU counterpart by two orders of magnitude. In addition, the computational modules of the accelerator can be reused to reduce hardware overheads while achieving comparable computing performance. The accelerator's potential to facilitate efficient implementation of large-scale BCPNN networks opens up the possibility of realizing higher-level cognitive phenomena, such as associative memory and working memory.
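A LUT-based design for exponential operations, as mentioned above, can be sketched in software: precompute exp(-x) at fixed intervals and replace the expensive exponential with a table lookup. The table size, input range, and nearest-entry indexing below are illustrative assumptions, not the thesis's actual design.

```python
import math

# Precomputed lookup table for exp(-x) on [0, X_MAX); illustrative sizing.
X_MAX = 8.0
N_ENTRIES = 256
STEP = X_MAX / N_ENTRIES
LUT = [math.exp(-i * STEP) for i in range(N_ENTRIES)]

def exp_neg_lut(x):
    """Approximate exp(-x) for x >= 0 by nearest-lower-entry lookup.
    Inputs beyond the table range underflow to 0, as exp(-8) < 4e-4."""
    if x >= X_MAX:
        return 0.0
    return LUT[int(x / STEP)]
```

In hardware, the division by STEP reduces to a shift when STEP is a power of two (as here, 1/32), so the whole operation costs one shift and one memory read instead of a floating-point exponential.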

    Neuromorphic Computing between Reality and Future Needs

    Get PDF
    Neuromorphic computing is a computer-engineering approach that models system elements on the human brain and nervous system. Many sciences, including biology, mathematics, electronic engineering, computer science, and physics, have been integrated to construct artificial neural systems. This chapter presents the basics of neuromorphic computing together with existing systems, covering their materials, devices, and circuits. The last part covers algorithms and applications in selected fields.

    Jornadas Nacionales de Investigación en Ciberseguridad: actas de las VIII Jornadas Nacionales de Investigación en ciberseguridad: Vigo, 21 a 23 de junio de 2023

    Get PDF
    Jornadas Nacionales de Investigación en Ciberseguridad (8ª. 2023. Vigo). atlanTTic. AMTEGA: Axencia para a modernización tecnolóxica de Galicia. INCIBE: Instituto Nacional de Ciberseguridad.

    Intelligent computing : the latest advances, challenges and future

    Get PDF
    Computing is a critical driving force in the development of human civilization. In recent years, we have witnessed the emergence of intelligent computing, a new computing paradigm that is reshaping traditional computing and promoting the digital revolution in the era of big data, artificial intelligence, and the internet of things with new computing theories, architectures, methods, systems, and applications. Intelligent computing has greatly broadened the scope of computing, extending it from traditional computing on data to increasingly diverse paradigms such as perceptual intelligence, cognitive intelligence, autonomous intelligence, and human-computer fusion intelligence. Intelligence and computing long followed separate paths of evolution and development but have become increasingly intertwined in recent years: intelligent computing is not only intelligence-oriented but also intelligence-driven. Such cross-fertilization has prompted the emergence and rapid advancement of intelligent computing.

    Cyber Threat Intelligence based Holistic Risk Quantification and Management

    Get PDF

    A Survey of Spiking Neural Network Accelerator on FPGA

    Full text link
    Due to its ability to implement customized topologies, the FPGA is increasingly used to deploy SNNs in both embedded and high-performance applications. In this paper, we survey state-of-the-art SNN implementations and their applications on FPGA. We collect the recently widely used spiking neuron models, network structures, and signal encoding formats, followed by an enumeration of related hardware design schemes for FPGA-based SNN implementations. Compared with previous surveys, this manuscript enumerates the application instances that applied the above-mentioned technical schemes in recent research. On that basis, we discuss the actual acceleration potential of implementing SNNs on FPGAs. We close by discussing upcoming trends and offering a guideline for further advancement in related subjects.
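Two of the signal-encoding formats such surveys cover, rate coding and time-to-first-spike (TTFS) coding, can be sketched as follows; the window length and the deterministic encoding schemes are illustrative assumptions, not taken from the survey.

```python
def rate_encode(value, n_steps):
    """Rate coding: an input in [0, 1] becomes a binary spike train whose
    spike count over the window is proportional to the value, with the
    spikes spread evenly across the window."""
    n_spikes = round(value * n_steps)
    train = [0] * n_steps
    for k in range(n_spikes):
        train[k * n_steps // max(n_spikes, 1)] = 1
    return train

def ttfs_encode(value, n_steps):
    """Time-to-first-spike coding: one spike per window, and larger inputs
    spike earlier; a zero input never spikes."""
    train = [0] * n_steps
    if value > 0:
        t = min(n_steps - 1, int((1.0 - value) * n_steps))
        train[t] = 1
    return train

# value 0.5 over a 10-step window: 5 spikes under rate coding,
# a single mid-window spike under TTFS coding.
rate_train = rate_encode(0.5, 10)
ttfs_train = ttfs_encode(0.5, 10)
```

Rate coding is robust but needs many timesteps per value; TTFS carries the same value in a single spike, which is why it is attractive for low-latency, low-power FPGA accelerators.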