
    SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence

    Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency by introducing neural dynamics and spike properties. As the emerging spiking deep learning paradigm attracts increasing interest, traditional programming frameworks cannot meet the demands of automatic differentiation, parallel computation acceleration, and the tight integration of neuromorphic dataset processing and deployment. In this work, we present the SpikingJelly framework to address this dilemma. We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips. Compared to existing methods, the training of deep SNNs can be accelerated 11×, and the superior extensibility and flexibility of SpikingJelly enable users to accelerate custom models at low cost through multilevel inheritance and semiautomatic code generation. SpikingJelly paves the way for synthesizing truly energy-efficient SNN-based machine intelligence systems, which will enrich the ecology of neuromorphic computing. Comment: Accepted in Science Advances (https://www.science.org/doi/10.1126/sciadv.adi1480).
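    As a rough illustration of the workflow the abstract describes (not code from the paper), a deep SNN might be assembled and run with SpikingJelly roughly as follows, assuming the activation_based API of a recent release; the layer sizes, number of time steps and LIF time constant are placeholders.

        import torch
        import torch.nn as nn
        from spikingjelly.activation_based import neuron, layer, surrogate, functional

        # Spiking neurons replace the usual activations; a surrogate gradient
        # makes the non-differentiable spike trainable with autograd.
        net = nn.Sequential(
            layer.Flatten(),
            layer.Linear(28 * 28, 100),
            neuron.LIFNode(tau=2.0, surrogate_function=surrogate.ATan()),
            layer.Linear(100, 10),
            neuron.LIFNode(tau=2.0, surrogate_function=surrogate.ATan()),
        )

        T = 8                                     # simulation time steps
        x = torch.rand(16, 1, 28, 28)             # dummy input batch
        out = sum(net(x) for _ in range(T)) / T   # firing rate used as the prediction
        functional.reset_net(net)                 # clear membrane state between samples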

    Algorithm Hardware Codesign for High Performance Neuromorphic Computing

    Driven by the massive adoption of the Internet of Things (IoT), embedded systems, Cyber-Physical Systems (CPS), and similar technologies, there is an increasing demand to apply machine intelligence in power-limited scenarios. Though deep learning has achieved impressive performance on various realistic and practical tasks such as anomaly detection, pattern recognition, and machine vision, the ever-increasing computational complexity and model size of Deep Neural Networks (DNNs) make it challenging to deploy them in the aforementioned scenarios, where computation, memory and energy resources are all limited. Early studies show that the energy efficiency of biological systems can be orders of magnitude higher than that of digital systems. Hence, taking inspiration from biological systems, neuromorphic computing and Spiking Neural Networks (SNNs) have drawn attention as alternative solutions for energy-efficient machine intelligence. Though believed promising, neuromorphic computing is hardly used for real-world applications. A major problem is that the performance of SNNs is limited compared with that of DNNs due to the lack of efficient training algorithms. In an SNN, a neuron's output is a spike, which is mathematically represented by a Dirac delta function. Because of the non-differentiable nature of spikes, gradient descent cannot be directly used to train SNNs, so algorithm-level innovation is needed. Next, as neuromorphic computing is an emerging computing paradigm, hardware- and architecture-level innovation is also required to support new algorithms and to explore its potential. In this work, we present a comprehensive algorithm-hardware codesign for neuromorphic computing. On the algorithm side, we address the training difficulty: we first derive a flexible SNN model that retains critical neural dynamics, and then develop an algorithm to train the SNN to learn temporal patterns. We apply the proposed algorithm to multivariate time-series classification tasks to demonstrate its advantages. On the hardware side, we develop a systematic FPGA solution optimized for the proposed SNN model to enable high-performance inference. In addition, we explore emerging devices and propose a memristor-based neuromorphic design, including neuron and synapse circuits that replicate important neural dynamics such as the filter effect and adaptive threshold.
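    The abstract does not give the thesis's neuron model or training rule; purely as an illustration of the dynamics it names (the leaky "filter effect" and an adaptive threshold), and of why the spike is the non-differentiable step, a generic discrete-time leaky integrate-and-fire update might look like the following sketch, with all constants as placeholders.

        import numpy as np

        def lif_adaptive(inputs, tau_m=20.0, tau_th=80.0, v_th0=1.0, beta=0.5, dt=1.0):
            """Generic leaky integrate-and-fire neuron with an adaptive threshold."""
            v, th = 0.0, v_th0
            spikes = np.zeros_like(inputs)
            for t, i_in in enumerate(inputs):
                v += dt / tau_m * (-v + i_in)      # leaky integration: the "filter effect"
                th += dt / tau_th * (v_th0 - th)   # threshold relaxes back to baseline
                if v >= th:                        # hard threshold: this comparison is the
                    spikes[t] = 1.0                # non-differentiable step that surrogate
                    v = 0.0                        # gradients smooth during training
                    th += beta                     # each spike raises the threshold (adaptation)
            return spikes

        spike_train = lif_adaptive(np.random.rand(200))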

    BrainFrame: From FPGA to heterogeneous acceleration of brain simulations

    Among the various methods in neuroscience for understanding brain function, in-silico simulations have been gaining popularity. Advances in neuroscience and engineering have led to the creation of mathematical models of networks that do not simply mimic biological behaviour in an abstract fashion but emulate it in significant detail, even down to its biophysical properties. Such an example is the Spiking Neural Network (SNN), which can model a variety of additional behavioural features, like encoding data and adapting according to a spike train's amplitude, frequency and the precise pattern of arrival of spiking events at a neuron. As a result, SNNs have higher explanatory power than their predecessors, and brain simulations based on SNNs have become an attractive topic to explore. In-silico simulations of SNNs can benefit not only neuroscience research; breakthroughs can also potentially benefit medical, computing and A.I. research. SNNs, though, are computationally demanding workloads that traditional computing might not be able to cover. Thus, the use of High Performance Computing (HPC) platforms in this application domain becomes desirable. This dissertation explores the topic of HPC-based in-silico brain simulations. Initially, the effort focuses on custom hardware accelerators, and specifically Field Programmable Gate Arrays (FPGAs), due to their potential to provide real-time performance alongside support for large-scale non-real-time experiments. The nature of FPGA-based accelerators provides specific benefits over similar paradigms like Application-Specific Integrated Circuit (ASIC) designs. Firstly, we explore the general characteristics of typical SNN model types to identify their computational requirements in relation to their explanatory strength. We also identify major design characteristics in model development that can directly affect performance and behaviour when a model is ported to an HPC platform. Subsequently, a detailed literature review is made of FPGA-based SNN implementations. The HPC porting effort begins with the implementation of an extended Hodgkin-Huxley model of the inferior-olivary nucleus featuring advanced connectivity. The model is demanding and complex enough to act as a realistic benchmark for HPC implementations, while also being scientifically relevant in its own right. FPGA development shows promising performance results not only for custom designs but also when using High-Level Synthesis (HLS) toolflows that significantly reduce development time. FPGAs have proven suitable for small-scale embedded-HPC uses as well. The various efforts, though, reveal a very specific weakness of FPGA development that has less to do with the silicon itself and more with its programming environment: the FPGA tools are very inaccessible to non-experts, so any acceleration effort places the engineer (and the FPGA development time) in the critical path of the research process. An important question is how the FPGA platform compares to other popular software-based HPC solutions such as GPU- and CPU-based platforms. A detailed comparison of the best FPGA implementation with GPU and manycore-CPU ports of the same benchmark is conducted. The comparison and evaluation show that, when it comes to real-time performance, FPGAs have a clear advantage. But for non-real-time, large-scale simulations, there is no single platform that can optimally support the complete range of experiments that could be conducted with the inferior-olive model. The comparison makes a clear case for BrainFrame, a platform that supports heterogeneous HPC substrates. This dissertation therefore concludes with the proposal of the BrainFrame system. The proof-of-concept design supports standard and extended Hodgkin-Huxley models, such as the original inferior-olive model. The system integrates GPU-, CPU- and FPGA-based HPC back-ends with a standard neuroscientific-language front-end (PyNN), which can score best-in-class performance, alleviate some of the development hurdles and make the platform far more user-friendly for the typical model developer. Additionally, the multi-node potential of the platform is explored. BrainFrame provides both a powerful heterogeneous platform for acceleration and a front-end familiar to the neuroscientist.
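    The inferior-olive benchmark itself is not reproduced in the abstract; the following is only a minimal sketch of what a PyNN front-end like the one described above looks like to a model developer, assuming the standard PyNN 0.9-style API, with a built-in Hodgkin-Huxley cell type standing in for the extended inferior-olive model (population sizes, rates and weights are placeholders).

        # Minimal PyNN sketch: the same script could target a different back-end
        # simply by changing the imported simulator module (here NEST).
        import pyNN.nest as sim

        sim.setup(timestep=0.1)                                  # ms
        cells = sim.Population(100, sim.HH_cond_exp())           # built-in Hodgkin-Huxley neurons
        noise = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
        sim.Projection(noise, cells, sim.OneToOneConnector(),
                       sim.StaticSynapse(weight=0.01, delay=1.0))
        cells.record(['spikes', 'v'])                            # spikes and membrane voltage
        sim.run(1000.0)                                          # simulate 1 s
        data = cells.get_data()
        sim.end()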

    Simulation Intelligence: Towards a New Generation of Scientific Methods

    The original "Seven Motifs" set forth a roadmap of essential methods for the field of scientific computing, where a motif is an algorithmic method that captures a pattern of computation and data movement. We present the "Nine Motifs of Simulation Intelligence", a roadmap for the development and integration of the essential algorithms necessary for a merger of scientific computing, scientific simulation, and artificial intelligence. We call this merger simulation intelligence (SI), for short. We argue the motifs of simulation intelligence are interconnected and interdependent, much like the components within the layers of an operating system. Using this metaphor, we explore the nature of each layer of the simulation intelligence operating system stack (SI-stack) and the motifs therein: (1) Multi-physics and multi-scale modeling; (2) Surrogate modeling and emulation; (3) Simulation-based inference; (4) Causal modeling and inference; (5) Agent-based modeling; (6) Probabilistic programming; (7) Differentiable programming; (8) Open-ended optimization; (9) Machine programming. We believe coordinated efforts between motifs offers immense opportunity to accelerate scientific discovery, from solving inverse problems in synthetic biology and climate science, to directing nuclear energy experiments and predicting emergent behavior in socioeconomic settings. We elaborate on each layer of the SI-stack, detailing the state-of-art methods, presenting examples to highlight challenges and opportunities, and advocating for specific ways to advance the motifs and the synergies from their combinations. Advancing and integrating these technologies can enable a robust and efficient hypothesis-simulation-analysis type of scientific method, which we introduce with several use-cases for human-machine teaming and automated science

    Model Order Reduction for Modeling the Brain

    In this thesis, we study the use of Model Order Reduction (MOR) methods for accelerating and reducing the computational burden of brain simulations. Mathematical modeling and numerical simulations are the primary tools of computational neuroscience, a field that strives to understand the brain by combining data and theories. Due to the complexity of brain cells and the neuronal networks they form, computer simulations cannot consider neuronal networks in biologically realistic detail. We apply MOR methods to derive lightweight reduced-order models and show that they can approximate models of neuronal networks. Reduced-order models may thus enable more detailed and large-scale simulations of neuronal systems. We selected several mathematical models that are used in neuronal network simulations, ranging from synaptic signaling to neuronal population models, to use as reduction targets in this thesis. We implemented the models and determined the mathematical requirements for applying MOR to each model. We then identified suitable MOR algorithms for each model and established efficient implementations of our selected methods. Finally, we evaluated the accuracy and speed of our reduced-order models. Our studies apply MOR to model types that were not previously reduced using these methods, widening the possibilities for the use of MOR in computational neuroscience and deep learning. In summary, the results of this thesis show that MOR can be an effective acceleration strategy for neuronal network models, making it a valuable tool for building large-scale simulations of the brain. MOR methods have the advantage that the reduced model can be used to reconstruct the original detailed model; hence the reduction process does not discard variables or decrease morphological resolution. We identified Proper Orthogonal Decomposition (POD) combined with the Discrete Empirical Interpolation Method (DEIM) as the most suitable tool for reducing our selected models. Additionally, we implemented several recent advanced variants of these methods. The primary obstacle to applying MOR in neuroscience is the nonlinearity of neuronal models, and POD-DEIM can account for that complexity. Extensions of the Balanced Truncation and Iterative Rational Krylov Approximation methods to nonlinear systems also show promise, but have stricter requirements than POD-DEIM with regard to the structure of the original model. Excellent accuracy and acceleration were obtained when reducing a high-dimensional mean-field model of a neuronal network and a model of chemical reactions in the synapse using the POD-DEIM method. We also found that a biophysical network model, which describes action potentials through ionic currents, benefits from adaptive MOR methods that update the reduced model during the simulation phase. We further show that MOR can be integrated into deep learning networks and that MOR is an effective reduction strategy for convolutional networks, used for example in vision research. Our results validate MOR as a powerful tool for accelerating simulations of nonlinear neuronal networks. Based on the original publications of this thesis, we can conclude that several models and model types of neuronal phenomena that were not previously reduced can be successfully accelerated using MOR methods. In the future, integrating MOR into brain simulation tools will enable faster development of models and the extraction of new knowledge from numerical studies through improved model efficiency, resolution and scale.
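    As a minimal sketch of the POD step that POD-DEIM builds on (the DEIM treatment of nonlinear terms and the actual neuronal models reduced in the thesis are omitted; the snapshot data below is a placeholder trajectory), a reduced basis is obtained from the singular value decomposition of a snapshot matrix:

        import numpy as np

        n, m, r = 1000, 200, 10                    # state dimension, snapshots, reduced order
        x = np.linspace(0.0, 1.0, n)[:, None]
        t = np.linspace(0.0, 1.0, m)
        snapshots = np.exp(-x) * np.sin(2 * np.pi * (x - t))   # placeholder state trajectory

        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        V = U[:, :r]                               # POD basis: leading r left singular vectors

        z = V.T @ snapshots                        # reduced coordinates, r x m instead of n x m
        reconstruction = V @ z                     # lift back to the full state space
        err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
        print(f"relative reconstruction error with r = {r}: {err:.2e}")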

    Promoting Sustainability through Next-Generation Biologics Drug Development

    The fourth industrial revolution, launched in 2011, aimed to transform traditional manufacturing processes. As part of this revolution, disruptive innovations in drug development and data science approaches have the potential to optimize CMC (chemistry, manufacturing, and controls). The real-time simulation of processes using “digital twins” can maximize efficiency while improving sustainability. As part of this review, we investigate how the United Nations' 17 Sustainable Development Goals can apply to next-generation drug development. We analyze state-of-the-art laboratory leadership, inclusive personnel recruiting, the latest therapy approaches, and intelligent process automation. We also outline how modern data science techniques and machine learning tools for CMC help to shorten drug development time, reduce failure rates, and minimize resource usage. Finally, we systematically analyze existing approaches and compare them with our experience with the high-throughput laboratory KIWI-biolab at TU Berlin. We describe a sustainable business model that accelerates scientific innovations and supports global action toward a sustainable future. Funding: BMBF, 01DD20002A, collaborative project "International Future Laboratory for AI-supported Bioprocess Development (KIWI-biolab)"; subproject: coordination and establishment of an AI center of excellence.

    The role of mathematical models in designing mechanopharmacological therapies for asthma

    Healthy lung function depends on a complex system of interactions that regulate the mechanical and biochemical environment from individual cells to the whole organ. Perturbations of these regulated processes give rise to significant lung dysfunction, such as the chronic inflammation, airway hyperresponsiveness and airway remodelling characteristic of asthma. Importantly, there is ongoing mechanobiological feedback in which mechanical factors, including airway stiffness and oscillatory loading, have considerable influence over cell behaviour. The recently proposed area of mechanopharmacology recognises these interactions and aims to highlight the need to consider mechanobiology when identifying and assessing pharmacological targets. However, these multiscale interactions can be difficult to study experimentally because measurements are needed across a wide range of spatial and temporal scales. On the other hand, integrative multiscale mathematical models have begun to show success in simulating the interactions between different mechanobiological mechanisms or cell/tissue types across multiple scales. When appropriately informed by experimental data, these models have the potential to serve as extremely useful predictive tools, in which physical mechanisms and emergent behaviours can be probed or hypothesised and, more importantly, exploited to propose new mechanopharmacological therapies for asthma and other respiratory diseases. In this review, we first demonstrate, via an exemplar, how a multiscale mathematical model of acute bronchoconstriction in an airway could be exploited to propose new mechanopharmacological therapies. We then review current mathematical modelling approaches in respiratory disease and highlight hypotheses generated by such models that could have significant implications for therapies in asthma but that have not yet been the subject of experimental attention or investigation. Finally, we highlight modelling approaches that have shown promise in other biological systems and that could be brought to bear in developing mathematical models for the optimisation of mechanopharmacological therapies in asthma, with discussion of how they could complement and accelerate current experimental approaches.
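    None of the reviewed models are reproduced here; purely as a toy sketch of the kind of cross-scale coupling such multiscale models formalise, a cell-scale contractile activation can be made to drive a tissue-scale airway radius, with wall stiffness (a mechanical factor) resisting narrowing. All functional forms and constants below are illustrative placeholders rather than any published asthma model.

        import numpy as np

        def airway_narrowing(dose, k_act=0.5, k_rel=0.2, stiffness=1.0,
                             r0=1.0, dt=0.01, n_steps=2000):
            """Toy two-scale model: agonist dose -> muscle activation -> airway radius."""
            phi, r = 0.0, r0
            for _ in range(n_steps):
                # cell scale: first-order activation/relaxation of airway smooth muscle
                phi += dt * (k_act * dose * (1.0 - phi) - k_rel * phi)
                # tissue scale: active contraction narrows the airway while wall
                # stiffness and parenchymal tethering pull it back towards r0
                r += dt * (stiffness * (r0 - r) - phi * r)
            return 1.0 - r / r0                    # fractional narrowing at steady state

        print(airway_narrowing(dose=1.0))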