2 research outputs found

    Tricking AI chips into simulating the human brain: A detailed performance analysis

    In recent years, significant strides in Artificial Intelligence (AI) have led to various practical applications, primarily centered around the training and deployment of deep neural networks (DNNs). These applications, however, require considerable computational resources, predominantly provided by modern Graphics-Processing Units (GPUs). The quest for larger and faster DNNs has spurred the creation of specialized AI chips, while efficient Machine-Learning (ML) software tools such as TensorFlow and PyTorch have been developed to strike a balance between usability and performance. Simultaneously, the field of computational neuroscience shares a similar quest for increased computational power to simulate more extensive and detailed brain models, while also keeping usability high. Although GPUs have also entered this field, programming complexity remains high, resulting in cumbersome simulations. Inspired by AI progress, we introduce a workflow for easily accelerating brain simulations using TensorFlow and evaluate the performance of various cutting-edge AI chips – including the Graphcore Intelligence-Processing Unit (IPU), GroqChip, Nvidia GPU with Tensor Cores, and Google Tensor-Processing Unit (TPU) – when simulating both biologically detailed and simpler brain models. Our model simulations explore the architectural tradeoffs of a modern-day CPU and these four AI platforms by varying computational density, memory requirements, and floating-point numerical accuracy. Results show that the GroqChip achieves the best performance for small networks, yet is unable to simulate large-scale networks. At the scale of mammalian brains, the GPU, IPU, and TPU achieve speedups ranging from 29x to 1,208x over CPU runtimes. Remarkably, the TPU sets a new record for the largest real-time simulation of the inferior-olivary nucleus in the brain. Reduced-accuracy floating-point implementations make some simulation results unreliable for brain research, notably for the GroqChip. Consequently, this work underscores the potential of ML libraries for accelerating brain simulations as well as the critical role of AI-chip numerical accuracy for biophysically realistic brain models.
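    To illustrate the kind of workflow the abstract describes, the sketch below expresses a toy spiking-network update as TensorFlow ops and requests XLA compilation so the same Python code can, in principle, target GPUs, TPUs, and other XLA backends. The leaky integrate-and-fire model, network size, and all parameter values are illustrative assumptions for this sketch, not the paper's biophysically detailed inferior-olive model.

    # A minimal sketch (not the authors' actual workflow) of expressing a brain
    # simulation as TensorFlow ops so that XLA can compile it for an accelerator.
    # The leaky integrate-and-fire model and every constant below are assumed
    # placeholder values, not parameters taken from the paper.
    import tensorflow as tf

    N = 4096          # number of neurons (assumed)
    DT = 1e-4         # integration time step in seconds (assumed)
    TAU = 0.02        # membrane time constant in seconds (assumed)
    V_TH = 1.0        # spike threshold (assumed)
    V_RESET = 0.0     # reset potential (assumed)

    # Dense random coupling; a real model would use biophysically derived weights.
    w = tf.random.normal([N, N], stddev=0.01)

    @tf.function(jit_compile=True)  # jit_compile=True requests XLA compilation
    def step(v, spikes, i_ext):
        """One forward-Euler update of all membrane potentials in parallel."""
        syn = tf.linalg.matvec(w, spikes)            # synaptic input from last step
        v = v + (-v / TAU + syn + i_ext) * DT        # leaky integration
        new_spikes = tf.cast(v >= V_TH, tf.float32)  # detect threshold crossings
        v = tf.where(v >= V_TH, tf.fill(tf.shape(v), V_RESET), v)
        return v, new_spikes

    v = tf.zeros([N])
    spikes = tf.zeros([N])
    i_ext = tf.random.uniform([N], 0.0, 60.0)        # constant external drive
    for _ in range(100):                             # simulate 100 steps
        v, spikes = step(v, spikes, i_ext)

    Because the entire state update is a single compiled program operating on dense tensors, usability stays at the level of ordinary Python while the heavy lifting is delegated to the ML library's backend, which is the tradeoff the abstract highlights.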

    Multi-ancestry genome-wide study identifies effector genes and druggable pathways for coronary artery calcification

    Coronary artery calcification (CAC), a measure of subclinical atherosclerosis, predicts future symptomatic coronary artery disease (CAD). Identifying genetic risk factors for CAC may point to new therapeutic avenues for prevention. Currently, there are only four known risk loci for CAC identified from genome-wide association studies (GWAS) in the general population. Here we conducted the largest multi-ancestry GWAS meta-analysis of CAC to date, which comprised 26,909 individuals of European ancestry and 8,867 individuals of African ancestry. We identified 11 independent risk loci, of which eight were new for CAC and five had not been reported for CAD. These new CAC loci are related to bone mineralization, phosphate catabolism and hormone metabolic pathways. Several new loci harbor candidate causal genes supported by multiple lines of functional evidence and are regulators of smooth muscle cell-mediated calcification ex vivo and in vitro. Together, these findings help refine the genetic architecture of CAC and extend our understanding of the biological and potential druggable pathways underlying CAC.