
    The future of computing beyond Moore's Law

    Moore's Law is a techno-economic model that has enabled the information technology industry to double the performance and functionality of digital electronics roughly every 2 years within a fixed cost, power and area. Advances in silicon lithography have enabled this exponential miniaturization of electronics, but, as transistors reach atomic scale and fabrication costs continue to rise, the classical technological driver that has underpinned Moore's Law for 50 years is failing and is anticipated to flatten by 2025. This article provides an updated view of what a post-exascale system will look like and the challenges ahead, based on our most recent understanding of technology roadmaps. It also discusses the tapering of historical improvements and how it affects the options available to continue the scaling of successors to the first exascale machine. Lastly, this article covers the many different opportunities and strategies available to continue computing performance improvements in the absence of historical technology drivers. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
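
    A doubling every two years, as described above, corresponds to compound growth of about 41% per year. The short Python sketch below is purely illustrative: the two-year doubling period comes from the abstract, while the baseline density is a hypothetical example value, not a figure from the article.

        # Sketch of the doubling model behind Moore's Law (illustrative only).
        def projected_density(base_density, years, doubling_period=2.0):
            """Density after `years`, doubling every `doubling_period` years."""
            return base_density * 2 ** (years / doubling_period)

        base = 100e6  # hypothetical baseline: 1e8 transistors/mm^2 at year 0
        for year in (2, 10, 20):
            print(f"year {year:2d}: {projected_density(base, year):.3e} transistors/mm^2")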

    Photonics for Smart Cities

    We review the current applications of photonic technologies to Smart Cities. Inspired by the future needs of Smart Cities, we then propose potential applications of advanced photonic technologies. We find that photonics already has a major impact on Smart Cities, in terms of smart lighting, sensing, and communication technologies. We further find that advanced photonic technologies could lead to vastly improved infrastructure, such as smart water-supply systems. We conclude by proposing directions for future research that will have the greatest impact on realizing Smart City initiatives.

    Beyond DNA origami: the unfolding prospects of nucleic acid nanotechnology

    Nucleic acid nanotechnology exploits the programmable molecular recognition properties of natural and synthetic nucleic acids to assemble structures with nanometer-scale precision. In 2006, DNA origami transformed the field by providing a versatile platform for self-assembly of arbitrary shapes from one long DNA strand held in place by hundreds of short, site-specific (spatially addressable) DNA 'staples'. This revolutionary approach has led to the creation of a multitude of two-dimensional and three-dimensional scaffolds that form the basis for functional nanodevices. Not limited to nucleic acids, these nanodevices can incorporate other structural and functional materials, such as proteins and nanoparticles, making them broadly useful for current and future applications in emerging fields such as nanomedicine, nanoelectronics, and alternative energy. WIREs Nanomed Nanobiotechnol 2012, 4:139–152. doi: 10.1002/wnan.170.

    Reservoir computing based on delay-dynamical systems

    Today, except for mathematical operations, our brain functions much faster and more efficiently than any supercomputer. It is precisely this form of information processing in neural networks that inspires researchers to create systems that mimic the brain's information processing capabilities. In this thesis we propose a novel approach to implement these alternative computer architectures, based on delayed feedback. We show that one single nonlinear node with delayed feedback can replace a large network of nonlinear nodes. First we numerically investigate the architecture and performance of delayed feedback systems as information processing units. Then we elaborate on electronic and opto-electronic implementations of the concept. In addition to evaluating their performance for standard benchmarks, we also study task-independent properties of the system, extracting information on how to further improve the initial scheme. Finally, some simple modifications are suggested, yielding improvements in terms of speed or performance.
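
    As a rough illustration of the scheme described above (a single nonlinear node whose delayed feedback is time-multiplexed into many 'virtual' nodes), the following minimal NumPy sketch follows a common simplified formulation from the delay-reservoir literature; it is not the thesis's electronic or opto-electronic implementation, and the mask, node count, scalings, and toy task are all illustrative choices.

        import numpy as np

        rng = np.random.default_rng(0)
        N_VIRTUAL = 50                                 # virtual nodes along the delay line
        mask = rng.choice([-1.0, 1.0], N_VIRTUAL)      # random input mask (illustrative)
        ETA, GAMMA = 0.5, 0.05                         # feedback and input scaling (illustrative)

        def reservoir_states(u):
            """Drive one nonlinear node with delayed feedback; return virtual-node states."""
            states = np.zeros((len(u), N_VIRTUAL))
            delayed = np.zeros(N_VIRTUAL)              # delay-line contents (one delay period)
            for t, u_t in enumerate(u):
                for i in range(N_VIRTUAL):
                    # the single node: tanh of delayed feedback plus masked input
                    delayed[i] = np.tanh(ETA * delayed[i] + GAMMA * mask[i] * u_t)
                states[t] = delayed
            return states

        # toy benchmark: one-step-ahead prediction of a sine wave via a ridge readout
        u = np.sin(0.2 * np.arange(500))
        X, y = reservoir_states(u[:-1]), u[1:]
        W = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N_VIRTUAL), X.T @ y)
        print("train MSE:", np.mean((X @ W - y) ** 2))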

    Learning Spiking Neural Network from Easy to Hard task

    Starting with small and simple concepts and gradually introducing complex and difficult concepts is the natural process of human learning. Spiking Neural Networks (SNNs) aim to mimic the way humans process information, but current SNN models treat all samples equally, which does not align with the principles of human learning and overlooks the biological plausibility of SNNs. To address this, we propose a CL-SNN model that introduces Curriculum Learning (CL) into SNNs, making SNNs learn more like humans and providing higher biological interpretability. CL is a training strategy that advocates presenting easier data to models before gradually introducing more challenging data, mimicking the human learning process. We use a confidence-aware loss to measure and process samples of different difficulty levels. By learning the confidence of different samples, the model automatically reduces the contribution of difficult samples to parameter optimization. We conducted experiments on the static image datasets MNIST, Fashion-MNIST, and CIFAR10, and the neuromorphic datasets N-MNIST, CIFAR10-DVS, and DVS-Gesture. The results are promising. To the best of our knowledge, this is the first proposal to enhance the biological plausibility of SNNs by introducing CL.
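
    The abstract does not give the exact form of the confidence-aware loss, so the following PyTorch sketch shows one common way such a loss can be built: the model emits a per-sample confidence that scales each sample's loss, with a -log(c) penalty to keep confidences from collapsing to zero. The function name, the regularization weight lam, and the toy tensors are assumptions for illustration, not the paper's definitions.

        import torch
        import torch.nn.functional as F

        def confidence_aware_loss(logits, conf_logits, targets, lam=0.1):
            """Cross-entropy weighted by a learned per-sample confidence in (0, 1).

            Hard samples can receive low confidence, shrinking their gradient
            contribution; lam is an illustrative regularization weight.
            """
            per_sample = F.cross_entropy(logits, targets, reduction="none")
            c = torch.sigmoid(conf_logits).squeeze(-1)   # one confidence per sample
            return (c * per_sample - lam * torch.log(c + 1e-8)).mean()

        # toy usage with random tensors standing in for SNN outputs
        logits = torch.randn(8, 10, requires_grad=True)      # 8 samples, 10 classes
        conf_logits = torch.randn(8, 1, requires_grad=True)
        targets = torch.randint(0, 10, (8,))
        loss = confidence_aware_loss(logits, conf_logits, targets)
        loss.backward()
        print(float(loss))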

    Configured Quantum Reservoir Computing for Multi-Task Machine Learning

    Amidst the rapid advancements in experimental technology, noisy intermediate-scale quantum (NISQ) devices have become increasingly programmable, offering versatile opportunities to leverage quantum computational advantage. Here we explore the intricate dynamics of programmable NISQ devices for quantum reservoir computing. Using a genetic algorithm to configure the quantum reservoir dynamics, we systematically enhance the learning performance. Remarkably, a single configured quantum reservoir can simultaneously learn multiple tasks, including a synthetic oscillatory network of transcriptional regulators, chaotic motifs in gene regulatory networks, and the fractional-order Chua's circuit. Our configured quantum reservoir computing yields highly precise predictions for these learning tasks, outperforming classical reservoir computing. We also test the configured quantum reservoir computing in foreign exchange (FX) market applications and demonstrate its capability to capture the stochastic evolution of exchange rates with significantly greater accuracy than classical reservoir computing approaches. Through comparison with classical reservoir computing, we highlight the unique role of quantum coherence in the quantum reservoir, which underpins its exceptional learning performance. Our findings suggest the exciting potential of configured quantum reservoir computing for exploiting the quantum computational power of NISQ devices in developing artificial general intelligence.
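
    The genetic-algorithm configuration step can be pictured with a heavily simplified classical stand-in: the loop below evolves two hyperparameters (spectral radius and input scaling) of a small echo-state reservoir on a toy prediction task. This is an analogy only; it does not simulate a quantum reservoir, and the population size, mutation scale, selection rule, and task are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        u = np.sin(0.2 * np.arange(300))               # toy signal: predict one step ahead

        N = 40
        W_base = rng.standard_normal((N, N))
        W_base /= max(abs(np.linalg.eigvals(W_base)))  # normalize to unit spectral radius
        w_in_base = rng.standard_normal(N)

        def fitness(params):
            """Negative training MSE of a reservoir configured by (rho, gamma)."""
            rho, gamma = params
            W, w_in = rho * W_base, gamma * w_in_base
            x, states = np.zeros(N), []
            for u_t in u[:-1]:
                x = np.tanh(W @ x + w_in * u_t)
                states.append(x.copy())
            X, y = np.array(states), u[1:]
            w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
            return -np.mean((X @ w_out - y) ** 2)

        # minimal evolutionary loop: elitist selection plus Gaussian mutation
        pop = rng.uniform(0.1, 1.5, size=(12, 2))      # candidates: (rho, gamma)
        for generation in range(15):
            scores = np.array([fitness(p) for p in pop])
            parents = pop[np.argsort(scores)[-4:]]     # keep the four fittest configs
            pop = np.clip(np.repeat(parents, 3, axis=0)
                          + 0.05 * rng.standard_normal((12, 2)), 0.01, 2.0)
        best = pop[np.argmax([fitness(p) for p in pop])]
        print("best configuration (rho, gamma):", best)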

    Spiking Neural Network for Ultra-low-latency and High-accurate Object Detection

    Spiking Neural Networks (SNNs) have garnered widespread interest for their energy efficiency and brain-inspired, event-driven properties. While recent methods like Spiking-YOLO have extended SNNs to more challenging object detection tasks, they often suffer from high latency and low detection accuracy, making them difficult to deploy on latency-sensitive mobile platforms. Furthermore, conversion methods from Artificial Neural Networks (ANNs) to SNNs struggle to maintain the complete structure of the ANNs, resulting in poor feature representation and high conversion errors. To address these challenges, we propose two methods: timesteps compression and spike-time-dependent integrated (STDI) coding. The former reduces the timesteps required for ANN-SNN conversion by compressing information, while the latter sets a time-varying threshold to expand the information holding capacity. We also present an SNN-based ultra-low-latency and high-accuracy object detection model (SUHD) that achieves state-of-the-art performance on nontrivial datasets like PASCAL VOC and MS COCO, with a remarkable 750x fewer timesteps and 30% mean average precision (mAP) improvement compared to Spiking-YOLO on the MS COCO dataset. To the best of our knowledge, SUHD is the deepest spike-based object detection model to date to achieve lossless conversion with ultra-low timesteps. (14 pages, 10 figures.)
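
    The paper's STDI coding is described above only as setting a time-varying threshold; its exact schedule is not given in the abstract. As a hedged illustration of the mechanism, the sketch below applies a hypothetical exponentially decaying threshold to a simple integrate-and-fire neuron, so that spike timing carries magnitude information.

        def if_neuron_time_varying_threshold(inputs, v_th0=1.0, decay=0.7):
            """Integrate-and-fire neuron whose firing threshold decays each timestep.

            An early spike signals a large input (a high threshold was crossed),
            a late spike a small one. The exponential schedule and constants are
            hypothetical choices, not the STDI coding from the paper.
            """
            v, spikes = 0.0, []
            for t, x in enumerate(inputs):
                v += x                          # integrate the input current
                theta = v_th0 * decay ** t      # time-varying threshold
                if v >= theta:
                    spikes.append(t)
                    v -= theta                  # soft reset by subtraction
            return spikes

        print(if_neuron_time_varying_threshold([0.3, 0.3, 0.3, 0.3, 0.3]))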

    Dual sampling neural network: Learning without explicit optimization

    Success in constructing a new theory toward realizing brain-inspired artificial intelligence: the hint is the 'fluctuation' of synapses in the brain. Kyoto University press release, 2022-10-24. Artificial intelligence using neural networks has achieved remarkable success. However, the optimization procedures of learning algorithms require global and synchronous operations on variables, making it difficult to realize neuromorphic hardware, a promising candidate for low-cost and energy-efficient artificial intelligence. The optimization of learning algorithms also fails to explain the recently observed criticality of the brain. Cortical neurons show a critical power law, implying the best balance between expressivity and robustness of the neural code; such optimization, however, gives less robust codes that lack this criticality. To solve these two problems simultaneously, we propose a model neural network, the dual sampling neural network, in which both neurons and synapses are represented as probabilistic bits, as in the brain. The network can learn external signals without explicit optimization and stably retain memories while all entities are stochastic, because seemingly optimized macroscopic behavior emerges from the microscopic stochasticity. The model reproduces various experimental results, including the critical power law. By providing a conceptual framework for computation through microscopic stochasticity without macroscopic optimization, the model will be a fundamental tool for developing scalable neuromorphic devices and for revealing neural computation and learning.
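
    As a loose illustration of the paper's central ingredient, representing both neurons and synapses as probabilistic bits that are sampled rather than optimized, the sketch below Gibbs-samples binary neurons and binary synapses in a tiny network driven by an external signal. The coupling constants, conditional probabilities, and sampling schedule are illustrative assumptions, not the dual sampling neural network itself.

        import numpy as np

        rng = np.random.default_rng(2)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        N = 6
        x = rng.choice([0.0, 1.0], N)                    # external signal to retain
        s = rng.integers(0, 2, N).astype(float)          # binary neurons (probabilistic bits)
        J = rng.integers(0, 2, (N, N)).astype(float)     # binary synapses (probabilistic bits)
        BETA, DRIVE = 2.0, 4.0                           # inverse temperature, input coupling

        for sweep in range(200):
            # sample each neuron given the synapses, the other neurons, and the drive
            for i in range(N):
                field = J[i] @ s - J[i, i] * s[i] + DRIVE * (2 * x[i] - 1)
                s[i] = float(rng.random() < sigmoid(BETA * field))
            # sample each synapse given the activity it connects (Hebbian-flavored)
            for i in range(N):
                for j in range(N):
                    if i != j:
                        agree = (2 * s[i] - 1) * (2 * s[j] - 1)
                        J[i, j] = float(rng.random() < sigmoid(BETA * agree))

        print("signal :", x.astype(int))
        print("neurons:", s.astype(int))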