2022 roadmap on neuromorphic computing and engineering
Modern computation based on the von Neumann architecture is now a mature, cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously, and this data transfer is responsible for a large part of the power consumption. The next generation of computers is expected to solve problems at the exascale, i.e. 10^18 calculations per second. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic, physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to store and process large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and to provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives in which leading researchers in the neuromorphic community provide their own view of the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community
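The figures quoted above imply a simple energy budget per operation. A back-of-envelope sketch, using only the numbers from the abstract (10^18 operations per second at 20-30 MW):

```python
# Energy per operation implied by the abstract's figures for a
# von Neumann exascale machine.

EXA_OPS_PER_SEC = 1e18          # exascale throughput, operations per second
POWER_W = (20e6, 30e6)          # projected power draw: 20-30 MW, in watts

# Energy per operation in picojoules: (P * 1e12 pJ/J) / rate.
energy_pj = [p * 1e12 / EXA_OPS_PER_SEC for p in POWER_W]
print(energy_pj)  # → [20.0, 30.0], i.e. 20-30 pJ per operation
```

For comparison, this is the kind of per-operation budget that neuromorphic designs aim to undercut by collocating memory and processing.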
NASA Tech Briefs, February 2010
Topics covered include: Insulation-Testing Cryostat With Lifting Mechanism; Optical Testing of Retroreflectors for Cryogenic Applications; Measuring Cyclic Error in Laser Heterodyne Interferometers; Self-Referencing Hartmann Test for Large-Aperture Telescopes; Measuring a Fiber-Optic Delay Line Using a Mode-Locked Laser; Reconfigurable Hardware for Compressing Hyperspectral Image Data; Spatio-Temporal Equalizer for a Receiving-Antenna Feed Array; High-Speed Ring Bus; Nanoionics-Based Switches for Radio-Frequency Applications; Lunar Dust-Tolerant Electrical Connector; Compact, Reliable EEPROM Controller; Quad-Chip Double-Balanced Frequency Tripler; Ka-Band Waveguide Two-Way Hybrid Combiner for MMIC Amplifiers; Radiation-Hardened Solid-State Drive; Use of Nanofibers to Strengthen Hydrogels of Silica, Other Oxides, and Aerogels; Two Concepts for Deployable Trusses; Concentric Nested Toroidal Inflatable Structures; Investigating Dynamics of Eccentricity in Turbomachines; Improved Low-Temperature Performance of Li-Ion Cells Using New Electrolytes; Integrity Monitoring of Mercury Discharge Lamps; White-Light Phase-Conjugate Mirrors as Distortion Correctors; Biasable, Balanced, Fundamental Submillimeter Monolithic Membrane Mixer; ICER-3D Hyperspectral Image Compression Software; and Context Modeler for Wavelet Compression of Spectral Hyperspectral Images
Near-field MIMO communication links
A procedure to achieve near-field multiple-input multiple-output (MIMO) communication with equally strong channels is demonstrated in this paper. This has applications in near-field wireless communications, such as Chip-to-Chip (C2C) communication or wireless links between printed circuit boards. Designing the architecture of these wireless C2C networks with equally strong channels is, however, not straightforward with standard engineering design tools. To attain this goal, a network optimization procedure is proposed which introduces decoupling and matching networks. As a demonstration, this optimization procedure is applied to a 2-by-2 MIMO system with dipole antennas. The potential benefits and design trade-offs are discussed for the implementation of wireless radio-frequency interconnects in chip-to-chip or device-to-device communication, such as in an Internet-of-Things scenario
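Why equally strong channels matter can be illustrated with the standard MIMO capacity formula. A minimal sketch with hypothetical 2-by-2 channel matrices (not taken from the paper): a strongly coupled channel versus a decoupled one with equal channel strengths, as the decoupling and matching networks aim to produce.

```python
import numpy as np

def mimo_capacity(H, snr):
    """Capacity in bits/s/Hz with equal power split across Nt transmit
    antennas: log2 det(I + snr/Nt * H H^H)."""
    nr, nt = H.shape
    return float(np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real))

# Hypothetical channels: strongly coupled (unequal effective channels)
# versus ideally decoupled (two equally strong channels).
H_coupled   = np.array([[1.0, 0.9], [0.9, 1.0]])
H_decoupled = np.eye(2)

snr = 100  # 20 dB
print(mimo_capacity(H_coupled, snr), mimo_capacity(H_decoupled, snr))
```

The decoupled case supports two parallel streams of equal quality and yields the higher capacity; the coupled case wastes most of the power on one dominant eigenchannel.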
Robust and Traffic Aware Medium Access Control Mechanisms for Energy-Efficient mm-Wave Wireless Network-on-Chip Architectures
To cater to performance-per-watt needs, processors with multiple processing cores on the same chip have become the de facto design choice. In such multicore systems, the Network-on-Chip (NoC) serves as the communication infrastructure for data transfer among the cores on the chip. However, conventional metallic-interconnect-based NoCs are constrained by their long multi-hop latencies and high power consumption, limiting the performance gains in these systems. Among the different alternatives, a low-latency wireless interconnect operating in the millimeter-wave (mm-wave) band is, owing to its CMOS compatibility and energy efficiency, the nearest-term solution to this multi-hop communication problem. This has led to the recent exploration of mm-wave wireless technologies in wireless NoC architectures (WiNoCs).
To realize the mm-wave wireless interconnect in a WiNoC, a wireless interface (WI) equipped with an on-chip antenna and a transceiver circuit operating in the 60 GHz frequency range is integrated into the ports of some NoC switches. The WIs are also equipped with a medium access control (MAC) mechanism that ensures collision-free and energy-efficient communication among the WIs located in different parts of the chip. However, due to shrinking feature sizes and complex integration in CMOS technology, high-density chips like multicore systems are prone to manufacturing defects and dynamic faults during chip operation. Such failures can result in permanently broken wireless links or cause the MAC to malfunction in a WiNoC; consequently, energy-efficient communication through the wireless medium is compromised. Furthermore, the energy efficiency of wireless channel access also depends on the traffic patterns of the applications running on the multicore system. Due to the bursty and self-similar nature of NoC traffic patterns, the traffic demand of the WIs can vary both spatially and temporally. Ineffective management of this traffic variation limits the performance and energy benefits of the novel mm-wave interconnect technology. Hence, to utilize the full potential of mm-wave interconnect technology in WiNoCs, the design of a simple, fair, robust, and efficient MAC is of paramount importance.
The main goal of this dissertation is to propose design principles for robust and traffic-aware MAC mechanisms that provide high-bandwidth, low-latency, and energy-efficient data communication in mm-wave WiNoCs. The proposed solution has two parts. In the first part, we propose a cross-layer design methodology for a robust WiNoC architecture that can minimize the effect of permanent wireless-link failures and recover from transient failures caused by single-event upsets (SEUs). In the second part, we present a traffic-aware MAC mechanism that can adjust the transmission slots of the WIs based on their traffic demand; the proposed MAC is also robust against failure of the wireless access mechanism. Finally, as a future research direction, this idea of traffic awareness is extended throughout the whole NoC by enabling adaptiveness in both the wired and wireless interconnection fabrics
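The traffic-aware slot adjustment described above can be sketched with a simple demand-proportional policy. This is a hypothetical illustration, not the dissertation's exact mechanism: each WI gets transmission slots in proportion to its queued demand, with a guaranteed minimum of one slot so no WI starves.

```python
# Sketch of demand-proportional slot allocation for the WIs
# (hypothetical policy for illustration only).

def allocate_slots(demands, total_slots):
    """Split total_slots among WIs proportionally to their demand,
    guaranteeing every WI at least one slot."""
    n = len(demands)
    assert total_slots >= n, "need at least one slot per WI"
    spare = total_slots - n                 # slots beyond the minimum
    total_demand = sum(demands)
    if total_demand > 0:
        extra = [spare * d // total_demand for d in demands]
        # hand rounding leftovers to the heaviest-loaded WIs first
        leftover = spare - sum(extra)
        for i in sorted(range(n), key=lambda i: -demands[i])[:leftover]:
            extra[i] += 1
    else:
        # no demand anywhere: spread the spare slots evenly
        extra = [spare // n + (1 if i < spare % n else 0) for i in range(n)]
    return [1 + e for e in extra]

# Four WIs with queued flits [40, 10, 0, 50] sharing a 16-slot superframe:
print(allocate_slots([40, 10, 0, 50], 16))  # → [5, 2, 1, 8]
```

A real MAC would recompute this per superframe from monitored queue occupancies and fold in the robustness checks (e.g. token recovery) that the dissertation addresses.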
A High Speed Networked Signal Processing Platform for Multi-element Radio Telescopes
A new architecture is presented for a Networked Signal Processing System (NSPS) suitable for handling the real-time signal processing of multi-element radio telescopes. In this system, a multi-element radio telescope is viewed as an application of a multi-sensor, data fusion problem which can be decomposed into a general set of computing and network components for which a practical and scalable architecture is enabled by current technology. The need for such a system arose in the context of an ongoing program for reconfiguring the Ooty Radio Telescope (ORT) as a programmable 264-element array, which will enable several new observing capabilities for large-scale surveys on this mature telescope. For this application, it is necessary to manage, route and combine large volumes of data whose real-time collation requires large I/O bandwidths to be sustained. Since these are general requirements of many multi-sensor fusion applications, we first describe the basic architecture of the NSPS in terms of a Fusion Tree before elaborating on its application for the ORT. The paper addresses issues relating to high-speed distributed data acquisition, Field Programmable Gate Array (FPGA) based peer-to-peer networks supporting significant on-the-fly processing while routing, and providing a last-mile interface to a typical commodity network like Gigabit Ethernet. The system is fundamentally a pair of two co-operative networks, among which one is part of a commodity high-performance computer cluster and the other is based on Commercial Off-The-Shelf (COTS) technology with support from software/firmware components in the public domain.
Comment: 19 pages, 4 eps figures. To be published in Experimental Astronomy (Springer)
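The Fusion Tree idea of combining element data hierarchically, so that no single node must sustain the full collation bandwidth, can be sketched as follows. This is an illustrative toy, not the NSPS implementation: per-element sample streams are summed pairwise up a binary tree, and the root emits the fully combined stream.

```python
import numpy as np

# Toy fusion tree: combine per-element sample streams by pairwise
# summation, level by level, until one collated stream remains.
# Each node only handles the I/O of its two children.

def fuse_tree(streams):
    """Combine a list of equal-length sample arrays up a binary tree."""
    level = list(streams)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(level[i] + level[i + 1])   # one fusion node
        if len(level) % 2:                        # odd element passes through
            nxt.append(level[-1])
        level = nxt
    return level[0]

rng = np.random.default_rng(0)
elements = [rng.normal(size=1024) for _ in range(8)]
beam = fuse_tree(elements)
assert np.allclose(beam, sum(elements))  # same result as a flat sum
```

In the real system each "node" would be an FPGA board performing on-the-fly processing while routing; the tree structure is what keeps the per-link bandwidth bounded as the array scales.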