
    The Landscape of Compute-near-memory and Compute-in-memory: A Research and Commercial Overview

    In today's data-centric world, where data fuels numerous application domains with machine learning at the forefront, handling the enormous volume of data efficiently in terms of time and energy presents a formidable challenge. Conventional computing systems and accelerators are continually being pushed to their limits to stay competitive. In this context, computing near-memory (CNM) and computing-in-memory (CIM) have emerged as potentially game-changing paradigms. This survey introduces the basics of CNM and CIM architectures, including their underlying technologies and working principles. We focus particularly on CIM and CNM architectures that have either been prototyped or commercialized. While surveying the evolving CIM and CNM landscape in academia and industry, we discuss the potential benefits in terms of performance, energy, and cost, along with the challenges associated with these cutting-edge computing paradigms.
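
    For readers unfamiliar with the workload these paradigms target, the following minimal sketch (not from the survey) illustrates the matrix-vector multiply-accumulate kernel that CNM/CIM architectures accelerate: in a crossbar-style CIM array the per-column accumulation happens inside the memory itself, so the weight matrix never has to cross the memory bus. Array sizes and values are arbitrary.

        # Illustrative only: the matrix-vector multiply-accumulate (MAC) kernel that
        # CIM/CNM accelerators target. In a crossbar-style CIM array the per-column
        # accumulation below happens inside the memory array itself (e.g. as analog
        # current summation), so the weights never cross the memory bus.
        import numpy as np

        rng = np.random.default_rng(0)
        weights = rng.integers(-2, 3, size=(256, 128))   # stored in the memory array
        inputs = rng.integers(0, 16, size=256)           # activations driven on the rows

        # Conventional system: weights are read out over the bus and multiplied on a CPU.
        out_von_neumann = inputs @ weights

        # CIM view: each column accumulates input * stored-cell contributions in place.
        out_in_memory = np.array([np.dot(inputs, weights[:, c]) for c in range(weights.shape[1])])

        assert np.array_equal(out_von_neumann, out_in_memory)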

    FPGA-based architectures for acoustic beamforming with microphone arrays: trends, challenges and research opportunities

    Over the past decades, many systems composed of arrays of microphones have been developed to satisfy the quality demanded by acoustic applications. Such microphone arrays are sound acquisition systems composed of multiple microphones used to sample the sound field with spatial diversity. The relatively recent adoption of Field-Programmable Gate Arrays (FPGAs) to manage the audio data samples and to perform signal processing operations such as filtering or beamforming has led to customizable architectures able to meet the computational, power, and performance demands of the most challenging acoustic applications. The presented work provides an overview of current FPGA-based architectures and how FPGAs are exploited for different acoustic applications. Current trends in the use of this technology, pending challenges, and open research opportunities for FPGA-based acoustic applications using microphone arrays are presented and discussed.
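
    As an illustration of the kind of beamforming operation such FPGA architectures implement, the sketch below shows a basic floating-point delay-and-sum beamformer for a uniform linear array in Python. The geometry, sample rate, and integer-sample delays are illustrative assumptions rather than a description of any surveyed design, which would typically use a fixed-point filter-and-sum pipeline.

        # Minimal floating-point sketch of delay-and-sum beamforming for a uniform
        # linear microphone array; hypothetical parameters, not taken from any of
        # the surveyed FPGA designs.
        import numpy as np

        fs = 48_000          # sample rate (Hz)
        c = 343.0            # speed of sound (m/s)
        n_mics = 8
        spacing = 0.04       # microphone pitch (m)

        def delay_and_sum(frames: np.ndarray, steer_deg: float) -> np.ndarray:
            """frames: (n_mics, n_samples) simultaneously captured microphone signals."""
            delays = np.arange(n_mics) * spacing * np.sin(np.deg2rad(steer_deg)) / c
            shifts = np.round(delays * fs).astype(int)      # integer-sample delays
            out = np.zeros(frames.shape[1])
            for m, s in enumerate(shifts):
                out += np.roll(frames[m], -s)               # align each channel
            return out / n_mics                             # steered output signal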

    TL-nvSRAM-CIM: Ultra-High-Density Three-Level ReRAM-Assisted Computing-in-nvSRAM with DC-Power Free Restore and Ternary MAC Operations

    Accommodating all the weights on-chip for large-scale NNs remains a great challenge for SRAM-based computing-in-memory (SRAM-CIM) with limited on-chip capacity. Previous non-volatile SRAM-CIM (nvSRAM-CIM) addresses this issue by integrating high-density single-level ReRAMs on top of high-efficiency SRAM-CIM for weight storage, eliminating off-chip memory access. However, previous SL-nvSRAM-CIM suffers from poor scalability as the number of SL-ReRAMs increases, as well as limited computing efficiency. To overcome these challenges, this work proposes an ultra-high-density three-level ReRAM-assisted computing-in-nonvolatile-SRAM (TL-nvSRAM-CIM) scheme for large NN models. The clustered n-selector-n-ReRAM (cluster-nSnR) structure is employed for reliable weight restore with eliminated DC power. Furthermore, a ternary SRAM-CIM mechanism with a differential computing scheme is proposed for energy-efficient ternary MAC operations while preserving high NN accuracy. The proposed TL-nvSRAM-CIM achieves 7.8x higher storage density compared with state-of-the-art works. Moreover, TL-nvSRAM-CIM shows up to 2.9x and 1.9x enhanced energy efficiency compared to the baseline designs of SRAM-CIM and ReRAM-CIM, respectively.
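
    The following behavioural sketch (illustrative only, not the paper's circuit) shows the ternary MAC operation with weights in {-1, 0, +1}: in a differential SRAM-CIM scheme the positive and negative weight contributions would accumulate on complementary bit lines and be subtracted at read-out, which the code emulates in software.

        # Behavioural sketch of a ternary MAC with weights in {-1, 0, +1}. In a
        # differential CIM scheme the +1 and -1 contributions accumulate on
        # complementary bit lines and are subtracted; here both paths are emulated
        # purely for illustration.
        import numpy as np

        rng = np.random.default_rng(1)
        w = rng.integers(-1, 2, size=512)        # ternary weights
        x = rng.integers(0, 256, size=512)       # 8-bit activations

        acc_pos = np.sum(x[w == 1])              # "positive bit line"
        acc_neg = np.sum(x[w == -1])             # "negative bit line"
        mac = acc_pos - acc_neg                  # differential read-out

        assert mac == int(np.dot(x, w))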

    Study and design of an interface for remote audio processing

    This project focused on the study and design of an interface for remote audio processing, with the objective of acquiring an analog signal by filtering, biasing, and amplifying it before digitizing it by means of two MCP3208 ADCs to achieve a 24-bit-resolution signal. The resulting digital signal was then transmitted to a Raspberry Pi using the SPI protocol, where it was processed by a Flask server that could be accessed from both local and remote networks. The design of the PCB was a critical component of the project, as it had to accommodate various components and ensure accurate signal acquisition and transmission. The PCB design was created using KiCad software, which allowed for the precise placement and routing of all components. A major challenge in the design of the interface was to ensure that the analog signal was not distorted during acquisition and amplification. This was achieved through careful selection of amplifier components and the use of high-pass and low-pass filters to remove any unwanted noise. Once the analog signal was acquired and digitized, the resulting digital signal was transmitted to the Raspberry Pi using the SPI protocol. The Raspberry Pi acted as the host for a Flask server, which could be accessed from local and remote networks using a web browser. The Flask server allowed for the processing of the digital signal and provided a user interface for controlling the gain and filtering parameters of the analog signal. This enabled the user to adjust the signal parameters to suit their specific requirements, making the interface highly flexible and adaptable to a variety of audio processing applications. The final interface was capable of remote audio processing, making it highly useful in scenarios where the audio signal needed to be acquired and processed in a location separate from the user. For example, it could be used in a recording studio, where the audio signal from the microphone could be remotely processed using the interface; the gain and filtering parameters could be adjusted in real time, allowing the sound engineer to fine-tune the audio signal to produce the desired recording. In conclusion, the project demonstrated the feasibility and potential benefits of using a remote audio processing system for various applications. The design of the PCB, the selection of components, and the use of the Flask server enabled the creation of an interface that was highly flexible, accurate, and adaptable to a variety of audio processing requirements. Overall, the project represents a significant step forward in the field of remote audio processing, with the potential to benefit many different applications in the future.
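
    A minimal sketch of how such an acquisition path can look on the Raspberry Pi side is given below: one MCP3208 channel is read over SPI with the spidev package and exposed through a small Flask route. The wiring (bus 0, chip-select 0), clock rate, and route name are illustrative assumptions and are not taken from the project's actual code.

        # Illustrative sketch (not the project's code): read one MCP3208 channel over
        # SPI with the spidev package and expose the value through a minimal Flask route.
        import spidev
        from flask import Flask, jsonify

        spi = spidev.SpiDev()
        spi.open(0, 0)                    # SPI bus 0, chip-select 0 (assumed wiring)
        spi.max_speed_hz = 1_000_000

        def read_mcp3208(channel: int) -> int:
            """Return one 12-bit single-ended sample from the given channel (0-7)."""
            cmd = [0x06 | (channel >> 2), (channel & 0x03) << 6, 0x00]
            resp = spi.xfer2(cmd)
            return ((resp[1] & 0x0F) << 8) | resp[2]

        app = Flask(__name__)

        @app.route("/sample/<int:channel>")
        def sample(channel: int):
            return jsonify(channel=channel, value=read_mcp3208(channel))

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=5000)   # reachable from local or remote networks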

    Frequency diversity wideband digital receiver and signal processor for solid-state dual-polarimetric weather radars

    The recent spate in the use of solid-state transmitters for weather radar systems has unquestionably revolutionized research in meteorology. Solid-state transmitters allow transmission at low peak powers without losing radar range resolution, thanks to the use of pulse compression waveforms. In this research, a novel frequency-diversity wideband waveform is proposed and realized to compensate for the low sensitivity of solid-state radars and mitigate the blind-range problem tied to the longer pulse compression waveforms. The latest developments in the computing landscape have permitted the design of wideband digital receivers which can process this novel waveform on Field Programmable Gate Array (FPGA) chips. In terms of signal processing, wideband systems are generally characterized by the fact that the bandwidth of the signal of interest is comparable to the sampled bandwidth; that is, a band of frequencies must be selected and filtered out from a comparable spectral window in which the signal might occur. The development of such a wideband digital receiver opens a window for exciting research opportunities for improved estimation of precipitation measurements for higher-frequency systems such as X-, Ku- and Ka-band radars, satellite-borne radars and other solid-state ground-based radars. This research describes various unique challenges associated with the design of a multi-channel wideband receiver. The receiver consists of twelve channels which simultaneously downconvert and filter the digitized intermediate-frequency (IF) signal for radar data processing. The product processing for the multi-channel digital receiver mandates a software and network architecture which provides for generating and archiving a single meteorological product profile culled from multi-pulse profiles at an increased data rate. The multi-channel digital receiver also continuously samples the transmit pulse for calibration of radar receiver gain and transmit power. The multi-channel digital receiver has been successfully deployed as a key component in the recently developed National Aeronautics and Space Administration (NASA) Global Precipitation Measurement (GPM) Dual-Frequency Dual-Polarization Doppler Radar (D3R). The D3R is the principal ground validation instrument for the precipitation measurements of the Dual-frequency Precipitation Radar (DPR) onboard the GPM Core Observatory satellite scheduled for launch in 2014. The D3R system employs two broadly separated frequencies at Ku- and Ka-bands that together make measurements for precipitation types which need higher sensitivity, such as light rain, drizzle and snow. This research describes the unique design space to configure the digital receiver for D3R at several processing levels. Finally, this research presents analysis and results obtained by employing the multi-carrier waveforms for D3R during the 2012 GPM Cold-Season Precipitation Experiment (GCPEx) campaign in Canada.
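
    The core per-channel operation of such a receiver, digital downconversion, can be sketched as follows: the sampled IF signal is mixed to baseband with a complex local oscillator, low-pass filtered, and decimated to the output rate. The sample rate, IF, filter length, and decimation factor below are hypothetical and do not reflect the D3R configuration.

        # Illustrative digital downconversion (DDC) of a sampled IF signal: mix to
        # baseband with a complex oscillator, low-pass filter, then decimate.
        import numpy as np
        from scipy import signal

        fs = 100e6                      # ADC sample rate (Hz), assumed
        f_if = 25e6                     # intermediate frequency (Hz), assumed
        decim = 10

        n = np.arange(1_000_000)
        if_record = np.cos(2 * np.pi * f_if / fs * n)        # stand-in IF record

        lo = np.exp(-2j * np.pi * f_if / fs * n)             # digital local oscillator
        baseband = if_record * lo                            # mix to complex baseband

        taps = signal.firwin(129, cutoff=2e6, fs=fs)         # low-pass FIR filter
        filtered = signal.lfilter(taps, 1.0, baseband)
        decimated = filtered[::decim]                        # reduced-rate I/Q output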

    Vibrating Cantilever Transducer Incorporated in Dual Diaphragms Structure for Sensing Differential Pneumatic Pressure


    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on the von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives where leading researchers in the neuromorphic community provide their own view about the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.
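
    As a concrete, if greatly simplified, example of the brain-inspired processing such systems implement, the sketch below simulates a leaky integrate-and-fire neuron, a common spiking primitive in neuromorphic hardware; all constants are illustrative and not drawn from the roadmap.

        # Minimal leaky integrate-and-fire (LIF) neuron; constants are illustrative.
        import numpy as np

        dt, tau, r_m = 1e-3, 20e-3, 1.0       # time step (s), membrane time constant (s), resistance
        v_th, v_reset = 1.0, 0.0              # spike threshold and reset potential

        rng = np.random.default_rng(2)
        current = rng.uniform(0.0, 2.0, size=1000)   # noisy input current trace

        v, spike_times = 0.0, []
        for t, i_in in enumerate(current):
            v += (dt / tau) * (-v + r_m * i_in)      # leaky integration of the input
            if v >= v_th:                            # threshold crossing emits a spike
                spike_times.append(t)
                v = v_reset                          # membrane potential resets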

    Simulation and implementation of novel deep learning hardware architectures for resource constrained devices

    Corey Lammie designed mixed-signal memristive-complementary metal–oxide–semiconductor (CMOS) and field-programmable gate array (FPGA) hardware architectures, which were used to reduce the power and resource requirements of Deep Learning (DL) systems during both inference and training. Disruptive design methodologies, such as those explored in this thesis, can be used to facilitate the design of next-generation DL systems.

    List-mode data acquisition based on digital electronics - State-of-the-art report

    This report deals with digital radiation detection systems employing list-mode data collection, which improves data analysis capabilities. Future data acquisition systems shall also ultimately enable detection data to be moved electronically from first responders to analysis centres, rather than the costly and time-consuming process of moving experts and/or samples. This new technology is especially useful in crisis events, when time and resources are scarce and increased analysis capacity is required. In order to utilise the opportunities opened by these new technologies, the systems have to be interoperable, so that the data from each type of detector can easily be analysed by different analysis centres. Successful interoperability of the systems requires that European and/or international standards are devised for the digitised data format. The basis of such a format is a list of registered events, each detailing an estimate of the energy of the detected radiation along with an accurate time-stamp (and optionally other parameters describing the event).
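
    To make the idea of a list-mode record concrete, the sketch below shows one possible per-event structure of the kind described (energy estimate, time-stamp, optional extra parameters) together with a fixed-width binary packing. The field names, widths, and units are purely illustrative, since the report notes that a common format has yet to be standardised.

        # Illustrative list-mode event record: an energy estimate and a time-stamp per
        # registered event, with room for optional per-event parameters. Field names
        # and widths are hypothetical, not a proposed standard.
        import struct
        from dataclasses import dataclass, field

        @dataclass
        class ListModeEvent:
            timestamp_ns: int            # detection time, nanoseconds since run start
            energy_kev: float            # estimated deposited energy
            extras: dict = field(default_factory=dict)   # e.g. pulse-shape flags

            def pack(self) -> bytes:
                """Fixed-width binary form: 8-byte timestamp + 4-byte energy."""
                return struct.pack("<Qf", self.timestamp_ns, self.energy_kev)

        events = [ListModeEvent(120, 661.7), ListModeEvent(845, 59.5)]
        stream = b"".join(e.pack() for e in events)      # ready to transmit or archive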