Transfer Learning for Inverse Design of Tunable Graphene-Based Metasurfaces
This paper outlines a new approach to designing tunable electromagnetic (EM)
graphene-based metasurfaces using convolutional neural networks (CNNs). EM
metasurfaces have previously been used to manipulate EM waves by adjusting the
local phase of subwavelength elements within the wavelength scale, resulting in
a variety of intriguing devices. However, the majority of these devices have
only been capable of performing a single function, making it difficult to
achieve multiple functionalities in a single design. Graphene, as an active
material, offers unique properties, such as tunability, making it an excellent
candidate for achieving tunable metasurfaces. The proposed procedure involves
using two CNNs to design the passive structure of the graphene metasurfaces and
predict the chemical potentials required for tunable responses. The CNNs are
trained using transfer learning, which significantly reduces the time required
to collect the training dataset.
demonstrates excellent performance in designing reconfigurable EM metasurfaces,
which can be tuned to produce multiple functions, making it highly valuable for
various applications. The results indicate that the proposed approach is
efficient and accurate, and provides a promising method for designing
reconfigurable intelligent surfaces for future wireless communication systems.
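The transfer-learning idea behind the two-CNN pipeline can be sketched in a few lines: a backbone trained on a source task is frozen, and only a small head is refit on the much smaller target dataset. Everything below (the feature functions, the toy data, the learning rate) is an illustrative assumption, not the paper's actual networks or metasurface data.

```python
# Minimal sketch of transfer learning: reuse a frozen "backbone" and
# train only a small linear head on the new, smaller dataset.
# All functions and data here are hypothetical stand-ins.

def frozen_backbone(x):
    # Stands in for pretrained CNN layers whose weights are NOT updated.
    return [max(0.0, x), max(0.0, -x), x * x]

# Tiny target-task dataset: inputs and desired responses y = 2*x^2,
# playing the role of the (much smaller) metasurface training set.
data = [(-2.0, 8.0), (-1.0, 2.0), (0.5, 0.5), (1.0, 2.0), (2.0, 8.0)]

# Train only the linear head on top of the frozen features (plain SGD).
w = [0.0, 0.0, 0.0]
lr = 0.01
for _ in range(3000):
    for x, y in data:
        f = frozen_backbone(x)
        err = sum(wi * fi for wi, fi in zip(w, f)) - y
        w = [wi - lr * err * fi for wi, fi in zip(w, f)]
```

Because only the head is trained, far fewer labeled samples are needed than training the whole network from scratch, which is the source of the reported savings in dataset-collection time.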
Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks
Fully realizing the potential of acceleration for Deep Neural Networks (DNNs)
requires understanding and leveraging algorithmic properties. This paper builds
upon the algorithmic insight that bitwidth of operations in DNNs can be reduced
without compromising their classification accuracy. However, the minimum
bitwidth that preserves accuracy varies significantly across DNNs, and may even
need to be adjusted for each layer. Thus, a fixed-bitwidth accelerator would
either offer limited benefits, by accommodating the worst-case bitwidth
requirement, or degrade final accuracy. To alleviate these deficiencies, this work
introduces dynamic bit-level fusion/decomposition as a new dimension in the
design of DNN accelerators. We explore this dimension by designing Bit Fusion,
a bit-flexible accelerator that comprises an array of bit-level processing
elements which dynamically fuse to match the bitwidth of individual DNN layers.
This flexibility in the architecture enables minimizing the computation and the
communication at the finest granularity possible with no loss in accuracy. We
evaluate the benefits of Bit Fusion using eight real-world feed-forward and
recurrent DNNs. The proposed microarchitecture is implemented in Verilog and
synthesized in 45 nm technology. Using the synthesis results and cycle-accurate
simulation, we compare the benefits of Bit Fusion to two state-of-the-art DNN
accelerators, Eyeriss and Stripes. In the same area, frequency, and process
technology, Bit Fusion offers 3.9x speedup and 5.1x energy savings over Eyeriss.
Compared to Stripes, Bit Fusion provides 2.6x speedup and 3.9x energy reduction
at the 45 nm node when its area and frequency are set to those of Stripes.
Scaled to the 16 nm GPU technology node, Bit Fusion almost matches the
performance of a 250-Watt Titan Xp, which uses 8-bit vector instructions, while
consuming merely 895 milliwatts of power.
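The core arithmetic idea, decomposing a wide multiplication into narrow bit-level partial products that can be fused back together, can be sketched in software. The function name, the 2-bit slice width, and the parameters below are illustrative assumptions for exposition; Bit Fusion realizes this decomposition in hardware with arrays of bit-level processing elements.

```python
def sliced_multiply(a, b, brick=2, a_bits=8, b_bits=8):
    """Multiply two unsigned integers by decomposing each operand into
    `brick`-bit slices, multiplying every pair of slices (the work a
    narrow bit-level processing element would do), and summing the
    shifted partial products. Illustrative sketch, not the paper's RTL."""
    mask = (1 << brick) - 1
    total = 0
    for i in range(0, a_bits, brick):
        for j in range(0, b_bits, brick):
            pa = (a >> i) & mask           # brick-bit slice of a
            pb = (b >> j) & mask           # brick-bit slice of b
            total += (pa * pb) << (i + j)  # shift-add the partial product
    return total
```

Because the result is just the sum of shifted slice products, the same narrow multipliers can serve an 8-bit layer (many slices fused) or a 2-bit layer (one slice each), which is what lets the architecture match each layer's bitwidth without wasting the worst-case datapath.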