Dynamical laser spike processing
Novel materials and devices in photonics have the potential to revolutionize
optical information processing, beyond conventional binary-logic approaches.
Laser systems offer a rich repertoire of useful dynamical behaviors, including
the excitable dynamics also found in the time-resolved "spiking" of neurons.
Spiking reconciles the expressiveness and efficiency of analog processing with
the robustness and scalability of digital processing. We demonstrate that
graphene-coupled laser systems offer a unified low-level spike optical
processing paradigm that goes well beyond previously studied laser dynamics. We
show that this platform can simultaneously exhibit logic-level restoration,
cascadability and input-output isolation---fundamental challenges in optical
information processing. We also implement low-level spike-processing tasks that
are critical for higher level processing: temporal pattern detection and stable
recurrent memory. We study these properties in the context of a fiber laser
system, but the addition of graphene leads to a number of advantages which stem
from its unique properties, including high absorption and fast carrier
relaxation. These could lead to significant speed and efficiency improvements
in unconventional laser processing devices, and ongoing research on graphene
microfabrication promises compatibility with integrated laser platforms.

Comment: 13 pages, 7 figures
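The excitable, all-or-nothing "spiking" behavior the abstract describes is analogous to a leaky integrate-and-fire unit: sub-threshold perturbations decay away, while supra-threshold inputs trigger a full, stereotyped pulse — the mechanism behind logic-level restoration. The following is a toy numerical sketch of that generic excitable dynamic, not the paper's laser model; all names and parameter values are illustrative assumptions.

```python
def lif_response(inputs, leak=0.2, threshold=1.0, reset=0.0):
    """Toy leaky integrate-and-fire dynamics (illustrative, not the
    laser model): integrate inputs with a leak, emit a unit spike when
    the state crosses threshold, then reset the state."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = (1.0 - leak) * v + x  # leaky integration of the input
        if v >= threshold:
            spikes.append(1)      # full, stereotyped output pulse
            v = reset             # state resets after spiking
        else:
            spikes.append(0)      # sub-threshold: no output
    return spikes

# A strong perturbation triggers a full spike; a weak one decays away,
# so noisy analog inputs are restored to clean all-or-nothing outputs.
strong = lif_response([1.2, 0.0, 0.0])  # -> [1, 0, 0]
weak = lif_response([0.4, 0.0, 0.0])    # -> [0, 0, 0]
```

Because the output pulse amplitude is fixed regardless of how far above threshold the input was, the unit's output can drive further identical units — the cascadability property the abstract highlights.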
EELBERT: Tiny Models through Dynamic Embeddings
We introduce EELBERT, an approach for compression of transformer-based models
(e.g., BERT), with minimal impact on the accuracy of downstream tasks. This is
achieved by replacing the input embedding layer of the model with dynamic, i.e.
on-the-fly, embedding computations. Since the input embedding layer accounts
for a significant fraction of the model size, especially for the smaller BERT
variants, replacing this layer with an embedding computation function helps us
reduce the model size significantly. Empirical evaluation on the GLUE benchmark
shows that our BERT variants (EELBERT) suffer minimal regression compared to
the traditional BERT models. Through this approach, we are able to develop our
smallest model UNO-EELBERT, which achieves a GLUE score within 4% of fully
trained BERT-tiny, while being 15x smaller (1.2 MB) in size.

Comment: EMNLP 2023, Industry Track; 9 pages, 2 figures, 5 tables
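The core idea — replacing a vocab-sized embedding lookup table with an on-the-fly computation — can be illustrated by hashing each token into a much smaller shared parameter matrix. This is only a minimal sketch of the general approach; the function names, hashing scheme, and sizes below are assumptions for illustration, not EELBERT's exact method.

```python
import hashlib
import numpy as np

def dynamic_embedding(token, table, num_hashes=4):
    """Compute a token embedding on the fly instead of storing one row
    per vocabulary entry: hash the token string with several seeds into
    a small shared parameter matrix and average the selected rows.
    (Illustrative sketch; EELBERT's actual scheme may differ.)"""
    rows = []
    for seed in range(num_hashes):
        digest = hashlib.md5(f"{seed}:{token}".encode()).hexdigest()
        rows.append(table[int(digest, 16) % table.shape[0]])
    return np.mean(rows, axis=0)

# A 256-row shared table stands in for a ~30k-row vocabulary embedding
# matrix, which is where the size savings come from.
rng = np.random.default_rng(0)
table = rng.standard_normal((256, 64))
vec = dynamic_embedding("hello", table)
assert vec.shape == (64,)
# Deterministic: the same token always maps to the same embedding.
assert np.allclose(vec, dynamic_embedding("hello", table))
```

Since the hashes are deterministic, no lookup table needs to be stored or trained per vocabulary entry; only the small shared matrix carries parameters, which is why the embedding layer's contribution to model size shrinks so sharply.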