A critical feature in signal processing is the ability to interpret
correlations in time series signals, such as speech. Machine learning systems
process this contextual information by tracking internal states in recurrent
neural networks (RNNs), but tracking these states can cause memory and processor bottlenecks in
applications from edge devices to data centers, motivating research into new
analog inference architectures. While photonic accelerators, in
particular, have demonstrated substantial gains in uni-directional feedforward deep
neural network (DNN) inference, the bi-directional architecture of RNNs
presents a unique challenge: the need for a short-term memory that (i)
programmably transforms optical waveforms with phase coherence, (ii) minimizes
added noise, and (iii) readily scales to large neuron
counts. Here, we address this challenge by introducing an optoacoustic
recurrent operator (OREO) that simultaneously meets requirements (i)-(iii). Specifically,
we experimentally demonstrate an OREO that contextualizes and computes the
information carried by a sequence of optical pulses via acoustic waves. We show
that the acoustic waves act as a link between the different optical pulses,
capturing the optical information and using it to manipulate the subsequent
operations. Our approach can be controlled all-optically on a
pulse-by-pulse basis, offering simple reconfigurability for use-case-specific
optimization. We use this feature to demonstrate recurrent drop-out, which
excludes optical input pulses from the recurrent operation. We furthermore
apply OREO as an acceptor to recognize up to 27 patterns in a sequence of
optical pulses. Finally, we introduce a DNN architecture that uses the OREO as
a bi-directional perceptron to enable new classes of DNNs in coherent optical
signal processing.
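
The recurrence described above, in which an acoustic wave stores a decaying trace of earlier optical pulses and modulates subsequent ones, with optional pulse-wise drop-out, can be caricatured as a scalar toy model. This is a minimal conceptual sketch only: the function name, the `coupling` and `decay` parameters, and the gating scheme are illustrative assumptions, not the paper's physics or results.

```python
def oreo_toy(pulses, coupling=0.3, decay=0.8, gate=None):
    """Toy scalar model of an optoacoustic recurrent operator (OREO).

    Each pulse amplitude is modulated by an acoustic state holding a
    decaying trace of preceding pulses; `gate` optionally excludes
    individual pulses from the recurrence (recurrent drop-out).
    All parameter names and values are illustrative, not from the paper.
    """
    if gate is None:
        gate = [1.0] * len(pulses)
    acoustic = 0.0  # short-term memory carried by the acoustic wave
    outputs = []
    for x, g in zip(pulses, gate):
        y = x * (1.0 + coupling * acoustic)  # acoustic state modulates the pulse
        acoustic = decay * acoustic + g * x  # a gated-in pulse writes to memory
        outputs.append(y)
    return outputs

# With the gate all-ones, each output depends on the full pulse history;
# zeroing a gate entry drops that pulse from the recurrence.
print(oreo_toy([1.0, 1.0, 1.0]))
```

In this caricature, setting a gate entry to zero reproduces the behavior of the recurrent drop-out demonstrated in the paper: the corresponding pulse still passes through but leaves no trace in the acoustic memory.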