Transfer Learning in General Lensless Imaging through Scattering Media
Deep neural networks (DNNs) have recently been introduced, with considerable success, to the field of lensless imaging through scattering media. By solving an inverse problem in computational imaging, DNNs can overcome several shortcomings of conventional lensless imaging through scattering media, namely high cost, poor image quality, complex control, and poor robustness to interference. For training, however, a large number of samples must be collected for each dataset, and a DNN trained on one dataset generally performs poorly when recovering images from another. The underlying reason is that lensless imaging through scattering media is a high-dimensional regression problem for which an analytical solution is difficult to obtain. In this work, transfer learning is proposed to address this issue. Our main idea is to train a DNN on a relatively complex dataset using a large number of training samples and then fine-tune only its last few layers using very few samples from other datasets. Instead of the thousands of samples required to train from scratch, transfer learning thus alleviates the cost of data acquisition. Specifically, considering the differences in sample size and the similarity among datasets, we propose two DNN architectures, LISMU-FCN and LISMU-OCN, together with a balance loss function designed to trade off smoothness against sharpness. LISMU-FCN, with far fewer parameters, achieves imaging across similar datasets, while LISMU-OCN achieves imaging across significantly different datasets. Moreover, we establish a set of simulation algorithms that closely match the real experiment, which is of practical value for research on lensless imaging through scattering media. In summary, this work provides a new solution for lensless imaging through scattering media using transfer learning in DNNs.
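A minimal sketch of the fine-tuning procedure described above, assuming PyTorch and a toy encoder-decoder stand-in for the LISMU networks. The class name LenslessNet, the layer sizes, the checkpoint path, and the exact form of balance_loss (pixel fidelity plus total-variation smoothness plus gradient-matching sharpness) are illustrative assumptions; the abstract does not specify any of them.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LenslessNet(nn.Module):
    # Stand-in reconstruction network: speckle pattern in, recovered image out.
    def __init__(self):
        super().__init__()
        # Feature extractor, pretrained on the large "complex" dataset.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        # The "last few layers" that get fine-tuned per target dataset.
        self.head = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.head(self.features(x))

def balance_loss(pred, target, alpha=0.1, beta=0.1):
    # Assumed form only: pixel fidelity + total-variation smoothness
    # + gradient-matching sharpness; the weights alpha/beta are illustrative.
    fidelity = F.mse_loss(pred, target)
    tv = (pred[..., :, 1:] - pred[..., :, :-1]).abs().mean() \
       + (pred[..., 1:, :] - pred[..., :-1, :]).abs().mean()
    sharp = F.l1_loss(pred[..., :, 1:] - pred[..., :, :-1],
                      target[..., :, 1:] - target[..., :, :-1]) \
          + F.l1_loss(pred[..., 1:, :] - pred[..., :-1, :],
                      target[..., 1:, :] - target[..., :-1, :])
    return fidelity + alpha * tv + beta * sharp

model = LenslessNet()
# model.load_state_dict(torch.load("pretrained_large_dataset.pt"))  # hypothetical checkpoint
for p in model.features.parameters():          # freeze the pretrained layers
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)

# A handful of samples from the new (target) dataset; random tensors here for brevity.
speckle = torch.rand(8, 1, 64, 64)
truth = torch.rand(8, 1, 64, 64)
for _ in range(200):
    optimizer.zero_grad()
    loss = balance_loss(model(speckle), truth)
    loss.backward()
    optimizer.step()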
Emergent Bio-Functional Similarities in a Cortical-Spike-Train-Decoding Spiking Neural Network Facilitate Predictions of Neural Computation
Despite their greater biological plausibility, goal-driven spiking neural networks (SNNs) have not achieved practical performance in classifying biological spike trains, and they have shown few bio-functional similarities compared with traditional artificial neural networks. In this study, we proposed the motorSRNN, a recurrent SNN whose topology is inspired by the neural motor circuit of primates. By employing the motorSRNN to decode spike trains from the primary motor cortex of monkeys, we achieved a good balance between classification accuracy and energy consumption. The motorSRNN engaged with its input by capturing and cultivating more cosine tuning, an essential property of neurons in the motor cortex, and maintained this tuning stably during training. Such training-induced cultivation and persistence of cosine tuning were also observed in our monkeys. Moreover, the motorSRNN produced additional bio-functional similarities at the single-neuron, population, and circuit levels, demonstrating biological authenticity. Accordingly, ablation studies on the motorSRNN suggest that long-term stable feedback synapses contribute to the training-induced cultivation of cosine tuning in the motor cortex. Beyond these findings and predictions, we offer a new framework for building authentic models of neural computation.
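As a rough illustration of the cosine tuning referred to above, the following sketch fits the standard directional tuning model r(theta) = b0 + m*cos(theta - theta_pref) to a unit's firing rates by least squares. This is a generic quantification of cosine tuning; the function name and the synthetic data are assumptions, and it is not the paper's exact analysis of motorSRNN units or monkey recordings.

import numpy as np

def fit_cosine_tuning(directions_rad, firing_rates):
    # Least-squares fit of r(theta) = b0 + a*cos(theta) + b*sin(theta),
    # which is equivalent to b0 + m*cos(theta - theta_pref).
    X = np.column_stack([np.ones_like(directions_rad),
                         np.cos(directions_rad),
                         np.sin(directions_rad)])
    coef, *_ = np.linalg.lstsq(X, firing_rates, rcond=None)
    b0, a, b = coef
    pred = X @ coef
    ss_res = np.sum((firing_rates - pred) ** 2)
    ss_tot = np.sum((firing_rates - np.mean(firing_rates)) ** 2)
    r2 = 1.0 - ss_res / ss_tot if ss_tot > 0 else 0.0
    # Preferred direction, modulation depth, and goodness of fit;
    # a high R^2 marks a strongly cosine-tuned unit.
    return np.arctan2(b, a), np.hypot(a, b), r2

# Synthetic unit tuned to 45 degrees, for illustration only.
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
rates = 10.0 + 5.0 * np.cos(theta - np.pi / 4) \
        + np.random.default_rng(0).normal(0.0, 0.5, size=8)
pd, depth, r2 = fit_cosine_tuning(theta, rates)
print(f"preferred direction {np.degrees(pd):.1f} deg, depth {depth:.2f}, R^2 {r2:.2f}")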