Far-field subwavelength acoustic imaging by deep learning
Seeing and recognizing an object whose size is much smaller than the
illumination wavelength is a challenging task for an observer placed in the far
field, due to the diffraction limit. Recent advances in near- and far-field
microscopy have offered several ways to overcome this limitation; however, they
often use invasive markers and require intricate equipment with complicated
image post-processing. On the other hand, a simple marker-free solution for
high-resolution imaging may be found by exploiting resonant metamaterial lenses
that convert the subwavelength image information contained in the near field
of the object into propagating field components that reach the far
field. Unfortunately, resonant metalenses are inevitably sensitive to
absorption losses, a sensitivity that has so far largely hindered their
practical application. Here, we solve this vexing problem and show that this limitation
can be turned into an advantage when metalenses are combined with deep learning
techniques. We demonstrate that combining deep learning with lossy metalenses
enables the recognition and imaging of deeply subwavelength features directly
from the far field. Our acoustic learning experiment shows that image features
thirty times smaller than the wavelength of sound can be successfully
reconstructed and recognized in the far field, a capability crucially enabled
by the presence of absorption. We envision applications in acoustic image
analysis, feature detection, and object classification, as well as novel
noninvasive acoustic sensing tools in biomedical applications.
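To make the idea concrete, here is a minimal, purely illustrative sketch of how such a decoder could be trained: the lossy resonant metalens is stood in for by a fixed Lorentzian transfer matrix, random sparse "objects" serve as training data, and a small fully connected network learns to invert far-field intensity measurements. The forward model and every name and parameter (n_pixels, n_sensors, gamma) are assumptions invented for illustration, not the authors' experimental setup.

```python
# Illustrative toy model, NOT the authors' setup: a lossy resonant metalens
# is approximated by a fixed Lorentzian transfer matrix, and a small network
# learns to reconstruct subwavelength objects from far-field intensities.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
n_pixels, n_sensors = 64, 16     # assumed object / far-field sampling sizes

# Assumed forward model: Lorentzian resonances broadened by absorption.
# `gamma` is the absorption (loss) linewidth of the resonances.
freqs = np.linspace(0.5, 1.5, n_sensors)[:, None]
modes = np.linspace(0.5, 1.5, n_pixels)[None, :]
gamma = 0.05
H = 1.0 / (modes - freqs + 1j * gamma)        # shape (n_sensors, n_pixels)

def measure(x):
    """Far-field intensities for a batch of objects x: (batch, n_pixels)."""
    return np.abs(x @ H.T) ** 2

# Random sparse binary "objects" stand in for subwavelength images.
X = (rng.random((4096, n_pixels)) < 0.1).astype(np.float32)
Y = measure(X).astype(np.float32)

# Small fully connected decoder: far-field intensities -> object estimate.
net = nn.Sequential(nn.Linear(n_sensors, 128), nn.ReLU(),
                    nn.Linear(128, n_pixels), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = nn.BCELoss()

Xt = torch.from_numpy(X)
Yt = torch.from_numpy(Y)
Yt = Yt / Yt.std()               # normalize measurement scale
for step in range(2000):
    opt.zero_grad()
    err = bce(net(Yt), Xt)       # pixel-wise reconstruction loss
    err.backward()
    opt.step()
```

In this sketch, gamma only controls how strongly the resonances overlap; the point is to show the measurement-to-image decoding step, not the physics of the actual experiment.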
Deep Learning Development Environment in Virtual Reality
Virtual reality (VR) offers immersive visualization and intuitive
interaction. We leverage VR to enable any biomedical professional to deploy a
deep learning (DL) model for image classification. While DL models can be
powerful tools for data analysis, they are also challenging to understand and
develop. To make deep learning more accessible and intuitive, we have built a
virtual reality-based DL development environment. Within our environment, the
user can move tangible objects to construct a neural network using only their
hands. Our software automatically translates these configurations into a
trainable model and then reports its resulting accuracy on a test dataset in
real time. Furthermore, we have enriched the virtual objects with
visualizations of the model's components so that users can gain insight into
the DL models they are developing. With this approach, we bridge the
gap between professionals in different fields of expertise while offering a
novel perspective for model analysis and data interaction. We further suggest
that development and visualization techniques in deep learning can benefit
from integrating virtual reality.
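The configuration-to-model translation step can be pictured with a small sketch. The scene format, block names, and defaults below are hypothetical stand-ins invented for illustration (the abstract does not specify the internal representation): an ordered list of layer blocks placed in VR is mapped onto a trainable PyTorch model.

```python
# Hypothetical sketch of translating a VR block layout into a trainable
# model. The `scene_blocks` schema is an illustrative assumption, not the
# paper's actual scene format.
import torch.nn as nn

# Example scene description as it might arrive from the VR front end.
scene_blocks = [
    {"type": "conv",  "out_channels": 16, "kernel": 3},
    {"type": "relu"},
    {"type": "pool",  "kernel": 2},
    {"type": "flatten"},
    {"type": "dense", "out_features": 10},
]

def build_model(blocks, in_channels=1, in_size=28):
    """Map an ordered list of VR layer blocks onto an nn.Sequential,
    tracking channel count and spatial size for this block vocabulary."""
    layers, channels, size = [], in_channels, in_size
    for b in blocks:
        if b["type"] == "conv":
            layers.append(nn.Conv2d(channels, b["out_channels"],
                                    b["kernel"], padding=b["kernel"] // 2))
            channels = b["out_channels"]
        elif b["type"] == "relu":
            layers.append(nn.ReLU())
        elif b["type"] == "pool":
            layers.append(nn.MaxPool2d(b["kernel"]))
            size //= b["kernel"]
        elif b["type"] == "flatten":
            layers.append(nn.Flatten())
        elif b["type"] == "dense":
            layers.append(nn.Linear(channels * size * size,
                                    b["out_features"]))
    return nn.Sequential(*layers)

model = build_model(scene_blocks)   # ready for a standard training loop
```

In the described system, the resulting model would then be trained and its test accuracy streamed back into the VR scene; here, build_model just yields a model ready for an ordinary training loop.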