5 research outputs found
Flash Photography for Data-Driven Hidden Scene Recovery
Vehicles, search-and-rescue personnel, and endoscopes use flashlights to
locate, identify, and view objects in their surroundings. Here we show the
first steps of how all these tasks can be done around corners with consumer
cameras. Recent techniques for NLOS imaging using consumer cameras have not
been able to both localize and identify the hidden object. We introduce a
method that couples traditional geometric understanding and data-driven
techniques. To avoid the burden of gathering a large real-world dataset, we train the
data-driven models on rendered samples to computationally recover the hidden
scene on real data. The method has three independent operating modes: 1) a
regression output to localize a hidden object in 2D, 2) an identification
output to identify the object type or pose, and 3) a generative network to
reconstruct the hidden scene from a new viewpoint. The method is able to
localize 12 cm wide hidden objects in 2D with 1.7 cm accuracy. The method also
identifies the hidden object class with 87.7% accuracy (compared to 33.3%
random accuracy). This paper also provides an analysis on the distribution of
information that encodes the occluded object in the accessible scene. We show
that, unlike previously thought, the area that extends beyond the corner is
essential for accurate object localization and identification.
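As a toy illustration of the regression mode described above, the sketch below fits a linear least-squares regressor from flattened line-of-sight measurements to a hidden object's 2D position. The linear forward model, array sizes, and all variable names are illustrative assumptions; the paper's actual pipeline uses rendered training samples and a learned model, not this toy setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for rendered training data: each "image" is a flattened
# measurement of the accessible scene that depends linearly on the hidden
# object's 2D position (an assumed forward model, not the paper's renderer).
n_train, n_pix = 200, 64
A = rng.normal(size=(n_pix, 2))                   # toy light-transport operator
b = rng.normal(size=n_pix)                        # ambient term
pos_train = rng.uniform(0, 50, size=(n_train, 2)) # hidden positions in cm
imgs_train = pos_train @ A.T + b

# Fit a linear regressor (least squares) from measurements to 2D position.
X = np.hstack([imgs_train, np.ones((n_train, 1))])  # bias column
W, *_ = np.linalg.lstsq(X, pos_train, rcond=None)

# Localize a new hidden object from its measurement alone.
pos_true = np.array([12.0, 30.0])
img_new = A @ pos_true + b
pos_est = np.concatenate([img_new, [1.0]]) @ W
```

Because the toy forward model is exactly linear and noise-free, the regressor recovers the position essentially exactly; the interesting part of the real method is that a learned model does this from physically rendered, then real, data.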
ExpertMatcher: Automating ML Model Selection for Clients using Hidden Representations
Split Learning has recently been developed as a framework for distributed
computation in which model components are split between the client and
server (Vepakomma et al., 2018b). As Split Learning scales to include many
different model components, there needs to be a method of matching client-side
model components with the best server-side model components. A solution to this
problem was introduced in the ExpertMatcher (Sharma et al., 2019) framework,
which uses autoencoders to match raw data to models. In this work, we propose
an extension of ExpertMatcher, where matching can be performed without the need
to share the client's raw data representation. The technique is applicable to
situations where there are local clients and centralized expert ML models, but
the sharing of raw data is constrained.
Comment: In NeurIPS Workshop on Robust AI in Financial Services: Data, Fairness, Explainability, Trustworthiness, and Privacy, 201
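The core matching step can be pictured as nearest-centroid routing in embedding space. The sketch below is a minimal, assumed version: expert names, the orthogonal toy centroids, and cosine routing are all illustrative, standing in for ExpertMatcher's autoencoder representations.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical server-side registry: one centroid embedding per expert model
# (e.g. a summary of the hidden representations each expert was trained on).
# Orthogonal one-hot centroids keep this toy example deterministic.
dim = 8
expert_centroids = {
    "text_expert":  np.eye(dim)[0],
    "image_expert": np.eye(dim)[1],
    "audio_expert": np.eye(dim)[2],
}

def match_expert(client_embedding, centroids):
    """Route the client to the expert whose centroid is most similar in
    cosine similarity; only the hidden representation crosses the network."""
    return max(centroids, key=lambda name: cosine(client_embedding, centroids[name]))

# The client shares an embedding of its data, never the raw data itself.
client_emb = expert_centroids["image_expert"] + 0.1 * np.eye(dim)[3]
print(match_expert(client_emb, expert_centroids))  # image_expert
```

The privacy point of the extension is that the quantity sent to the server (`client_emb` here) is a hidden representation rather than the client's raw data.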
Deep Shape from Polarization
This paper makes a first attempt to bring the Shape from Polarization (SfP)
problem to the realm of deep learning. The previous state-of-the-art methods
for SfP have been purely physics-based. We see value in these principled
models, and blend these physical models as priors into a neural network
architecture. This proposed approach achieves results that exceed the previous
state-of-the-art on a challenging dataset we introduce. This dataset consists
of polarization images taken over a range of object textures, paints, and
lighting conditions. We report that our proposed method achieves the lowest
test error on each tested condition in our dataset, showing the value of
blending data-driven and physics-driven approaches.
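The physics-based side of SfP rests on standard polarimetry: from images captured at four polarizer angles one can compute the degree and angle of linear polarization, which constrain the surface normal (the azimuth only up to a pi ambiguity). The sketch below computes these standard cues for a single pixel; function and variable names are illustrative, and this is the physical prior, not the paper's network.

```python
import numpy as np

def polarization_cues(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-angle intensities.
    The angle of linear polarization (AoLP) constrains the normal's azimuth
    up to the well-known pi ambiguity that SfP methods must resolve."""
    s0 = i0 + i90                        # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / s0   # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)      # angle of linear polarization (rad)
    return dolp, aolp

# Synthetic check: a pixel with a known polarization state.
rho, phi = 0.4, np.pi / 6                # ground-truth DoLP and AoLP
angles = np.deg2rad([0, 45, 90, 135])
i = 0.5 * (1 + rho * np.cos(2 * (angles - phi)))  # sinusoidal intensity model
dolp, aolp = polarization_cues(*i)
```

Feeding cues like these into a network as priors, rather than learning from raw pixels alone, is the kind of physics/data blend the abstract describes.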
Adaptive Lighting for Data-Driven Non-Line-of-Sight 3D Localization and Object Identification
Non-line-of-sight (NLOS) imaging of objects not visible to either the camera
or illumination source is a challenging task with vital applications including
surveillance and robotics. Recent NLOS reconstruction advances have been
achieved using time-resolved measurements, which require expensive and
specialized detectors and laser sources. In contrast, we propose a data-driven
approach for NLOS 3D localization and object identification requiring only a
conventional camera and projector. To generalize to complex line-of-sight (LOS)
scenes with non-planar surfaces and occlusions, we introduce an adaptive
lighting algorithm. This algorithm, based on radiosity, identifies and
illuminates scene patches in the LOS which most contribute to the NLOS light
paths, and can factor in system power constraints. We achieve an average
object identification accuracy of 87.1% for four classes of objects, and
average localization of the NLOS object's centroid with a mean-squared error
(MSE) of 1.97 cm in the occluded region for real data taken from a hardware
prototype. These results demonstrate the advantage of combining the physics of
light transport with active illumination for data-driven NLOS imaging.
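A minimal sketch of the adaptive-lighting idea: score each LOS patch by a simple radiosity-style form factor toward the hidden region, then greedily illuminate the best patches per watt under a power budget. The patch geometry, costs, and greedy rule are all made-up assumptions standing in for the paper's radiosity-based algorithm.

```python
import math

def form_factor(dist, cos_src, cos_dst):
    """Differential-area radiosity form factor between two small patches."""
    return max(cos_src, 0.0) * max(cos_dst, 0.0) / (math.pi * dist ** 2)

# Hypothetical LOS patches: (name, distance to hidden region,
# cosine at source, cosine at destination, power cost in watts).
patches = [
    ("wall_near", 1.0, 0.9, 0.8, 3.0),
    ("wall_far",  3.0, 0.8, 0.7, 1.0),
    ("floor",     1.5, 0.4, 0.3, 1.0),
    ("shelf",     2.0, 0.6, 0.9, 2.0),
]

def select_patches(patches, power_budget):
    """Greedily illuminate the patches with the best estimated NLOS
    contribution per watt that still fit within the power budget."""
    scored = [(form_factor(d, cs, cd) / p, name, p)
              for name, d, cs, cd, p in patches]
    chosen, used = [], 0.0
    for _, name, power in sorted(scored, reverse=True):
        if used + power <= power_budget:
            chosen.append(name)
            used += power
    return chosen

selection = select_patches(patches, power_budget=4.0)
```

The greedy per-watt rule is only one way to respect a power constraint; the point is that patch selection is driven by a light-transport model rather than uniform illumination.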
Recent Advances in Imaging Around Corners
Seeing around corners, also known as non-line-of-sight (NLOS) imaging, is a
computational method to resolve or recover objects hidden around corners.
Recent advances in imaging around corners have gained significant interest.
This paper reviews different types of existing NLOS imaging techniques and
discusses the challenges that need to be addressed, especially for their
applications outside of a constrained laboratory environment. Our goal is to
introduce this topic to broader research communities as well as provide
insights that would lead to further developments in this research area.