6 research outputs found

    Optimal stopping times for estimating Bernoulli parameters with applications to active imaging

    We address the problem of estimating the parameter of a Bernoulli process. This arises in many applications, including photon-efficient active imaging, where each illumination period is regarded as a single Bernoulli trial. We introduce a framework within which to minimize the mean-squared error (MSE) subject to an upper bound on the mean number of trials. This optimization has several simple and intuitive properties when the Bernoulli parameter has a beta prior. In addition, by exploiting typical spatial correlation using total variation regularization, we extend the developed framework to a rectangular array of Bernoulli processes representing the pixels in a natural scene. In simulations inspired by realistic active imaging scenarios, we demonstrate a 4.26 dB reduction in MSE due to the adaptive acquisition, as an average over many independent experiments and invariant to a factor of 3.4 variation in trial budget.
    Accepted manuscript
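    The beta-Bernoulli machinery behind this framework is straightforward to prototype. The sketch below is a minimal illustration, not the optimal stopping rule derived in the paper: it updates a Beta posterior after each Bernoulli trial for a single pixel and stops once the posterior variance falls below a threshold or a trial budget is exhausted. The prior, threshold, and budget values are assumptions chosen only for the example.

```python
import numpy as np

def adaptive_bernoulli_estimate(p_true, alpha0=1.0, beta0=1.0,
                                var_stop=1e-3, max_trials=500, rng=None):
    """Estimate a Bernoulli parameter under a Beta(alpha0, beta0) prior,
    stopping once the posterior variance drops below var_stop.

    Illustrative stopping rule only; the threshold and budget are
    arbitrary assumptions, not the paper's optimized quantities."""
    rng = np.random.default_rng() if rng is None else rng
    alpha, beta = alpha0, beta0
    for n in range(1, max_trials + 1):
        x = rng.random() < p_true        # one trial, e.g. one illumination period
        alpha += x
        beta += 1 - x
        mean = alpha / (alpha + beta)    # posterior mean (MMSE estimate)
        var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
        if var < var_stop:
            break
    return mean, n                       # estimate and number of trials consumed

# Example: compare how many trials a mid-range and a near-extreme parameter
# consume before the same posterior-variance threshold is reached.
for p in (0.5, 0.05):
    estimate, trials = adaptive_bernoulli_estimate(p_true=p)
    print(f"p_true = {p:.2f}: estimate = {estimate:.3f}, trials used = {trials}")
```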

    Learning-based Methods for Occluder-aided Non-Line-of-Sight Imaging

    Imaging scenes that are not in our direct line of sight, referred to as non-line-of-sight (NLOS) imaging, has recently gained considerable attention from the computational imaging community. With a diverse set of potential applications across several domains, NLOS imaging is an emerging topic with many unanswered questions despite the progress made over the last decade. In this thesis, we aim to answer some of these questions by focusing on a popular NLOS imaging setting, occluder-aided imaging, which exploits occluding structures in the scene to extract information about the hidden scene. We first study scene classification: identifying individuals from the shadows cast by occluding objects onto a diffuse surface. In particular, we develop a learning-based method that discovers hidden cues in the shadows and relies on building synthetic scenes composed of 3D face models obtained from a single photograph of each identity. We transfer what we learn from the synthetic data to real data using fully unsupervised domain adaptation and report classification accuracies above 75% on a binary classification task in a scene with unknown geometry and occluding objects. Next, we focus on scene estimation, which aims to recover an image of the hidden scene from NLOS measurements. We present a learning-based framework that exploits deep generative models and demonstrate its promise via simulations.
    S.M.
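    The role of the occluder in this setting can be illustrated with a toy forward model: each measurement on the visible wall is a weighted sum of hidden-scene intensities, and the weight is zeroed wherever the occluder blocks the light path. The sketch below is a simplified 1D illustration under invented geometry and falloff, not the thesis framework or its learning-based reconstruction; it only shows how occlusion imprints structure on the light-transport matrix, which is what carries information about the hidden scene.

```python
import numpy as np

def transport_matrix(n_scene=64, n_meas=64, occluder=None):
    """Toy 1D light-transport matrix A with measurements y = A @ x, where x
    holds hidden-scene intensities and y the intensity seen on a visible wall.

    Geometry and falloff are fabricated for illustration. `occluder` is an
    optional (position, half_width) pair in [0, 1]; any light path crossing
    that interval at the occluder plane gets zero weight."""
    scene = np.linspace(0.0, 1.0, n_scene)       # hidden-scene sample positions
    wall = np.linspace(0.0, 1.0, n_meas)         # observed-wall sample positions
    A = np.empty((n_meas, n_scene))
    for i, w in enumerate(wall):
        for j, s in enumerate(scene):
            weight = 1.0 / (1.0 + (w - s) ** 2)  # crude distance falloff
            if occluder is not None:
                pos, half = occluder
                midpoint = 0.5 * (w + s)         # where this path crosses the occluder plane
                if abs(midpoint - pos) < half:
                    weight = 0.0                 # path blocked: the occluder casts a shadow
            A[i, j] = weight
    return A

A_open = transport_matrix()                      # no occluder: smooth kernel
A_occ = transport_matrix(occluder=(0.5, 0.1))    # occluder punches structured zeros into A
print("condition number without occluder:", np.linalg.cond(A_open))
print("condition number with occluder:   ", np.linalg.cond(A_occ))
```

    Comparing the two condition numbers on this toy geometry gives a quick numerical feel for why shadows cast by occluding objects can make the inversion better posed.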

    Identity-Expression Ambiguity in 3D Morphable Face Models
