
    Ranking with large margin principle: Two approaches

    We discuss the problem of ranking instances with the use of a “large margin” principle. We introduce two main approaches: the first is the “fixed margin” policy, in which the margin between the closest neighboring classes is maximized; this turns out to be a direct generalization of SVM to ranking learning. The second approach allows for k−1 different margins, where the sum of margins is maximized. This approach is shown to reduce to ν-SVM when the number of classes k = 2. Both approaches are optimal in size of 2l, where l is the total number of training examples. Experiments performed on visual classification and “collaborative filtering” show that both approaches outperform existing ordinal regression algorithms applied for ranking and multi-class SVM applied to general multi-class classification.
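
    As a rough illustration of the “fixed margin” policy (a sketch in the standard ordinal-regression form; slack terms are omitted and the notation k, b_j, x_i is ours, not quoted from the abstract), the problem can be written as a quadratic program with a shared direction w and ordered thresholds:

        % Fixed-margin ranking as a QP (sketch; slacks omitted).
        % Each example x_i with rank y_i must lie between its two
        % thresholds with unit margin.
        \begin{aligned}
        \min_{w,\,b}\quad & \tfrac{1}{2}\|w\|^{2} \\
        \text{s.t.}\quad & w \cdot x_i \le b_{y_i} - 1 \quad \text{for all } i \text{ with } y_i < k, \\
        & w \cdot x_i \ge b_{y_i - 1} + 1 \quad \text{for all } i \text{ with } y_i > 1, \\
        & b_1 \le b_2 \le \dots \le b_{k-1}.
        \end{aligned}

    Maximizing the fixed margin between neighboring classes then corresponds to minimizing ‖w‖ under these ordering constraints, directly mirroring the SVM objective.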

    Non-invasive and noise-robust light focusing using confocal wavefront shaping

    Wavefront shaping is a promising approach for imaging fluorescent targets deep inside scattering tissue despite strong aberrations. It enables focusing an incoming illumination into a single spot inside tissue, as well as correcting the outgoing light scattered from the tissue, by modulating the incoming and/or outgoing wavefronts. Previously, wavefront shaping modulations have been successfully estimated using feedback from strong fluorescent beads, which have been manually added to a sample. Ideally, however, such feedback should be provided by the fluorescent components of the tissue itself, whose emission is orders of magnitude weaker than that provided by beads. When a low number of photons is spread over multiple sensor pixels, the image is highly susceptible to noise, and the feedback signal required by previous algorithms cannot be detected. In this work, we suggest a wavefront shaping approach that works with a confocal modulation of both the illumination and imaging arms. Since the aberrations are corrected in the optics before the detector, the low photon budget can be directed into a single sensor spot and detected with high SNR. We derive a score function for modulation evaluation from mathematical principles, and successfully use it to image EGFP-labeled neurons, despite scattering through thick tissue.
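
    A toy, runnable sketch of the feedback idea this builds on (the classic sequential segment-by-segment strategy with a simulated medium; this is not the paper's confocal score function or setup):

        import numpy as np

        # Simulated scattering medium: a random complex transmission vector t.
        # The "detector" reads the intensity at the target focus,
        # |sum_j t_j * exp(i * phi_j)|^2, which the phase pattern maximizes.
        rng = np.random.default_rng(0)
        N_SEGMENTS = 64
        t = rng.normal(size=N_SEGMENTS) + 1j * rng.normal(size=N_SEGMENTS)

        def focus_intensity(phases):
            """Feedback signal: simulated intensity at the focus."""
            return np.abs(np.sum(t * np.exp(1j * phases))) ** 2

        def optimize_modulation(phase_steps=8):
            """Optimize one SLM segment at a time against the feedback."""
            pattern = np.zeros(N_SEGMENTS)
            candidates = np.linspace(0, 2 * np.pi, phase_steps, endpoint=False)
            for seg in range(N_SEGMENTS):
                scores = [focus_intensity(np.where(np.arange(N_SEGMENTS) == seg,
                                                   phi, pattern))
                          for phi in candidates]
                pattern[seg] = candidates[int(np.argmax(scores))]
            return pattern

        pattern = optimize_modulation()
        print(focus_intensity(np.zeros(N_SEGMENTS)), focus_intensity(pattern))

    The abstract's point is that with weak fluorescent feedback, each such per-modulation score must be measured at high SNR, which the confocal design achieves by concentrating the photon budget on a single detector spot.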

    4D Frequency Analysis of Computational Cameras for Depth of Field Extension

    Depth of field (DOF), the range of scene depths that appear sharp in a photograph, poses a fundamental tradeoff in photography: wide apertures are important to reduce imaging noise, but they also increase defocus blur. Recent advances in computational imaging modify the acquisition process to extend the DOF through deconvolution. Because deconvolution quality is a tight function of the frequency power spectrum of the defocus kernel, designs with high spectra are desirable. In this paper we study how to design effective extended-DOF systems, and show an upper bound on the maximal power spectrum that can be achieved. We analyze defocus kernels in the 4D light field space and show that in the frequency domain, only a low-dimensional 3D manifold contributes to focus. Thus, to maximize the defocus spectrum, imaging systems should concentrate their limited energy on this manifold. We review several computational imaging systems and show that they either spend energy outside the focal manifold or do not achieve a high spectrum over the DOF. Guided by this analysis we introduce the lattice-focal lens, which concentrates energy at the low-dimensional focal manifold and achieves a higher power spectrum than previous designs. We have built a prototype lattice-focal lens and present extended depth of field results.
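
    To make the link between deconvolution quality and the kernel's power spectrum concrete, here is a minimal 1D Wiener-deconvolution sketch (a textbook estimator with made-up numbers, not the paper's code): frequencies where |K(f)|^2 is small are dominated by noise and cannot be recovered, which is why extended-DOF designs want a high spectrum across the depth range.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 256
        signal = np.cumsum(rng.normal(size=n))        # smooth-ish test signal
        kernel = np.zeros(n); kernel[:9] = 1.0 / 9.0  # box defocus blur
        noise_var = 1e-2

        K = np.fft.fft(kernel)
        blurred = np.real(np.fft.ifft(np.fft.fft(signal) * K))
        blurred += rng.normal(scale=np.sqrt(noise_var), size=n)

        # Wiener filter: conj(K) / (|K|^2 + noise-to-signal power ratio).
        S = np.abs(np.fft.fft(signal)) ** 2 / n       # (oracle) signal power
        wiener = np.conj(K) / (np.abs(K) ** 2 + noise_var / (S + 1e-12))
        recovered = np.real(np.fft.ifft(np.fft.fft(blurred) * wiener))
        print("RMSE:", np.sqrt(np.mean((recovered - signal) ** 2)))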

    Understanding camera trade-offs through a Bayesian analysis of light field projections - A revision

    Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras do not directly capture a traditional image anymore, but instead require the joint reconstruction of structure and image information. For example, recent coded aperture designs have been optimized to facilitate the joint reconstruction of depth and intensity. The breadth of imaging designs requires new tools to understand the tradeoffs implied by different strategies. This paper introduces a unified framework for analyzing computational imaging approaches. Each sensor element is modeled as an inner product over the 4D light field. The imaging task is then posed as Bayesian inference: given the observed noisy light field projections and a new prior on light field signals, estimate the original light field. Under common imaging conditions, we compare the performance of various camera designs using 2D light field simulations. This framework allows us to better understand the tradeoffs of each camera type and analyze their limitations.
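
    A minimal numerical sketch of the measurement model just described (each sensor element as an inner product, i.e. a row of a projection matrix T, followed by MAP inference under a Gaussian prior; T, the prior, and all sizes are toy stand-ins rather than any camera design from the paper):

        import numpy as np

        rng = np.random.default_rng(2)
        d, m = 32, 12                  # light field dim., number of sensor elements
        T = rng.normal(size=(m, d))    # each row: one sensor element's inner product
        noise_var = 0.01

        # Gaussian smoothness prior on the (1D toy) light field x.
        L = np.diff(np.eye(d), axis=0)             # first-difference operator
        prior_prec = 0.1 * np.eye(d) + 4.0 * L.T @ L

        x_true = rng.multivariate_normal(np.zeros(d), np.linalg.inv(prior_prec))
        y = T @ x_true + rng.normal(scale=np.sqrt(noise_var), size=m)

        # MAP / posterior-mean estimate: (T'T/s^2 + prior_prec)^{-1} T'y / s^2.
        A = T.T @ T / noise_var + prior_prec
        x_map = np.linalg.solve(A, T.T @ y / noise_var)
        print(np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true))

    Comparing camera designs then amounts to comparing expected reconstruction error across different choices of T under the same prior and noise level, which is the comparison the paper carries out in 2D light field simulations.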

    Passive Micron-scale Time-of-Flight with Sunlight Interferometry

    We introduce an interferometric technique for passive time-of-flight imaging and depth sensing at micrometer axial resolutions. Our technique uses a full-field Michelson interferometer, modified to use sunlight as the only light source. The large spectral bandwidth of sunlight makes it possible to acquire micrometer-resolution time-resolved scene responses, through a simple axial scanning operation. Additionally, the angular bandwidth of sunlight makes it possible to capture time-of-flight measurements insensitive to indirect illumination effects, such as interreflections and subsurface scattering. We build an experimental prototype that we operate outdoors, under direct sunlight, and in adverse environmental conditions such as mechanical vibrations and vehicle traffic. We use this prototype to demonstrate, for the first time, passive imaging capabilities such as micrometer-scale depth sensing robust to indirect illumination, direct-only imaging, and imaging through diffusers.
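
    A hedged sketch of the low-coherence principle the technique relies on (all numbers illustrative; this is the generic white-light interferometry model, not the prototype's processing): fringes only appear when the reference arm matches the scene depth to within the source's coherence length, so scanning the reference mirror and locating the fringe-envelope peak reads out depth at micrometer scales.

        import numpy as np

        coherence_len_um = 1.0            # set by the source's spectral bandwidth
        wavelength_um = 0.55
        depth_um = 37.2                   # unknown scene depth (ground truth)

        scan_um = np.linspace(0, 100, 20001)   # reference mirror positions
        opd = 2 * (scan_um - depth_um)         # optical path difference
        envelope = np.exp(-(opd / coherence_len_um) ** 2)
        intensity = 1.0 + envelope * np.cos(2 * np.pi * opd / wavelength_um)

        est = scan_um[np.argmax(np.abs(intensity - 1.0))]
        print(f"estimated depth: {est:.3f} um (true: {depth_um} um)")

    Sunlight's large spectral bandwidth makes the coherence length, and hence the axial resolution of this envelope, micrometer-scale, which is the property the paper exploits.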

    High Spatial Resolution BRDFs with Metallic Powders Using Wave Optics Analysis

    This manuscript completes the analysis of our SIGGRAPH 2013 paper "Fabricating BRDFs at High Spatial Resolution Using Wave Optics", in which photolithography fabrication was used for manipulating reflectance effects. While photolithography allows for precise reflectance control, it is costly. Here we explore an inexpensive alternative to micro-fabrication, in the form of metallic powders. Such powders are readily available at a variety of particle sizes and morphologies. Using an analysis similar to the micro-fabrication paper, we provide guidelines for the relation between the particles' shape and size and the reflectance functions they can produce.
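
    The wave-optics intuition behind such guidelines (a standard diffraction argument, not a formula quoted from the manuscript) is that a roughly flat particle facet of extent d reflects light into a specular lobe of angular width on the order of

        % Diffraction-limited lobe width from a facet of size d (sketch):
        \Delta\theta \;\approx\; \frac{\lambda}{d}

    so larger particles yield sharper, more mirror-like highlights, while finer powders spread light into wider, more diffuse lobes.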

    Understanding camera trade-offs through a Bayesian analysis of light field projections

    Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras do not directly capture a traditional image anymore, but instead require the joint reconstruction of structure and image information. For example, recent coded aperture designs have been optimized to facilitate the joint reconstruction of depth and intensity. The breadth of imaging designs requires new tools to understand the tradeoffs implied by different strategies. This paper introduces a unified framework for analyzing computational imaging approaches. Each sensor element is modeled as an inner product over the 4D light field. The imaging task is then posed as Bayesian inference: given the observed noisy light field projections and a new prior on light field signals, estimate the original light field. Under common imaging conditions, we compare the performance of various camera designs using 2D light field simulations. This framework allows us to better understand the tradeoffs of each camera type and analyze their limitations.

    Understanding and evaluating blind deconvolution algorithms

    Blind deconvolution is the recovery of a sharp version of a blurred image when the blur kernel is unknown. Recent algorithms have afforded dramatic progress, yet many aspects of the problem remain challenging and hard to understand. The goal of this paper is to analyze and evaluate recent blind deconvolution algorithms both theoretically and experimentally. We explain the previously reported failure of the naive MAP approach by demonstrating that it mostly favors no-blur explanations. On the other hand, we show that since the kernel size is often smaller than the image size, a MAP estimation of the kernel alone can be well constrained and can accurately recover the true blur. The plethora of recent deconvolution techniques makes an experimental evaluation on ground-truth data important. We have collected blur data with ground truth and compared recent algorithms under equal settings. Additionally, our data demonstrates that the shift-invariant blur assumption made by most algorithms is often violated.
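
    The distinction at the heart of this analysis can be written compactly (standard notation: y the blurred image, x the sharp image, k the kernel): the naive approach jointly maximizes the posterior over both unknowns, whereas estimating the kernel alone marginalizes over all sharp images,

        % Naive joint MAP; tends to favor the no-blur solution (k a delta):
        (\hat{x}, \hat{k}) = \arg\max_{x,\,k} \; p(x, k \mid y)

        % MAP estimation of the kernel alone, marginalizing over x:
        \hat{k} = \arg\max_{k} \; p(k \mid y) = \arg\max_{k} \int p(x, k \mid y) \, dx

    and because k has far fewer unknowns than x, the second problem is much better constrained by the data.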