    Simulation of superresolution holography for optical tweezers

    Optical tweezers manipulate microscopic particles using foci of light beams, so their performance is limited by diffraction. Using computer simulations of a model system, we investigate the application of superresolution holography to two-dimensional (2D) light shaping in optical tweezers, which can beat the diffraction limit. We use the direct-search and Gerchberg algorithms to shape the center of a light beam into one or two bright spots; we do not constrain the remainder of the beam. We demonstrate that superresolution algorithms can significantly improve the normalized stiffness of an optical trap and the minimum separation at which neighboring traps can be resolved. We also test whether such algorithms can be used interactively, as is desirable in optical tweezers.
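
    A minimal sketch of the Gerchberg-type loop the abstract describes, assuming a phase-only hologram and a square central signal window: the target amplitude is enforced only inside the window, and the field outside is left unconstrained, which is what buys resolution beyond the diffraction limit. Function and parameter names are illustrative, not the authors' code.

```python
import numpy as np

def gerchberg_superresolution(target, signal_mask, n_iter=200, seed=0):
    """Design a phase-only hologram with a Gerchberg-type loop.

    target      -- desired focal-plane amplitude (2D array)
    signal_mask -- boolean mask marking the central signal window where
                   the target is enforced; the field outside is left
                   unconstrained
    """
    rng = np.random.default_rng(seed)
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, target.shape))
    for _ in range(n_iter):
        focal = np.fft.fft2(field)
        # Keep the computed phase, but impose the target amplitude
        # only inside the signal window.
        focal[signal_mask] = target[signal_mask] * np.exp(
            1j * np.angle(focal[signal_mask]))
        # Back to the hologram plane; keep phase only (phase-only SLM).
        field = np.exp(1j * np.angle(np.fft.ifft2(focal)))
    return np.angle(field)

# Example: two spots closer together than a diffraction-limited focus,
# enforced only inside a 32x32 central window.
N = 256
target = np.zeros((N, N))
target[N // 2, N // 2 - 2] = target[N // 2, N // 2 + 2] = 1.0
mask = np.zeros((N, N), dtype=bool)
mask[N // 2 - 16:N // 2 + 16, N // 2 - 16:N // 2 + 16] = True
phase_pattern = gerchberg_superresolution(target, mask)
```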

    Depth Superresolution using Motion Adaptive Regularization

    The spatial resolution of depth sensors is often significantly lower than that of conventional optical cameras. Recent work has explored the idea of improving depth resolution by using a higher-resolution intensity image as side information. In this paper, we demonstrate that further incorporating temporal information from video can significantly improve the results. In particular, we propose a novel approach that improves depth resolution by exploiting the space-time redundancy in the depth and intensity channels using motion-adaptive low-rank regularization. Experiments confirm that the proposed approach substantially improves the quality of the estimated high-resolution depth. Our approach can serve as a first component in vision systems that rely on high-resolution depth information.
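
    The core regularizer lends itself to a short sketch. Assuming patch trajectories are already available (e.g., from optical flow on the intensity video), one pass of motion-adaptive low-rank regularization can be written as singular value thresholding on each trajectory's patch stack. All names and the threshold value are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def svt(patch_stack, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm, which promotes low rank across a patch stack."""
    U, s, Vt = np.linalg.svd(patch_stack, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def low_rank_depth_step(depth_frames, trajectories, tau=0.1, psize=8):
    """One regularization pass: for each motion trajectory, stack the
    depth patches it links across frames (one vectorized patch per
    column) and shrink the singular values. Patches that follow true
    motion are nearly identical, so the stack is close to low rank and
    shrinkage suppresses noise without blurring moving objects.

    trajectories -- list of [(frame_idx, row, col), ...] patch tracks,
                    assumed to come from optical flow on the intensity
    """
    out = depth_frames.copy()
    for track in trajectories:
        stack = np.stack([
            depth_frames[t, r:r + psize, c:c + psize].ravel()
            for t, r, c in track], axis=1)
        stack = svt(stack, tau)
        for k, (t, r, c) in enumerate(track):
            out[t, r:r + psize, c:c + psize] = stack[:, k].reshape(psize, psize)
    return out
```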

    A Compressive Multi-Mode Superresolution Display

    Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previous research has shown how to improve the dynamic range of displays and to facilitate high-quality light-field, or glasses-free 3D, image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high-dynamic-range (HDR) modes as well as a new superresolution mode. The proposed hardware consists of readily available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high-resolution image. Comment: Technical report.
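
    The splitting idea can be illustrated with a deliberately simplified model: two low-resolution multiplicative layers whose pixel grids are offset, fit to a high-resolution target by alternating least squares. This is a generic cascaded-display sketch under assumed hardware (nearest-neighbour pixels, a one-pixel grid offset), not the paper's splitting algorithm.

```python
import numpy as np

def upsample(layer, f):
    """Nearest-neighbour upsampling: each display pixel covers an
    f-by-f block of target pixels."""
    return np.repeat(np.repeat(layer, f, axis=0), f, axis=1)

def block_lsq(tgt, fixed, h, w, f):
    """Closed-form least-squares update of one layer, per low-res
    pixel, with the other (upsampled) layer held fixed."""
    num = (tgt * fixed).reshape(h, f, w, f).sum(axis=(1, 3))
    den = (fixed * fixed).reshape(h, f, w, f).sum(axis=(1, 3)) + 1e-9
    return np.clip(num / den, 0.0, 1.0)

def split_layers(target, f=2, shift=1, n_iter=50):
    """Split a high-res target into two low-res multiplicative layers
    whose grids are offset by `shift` target pixels; the offset makes
    the effective product grid finer than either layer alone."""
    H, W = target.shape
    h, w = H // f, W // f
    A = np.ones((h, w))
    B = np.ones((h, w))
    for _ in range(n_iter):
        A = block_lsq(target,
                      np.roll(upsample(B, f), shift, axis=(0, 1)), h, w, f)
        # Update B in its own (shifted) frame of reference.
        B = block_lsq(np.roll(target, -shift, axis=(0, 1)),
                      np.roll(upsample(A, f), -shift, axis=(0, 1)), h, w, f)
    return A, B
```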

    Geometry-Aware Neighborhood Search for Learning Local Models for Image Reconstruction

    Local learning of sparse image models has proven very effective for solving inverse problems in many computer vision applications. To learn such models, the data samples are often clustered using the K-means algorithm with the Euclidean distance as a dissimilarity metric. However, the Euclidean distance may not always be a good dissimilarity measure for comparing data samples lying on a manifold. In this paper, we propose two algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, taking into account the underlying geometry of the data. The first algorithm, called Adaptive Geometry-driven Nearest Neighbor search (AGNN), is an adaptive scheme which can be seen as an out-of-sample extension of the replicator graph clustering method for local model learning. The second method, called Geometry-driven Overlapping Clusters (GOC), is a less complex, nonadaptive alternative for training-subset selection. The proposed AGNN and GOC methods are evaluated in image super-resolution, deblurring, and denoising applications and shown to outperform spectral clustering, soft clustering, and geodesic-distance-based subset selection in most settings. Comment: 15 pages, 10 figures and 5 tables.
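
    For intuition, the replicator-dynamics principle that AGNN extends can be sketched as follows: run replicator dynamics on a Gaussian affinity graph over the training samples plus the query, with the initial weights biased toward the query, and keep the top-weighted training samples as the local subset. The kernel width, query bias, and iteration count are illustrative assumptions; this is not the AGNN or GOC algorithm itself.

```python
import numpy as np

def replicator_subset(train, query, sigma=1.0, n_iter=200, k_local=50):
    """Manifold-aware training-subset selection via replicator dynamics
    on a Gaussian affinity graph. Biasing the initial weights toward
    the query node pulls out the cluster the query belongs to."""
    X = np.vstack([train, query[None, :]])
    n = len(X)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-D2 / (2.0 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    x = np.full(n, 1.0)
    x[-1] = 10.0                    # bias toward the query node
    x /= x.sum()
    for _ in range(n_iter):
        Ax = A @ x
        x = x * Ax / (x @ Ax)       # replicator dynamics update
    # Top-weighted training samples (the query itself is excluded).
    return np.argsort(x[:-1])[::-1][:k_local]
```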

    The Unreasonable Effectiveness of Deep Features as a Perceptual Metric

    While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions that fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on ImageNet classification are remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new dataset of human perceptual similarity judgments. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins on our dataset. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations. Comment: Accepted to CVPR 2018; code and data available at https://www.github.com/richzhang/PerceptualSimilarity.
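
    In its unweighted "baseline" form, the metric the paper evaluates reduces to a few lines: compare unit-normalized deep features at several layers and average the squared differences over space. The sketch below uses VGG-16 from torchvision; the released LPIPS metric additionally applies learned per-channel linear weights, which are omitted here.

```python
import torch
import torchvision.models as models

# Indices of the last ReLU in each VGG-16 conv block
# (relu1_2, relu2_2, relu3_3, relu4_3, relu5_3).
_LAYERS = (3, 8, 15, 22, 29)

vgg = models.vgg16(weights="IMAGENET1K_V1").features.eval()

def deep_feature_distance(x, y):
    """Unweighted deep-feature distance between two ImageNet-normalized
    image batches of shape (N, 3, H, W): unit-normalize the feature
    vector at every spatial position, square the differences, average
    over space, and sum over the chosen layers."""
    d = 0.0
    fx, fy = x, y
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            fx, fy = layer(fx), layer(fy)
            if i in _LAYERS:
                nx = fx / (fx.norm(dim=1, keepdim=True) + 1e-10)
                ny = fy / (fy.norm(dim=1, keepdim=True) + 1e-10)
                d = d + ((nx - ny) ** 2).sum(dim=1).mean(dim=(1, 2))
            if i == _LAYERS[-1]:
                break
    return d  # shape (N,); larger means perceptually more different
```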

    Multiple feature-enhanced SAR imaging using sparsity in combined dictionaries

    Nonquadratic regularization-based image formation is a recently proposed framework for feature-enhanced radar imaging. Specific image formation techniques in this framework have so far focused on enhancing one type of feature, such as strong point scatterers or smooth regions. However, many scenes contain a number of such feature types. We develop an image formation technique that simultaneously enhances multiple types of features by posing the problem as one of sparse representation over combined dictionaries. The method is based on a sparse representation of the magnitude of the scattered complex-valued field over a combined dictionary built from sub-dictionaries associated with the different feature types. The multiple-feature-enhanced reconstructed image is then obtained by solving a joint optimization problem over the combined representation of the magnitude and the phase of the underlying field reflectivities.
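
    The combined-dictionary idea can be sketched with a generic l1 solver. The paper jointly optimizes the magnitude and phase of the complex field; the sketch below handles only the real magnitude part, stacking a point-scatterer dictionary and a smooth-region dictionary and solving the resulting lasso problem with ISTA. Dictionary names and the regularization weight are assumptions.

```python
import numpy as np

def ista_combined(y, D_point, D_smooth, lam=0.1, n_iter=300):
    """Sparse representation of a magnitude image over a combined
    dictionary: solve min_x 0.5*||y - [D_p D_s] x||^2 + lam*||x||_1
    with ISTA. Coefficients on D_p capture point scatterers, those on
    D_s capture smooth regions; their sum reconstructs the scene."""
    D = np.hstack([D_point, D_smooth])    # combined dictionary
    L = np.linalg.norm(D, 2) ** 2         # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        z = x - grad / L
        # Soft thresholding: proximal step for the l1 penalty.
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    k = D_point.shape[1]
    # Return the point-scatterer and smooth-region components separately.
    return D_point @ x[:k], D_smooth @ x[k:]
```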