5,102 research outputs found

    Highlighting objects of interest in an image by integrating saliency and depth

    Full text link
    Stereo images have been captured primarily for 3D reconstruction in the past. However, the depth information acquired from stereo can also be used along with saliency to highlight certain objects in a scene. This approach can be used to make still images more interesting to look at, and to highlight objects of interest in the scene. We introduce this novel direction in this paper, and discuss the theoretical framework behind the approach. Even though we use depth from stereo in this work, our approach is applicable to depth data acquired from any sensor modality. Experimental results on both indoor and outdoor scenes demonstrate the benefits of our algorithm.
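
    The abstract does not specify how the two cues are fused, so the following is only a minimal, hypothetical sketch: it multiplies a normalized saliency map with a nearness term derived from depth, then uses the result to composite the original image over a blurred, partially desaturated background. The function name, fusion rule, and parameters are assumptions, not the paper's method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highlight_objects(image, saliency, depth, sigma=7.0, desat=0.5):
    """image: HxWx3 float in [0, 1]; saliency, depth: HxW floats in [0, 1],
    with larger depth meaning farther from the camera."""
    # Hypothetical fusion rule: emphasise pixels that are both salient and
    # near the camera; the paper's actual combination may differ.
    weight = saliency * (1.0 - depth)
    weight = (weight - weight.min()) / (weight.ptp() + 1e-8)

    # De-emphasise the rest of the scene by blurring and partially
    # desaturating it.
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma) for c in range(3)], axis=-1)
    gray = blurred.mean(axis=-1, keepdims=True)
    background = (1.0 - desat) * blurred + desat * gray

    # Composite: keep the original image where the weight is high,
    # and the subdued background elsewhere.
    w = weight[..., None]
    return w * image + (1.0 - w) * background
```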

    Aperture Supervision for Monocular Depth Estimation

    Full text link
    We present a novel method to train machine learning algorithms to estimate scene depths from a single image, by using the information provided by a camera's aperture as supervision. Prior works use a depth sensor's outputs or images of the same scene from alternate viewpoints as supervision, while our method instead uses images from the same viewpoint taken with a varying camera aperture. To enable learning algorithms to use aperture effects as supervision, we introduce two differentiable aperture rendering functions that use the input image and predicted depths to simulate the depth-of-field effects caused by real camera apertures. We train a monocular depth estimation network end-to-end to predict the scene depths that best explain these finite aperture images as defocus-blurred renderings of the input all-in-focus image. Comment: To appear at CVPR 2018 (updated to camera-ready version).
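
    The paper introduces two specific differentiable aperture rendering functions; the sketch below is neither of them, but a simplified, hypothetical differentiable defocus layer in PyTorch that conveys the idea: blur the all-in-focus image by an amount that grows with distance from the focal plane, using a soft assignment over a small bank of Gaussian blur levels so that gradients flow back to the predicted depth. The blur bank, circle-of-confusion model, and all parameter names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(sigma, radius=7):
    # 1D Gaussian, turned into a 2D kernel by an outer product.
    x = torch.arange(-radius, radius + 1, dtype=torch.float32)
    g = torch.exp(-x ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    return g[:, None] * g[None, :]

def render_defocus(image, depth, focus_depth, aperture,
                   sigmas=(0.5, 1.0, 2.0, 4.0, 8.0)):
    """image: Bx3xHxW all-in-focus input; depth: Bx1xHxW predicted depth.
    Returns a defocus-blurred rendering that can be compared against a
    real finite-aperture photo to supervise the depth network."""
    # Circle-of-confusion size grows with |depth - focus| and with aperture.
    coc = aperture * (depth - focus_depth).abs()            # Bx1xHxW

    # Pre-blur the image at a few fixed levels (depthwise convolution).
    blurred = []
    for s in sigmas:
        k = gaussian_kernel(s).to(image.device)[None, None]
        k = k.expand(image.shape[1], 1, k.shape[-2], k.shape[-1])
        blurred.append(F.conv2d(image, k, padding=k.shape[-1] // 2,
                                groups=image.shape[1]))
    stack = torch.stack(blurred, dim=0)                      # SxBx3xHxW

    # Soft weights: each pixel favours the blur level closest to its CoC,
    # keeping the whole rendering differentiable w.r.t. the depth map.
    levels = torch.tensor(sigmas, device=image.device).view(-1, 1, 1, 1, 1)
    w = torch.softmax(-(coc.unsqueeze(0) - levels) ** 2, dim=0)
    return (w * stack).sum(dim=0)
```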

    Iris Recognition: Robust Processing, Synthesis, Performance Evaluation and Applications

    Get PDF
    The popularity of iris biometrics has grown considerably over the past few years. This has resulted in the development of a large number of new iris processing and encoding algorithms. In this dissertation, we discuss the following aspects of the iris recognition problem: iris image acquisition, iris quality, iris segmentation, iris encoding, performance enhancement, and two novel applications. The specific claimed novelties of this dissertation include: (1) a method to generate a large-scale realistic database of iris images; (2) a cross-spectral iris matching method for comparing images in the visible (color) range against images in the Near-Infrared (NIR) range; (3) a method to evaluate iris image and video quality; (4) a robust quality-based iris segmentation method; (5) several approaches to enhance the recognition performance and security of traditional iris encoding techniques; (6) a method to increase the iris capture volume for acquisition of the iris on the move and at a distance; and (7) a method to improve the performance of biometric systems using available soft data in the form of links and connections in a relevant social network.

    A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation

    Full text link
    Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluating scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network. Comment: Includes supplementary material.
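
    To illustrate the supervised formulation, here is a minimal, hypothetical sketch in the FlowNet/DispNet spirit: a small encoder-decoder regresses a disparity map from a concatenated stereo pair and is trained with an L1 loss against synthetic ground-truth disparity. This toy architecture and training step are assumptions for illustration, not the network presented in the paper.

```python
import torch
import torch.nn as nn

class TinyDispNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the concatenated stereo pair.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to a dense, per-pixel disparity map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, left, right):
        x = torch.cat([left, right], dim=1)    # Bx6xHxW
        return self.decoder(self.encoder(x))   # Bx1xHxW disparity

def train_step(model, optimizer, left, right, gt_disparity):
    # Supervised training against synthetic ground-truth disparity.
    pred = model(left, right)
    loss = (pred - gt_disparity).abs().mean()  # L1 / endpoint error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```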

    Automated Top View Registration of Broadcast Football Videos

    Full text link
    In this paper, we propose a novel method to register football broadcast video frames on the static top view model of the playing surface. The proposed method is fully automatic, in contrast to the current state of the art which requires manual initialization of point correspondences between the image and the static model. Automatic registration using existing approaches has been difficult due to the lack of sufficient point correspondences. We investigate an alternate approach exploiting the edge information from the line markings on the field. We formulate the registration problem as a nearest neighbour search over a synthetically generated dictionary of edge map and homography pairs. The synthetic dictionary generation allows us to exhaustively cover a wide variety of camera angles and positions and reduce this problem to a minimal per-frame edge map matching procedure. We show that the per-frame results can be improved in videos using an optimization framework for temporal camera stabilization. We demonstrate the efficacy of our approach by presenting extensive results on a dataset collected from matches of the 2014 football World Cup.
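
    As a rough, hypothetical sketch of the dictionary-lookup idea: pre-render edge maps of the field model under many synthetic camera poses, then register each broadcast frame by scoring it against every template (here with a simple distance-transform/chamfer score) and reusing the homography of the best match. The scoring function and linear scan below are assumptions; the paper uses its own matching procedure and a nearest-neighbour search rather than this brute-force loop.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(query_edges, template_edges):
    """Lower is better: mean distance from template edge pixels to the
    nearest edge pixel of the query frame."""
    # Distance from every pixel to the nearest query edge pixel.
    dist_to_query = distance_transform_edt(query_edges == 0)
    return dist_to_query[template_edges > 0].mean()

def register_frame(frame_edges, dictionary):
    """dictionary: list of (edge_map, homography) pairs rendered offline by
    sampling plausible camera positions and angles over the field model."""
    best_H, best_score = None, np.inf
    for template_edges, H in dictionary:
        score = chamfer_score(frame_edges, template_edges)
        if score < best_score:
            best_score, best_H = score, H
    return best_H  # homography mapping the frame onto the top-view model
```

    Per-frame estimates obtained this way could then be smoothed over time, in line with the temporal camera stabilization step described in the abstract.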