24 research outputs found

    Programmable Imaging: Towards a Flexible Camera

    Full text link

    Robust and accurate online pose estimation algorithm via efficient three‐dimensional collinearity model

    Full text link
    In this study, the authors propose a robust, highly accurate pose estimation algorithm that solves the perspective-n-point (PnP) problem in real time. The algorithm does away with the distinction between coplanar and non-coplanar point configurations and provides a unified formulation for both. Based on the inverse projection ray, an efficient object-space collinearity model is proposed as the cost function. The principal depth and the relative depths of the reference points are introduced to remove the residual error of the cost function and to improve the robustness and accuracy of the method. The pose and the point depths are solved iteratively by minimising the cost function, and the point coordinates are then reconstructed in the camera coordinate system. The optimal absolute orientation solution then gives the relative pose between the estimated three-dimensional (3D) point set and the 3D model point set. These two steps are repeated until the result converges. Experimental results on simulated and real data show the superior performance of the proposed algorithm: it is more accurate than state-of-the-art algorithms, and among the tested algorithms it has the best noise resistance and the smallest deviation under the influence of outliers.
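    The alternation between depth recovery along viewing rays and absolute orientation can be sketched in numpy. This is a minimal illustration of the generic object-space iteration (in the spirit of Lu–Hager–Mjolsness), not the authors' exact formulation; the function names, the projector form, and the crude initial pose are assumptions for the sketch.

    ```python
    import numpy as np

    def absolute_orientation(P, Q):
        """Least-squares rotation/translation aligning point set P onto Q
        (Horn/Kabsch closed-form solution via SVD)."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        return R, cq - R @ cp

    def object_space_pnp(model_pts, rays, iters=200):
        """Two-step iteration sketched in the abstract: (a) recover point
        depths by projecting the transformed model points onto the inverse
        projection (viewing) rays, then (b) solve the absolute orientation
        between the model points and the reconstructed 3D points.
        `rays` are unit viewing directions of the image points."""
        V = np.stack([np.outer(v, v) for v in rays])   # line-of-sight projectors (unit rays)
        R, t = np.eye(3), np.array([0.0, 0.0, 3.0])    # crude initial pose (assumed)
        for _ in range(iters):
            cam = model_pts @ R.T + t                  # model points in the camera frame
            Q = np.einsum('nij,nj->ni', V, cam)        # closest points on the rays (depths)
            R, t = absolute_orientation(model_pts, Q)  # re-fit the pose
        return R, t
    ```

    On noise-free synthetic data this fixed-point iteration recovers the ground-truth pose; the paper's contribution lies in the refined cost function and depth parametrization on top of this basic scheme.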

    Pharmacognostical Sources of Popular Medicine To Treat Alzheimer’s Disease

    Get PDF

    Exploiting Image Collections for Recovering Photometric Properties

    Get PDF
    Abstract. We address the problem of jointly estimating the scene illumination, the radiometric camera calibration and the reflectance properties of an object using a set of images from a community photo collection. The highly ill-posed nature of this problem is circumvented by using appropriate representations of illumination, an empirical model for the nonlinear function that relates image irradiance to intensity values, and additional assumptions on the surface reflectance properties. Using a 3D model recovered from an unstructured set of images, we estimate the coefficients that represent the illumination for each image using a frequency framework. For each image, we also compute the corresponding camera response function. Additionally, we calculate a simple model for the reflectance properties of the 3D model. A robust non-linear optimization is proposed that exploits the high sparsity present in the problem.
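    The nonlinear irradiance-to-intensity mapping can be illustrated with a toy one-parameter gamma response fitted by least squares in log-log space. The paper itself uses a richer empirical model, so `fit_gamma_response` below is purely an illustrative stand-in under that simplifying assumption.

    ```python
    import numpy as np

    def fit_gamma_response(irradiance, intensity):
        """Fit a gamma-type camera response I = E**(1/g) by linear least
        squares on log I = (1/g) * log E. A one-parameter stand-in for the
        empirical response model used in the paper."""
        E = np.asarray(irradiance, dtype=float)
        I = np.asarray(intensity, dtype=float)
        m = (E > 1e-6) & (I > 1e-6)              # keep strictly positive samples
        lE, lI = np.log(E[m]), np.log(I[m])
        slope = np.sum(lE * lI) / np.sum(lE**2)  # least-squares slope = 1/g
        return 1.0 / slope
    ```

    Given samples generated with a known gamma, the fit recovers it exactly in the noise-free case.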

    A visual perception approach for accurate segmentation of light profiles

    No full text
    In this paper we describe the first industrial prototype that uses computer vision to automatically characterize headlamp beam properties. The European Commission for transportation imposes strict regulations on headlamp orientation and on the luminous and geometrical properties of the beam. To test the headlamps, the test system has to be properly aligned with the vehicle so that the brightness and geometrical beam-profile measurements are reliable. The system we present is composed of two integral subsystems. The first is a fixed stereo vision system capable of automatically estimating, in real time and with very high accuracy, the longitudinal axis of the vehicle while it approaches the stereo rig. The outcome is used to accurately align the second subsystem with respect to the vehicle. This subsystem is composed of a classic optical projection unit endowed with a CCD camera used to automatically perform radiometric and geometric assessments of the beam projected by the headlamps. Experiments carried out on both subsystems show that the high accuracy achieved by our method makes the prototype compliant with current regulations. It is worth remarking that the technology employed is low cost, making our approach suitable for commercial headlight testers.
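    The abstract does not detail the geometric beam assessment, but one measurement such a tester must make is locating the low-beam cut-off line. As a toy illustration only (not the prototype's method, and `cutoff_row` is a hypothetical helper), the sharpest vertical intensity drop in an image column can serve as a cut-off estimate:

    ```python
    import numpy as np

    def cutoff_row(beam, col):
        """Row index of the sharpest vertical intensity drop in one image
        column: a toy proxy for locating a low-beam cut-off line.
        `beam` is a 2D intensity array (rows = vertical direction)."""
        grad = np.diff(beam[:, col].astype(float))  # I[r+1] - I[r]
        return int(np.argmin(grad))                 # last bright row before the drop
    ```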

    Ghost-free high dynamic range imaging

    No full text
    Abstract. Most high dynamic range imaging (HDRI) algorithms assume a stationary scene when registering multiple images taken under different exposure settings. In practice, however, there can be global or local movements between the images caused by either camera or object motion. This usually causes ghost artifacts, which make the same object appear multiple times in the resulting HDRI. To solve this problem, most conventional algorithms conduct a ghost detection procedure followed by filling of the ghost regions with estimated radiance values. However, these methods largely depend on the accuracy of the ghost detection results, and thus often suffer from color artifacts around the ghost regions. In this paper, we propose a new robust ghost-free HDRI generation algorithm that does not require accurate ghost detection and does not suffer from the color artifact problem. To deal with the ghost problem, our algorithm utilizes global intensity transfer functions obtained from joint probability density functions (pdfs) between different exposure images. Then, to estimate reliable radiance values, we employ a generalized weighted filtering technique using the global intensity transfer functions. Experimental results show that our method produces state-of-the-art performance in generating ghost-free HDR images.
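    A global intensity transfer function of the kind the abstract describes can be read off the joint histogram of two exposures. This is a minimal sketch of that building block, assuming 8-bit images and a simple per-level argmax of the conditional pdf; the paper's full pipeline adds the weighted-filtering stage on top.

    ```python
    import numpy as np

    def intensity_transfer(a, b, levels=256):
        """Global intensity transfer function from exposure `a` to
        exposure `b`: build the joint histogram (an empirical joint pdf),
        then map each gray level v in `a` to its most likely co-occurring
        level in `b` (argmax of the conditional pdf)."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                     bins=levels, range=[[0, levels], [0, levels]])
        return joint.argmax(axis=1)  # itf[v] = most likely level in b given level v in a
    ```

    Because the mapping is global, it is insensitive to local object motion between the exposures, which is what removes the need for explicit ghost detection.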

    A Theoretical Analysis of Camera Response Functions in Image Deblurring

    No full text
    Abstract. Motion deblurring is a long-standing problem in computer vision and image processing. In most previous approaches, the blurred image is modeled as the convolution of a latent intensity image with a blur kernel. However, for images captured by a real camera, the blur convolution should be applied to scene irradiance instead of image intensity, and the blurred result needs to be mapped back to image intensity via the camera's response function (CRF). In this paper, we present a comprehensive study that analyzes the effects of CRFs on motion deblurring. We prove that the intensity-based model closely approximates the irradiance model in low-frequency regions. However, in high-frequency regions such as edges, the intensity-based approximation introduces large errors, and directly applying deconvolution to the intensity image will produce strong ringing artifacts even if the blur kernel is invertible. Based on the approximation error analysis, we further develop a dual-image based solution that captures a pair of sharp/blurred images for both CRF estimation and motion deblurring. Experiments on synthetic and real images validate our theories and demonstrate the robustness and accuracy of our approach.
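    The gap between the two blur models is easy to demonstrate on a 1D step edge. The sketch below, assuming a toy gamma CRF and a box kernel (both illustrative choices, not the paper's), compares blurring irradiance then applying the CRF against blurring the intensity image directly:

    ```python
    import numpy as np

    def crf(e, g=2.2):
        """Toy gamma camera response mapping irradiance to intensity."""
        return np.clip(e, 0.0, None) ** (1.0 / g)

    def blur_models(E, kernel):
        """Blur a 1D irradiance signal two ways: the physically correct
        irradiance-domain model versus the common intensity-domain
        approximation ('valid' mode avoids padding effects)."""
        irr_model = crf(np.convolve(E, kernel, mode='valid'))    # blur irradiance, then CRF
        int_model = np.convolve(crf(E), kernel, mode='valid')    # CRF first, then blur intensity
        return irr_model, int_model
    ```

    On flat (low-frequency) regions the two models agree exactly, while across the step edge they disagree noticeably, mirroring the low/high-frequency analysis in the abstract.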

    Learning Brightness Transfer Functions for the Joint Recovery of Illumination Changes and Optical Flow

    No full text
    The increasing importance of outdoor applications such as driver assistance systems or video surveillance tasks has recently triggered the development of optical flow methods that aim to perform robustly under uncontrolled illumination. Most of these methods are based on patch-based features such as the normalized cross-correlation, the census transform or the rank transform, and achieve their robustness by locally discarding both absolute brightness and contrast. In this paper, we follow an alternative strategy: instead of discarding potentially important image information, we propose a novel variational model that jointly estimates both illumination changes and optical flow. The key idea is to parametrize the illumination changes in terms of basis functions that are learned from training data. While such basis functions allow for a meaningful representation of illumination effects, they also help to distinguish real illumination changes from motion-induced brightness variations if supplemented by additional smoothness constraints. Experiments on the KITTI benchmark show the clear benefits of our approach: they not only demonstrate that it is possible to obtain meaningful basis functions, but also show state-of-the-art results for robust optical flow estimation. Document type: Part of book or chapter of book.
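    The "learned basis functions" idea can be sketched with a plain PCA over training brightness transfer curves. This is an illustrative assumption about how such a basis might be obtained (the paper learns its basis from real illumination data, and `learn_btf_basis`/`project_btf` are hypothetical names):

    ```python
    import numpy as np

    def learn_btf_basis(train_curves, k=2):
        """PCA basis for brightness transfer functions. `train_curves` is
        (num_examples, num_gray_levels); returns the mean curve and the
        top-k principal directions (rows of Vt from the SVD)."""
        mean = train_curves.mean(axis=0)
        _, _, Vt = np.linalg.svd(train_curves - mean, full_matrices=False)
        return mean, Vt[:k]

    def project_btf(curve, mean, basis):
        """Coefficients of a new transfer curve in the learned basis."""
        return basis @ (curve - mean)
    ```

    A handful of coefficients then describes an illumination change compactly, which is what makes the joint variational estimation with optical flow tractable.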