
    Interactive Illumination Invariance

    Illumination effects cause problems for many computer vision algorithms. We present a user-friendly interactive system for robust illumination-invariant image generation. Compared with previous automated approaches to illumination-invariant image derivation, our system enables users to specify a particular kind of illumination variation for removal. The derivation of the illumination-invariant image is guided by the user input: a stroke that defines an area covering a set of pixels whose intensities are influenced predominantly by the illumination variation. This additional flexibility enhances robustness when processing non-linearly rendered images and images of scenes whose illumination variations are difficult to estimate automatically. Finally, we present evaluation results for our method.
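
    The abstract does not spell out the derivation step. A common approach in this family of methods (an assumption here, not necessarily the authors' exact algorithm) is to work in log-chromaticity space, estimate the dominant illumination-variation direction from the stroke-selected pixels, and project every pixel onto the orthogonal direction. A minimal sketch:

```python
import numpy as np

def invariant_from_stroke(rgb, stroke_mask, eps=1e-6):
    """Hedged sketch: derive a 1-D illumination-invariant image from a user stroke.

    rgb         : (H, W, 3) float array, linear RGB in (0, 1]
    stroke_mask : (H, W) bool array marking pixels dominated by the
                  illumination variation the user wants removed
    """
    # Log-chromaticity coordinates (2-D), a standard representation for
    # illumination-invariant image derivation.
    log_rg = np.log(rgb[..., 0] + eps) - np.log(rgb[..., 1] + eps)
    log_bg = np.log(rgb[..., 2] + eps) - np.log(rgb[..., 1] + eps)
    chroma = np.stack([log_rg, log_bg], axis=-1)           # (H, W, 2)

    # Principal direction of variation inside the stroke approximates the
    # illumination-variation direction (assumption made for this sketch).
    pts = chroma[stroke_mask]
    pts = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    illum_dir = vt[0]                                       # unit vector

    # Project every pixel onto the direction orthogonal to the illumination
    # variation; the result is (approximately) invariant to that variation.
    ortho = np.array([-illum_dir[1], illum_dir[0]])
    return chroma @ ortho                                   # (H, W) invariant image
```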

    Plant image retrieval using color, shape and texture features

    We present a content-based image retrieval system for plant image retrieval, intended especially for the house plant identification problem. A plant image consists of a collection of overlapping leaves and possibly flowers, which makes the problem challenging. We studied the suitability of various well-known color, shape, and texture features for this problem, and introduced some new texture-matching techniques and shape features. Feature extraction is applied after segmenting the plant region from the background using the max-flow min-cut technique. Results on a database of 380 plant images belonging to 78 different types of plants show the promise of the proposed new techniques and of the overall system: in 55% of the queries, the correct plant image is retrieved among the top-15 results. Furthermore, the accuracy goes up to 73% when a 132-image subset of well-segmented plant images is considered.
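
    The retrieval pipeline described above (segment the plant region, extract features from the foreground, rank database images by feature distance, report the top-15) can be illustrated with a minimal sketch. The specific descriptor (a per-channel color histogram) and the chi-square distance are assumptions chosen for brevity; the paper evaluates a much wider set of color, shape, and texture features.

```python
import numpy as np

def color_histogram(rgb, mask, bins=8):
    """Simple color descriptor computed over the segmented plant region.
    rgb: (H, W, 3) float image in [0, 1]; mask: (H, W) bool foreground mask."""
    pixels = rgb[mask]                        # only foreground (plant) pixels
    hists = [np.histogram(pixels[:, c], bins=bins, range=(0, 1))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / (h.sum() + 1e-9)               # normalize to a distribution

def chi2(a, b):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((a - b) ** 2 / (a + b + 1e-9))

def retrieve(query_feat, db_feats, k=15):
    """Rank database images by distance to the query and return top-k indices."""
    d = np.array([chi2(query_feat, f) for f in db_feats])
    return np.argsort(d)[:k]
```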

    Joint Learning of Intrinsic Images and Semantic Segmentation

    Semantic segmentation of outdoor scenes is problematic when there are variations in imaging conditions. It is known that albedo (reflectance) is invariant to all kinds of illumination effects. Thus, using reflectance images for the semantic segmentation task can be favorable. Additionally, not only may segmentation benefit from reflectance, but segmentation may also be useful for reflectance computation. Therefore, in this paper, the tasks of semantic segmentation and intrinsic image decomposition are considered as a combined process by exploring their mutual relationship in a joint fashion. To that end, we propose a supervised end-to-end CNN architecture to jointly learn intrinsic image decomposition and semantic segmentation. We analyze the gains of addressing those two problems jointly. Moreover, new cascade CNN architectures for intrinsic-for-segmentation and segmentation-for-intrinsic are proposed as single tasks. Furthermore, a dataset of 35K synthetic images of natural environments is created with corresponding albedo and shading (intrinsics), as well as semantic labels (segmentation) assigned to each object/scene. The experiments show that joint learning of intrinsic image decomposition and semantic segmentation is beneficial for both tasks for natural scenes. Dataset and models are available at: https://ivi.fnwi.uva.nl/cv/intrinseg (ECCV 2018).
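
    As a rough illustration of the joint setup (a shared encoder with separate decoder heads for albedo, shading, and semantic labels, trained with a combined loss), a minimal PyTorch sketch is given below. The layer sizes, single-scale design, and loss weights are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class JointIntrinsicSegNet(nn.Module):
    """Hedged sketch: shared encoder with three decoder heads
    (albedo, shading, semantic labels)."""
    def __init__(self, num_classes):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        def head(out_ch):
            return nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(64, out_ch, 1))
        self.albedo_head = head(3)          # 3-channel reflectance
        self.shading_head = head(1)         # 1-channel shading
        self.seg_head = head(num_classes)   # per-pixel class scores

    def forward(self, x):
        f = self.encoder(x)
        return self.albedo_head(f), self.shading_head(f), self.seg_head(f)

def joint_loss(pred, albedo_gt, shading_gt, labels_gt, w=(1.0, 1.0, 1.0)):
    """Joint objective: reconstruction losses for the intrinsics plus
    cross-entropy for segmentation (the weights are assumptions)."""
    albedo, shading, seg = pred
    return (w[0] * nn.functional.mse_loss(albedo, albedo_gt)
            + w[1] * nn.functional.mse_loss(shading, shading_gt)
            + w[2] * nn.functional.cross_entropy(seg, labels_gt))
```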

    Exploration of a Polarized Surface Bidirectional Reflectance Model Using the Ground-Based Multiangle Spectropolarimetric Imager

    Accurate characterization of surface reflection is essential for retrieval of aerosols using downward-looking remote sensors. In this paper, observations from the Ground-based Multiangle SpectroPolarimetric Imager (GroundMSPI) are used to evaluate a surface polarized bidirectional reflectance distribution function (PBRDF) model. GroundMSPI is an eight-band spectropolarimetric camera mounted on a rotating gimbal to acquire pushbroom imagery of outdoor landscapes. The camera uses a very accurate photoelastic-modulator-based polarimetric imaging technique to acquire Stokes vector measurements in three of the instrument's bands (470, 660, and 865 nm). A description of the instrument is presented, and observations of selected targets within a scene acquired on 6 January 2010 are analyzed. Data collected during the course of the day as the Sun moved across the sky provided a range of illumination geometries that facilitated evaluation of the surface model, which is composed of a volumetric reflection term represented by the modified Rahman-Pinty-Verstraete function plus a specular reflection term generated by a randomly oriented array of Fresnel-reflecting microfacets. While the model is fairly successful in predicting the polarized reflection from two grass targets in the scene, it does a poorer job for two manmade targets (a parking lot and a truck roof), possibly due to their greater degree of geometric organization. Several empirical adjustments to the model are explored and lead to improved fits to the data. For all targets, the data support the notion of spectral invariance in the angular shape of the unpolarized and polarized surface reflection. As noted by others, this behavior provides valuable constraints on the aerosol retrieval problem, and highlights the importance of multiangle observations.
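
    For orientation, a model of this type is often written as the sum of a volumetric modified Rahman-Pinty-Verstraete (mRPV) term and a Fresnel-microfacet specular term. The form below follows common conventions in the literature; it is a sketch, and the paper's exact parameterization, sign conventions, and empirical adjustments may differ.

```latex
% Hedged sketch of the surface PBRDF model described above; the exact
% parameterization and the paper's empirical adjustments are not reproduced.
\begin{align*}
  R(\theta_s,\theta_v,\Delta\phi) &\approx
      R_{\mathrm{mRPV}}(\theta_s,\theta_v,\Delta\phi)
      + R_{\mathrm{spec}}(\theta_s,\theta_v,\Delta\phi), \\[4pt]
  R_{\mathrm{mRPV}} &= \rho_0\,
      \bigl[\cos\theta_s \cos\theta_v (\cos\theta_s+\cos\theta_v)\bigr]^{k-1}
      e^{-b\cos\Xi}\,
      \Bigl[1+\tfrac{1-\rho_0}{1+G}\Bigr], \\[4pt]
  R_{\mathrm{spec}} &\propto
      \frac{F(\gamma; n)}{4\,(\cos\theta_s+\cos\theta_v)},
\end{align*}
% \theta_s, \theta_v: solar and view zenith angles; \Delta\phi: relative azimuth;
% \Xi: scattering angle; G = \sqrt{\tan^2\theta_s+\tan^2\theta_v
%   -2\tan\theta_s\tan\theta_v\cos\Delta\phi};
% F(\gamma; n): Fresnel reflectance of a microfacet at incidence angle \gamma
% for refractive index n. Only the Fresnel microfacet term carries polarization.
```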

    The effects of belongingness on the Simultaneous Lightness Contrast: A virtual reality study

    Simultaneous Lightness Contrast (SLC) is the phenomenon whereby a grey patch on a dark background appears lighter than an equal patch on a light background. Interestingly, the lightness difference between these patches undergoes substantial augmentation when the two backgrounds are patterned, thereby forming the articulated-SLC display. There are two main interpretations of these phenomena: the midlevel interpretation maintains that the visual system groups the luminance within a set of contiguous frameworks, whilst the high-level one claims that the visual system splits the luminance into separate overlapping layers corresponding to separate physical contributions. This research aimed to test these two interpretations by systematically manipulating the viewing distance and the horizontal distance between the backgrounds of both the articulated and plain SLC displays. An immersive 3D Virtual Reality system was employed to reproduce identical alignment and distances, as well as to isolate participants from interfering luminance. Results showed that reducing the viewing distance increased contrast in both the plain- and articulated-SLC displays, and that increasing the horizontal distance between the backgrounds decreased contrast in the articulated condition but increased it in the plain condition. These results suggest that a comprehensive lightness theory should combine the two interpretations.