12,371 research outputs found

    Impossible shadows and lightness constancy

    Get PDF
    The intersection between an illumination edge and a reflectance edge is characterised by the 'ratio-invariant' property: the luminance ratio of the regions under different illumination remains the same. In a CRT experiment, we shaped two areas, one surrounding the other, and simulated an illumination edge dividing them into two frames of illumination. The portion of the illumination edge standing on the surrounding area (labelled the contextual background) was the contextual edge, while the portion standing on the enclosed area (labelled the mediating background) was the mediating edge. On the mediating background there were two patches, one per illumination frame. Observers were asked to adjust the luminance of the patch in bright illumination to match the lightness of the other. We compared conditions in which the luminance ratio at the contextual edge could be (i) equal to (possible shadow) or (ii) larger than (impossible shadow) that at the mediating edge. In addition, we manipulated the reflectance of the backgrounds: it could be higher for the contextual than for the mediating background or, vice versa, lower for the contextual than for the mediating background. Results reveal that lightness constancy significantly increases when (i) the luminance ratio at the contextual edge is larger than that at the mediating edge, creating an impossible shadow, and (ii) the reflectance of the contextual background is lower than that of the mediating one. We interpret our results according to the albedo hypothesis and suggest that the scission process is facilitated when the luminance ratio at the contextual edge is larger than that at the mediating edge and/or the reflectance of the including area is lower than that of the included one. This occurs even when the ratio-invariant property is violated.
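The 'ratio-invariant' property described above is easy to state numerically. The sketch below (reflectance and illuminance values are invented for illustration) shows that a genuine illumination edge scales both surface luminances by the same factor, leaving their ratio unchanged:

```python
# The "ratio-invariant" property: luminance L = reflectance * illuminance for
# a matte surface, so the ratio of two surfaces' luminances is the same inside
# each frame of illumination. All numbers here are hypothetical.

def luminance(reflectance, illuminance):
    """Luminance of a matte surface patch under uniform illumination."""
    return reflectance * illuminance

r_a, r_b = 0.8, 0.2                # reflectances of two adjacent surfaces
e_bright, e_shadow = 100.0, 25.0   # two frames of illumination

# Ratio of the two surfaces' luminances within each illumination frame:
ratio_bright = luminance(r_a, e_bright) / luminance(r_b, e_bright)
ratio_shadow = luminance(r_a, e_shadow) / luminance(r_b, e_shadow)

# Both ratios equal r_a / r_b: a genuine illumination edge scales both
# luminances equally. An "impossible shadow" is one where this equality fails.
```

An impossible shadow, in these terms, is a contextual edge whose luminance ratio differs from the mediating edge's, which no single illumination change could produce.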

    Restoration of Oyster (Crassostrea virginica) Habitat for Multiple Estuarine Species Benefits

    Get PDF

    Macroalgae and eelgrass mapping in Great Bay Estuary using AISA hyperspectral imagery

    Get PDF
    Increasing nitrogen concentrations and declining eelgrass beds have been observed in Great Bay Estuary in recent decades. These two parameters are clear indicators of impending problems for NH’s estuaries. The NH Department of Environmental Services (DES), in collaboration with the New Hampshire Estuaries Project (NHEP), adopted the assumption that eelgrass survival can be used as the water quality target for nutrient criteria development for NH’s estuaries. One hypothesis put forward regarding eelgrass decline is that a eutrophication response to nutrient increases in the Great Bay Estuary has been the proliferation of nuisance macroalgae, which has reduced eelgrass area. To test this hypothesis, mapping of eelgrass and nuisance macroalgae beds using hyperspectral (HS) imagery was suggested. A hyperspectral survey was conducted by SpecTIR in August 2007 using an AISA Eagle sensor, and the collected dataset was used to map eelgrass and nuisance macroalgae throughout the Great Bay Estuary. This report outlines the procedure developed for mapping the macroalgae and eelgrass beds using hyperspectral imagery. No ground truth measurements of eelgrass or macroalgae were collected as part of this project, although eelgrass ground truth data were collected as part of a separate project. Guidance from eelgrass and macroalgae experts was used for identifying training sets and evaluating the classification results. The result is a comprehensive eelgrass and macroalgae map of the estuary. Three recommendations follow from the experience gained in this study: conducting ground truth measurements at the time of the HS survey, acquiring a current DEM of Great Bay Estuary, and examining additional HS datasets with expert eelgrass and macroalgae guidance. These three steps could improve the classification results and enable more advanced applications, such as identification of macroalgae types.
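The report's actual processing chain is not reproduced here, but training-set-based spectral classification of the kind described can be sketched as follows. The reference spectra, band count, and class means below are hypothetical; a common choice of similarity measure for hyperspectral pixels is the spectral angle:

```python
import math

# Minimal sketch of spectral-angle classification: each pixel spectrum is
# assigned to the training class whose mean spectrum makes the smallest angle
# with it. All spectra here are invented 4-band examples.

def spectral_angle(a, b):
    """Angle between two spectra (smaller = more similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(dot / (na * nb))

# Hypothetical mean training spectra, as if picked with expert guidance.
training = {
    "eelgrass":   [0.05, 0.12, 0.08, 0.30],
    "macroalgae": [0.04, 0.10, 0.15, 0.22],
}

def classify(pixel):
    """Assign the class whose training spectrum has the smallest angle."""
    return min(training, key=lambda c: spectral_angle(pixel, training[c]))
```

The angle measure is insensitive to overall brightness, which helps when water depth attenuates the signal; this is one reason the report's recommendation to acquire a current DEM matters for classification quality.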

    Joint Learning of Intrinsic Images and Semantic Segmentation

    Get PDF
    Semantic segmentation of outdoor scenes is problematic when imaging conditions vary. Albedo (reflectance) is known to be invariant to illumination effects, so using reflectance images for the semantic segmentation task can be favorable. Moreover, not only may segmentation benefit from reflectance, but reflectance computation may also benefit from segmentation. Therefore, in this paper, the tasks of semantic segmentation and intrinsic image decomposition are treated as a combined process by exploring their mutual relationship in a joint fashion. To that end, we propose a supervised end-to-end CNN architecture to jointly learn intrinsic image decomposition and semantic segmentation. We analyze the gains of addressing those two problems jointly. Moreover, new cascade CNN architectures for intrinsic-for-segmentation and segmentation-for-intrinsic are proposed as single tasks. Furthermore, a dataset of 35K synthetic images of natural environments is created with corresponding albedo and shading (intrinsics), as well as semantic labels (segmentation) assigned to each object/scene. The experiments show that joint learning of intrinsic image decomposition and semantic segmentation is beneficial for both tasks on natural scenes. Dataset and models are available at: https://ivi.fnwi.uva.nl/cv/intrinseg. Comment: ECCV 2018.
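The coupling the paper exploits rests on the standard intrinsic image model I = A × S (a pixel-wise product of albedo and shading). This toy sketch, with invented values, shows why albedo is the illumination-invariant input for segmentation: changing the shading changes the image but not the recovered albedo:

```python
# Intrinsic image model: image = albedo * shading, per pixel.

def compose(albedo, shading):
    """Render an image row as the per-pixel product of albedo and shading."""
    return [a * s for a, s in zip(albedo, shading)]

def recover_albedo(image, shading):
    """Invert the model where shading is known (and non-zero)."""
    return [i / s for i, s in zip(image, shading)]

albedo = [0.9, 0.3, 0.6]            # surface reflectance (invariant)
shading_noon = [1.0, 1.0, 0.8]
shading_dusk = [0.4, 0.4, 0.32]     # same scene, dimmer illumination

img_noon = compose(albedo, shading_noon)
img_dusk = compose(albedo, shading_dusk)
# The two images differ, yet both decompose to the same albedo, which is why
# a segmentation network fed albedo sees illumination-free input.
```

The joint network in the paper learns the decomposition rather than being given the shading, but the consistency constraint it can exploit is exactly this multiplicative model.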

    Reflectance Adaptive Filtering Improves Intrinsic Image Estimation

    Full text link
    Separating an image into reflectance and shading layers poses a challenge for learning approaches because no large corpus of precise and realistic ground truth decompositions exists. The Intrinsic Images in the Wild (IIW) dataset provides a sparse set of relative human reflectance judgments, which serves as a standard benchmark for intrinsic images. A number of methods use IIW to learn statistical dependencies between the images and their reflectance layer. Although learning plays an important role for high performance, we show that a standard signal processing technique achieves performance on par with the current state of the art. We propose a loss function for CNN learning of dense reflectance predictions. Our results show that a simple pixel-wise decision, without any context or prior knowledge, is sufficient to provide a strong baseline on IIW. This sets a competitive baseline which only two other approaches surpass. We then develop a joint bilateral filtering method that implements strong prior knowledge about reflectance constancy. This filtering operation can be applied to any intrinsic image algorithm, and we improve several previous results, achieving a new state of the art on IIW. Our findings suggest that the effect of learning-based approaches may have been over-estimated so far. Explicit prior knowledge is still at least as important for obtaining high performance in intrinsic image decompositions. Comment: CVPR 2017.
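The paper's filter operates on full images guided by image features; a minimal 1-D sketch of the joint (cross) bilateral idea is below. Window radius, sigmas, and the signals are invented: each output value is a weighted average of its neighbours, with weights combining spatial closeness and similarity in a separate guide signal, so noise is smoothed without blurring genuine reflectance edges:

```python
import math

# Joint bilateral filter, 1-D sketch. The "guide" supplies the range weights,
# implementing the reflectance-constancy prior: average only across pixels the
# guide says belong to the same reflectance region.

def joint_bilateral_1d(signal, guide, radius=2, sigma_s=1.0, sigma_r=0.1):
    out = []
    for i in range(len(signal)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w_s = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))   # spatial
            w_r = math.exp(-((guide[i] - guide[j]) ** 2)
                           / (2 * sigma_r ** 2))                   # range (guide)
            num += w_s * w_r * signal[j]
            den += w_s * w_r
        out.append(num / den)
    return out

# Noisy reflectance estimate with a genuine step at index 3; the guide shares
# the step, so smoothing flattens the noise but preserves the edge.
noisy = [0.50, 0.54, 0.48, 0.90, 0.93, 0.88]
guide = [0.5, 0.5, 0.5, 0.9, 0.9, 0.9]
smoothed = joint_bilateral_1d(noisy, guide)
```

Because the filter is independent of how the initial reflectance estimate was produced, it can post-process the output of any intrinsic image algorithm, which is how the paper applies it.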

    CNN based Learning using Reflection and Retinex Models for Intrinsic Image Decomposition

    Get PDF
    Most traditional work on intrinsic image decomposition relies on deriving priors about scene characteristics. On the other hand, recent research uses deep learning models as black boxes and does not consider the well-established, traditional image formation process as the basis of the intrinsic learning process. As a consequence, although current deep learning approaches show superior performance on quantitative benchmarks, traditional approaches are still dominant in achieving high qualitative results. In this paper, the aim is to exploit the best of the two worlds. A method is proposed that (1) is empowered by deep learning capabilities, (2) considers a physics-based reflection model to steer the learning process, and (3) exploits the traditional approach of obtaining intrinsic images from reflectance and shading gradient information. The proposed model is fast to compute and allows for the integration of all intrinsic components. To train the new model, an object-centered large-scale dataset with intrinsic ground-truth images is created. The evaluation results demonstrate that the new model outperforms existing methods. Visual inspection shows that the image formation loss function improves color reproduction and that the use of gradient information produces sharper edges. Datasets, models and higher resolution images are available at https://ivi.fnwi.uva.nl/cv/retinet. Comment: CVPR 2018.
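The gradient information mentioned above goes back to the classical Retinex heuristic: large, sharp gradients in a log-intensity image are attributed to reflectance changes, while small, smooth gradients are attributed to shading. A toy 1-D sketch, with an invented threshold and signal:

```python
# Retinex-style gradient splitting on a 1-D log-intensity signal.

def split_gradients(log_image, threshold=0.3):
    """Classify successive differences as reflectance or shading gradients."""
    grads = [b - a for a, b in zip(log_image, log_image[1:])]
    refl_grads = [g if abs(g) > threshold else 0.0 for g in grads]
    shade_grads = [g if abs(g) <= threshold else 0.0 for g in grads]
    return refl_grads, shade_grads

# Gentle shading ramp (+0.05 per step) with one sharp reflectance edge (+0.8).
log_img = [0.00, 0.05, 0.10, 0.90, 0.95, 1.00]
refl, shade = split_gradients(log_img)
# refl keeps only the sharp edge; shade keeps the smooth ramp; the two
# gradient fields sum back to the original image gradients.
```

In the paper this hand-set threshold is replaced by learned behaviour, but steering the network with the same reflectance/shading gradient decomposition is what yields the sharper edges reported.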

    On Recognizing Transparent Objects in Domestic Environments Using Fusion of Multiple Sensor Modalities

    Full text link
    Current object recognition methods fail on object sets that include diffuse, reflective, and transparent materials, even though such sets are very common in domestic scenarios. We show that a combination of cues from multiple sensor modalities, including specular reflectance and unavailable depth information, allows us to capture a larger subset of household objects by extending a state-of-the-art object recognition method. This leads to a significant increase in recognition robustness over a larger set of commonly used objects. Comment: 12 pages.

    A Novel Framework for Highlight Reflectance Transformation Imaging

    Get PDF
    We propose a novel pipeline and related software tools for processing multi-light image collections (MLICs) acquired in different application contexts, to obtain shape and appearance information of the captured surfaces and to derive compact relightable representations of them. Our pipeline extends the popular Highlight Reflectance Transformation Imaging (H-RTI) framework, which is widely used in the Cultural Heritage domain. In particular, we support perspective camera modeling, per-pixel interpolated light direction estimation, and light normalization that corrects vignetting and uneven non-directional illumination. Furthermore, we propose two novel easy-to-use software tools to simplify all processing steps. The tools, in addition to supporting easy processing and encoding of pixel data, implement a variety of visualizations as well as multiple reflectance-model-fitting options. Experimental tests on synthetic and real-world MLICs demonstrate the usefulness of the novel algorithmic framework and the potential benefits of the proposed tools for end-user applications.
    Terms: "European Union (EU)" & "Horizon 2020" / Action: H2020-EU.3.6.3. - Reflective societies - cultural heritage and European identity / Acronym: Scan4Reco / Grant number: 665091
    DSURF project (PRIN 2015) funded by the Italian Ministry of University and Research
    Sardinian Regional Authorities under projects VIGEC and Vis&VideoLa
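The reflectance-model fitting at the heart of such pipelines can be illustrated per pixel: given one pixel's luminance under several known light directions, fit a model by least squares and relight from a new direction. The sketch below uses the simplest (Lambertian) model with synthetic data; the paper's tools support richer models and per-pixel interpolated light directions:

```python
import numpy as np

# Per-pixel Lambertian fit for an MLIC/RTI-style pipeline: solve
# light_dirs @ g ≈ observed for g = albedo * normal, then relight.
# Light directions and "observed" luminances below are synthetic.

light_dirs = np.array([            # unit light directions (x, y, z)
    [0.0,  0.0, 1.0],
    [0.7,  0.0, 0.714],
    [0.0,  0.7, 0.714],
    [-0.7, 0.0, 0.714],
])
true_g = np.array([0.2, 0.1, 0.9])     # albedo-scaled normal (ground truth)
observed = light_dirs @ true_g         # noiseless synthetic luminances

# Least-squares fit of the per-pixel model.
g_fit, *_ = np.linalg.lstsq(light_dirs, observed, rcond=None)

def relight(direction):
    """Predicted luminance of this pixel under a new light direction."""
    return float(np.dot(direction, g_fit))
```

Repeating this fit at every pixel yields both shape information (the normal direction of g) and a compact relightable representation (the fitted coefficients), the two outputs the pipeline above targets.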