    Effects of lighting on the perception of facial surfaces

    The problem of variable illumination for object constancy has been largely neglected by "edge-based" theories of object recognition. However, there is evidence that edge-based schemes may not be sufficient for face processing and that shading information may be necessary (Bruce, 1988). Changes in lighting affect the pattern of shading on any three-dimensional object, and the aim of this thesis was to investigate the effects of lighting on tasks involving face perception. Effects of lighting are first reported on the perception of the hollow face illusion (Gregory, 1973). The impression of a convex face was found to be stronger when light appeared to be from above, consistent with the importance of shape-from-shading, which is thought to incorporate a light-from-above assumption. There was an independent main effect of orientation, with the illusion stronger when the face was upright. This confirmed that object knowledge was important in generating the illusion, a conclusion which was confirmed by comparison with a "hollow potato" illusion. There was an effect of light on the inverted face, suggesting that the direction of light may generally affect the interpretation of surfaces as convex or concave. It was also argued that there appears to be a general preference for convex interpretations of patterns of shading. The illusion was also found to be stronger when viewed monocularly, and this effect was also independent of orientation. This was consistent with the processing of shape information by independent modules, with object knowledge acting as a further constraint on the final interpretation. Effects of lighting were next reported on the recognition of shaded representations of facial surfaces, with top lighting facilitating processing. The adverse effects of bottom lighting on the interpretation of facial shape appear to affect within-category as well as between-category discriminations. Photographic negation was also found to affect recognition performance, and it was suggested that its effects may be complementary to those of bottom lighting in some respects. These effects were reported to be dependent on view. The last set of experiments investigated the effects of lighting and view on a simultaneous face matching task using the same surface representations, which required subjects to decide if two images were of the same or different people. Subjects were found to be as much affected by a change in lighting as by a change in view, which seems inconsistent with edge-based accounts. Top lighting was also found to facilitate matches across changes in view. When the stimuli were inverted, matches across changes in both view and light were poorer, although the image differences were the same. In other experiments subjects were found to match better across changes between two directions of top lighting than between directions of bottom lighting, although the extent of the changes was the same, suggesting the importance of top lighting for lighting as well as view invariance. Inverting the stimuli, which also inverts the lighting relative to the observer, disrupted matching across directions of top lighting but facilitated matching between levels of bottom lighting, consistent with the use of shading information. Changes in size were not found to affect matching, showing that the effect of lighting was not simply a consequence of changes in image properties. The effect of lighting was also found to transfer to digitised photographs, showing that it was not an artifact of the materials.
    Lastly, effects of lighting were reported when images were presented sequentially, showing that the effect was not an artifact of simultaneous presentation. In the final section the effects reported were considered within the framework of theories of object recognition and argued to be inconsistent with invariant-feature, edge-based or alignment approaches. An alternative scheme employing surface-based primitives derived from shape-from-shading was developed to account for the pattern of effects and contrasted with an image-based account.
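
    The light-from-above point can be made concrete with a small Lambertian rendering sketch (an illustration only, not taken from the thesis; the bump surface and variable names are assumptions): for a height field lit by a distant light, a convex bump lit from below produces exactly the same image as the matching concave dent lit from above, which is the ambiguity a light-from-above prior resolves in favour of the convex reading.

import numpy as np

def lambertian_shading(height, light_dir):
    """Shade a height field with a single distant light: I = max(0, n . l)."""
    gy, gx = np.gradient(height)                           # surface slopes
    normals = np.dstack((-gx, -gy, np.ones_like(height)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(normals @ l, 0.0, None)

# Convex Gaussian bump standing in for a facial feature (y taken as "up").
y, x = np.mgrid[-1:1:128j, -1:1:128j]
bump = np.exp(-(x**2 + y**2) / 0.2)

top_lit    = lambertian_shading(bump, (0.0,  1.0, 1.0))    # lit from above
bottom_lit = lambertian_shading(bump, (0.0, -1.0, 1.0))    # lit from below

# A bottom-lit convex bump is indistinguishable from a top-lit concave dent.
print(np.allclose(bottom_lit, lambertian_shading(-bump, (0.0, 1.0, 1.0))))  # True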

    Object recognition using shape-from-shading

    This paper investigates whether surface topography information extracted from intensity images using a recently reported shape-from-shading (SFS) algorithm can be used for the purposes of 3D object recognition. We consider how curvature and shape-index information delivered by this algorithm can be used to recognize objects based on their surface topography. We explore two contrasting object recognition strategies. The first of these is based on a low-level attribute summary and uses histograms of curvature and orientation measurements. The second approach is based on the structural arrangement of constant shape-index maximal patches and their associated region attributes. We show that region curvedness and a string ordering of the regions according to size provide a recognition accuracy of about 96 percent. By polling various recognition schemes, including a graph matching method, we show that a recognition rate of 98-99 percent is achievable.
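
    The low-level attribute-summary idea can be sketched as below (a hedged illustration in the spirit of the first strategy; the paper's exact attributes and the SFS/curvature-estimation stages are assumed to be available already): shape index and curvedness are computed from per-pixel principal curvatures and pooled into a histogram that acts as a whole-object signature for matching.

import numpy as np

def shape_index(k1, k2, eps=1e-9):
    """Koenderink-style shape index in [-1, 1] (one common sign convention)."""
    k_max, k_min = np.maximum(k1, k2), np.minimum(k1, k2)
    return (2.0 / np.pi) * np.arctan2(k_max + k_min, k_max - k_min + eps)

def curvedness(k1, k2):
    """Overall magnitude of surface bending."""
    return np.sqrt(0.5 * (k1 ** 2 + k2 ** 2))

def attribute_histogram(k1, k2, bins=16):
    """Joint shape-index / curvedness histogram as a whole-object summary."""
    s, c = shape_index(k1, k2).ravel(), curvedness(k1, k2).ravel()
    h, _, _ = np.histogram2d(s, c, bins=bins,
                             range=[[-1.0, 1.0], [0.0, float(c.max()) + 1e-9]])
    return h / h.sum()

def histogram_similarity(h1, h2):
    """Bhattacharyya coefficient between two normalised histograms."""
    return float(np.sum(np.sqrt(h1 * h2)))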

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for registering multi-modal patient-specific data, for enhancing the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces, and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
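
    As one concrete example of the optical techniques surveyed in such reviews, a calibrated, rectified stereo laparoscope recovers tissue depth by triangulating disparities. The sketch below is illustrative only (calibration parameters and names are assumptions): it converts a disparity map to metric depth and back-projects it to a point cloud.

import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_mm, min_disp=1e-3):
    """Convert a disparity map (pixels) to a metric depth map (mm): Z = f*B/d."""
    d = np.maximum(disparity, min_disp)        # guard against division by zero
    return focal_px * baseline_mm / d

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map into a 3D point cloud in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack((x, y, depth))            # (h, w, 3) surface points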

    Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts

    This tutorial summarises our uses of reflectance transformation imaging in archaeological contexts. It introduces the UK AHRC-funded project Reflectance Transformation Imaging for Ancient Documentary Artefacts and demonstrates imaging methodologies.
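
    A common core of RTI pipelines is the polynomial texture map (PTM) fit of Malzbender et al.: each pixel's brightness is modelled as a biquadratic function of the light direction and fitted by least squares over the capture set, after which the artefact can be relit from any virtual direction. The sketch below is a hedged illustration of that idea (array shapes and names are assumptions, not the project's tools).

import numpy as np

def fit_ptm(images, light_dirs):
    """
    images:     (N, H, W) stack of grey-level captures, one per light position
    light_dirs: (N, 3) unit light directions; only the (lu, lv) components are used
    returns:    (H, W, 6) per-pixel PTM coefficients
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Biquadratic basis, one row per capture.
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)   # (N, 6)
    n, h, w = images.shape
    pixels = images.reshape(n, -1)                                            # (N, H*W)
    coeffs, *_ = np.linalg.lstsq(A, pixels, rcond=None)                       # (6, H*W)
    return coeffs.T.reshape(h, w, 6)

def relight(coeffs, light_dir):
    """Evaluate the fitted model for a new (virtual) light direction."""
    lu, lv = light_dir[0], light_dir[1]
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return np.clip(coeffs @ basis, 0.0, None)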

    Learning single-image 3D reconstruction by generative modelling of shape, pose and shading

    We present a unified framework tackling two problems: class-specific 3D reconstruction from a single image, and generation of new 3D shape samples. These tasks have received considerable attention recently; however, most existing approaches rely on 3D supervision, annotation of 2D images with keypoints or poses, and/or training with multiple views of each object instance. Our framework is very general: it can be trained in similar settings to existing approaches, while also supporting weaker supervision. Importantly, it can be trained purely from 2D images, without pose annotations, and with only a single view per instance. We employ meshes as an output representation, instead of voxels used in most prior work. This allows us to reason over lighting parameters and exploit shading information during training, which previous 2D-supervised methods cannot. Thus, our method can learn to generate and reconstruct concave object classes. We evaluate our approach in various settings, showing that: (i) it learns to disentangle shape from pose and lighting; (ii) using shading in the loss improves performance compared to just silhouettes; (iii) when using a standard single white light, our model outperforms state-of-the-art 2D-supervised methods, both with and without pose supervision, thanks to exploiting shading cues; (iv) performance improves further when using multiple coloured lights, even approaching that of state-of-the-art 3D-supervised methods; (v) shapes produced by our model capture smooth surfaces and fine details better than voxel-based approaches; and (vi) our approach supports concave classes such as bathtubs and sofas, which methods based on silhouettes cannot learn. Comment: Extension of arXiv:1807.09259, accepted to IJCV. Differentiable renderer available at https://github.com/pmh47/dir
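
    The role of the shading term can be illustrated with a toy loss comparison (a conceptual sketch under assumed inputs, not the paper's training code): a silhouette loss only scores mask overlap and is blind to relief inside the outline, whereas a Lambertian shading loss also scores that relief, which is what allows concave shapes to be distinguished from convex ones.

import numpy as np

def silhouette_loss(pred_mask, true_mask):
    """Binary-mask overlap only; blind to surface relief inside the outline."""
    return float(np.mean(np.abs(pred_mask - true_mask)))

def shading_loss(pred_normals, pred_mask, image, light_dir, albedo=1.0):
    """L1 between a Lambertian rendering of the predicted surface and the image."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    rendered = albedo * np.clip(pred_normals @ l, 0.0, None)   # I = rho * max(0, n . l)
    return float(np.mean(np.abs((rendered - image) * pred_mask)))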

    A perceptually validated model for surface depth hallucination

    Capturing detailed surface geometry currently requires specialized equipment such as laser range scanners, which, despite their high accuracy, leave gaps in the surfaces that must be reconciled with photographic capture for relighting applications. Using only a standard digital camera and a single view, we present a method for recovering models of predominantly diffuse textured surfaces that can be plausibly relit and viewed from any angle under any illumination. Our multiscale shape-from-shading technique uses diffuse-lit/flash-lit image pairs to produce an albedo map and textured height field. Using two lighting conditions enables us to subtract one from the other to estimate albedo. In the absence of a flash-lit image of a surface for which we already have a similar exemplar pair, we approximate both albedo and diffuse shading images using histogram matching. Our depth estimation is based on local visibility. Unlike other depth-from-shading approaches, all operations are performed on the diffuse shading image in image space, and we impose no constant-albedo restrictions. An experimental validation shows our method works for a broad range of textured surfaces, and viewers are frequently unable to identify our results as synthetic in a randomized presentation. Furthermore, in side-by-side comparisons, subjects found a rendering of our depth map as plausible as one generated from a laser range scan. We see this method as a significant advance in acquiring surface detail for texturing using a standard digital camera, with applications in architecture, archaeological reconstruction, games and special effects. © 2008 ACM
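
    The two-lighting decomposition can be sketched as follows (a rough illustration under assumed normalisation, not the published algorithm): the flash-only component of the pair approximates albedo, and dividing the ambient image by that estimate leaves the diffuse shading image on which the multiscale shape-from-shading step then operates.

import numpy as np

def decompose(diffuse_lit, flash_lit, eps=1e-6):
    """
    diffuse_lit: image under ambient (diffuse) lighting only
    flash_lit:   image under ambient lighting plus the camera flash
    returns:     (albedo, shading) estimates
    """
    # Flash minus ambient is nearly shadow-free and, for a roughly frontal,
    # predominantly diffuse surface, approximately proportional to albedo.
    flash_only = np.clip(flash_lit - diffuse_lit, 0.0, None)
    albedo = flash_only / (flash_only.max() + eps)
    # Dividing the ambient image by the albedo estimate leaves diffuse shading.
    shading = diffuse_lit / (albedo + eps)
    return albedo, shading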

    Color homography

    We show the surprising result that colors across a change in viewing condition (changing light color, shading and camera) are related by a homography. Our homography color correction application delivers improved color fidelity compared with linear least-squares. Comment: Accepted by Progress in Colour Studies 201
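
    A minimal sketch of the idea (illustrative only; the authors' estimation procedure may differ): corresponding RGBs under the two conditions are related by a 3x3 matrix up to an unknown per-pixel brightness (shading) factor, and both can be estimated by alternating least squares.

import numpy as np

def estimate_color_homography(src, dst, n_iters=20):
    """
    src, dst: (N, 3) matched RGB triplets from the two viewing conditions
    returns:  3x3 matrix H such that diag(d) @ src @ H ~= dst
    """
    d = np.ones(len(src))
    H = np.eye(3)
    for _ in range(n_iters):
        # Fix the per-pixel shading factors, solve for H by least squares.
        H, *_ = np.linalg.lstsq(d[:, None] * src, dst, rcond=None)
        # Fix H, update each pixel's shading factor independently.
        pred = src @ H
        d = np.einsum('ij,ij->i', pred, dst) / np.maximum(
            np.einsum('ij,ij->i', pred, pred), 1e-12)
    return H

def apply_correction(image_rgb, H):
    """Apply the fitted 3x3 correction to an (H, W, 3) image."""
    return np.clip(image_rgb.reshape(-1, 3) @ H, 0.0, None).reshape(image_rgb.shape)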