4,917 research outputs found

    In-process surface profile assessment of rotary machined timber using a dynamic photometric stereo technique

    Machining operations have advanced in speed and there is an increasing demand for a higher-quality surface finish. It is therefore necessary to develop real-time surface inspection techniques that provide sensory information for controlling the machining process. This paper describes a practical method for real-time analysis of planed wood using the photometric stereo technique. Earlier research has shown that the technique is very effective in assessing surface waviness on static wood samples. In this paper, the photometric stereo method is extended to real industrial applications where samples are subjected to rapid movement. Surface profiles extracted with the dynamic photometric stereo method are compared with those from static measurements, and the results show a high correlation between the two methods.
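    A minimal NumPy sketch of the classic three-light photometric stereo recovery that the method above builds on, assuming three grayscale captures under known, non-coplanar light directions; the function names, the numerical guards and the simple row-wise integration are illustrative choices, not the paper's calibrated setup.

        import numpy as np

        def photometric_stereo(images, light_dirs):
            """images: (3, H, W) grayscale captures, one per light source.
            light_dirs: (3, 3) unit light-direction vectors, one row per image."""
            n_imgs, h, w = images.shape
            I = images.reshape(n_imgs, -1)            # per-pixel intensity stack
            G = np.linalg.inv(light_dirs) @ I         # scaled normals: albedo * n
            albedo = np.linalg.norm(G, axis=0) + 1e-8
            normals = (G / albedo).reshape(3, h, w)
            return normals, albedo.reshape(h, w)

        def surface_profile(normals, row):
            """Integrate the x-gradient of one scan line into a relative height
            profile, the quantity needed for waviness assessment."""
            nx, ny, nz = normals
            p = -nx[row] / np.clip(nz[row], 1e-6, None)   # slope dz/dx along the row
            return np.cumsum(p)                           # relative height profile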

    Photometric reconstruction of a dynamic textured surface from just one color image acquisition

    http://www.opticsinfobase.org/josaa/abstract.cfm?msid=85528 (selected for inclusion in the Virtual Journal for Biomedical Optics, Vol. 3, Iss. 4)
    Textured surface analysis is essential for many applications. We present a three-dimensional recovery approach for real textured surfaces based on photometric stereo, with the aim of measuring textured surfaces with a high degree of accuracy. For this, we use a color digital sensor and the principles of color photometric stereo. The method uses a single color image, instead of a sequence of gray-scale images, to recover the three-dimensional surface. It can thus be integrated into dynamic systems where there is significant relative motion between the object and the camera. To evaluate the performance of our method, we compare it on real textured surfaces against traditional photometric stereo using three images, and show that similar results can be obtained with just one color image.
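    As a rough illustration of the single-shot idea above, the sketch below treats the three color channels of one capture as the three intensity images of classic photometric stereo, assuming three spectrally separated lights (red, green, blue) arriving from different directions and an approximately grey surface albedo. The optional cross-talk matrix and all names are assumptions, not the authors' calibration procedure.

        import numpy as np

        def single_shot_normals(rgb_image, light_dirs, crosstalk=None):
            """rgb_image: (H, W, 3) single capture, channel c lit mainly by light c.
            light_dirs: (3, 3) unit directions of the red, green and blue lights.
            crosstalk: optional (3, 3) matrix mapping light contributions to camera
            channels, measured offline (identity if the channels are well separated)."""
            h, w, _ = rgb_image.shape
            I = rgb_image.reshape(-1, 3).T             # (3, H*W) channel intensities
            if crosstalk is not None:
                I = np.linalg.solve(crosstalk, I)      # undo spectral channel mixing
            G = np.linalg.solve(light_dirs, I)         # scaled normals, as in the
            albedo = np.linalg.norm(G, axis=0) + 1e-8  # three-image case
            return (G / albedo).reshape(3, h, w)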

    Driven to Distraction: Self-Supervised Distractor Learning for Robust Monocular Visual Odometry in Urban Environments

    We present a self-supervised approach to ignoring "distractors" in camera images for the purposes of robustly estimating vehicle motion in cluttered urban environments. We leverage offline multi-session mapping approaches to automatically generate a per-pixel ephemerality mask and depth map for each input image, which we use to train a deep convolutional network. At run-time we use the predicted ephemerality and depth as an input to a monocular visual odometry (VO) pipeline, using either sparse features or dense photometric matching. Our approach yields metric-scale VO using only a single camera and can recover the correct egomotion even when 90% of the image is obscured by dynamic, independently moving objects. We evaluate our robust VO methods on more than 400 km of driving from the Oxford RobotCar Dataset and demonstrate reduced odometry drift and significantly improved egomotion estimation in the presence of large moving vehicles in urban traffic.
    Comment: International Conference on Robotics and Automation (ICRA), 2018. Video summary: http://youtu.be/ebIrBn_nc-
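    The following sketch shows one way a per-pixel ephemerality mask could gate a sparse feature-based VO front end, as a loose analogue of the approach above: matches falling on pixels predicted as likely dynamic are discarded before essential-matrix estimation with OpenCV. The mask-producing network, the threshold, and the ORB/RANSAC pipeline are illustrative assumptions rather than the paper's implementation, and metric scale (which the paper obtains from the predicted depth) is not recovered here.

        import cv2
        import numpy as np

        def relative_pose(img_prev, img_curr, ephemerality, K, thresh=0.5):
            """ephemerality: (H, W) map in [0, 1], higher = more likely dynamic.
            K: 3x3 camera intrinsics. Returns rotation R and unit-norm translation t."""
            orb = cv2.ORB_create(2000)
            kp1, des1 = orb.detectAndCompute(img_prev, None)
            kp2, des2 = orb.detectAndCompute(img_curr, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(des1, des2)

            pts1, pts2 = [], []
            for m in matches:
                x, y = kp2[m.trainIdx].pt
                if ephemerality[int(y), int(x)] < thresh:   # keep only "static" pixels
                    pts1.append(kp1[m.queryIdx].pt)
                    pts2.append(kp2[m.trainIdx].pt)
            pts1, pts2 = np.float32(pts1), np.float32(pts2)

            E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                              prob=0.999, threshold=1.0)
            # t is only defined up to scale; the paper recovers metric scale from
            # the predicted depth map, which is not modelled in this sketch.
            _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
            return R, t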

    Event Fusion Photometric Stereo Network

    We present a novel method to estimate the surface normals of an object in an ambient light environment using RGB and event cameras. Modern photometric stereo methods rely on an RGB camera, mainly in a dark room, to avoid ambient illumination. To alleviate the limitations of the darkroom environment and to exploit the available light information, we employ an event camera with high dynamic range and low latency. This is the first study to use an event camera for the photometric stereo task, and it works with continuous light sources in ambient light environments. In this work, we also curate a novel photometric stereo dataset constructed by capturing objects with event and RGB cameras under numerous ambient light environments. Additionally, we propose a novel framework named Event Fusion Photometric Stereo Network (EFPS-Net), which estimates the surface normals of an object using both RGB frames and event signals. Our method interpolates the observation maps generated from sparse event signals to obtain more complete light information; the event-interpolated observation maps are then fused with the RGB observation maps. Extensive experiments show that EFPS-Net outperforms state-of-the-art methods on a dataset captured in the real world where ambient light is present. Consequently, we demonstrate that incorporating an additional modality with EFPS-Net alleviates the limitations arising from ambient illumination.
    Comment: 33 pages, 11 figures
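    As background for the observation-map fusion described above, the sketch below shows the general idea of a per-pixel observation map (intensity indexed by projected light direction), a simple interpolation of a sparse event-derived map, and a naive per-cell fusion with the dense RGB map. The grid size, the SciPy-based interpolation and the averaging fusion rule are assumptions for illustration, not EFPS-Net's architecture.

        import numpy as np
        from scipy.interpolate import griddata

        def observation_map(intensities, light_dirs, size=32):
            """intensities: (N,) observed values at one pixel, one per light.
            light_dirs: (N, 3) unit light directions; (lx, ly) indexes the grid."""
            grid = np.zeros((size, size))
            u = ((light_dirs[:, 0] + 1) / 2 * (size - 1)).astype(int)
            v = ((light_dirs[:, 1] + 1) / 2 * (size - 1)).astype(int)
            grid[v, u] = intensities
            return grid

        def interpolate_sparse(obs_map):
            """Fill the empty cells of a sparse (event-derived) observation map."""
            yv, xv = np.nonzero(obs_map)
            if len(yv) < 4:                      # too few samples to interpolate
                return obs_map
            pts, vals = np.stack([yv, xv], axis=1), obs_map[yv, xv]
            yy, xx = np.mgrid[0:obs_map.shape[0], 0:obs_map.shape[1]]
            return griddata(pts, vals, (yy, xx), method='linear', fill_value=0.0)

        def fuse(rgb_map, event_map):
            """Naive fusion: average the RGB and interpolated event observation maps."""
            return 0.5 * (rgb_map + interpolate_sparse(event_map))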

    Dynamic shape capture using multi-view photometric stereo

    • …