
    Innovative optical non-contact measurement of respiratory function using photometric stereo

    Pulmonary function testing is very common and widely used in today's clinical environment for assessing lung function. The contact-based nature of a spirometer can cause breathing awareness that alters the breathing pattern, affects the amount of air inhaled and exhaled, and has hygiene implications. Spirometry also requires a high degree of compliance from the patient, who has to breathe through a handheld mouthpiece. To address these issues, a non-contact computer-vision-based system was developed for pulmonary function testing. It employs an improved photometric stereo method developed to recover local 3D surface orientation, enabling the calculation of breathing volumes. Although photometric stereo offers an attractive technique for acquiring 3D data using low-cost equipment, inherent limitations in the methodology have restricted its practical application, particularly in measurement or metrology tasks. Traditional photometric stereo assumes that the lighting direction is the same at every pixel, which is not usually the case in real applications, especially where the size of the observed object is comparable to the working distance. Such imperfections of the illumination make the subsequent reconstruction procedures used to obtain the 3D shape of the scene prone to low-frequency geometric distortion and systematic error (bias). Also, the 3D reconstruction of the object results in a geometric shape with an unknown scale. To overcome these problems, a novel method of estimating the distance of the object from the camera was developed, which employs the photometric stereo images themselves without requiring any additional imaging modality. The method first identifies the Lambertian Diffused Maxima regions to calculate the object's distance from the camera, from which the corrected per-pixel light vector is derived and the absolute dimensions of the object can subsequently be estimated.
We also propose a new calibration process that allows dynamic (as an object moves in the field of view) calculation of light vectors for each pixel with little additional computational cost. Experiments performed on synthetic as well as real data demonstrate that the proposed approach offers improved performance, achieving a reduction in the estimated surface-normal error of up to 45% as well as a reduction in the mean height error of the reconstructed surface of up to 6 mm. In addition, compared with traditional photometric stereo, the proposed method reduces the mean angular and height errors so that they are low, constant, and independent of the position of the object within a normal working range. A high (0.98) correlation between breathing volume calculated from photometric stereo and spirometer data was observed. This breathing volume is then converted to an absolute amount of air using the distance information obtained from the Lambertian Diffused Maxima regions. The unique and novel feature of this system is that it views the patient from both front and back and creates a 3D structure of the whole torso. By observing the 3D structure of the torso over time, the amount of air inhaled and exhaled can be estimated.
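The traditional formulation that this work improves on can be sketched in a few lines: under the Lambertian model with distant, uniform lighting (the very assumption the thesis relaxes with per-pixel light vectors), surface normals and albedo follow from a per-pixel linear least-squares solve. A minimal NumPy sketch, not the thesis's corrected method:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals and albedo from >= 3 grayscale
    images taken under known, distant light directions, assuming the
    classic Lambertian model I = albedo * max(N . L, 0)."""
    h, w = images[0].shape
    I = np.stack([im.reshape(-1) for im in images])   # (k, h*w) intensities
    L = np.asarray(light_dirs, dtype=float)           # (k, 3) unit light dirs
    # Least-squares solve L @ G = I, where G = albedo * N at every pixel
    G, *_ = np.linalg.lstsq(L, I, rcond=None)         # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)           # unit normals
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

Because the same global light matrix `L` is inverted for every pixel, any spatial variation of the true lighting turns directly into the low-frequency geometric bias described above.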

    BxDF material acquisition, representation, and rendering for VR and design

    Photorealistic and physically-based rendering of real-world environments with high-fidelity materials is important to a range of applications, including special effects, architectural modelling, cultural heritage, computer games, automotive design, and virtual reality (VR). Our perception of the world depends on lighting and surface material characteristics, which determine how light is reflected, scattered, and absorbed. In order to reproduce appearance, we must therefore understand all the ways objects interact with light, and the acquisition and representation of materials has thus been an important part of computer graphics from its early days. Nevertheless, no material model or acquisition setup is without limitations in terms of the variety of materials represented, and different approaches vary widely in terms of compatibility and ease of use. In this course, we describe the state of the art in material appearance acquisition and modelling, ranging from mathematical BSDFs to data-driven capture and representation of anisotropic materials, and volumetric/thread models for patterned fabrics. We further address the problem of material appearance constancy across different rendering platforms. We present two case studies in architectural and interior design. The first study demonstrates Yulio, a new platform for the creation, delivery, and visualization of acquired material models and reverse-engineered cloth models in immersive VR experiences. The second study shows an end-to-end process of capture and data-driven BSDF representation using the physically-based Radiance system for lighting simulation and rendering.
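As context for the "mathematical BSDFs" end of the spectrum covered above, a widely used analytic baseline is the Cook-Torrance microfacet model with a GGX distribution. The sketch below is a generic illustration of that model (with the common real-time roughness remapping and Schlick Fresnel), not an implementation from the course itself:

```python
import numpy as np

def ggx_brdf(n, v, l, albedo, roughness, f0=0.04):
    """Evaluate a minimal Cook-Torrance microfacet BRDF: GGX normal
    distribution, Smith-style masking-shadowing, Schlick Fresnel, plus a
    Lambertian diffuse term. n, v, l are unit vectors (normal, view, light)."""
    n, v, l = (np.asarray(x, float) for x in (n, v, l))
    h = v + l
    h /= np.linalg.norm(h)                               # half vector
    nv, nl = max(n @ v, 1e-6), max(n @ l, 1e-6)
    nh, vh = max(n @ h, 0.0), max(v @ h, 0.0)
    a2 = roughness ** 4                                  # alpha = roughness^2
    d = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)  # GGX NDF
    k = (roughness + 1.0) ** 2 / 8.0
    g = (nv / (nv * (1 - k) + k)) * (nl / (nl * (1 - k) + k))  # Smith G
    f = f0 + (1.0 - f0) * (1.0 - vh) ** 5                # Schlick Fresnel
    specular = d * g * f / (4.0 * nv * nl)
    return albedo / np.pi + specular
```

Data-driven and volumetric/thread representations exist precisely because measured materials often deviate from what such compact analytic lobes can express.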

    Relit-NeuLF: Efficient Relighting and Novel View Synthesis via Neural 4D Light Field

    In this paper, we address the problem of simultaneous relighting and novel view synthesis of a complex scene from multi-view images with a limited number of light sources. We propose an analysis-synthesis approach called Relit-NeuLF. Following the recent neural 4D light field network (NeuLF), Relit-NeuLF first leverages a two-plane light field representation to parameterize each ray in a 4D coordinate system, enabling efficient learning and inference. Then, we recover the spatially-varying bidirectional reflectance distribution function (SVBRDF) of a 3D scene in a self-supervised manner. A DecomposeNet learns to map each ray to its SVBRDF components: albedo, normal, and roughness. Based on the decomposed BRDF components and conditioning light directions, a RenderNet learns to synthesize the color of the ray. To self-supervise the SVBRDF decomposition, we encourage the predicted ray color to be close to the physically-based rendering result using the microfacet model. Comprehensive experiments demonstrate that the proposed method is efficient and effective on both synthetic data and real-world human face data, and outperforms the state-of-the-art results. Our code is publicly available at https://github.com/oppo-us-research/RelitNeuLF.
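The two-plane light field representation mentioned above maps each ray to the (u, v) and (s, t) coordinates where it crosses two parallel planes. A minimal sketch of that parameterization, with illustrative plane depths rather than the values used by NeuLF:

```python
import numpy as np

def ray_to_uvst(origin, direction, z_uv=0.0, z_st=1.0):
    """Parameterize a ray by its intersections with two parallel planes
    z = z_uv and z = z_st: the classic two-plane 4D light-field coordinates.
    The plane depths here are arbitrary choices for illustration."""
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    if abs(d[2]) < 1e-9:
        raise ValueError("ray is parallel to the parameterization planes")
    t_uv = (z_uv - o[2]) / d[2]   # ray parameter at the first plane
    t_st = (z_st - o[2]) / d[2]   # ray parameter at the second plane
    u, v = (o + t_uv * d)[:2]
    s, t = (o + t_st * d)[:2]
    return np.array([u, v, s, t])
```

A network such as NeuLF can then take this 4D vector directly as input, so every ray through the scene gets a fixed-length coordinate regardless of where it originated.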

    Acquisition, Modeling, and Augmentation of Reflectance for Synthetic Optical Flow Reference Data

    This thesis is concerned with the acquisition, modeling, and augmentation of material reflectance to simulate high-fidelity synthetic data for computer vision tasks. The topic is covered in three chapters: I commence with exploring the upper limits of reflectance acquisition. I analyze state-of-the-art BTF reflectance field renderings and show that they can be applied to optical flow performance analysis with closely matching performance to real-world images. Next, I present two methods for fitting efficient BRDF reflectance models to measured BTF data. Combined, both methods retain all relevant reflectance information as well as the surface normal details on a pixel level. I further show that the resulting synthesized images are suited for optical flow performance analysis, with a virtually identical performance for all material types. Finally, I present a novel method for augmenting real-world datasets with physically plausible precipitation effects, including ground surface wetting, water droplets on the windshield, and water spray and mist. This is achieved by projecting the real-world image data onto a reconstructed virtual scene, manipulating the scene and the surface reflectance, and performing unbiased light transport simulation of the precipitation effects.
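Fitting an efficient BRDF model to measured reflectance data, as in the two methods above, reduces at its simplest to a least-squares fit of lobe weights. The toy sketch below fits a diffuse albedo and one specular weight to samples; the Phong-style lobe and fixed shininess are illustrative stand-ins, not the models fitted in the thesis:

```python
import numpy as np

def fit_diffuse_specular(samples, shininess=32.0):
    """Fit a diffuse albedo and a specular weight to measured reflectance
    samples via linear least squares. Each sample is a tuple
    (n, v, l, value) of unit normal, view, and light vectors plus the
    measured reflected value for that configuration."""
    A, b = [], []
    for n, v, l, value in samples:
        n, v, l = (np.asarray(x, float) for x in (n, v, l))
        r = 2.0 * (n @ l) * n - l                 # mirror reflection of l
        diff = max(n @ l, 0.0) / np.pi            # Lambertian basis term
        spec = max(r @ v, 0.0) ** shininess       # Phong-style lobe
        A.append([diff, spec])
        b.append(value)
    (rho, ks), *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return rho, ks
```

With densely measured BTF data, the same idea scales up: each pixel contributes many such equations, and richer lobes (or several of them) replace the single Phong term.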