
    A Novel Framework for Highlight Reflectance Transformation Imaging

    We propose a novel pipeline and related software tools for processing multi-light image collections (MLICs) acquired in different application contexts, in order to obtain shape and appearance information of the captured surfaces and to derive compact relightable representations of them. Our pipeline extends the popular Highlight Reflectance Transformation Imaging (H-RTI) framework, which is widely used in the Cultural Heritage domain. In particular, we support perspective camera modeling, per-pixel interpolated light direction estimation, and light normalization that corrects vignetting and uneven non-directional illumination. Furthermore, we propose two novel easy-to-use software tools to simplify all processing steps. The tools, in addition to supporting easy processing and encoding of pixel data, implement a variety of visualizations as well as multiple reflectance-model-fitting options. Experimental tests on synthetic and real-world MLICs demonstrate the usefulness of the novel algorithmic framework and the potential benefits of the proposed tools for end-user applications. Terms: "European Union (EU)" & "Horizon 2020" / Action: H2020-EU.3.6.3. - Reflective societies - cultural heritage and European identity / Acronym: Scan4Reco / Grant number: 665091; DSURF project (PRIN 2015) funded by the Italian Ministry of University and Research; Sardinian Regional Authorities under projects VIGEC and Vis&VideoLa
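    As an illustration of the reflectance-model-fitting step, the sketch below fits a classic second-order Polynomial Texture Map (PTM), one of the common RTI/MLIC representations, per pixel by least squares, given per-pixel light directions such as those produced by the interpolation described above. This is a minimal NumPy sketch, not the authors' pipeline or tools; the function names and the assumption of grayscale, light-normalized images are ours.

```python
import numpy as np

def fit_ptm_per_pixel(intensities, light_dirs):
    """Least-squares fit of a second-order Polynomial Texture Map (PTM)
    per pixel from a multi-light image collection (MLIC).

    intensities : (N, H, W) stack of grayscale, light-normalized images.
    light_dirs  : (N, H, W, 3) per-pixel unit light directions.
    Returns (H, W, 6) coefficients a0..a5 of the model
    L(u, v) ~ a0*u^2 + a1*v^2 + a2*u*v + a3*u + a4*v + a5,
    where (u, v) are the x/y components of the light direction.
    """
    u, v = light_dirs[..., 0], light_dirs[..., 1]
    # Per-pixel design matrix, reordered to (H, W, N, 6).
    A = np.stack([u**2, v**2, u * v, u, v, np.ones_like(u)], axis=-1)
    A = np.moveaxis(A, 0, 2)
    b = np.moveaxis(intensities, 0, 2)[..., None]           # (H, W, N, 1)
    # Batched normal equations: coeffs = (A^T A)^-1 A^T b.
    AtA = A.transpose(0, 1, 3, 2) @ A
    Atb = A.transpose(0, 1, 3, 2) @ b
    return np.linalg.solve(AtA, Atb)[..., 0]                 # (H, W, 6)

def relight(coeffs, lu, lv):
    """Evaluate the fitted PTM for a new light direction (lu, lv)."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return coeffs @ basis                                    # (H, W) image
```

    Relighting then reduces to evaluating the fitted polynomial for any new light direction, which is what makes such per-pixel fits compact and interactively relightable.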

    Photometric Variability of the mCP Star CS Vir: Evolution of the Rotation Period

    The aim of this study is to accurately calculate the rotational period of CS\,Vir using {\sl STEREO} observations and to investigate a possible period variation of the star with the help of all accessible data. The {\sl STEREO} data, which cover a five-year interval between 2007 and 2011, are analyzed by means of the Lomb-Scargle and Phase Dispersion Minimization methods. To obtain a reliable rotation period and its error value, the Levenberg-Marquardt and Monte-Carlo simulation algorithms are applied to the data sets. The rotation period of CS\,Vir is thereby improved to 9.29572(12) days using the five-year combined data set. In addition, the light elements are calculated as $HJD_\mathrm{max} = 2\,454\,715.975(11) + 9_{\cdot}^\mathrm{d}29572(12) \times E + 9_{\cdot}^\mathrm{d}78(1.13) \times 10^{-8} \times E^2$ by means of the extremum times derived from the {\sl STEREO} light curves and archives. Moreover, a period variation is revealed for the first time: the period has lengthened by 0.66(8) s y$^{-1}$, equivalent to 66 seconds per century. A time-scale for a possible spin-down is calculated to be around $\tau_\mathrm{SD} \sim 10^6$ yr. Differential rotation and magnetic braking are thought to be responsible for this rotational deceleration. It is deduced that the spin-down time-scale of the star is nearly three orders of magnitude shorter than its main-sequence lifetime ($\tau_\mathrm{MS} \sim 10^9$ yr). It is therefore suggested that the process of period increase might be reversible. Comment: 11 pages, 5 tables, 3 figures, the paper has been accepted for publication in PAS
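    To make the period analysis and the meaning of the quadratic light elements concrete, the sketch below runs Astropy's Lomb-Scargle periodogram on a generic light curve and evaluates the ephemeris quoted above; its quadratic term directly reproduces the quoted lengthening rate. The function names and period search range are illustrative assumptions, and the study's Phase Dispersion Minimization, Levenberg-Marquardt refinement, and Monte-Carlo error estimation are not reproduced here.

```python
import numpy as np
from astropy.timeseries import LombScargle

def rotation_period(t, flux, min_period=1.0, max_period=20.0):
    """Estimate a photometric rotation period (days) from a light curve
    with a Lomb-Scargle periodogram (the search range is an assumption)."""
    freq, power = LombScargle(t, flux).autopower(
        minimum_frequency=1.0 / max_period,
        maximum_frequency=1.0 / min_period)
    return 1.0 / freq[np.argmax(power)]

# Quadratic light elements quoted in the abstract:
# HJD_max = T0 + P*E + Q*E^2  (all in days)
T0, P, Q = 2454715.975, 9.29572, 9.78e-8

def hjd_max(E):
    """Predicted time of light-curve maximum for cycle number E."""
    return T0 + P * E + Q * E**2

# Period change implied by the quadratic term:
# dP/dE = 2Q (days per cycle)  ->  dP/dt = 2Q/P (days per day).
dP_dt = 2.0 * Q / P * 365.25 * 86400.0   # seconds per year
print(f"dP/dt ~ {dP_dt:.2f} s/yr")       # ~0.66 s/yr, as reported
```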

    Photometric Depth Super-Resolution

    This study explores the use of photometric techniques (shape-from-shading and uncalibrated photometric stereo) for upsampling the low-resolution depth map from an RGB-D sensor to the higher resolution of the companion RGB image. A single-shot variational approach is first put forward, which is effective as long as the target's reflectance is piecewise-constant. It is then shown that this dependency upon a specific reflectance model can be relaxed by focusing on a specific class of objects (e.g., faces) and delegating reflectance estimation to a deep neural network. A multi-shot strategy based on randomly varying lighting conditions is eventually discussed. It requires no training or prior on the reflectance, yet this comes at the price of a dedicated acquisition setup. Both quantitative and qualitative evaluations illustrate the effectiveness of the proposed methods on synthetic and real-world scenarios. Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2019. First three authors contribute equally
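    The building block shared by the single-shot, learning-based, and multi-shot variants is the Lambertian image irradiance model that links depth-derived normals to the observed image. The sketch below shows only that forward model, under an orthographic approximation and with illustrative function names; it is not the paper's perspective variational solver or its trained network.

```python
import numpy as np

def normals_from_depth(z, dx=1.0, dy=1.0):
    """Per-pixel surface normals from a depth map z under an orthographic
    approximation: n is proportional to (-dz/dx, -dz/dy, 1)."""
    dzdy, dzdx = np.gradient(z, dy, dx)
    n = np.dstack([-dzdx, -dzdy, np.ones_like(z)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def lambertian_shading(normals, light, albedo=1.0):
    """Image irradiance under one distant light: I = albedo * max(n.l, 0).
    Comparing this rendering of an upsampled depth map with the observed
    high-resolution image is what drives shading-based depth refinement."""
    light = np.asarray(light, dtype=float)
    light = light / np.linalg.norm(light)
    return albedo * np.clip(normals @ light, 0.0, None)
```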

    Shape from Lambertian Photometric Flow Fields

    A new idea for the analysis of shape from reflectance maps is introduced in this paper. It is shown that local surface orientation and curvature constraints can be obtained at points on a smooth surface by computing the instantaneous rate of change of reflected scene radiance caused by angular variations in illumination geometry. The resulting instantaneous changes in image irradiance values across an optic sensing array of pixels constitute what is termed a photometric flow field. Unlike optic flow fields, which are instantaneous changes in position across an optic array of pixels caused by relative motion, photometric flow fields pose no correspondence problem when obtaining the instantaneous change in image irradiance values between successive image frames, because the object and camera remain static relative to one another as the illumination geometry changes. There are a number of advantages to using photometric flow fields. One advantage is that local surface orientation and curvature at a point on a smooth surface can be uniquely determined by only slightly varying the incident orientation of an illuminator within a small local neighborhood about a specific incident orientation; robot manipulators and rotation/positioning jigs can be accurately varied within such small ranges of motion. Conventional implementations of photometric stereo, by contrast, require three vastly different incident orientations of an illuminator, which entails either extensive calibration and/or gross, inaccurate robot arm motions. Another advantage of using photometric flow fields is the duality that exists between determining an unknown local surface orientation from a known incident illuminator orientation and determining an unknown incident illuminator orientation from a known local surface orientation. The equations for photometric flow fields allow the quantitative determination of the incident orientation of an illuminator from an object having a known calibrated surface orientation. Computer simulations are shown depicting photometric flow fields on a Lambertian sphere, illustrating how photometric flow fields quantitatively determine local surface orientation from a known incident orientation of an illuminator, as well as incident illuminator orientation from a known local surface orientation.
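    The sketch below mirrors the described simulation on a Lambertian sphere: it approximates the photometric flow field by finite differences of the image irradiance under small angular variations of the illuminator. Parameterizing the illuminator by polar and azimuth angles, and the function names, are our assumptions, not the paper's analytic formulation.

```python
import numpy as np

def sphere_normals(size=256):
    """Unit normals of a Lambertian sphere on a square image grid
    (NaN outside the disk)."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r2 = x**2 + y**2
    n = np.dstack([x, y, np.sqrt(np.clip(1.0 - r2, 0.0, None))])
    n[r2 > 1.0] = np.nan
    return n

def light_dir(theta, phi):
    """Unit illuminator direction from polar angle theta and azimuth phi."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def irradiance(normals, theta, phi, albedo=1.0):
    """Lambertian image irradiance I = albedo * max(n . l, 0)."""
    return albedo * np.clip(normals @ light_dir(theta, phi), 0.0, None)

def photometric_flow(normals, theta, phi, dtheta=1e-3, dphi=1e-3):
    """Finite-difference photometric flow field: instantaneous change of
    image irradiance under small angular variations of the illuminator."""
    I0 = irradiance(normals, theta, phi)
    dI_dtheta = (irradiance(normals, theta + dtheta, phi) - I0) / dtheta
    dI_dphi = (irradiance(normals, theta, phi + dphi) - I0) / dphi
    return dI_dtheta, dI_dphi

# Example: flow field on a Lambertian sphere for a light at 30 deg incidence.
flow_theta, flow_phi = photometric_flow(sphere_normals(), np.radians(30), 0.0)
```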

    PS-Transformer: Learning Sparse Photometric Stereo Network using Self-Attention Mechanism

    Existing deep calibrated photometric stereo networks basically aggregate observations under different lights using pre-defined operations such as linear projection and max pooling. While these are effective for dense capture, such simple first-order operations often fail to capture the high-order interactions among observations under a small number of different lights. To tackle this issue, this paper presents a deep sparse calibrated photometric stereo network named {\it PS-Transformer}, which leverages a learnable self-attention mechanism to properly capture the complex inter-image interactions. PS-Transformer builds upon a dual-branch design to explore both pixel-wise and image-wise features, and each feature is trained with intermediate surface-normal supervision to maximize geometric feasibility. A new synthetic dataset named CyclesPS+ is also presented, together with a comprehensive analysis, to successfully train photometric stereo networks. Extensive results on publicly available benchmark datasets demonstrate that the surface normal prediction accuracy of the proposed method significantly outperforms that of other state-of-the-art algorithms with the same number of input images, and is even comparable to that of dense algorithms that take a 10$\times$ larger number of input images. Comment: BMVC2021. Code and Supplementary are available at https://github.com/satoshi-ikehata/PS-Transformer-BMVC202
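    For intuition about the self-attention aggregation idea, the toy per-pixel aggregator below treats each observation under one light as a token, lets a small Transformer encoder model the inter-image interactions, and regresses a pooled feature to a unit normal. It uses stock PyTorch modules and invented dimensions; it is not the authors' dual-branch PS-Transformer architecture.

```python
import torch
import torch.nn as nn

class AttentionAggregator(nn.Module):
    """Toy self-attention aggregation of per-light observations at one pixel
    (illustrative only, not the PS-Transformer architecture).

    Input : (B, N, C) features, one C-dim vector per light, e.g. the observed
            intensity concatenated with the calibrated light direction.
    Output: (B, 3) unit surface normals.
    """
    def __init__(self, in_dim=4, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(in_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 3)

    def forward(self, obs):
        tokens = self.encoder(self.embed(obs))   # (B, N, d_model)
        pooled = tokens.mean(dim=1)              # order-invariant pooling
        return nn.functional.normalize(self.head(pooled), dim=-1)

# Example: 10 sparse lights, each token = [intensity, lx, ly, lz].
model = AttentionAggregator(in_dim=4)
obs = torch.rand(8, 10, 4)
print(model(obs).shape)   # torch.Size([8, 3])
```

    The attention layers let every observation attend to every other one, which is how higher-order interactions among sparse lights can be captured, in contrast to order-agnostic operations such as max pooling.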