    Ear-to-ear Capture of Facial Intrinsics

    We present a practical approach to capturing ear-to-ear face models comprising both 3D meshes and intrinsic textures (i.e. diffuse and specular albedo). Our approach is a hybrid of geometric and photometric methods and requires no geometric calibration. Photometric measurements made in a lightstage are used to estimate view-dependent, high-resolution normal maps. We overcome the limitation of a single photometric viewpoint by capturing in multiple poses. We use uncalibrated multiview stereo to estimate a coarse base mesh to which the photometric views are registered. We propose a novel approach for robustly stitching surface normal and intrinsic texture data into a seamless, complete and highly detailed face model. The resulting relightable models provide photorealistic renderings in any view.
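A minimal illustration of the photometric step described above: with known (here, assumed) light directions, per-pixel surface normals and albedo can be recovered by least squares from the lightstage intensity measurements. This is a sketch of classic Lambertian photometric stereo with hypothetical light directions and a single synthetic pixel, not the authors' exact pipeline.

```python
import numpy as np

# Hypothetical light directions (unit vectors), one per lightstage exposure.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714],
              [-0.7, 0.0, 0.714]])
L /= np.linalg.norm(L, axis=1, keepdims=True)

def normals_from_intensities(I, L):
    """Lambertian photometric stereo: I = L @ (albedo * n).

    I: (num_lights, num_pixels) intensities; returns unit normals and albedo.
    """
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # (3, num_pixels)
    albedo = np.linalg.norm(G, axis=0)
    n = G / np.maximum(albedo, 1e-12)
    return n, albedo

# Synthetic check: one pixel with a known normal lit by the four lights above.
true_n = np.array([0.0, 0.0, 1.0])
I = (L @ true_n[:, None]).clip(min=0)
n_est, rho = normals_from_intensities(I, L)
```

Each photometric pose would yield such a normal map in its own view, which is then registered to the multiview-stereo base mesh before stitching.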

    Multispectral texture synthesis

    Synthesizing texture involves the ordering of pixels in a 2D arrangement so as to display certain known spatial correlations, generally as described by a sample texture. In an abstract sense, these pixels could be gray-scale values, RGB color values, or entire spectral curves. The focus of this work is to develop a practical synthesis framework that maintains this abstract view while synthesizing texture with high spectral dimension, effectively achieving spectral invariance. The principal idea is to use a single monochrome texture synthesis step to capture the spatial information in a multispectral texture. The first step is to use a global color space transform to condense the spatial information in a sample texture into a principal luminance channel. Then, a monochrome texture synthesis step generates the corresponding principal band in the synthetic texture. This spatial information is then used to condition the generation of spectral information. A number of variants of this general approach are introduced. The first uses a multiresolution transform to decompose the spatial information in the principal band into an equivalent scale/space representation. This information is encapsulated into a set of low-order statistical constraints that are used to iteratively coerce white noise into the desired texture. The residual spectral information is then generated using a non-parametric Markov Random Field (MRF) model. The remaining variants use a non-parametric MRF to generate the spatial and spectral components simultaneously. In this approach, multispectral texture is grown from a seed region by sampling from the set of nearest neighbors in the sample texture as identified by a template matching procedure in the principal band. The effectiveness of both algorithms is demonstrated on a number of texture examples ranging from greyscale to RGB textures, as well as 16, 22, 32 and 63 band spectral images.
In addition to the standard visual test that predominates in the literature, effort is made to quantify the accuracy of the synthesis using informative and effective metrics. These include first and second order statistical comparisons as well as statistical divergence tests.
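The global transform described above can be sketched with PCA: the sample's spectral bands are decorrelated, and the first principal component serves as the principal luminance band that drives the monochrome synthesis step. A hypothetical sketch of that condensation step under these assumptions, not the work's exact transform:

```python
import numpy as np

def principal_band(texture):
    """Condense a (H, W, B) multispectral texture into one principal band.

    Returns the first-principal-component image plus the basis vector and
    mean needed to relate synthesized spectra back to this band.
    """
    H, W, B = texture.shape
    X = texture.reshape(-1, B).astype(float)
    mean = X.mean(axis=0)
    # Eigen-decomposition of the band covariance; the leading eigenvector
    # captures the dominant (luminance-like) variation across bands.
    cov = np.cov(X - mean, rowvar=False)
    w, V = np.linalg.eigh(cov)
    pc1 = V[:, np.argmax(w)]
    band = (X - mean) @ pc1
    return band.reshape(H, W), pc1, mean

# Synthetic 3-band texture whose bands are scaled copies of one pattern,
# so the principal band should recover that shared spatial pattern.
rng = np.random.default_rng(0)
base = rng.random((8, 8))
tex = np.stack([base, 2 * base, 3 * base], axis=-1)
band, pc1, mean = principal_band(tex)
```

The residual spectral information per pixel would then be generated conditioned on this band, e.g. by the non-parametric MRF sampling the text describes.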

    Intrusion and extrusion of water in hydrophobic mesopores

    We present experimental and theoretical results on intrusion-extrusion cycles of water in hydrophobic mesoporous materials, characterized by independent cylindrical pores. The intrusion, which takes place above the bulk saturation pressure, can be well described using a macroscopic capillary model. Once the material is saturated with water, extrusion takes place upon reduction of the externally applied pressure. Our results for the extrusion pressure can only be understood by assuming that the limiting extrusion mechanism is the nucleation of a vapour bubble inside the pores. A comparison of calculated and experimental nucleation pressures shows that a proper inclusion of line tension effects is necessary to account for the observed values of nucleation barriers. Negative line tensions of order $10^{-11}\,\mathrm{J\,m^{-1}}$ are found for our system, in reasonable agreement with other experimental estimates of this quantity.
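In the macroscopic capillary picture the abstract invokes, intrusion into a hydrophobic cylindrical pore of radius $r$ requires exceeding the Laplace pressure, while extrusion is limited by the barrier for nucleating a vapour bubble, with a line-tension term along the three-phase contact line correcting that barrier. A schematic form using standard capillarity symbols, which may differ from the paper's exact notation:

```latex
% Intrusion: liquid enters once the applied pressure overcomes capillarity;
% for a hydrophobic pore, \theta > 90^{\circ} so \cos\theta < 0 and P_{int} > 0.
P_{\mathrm{int}} = -\frac{2\gamma_{lv}\cos\theta}{r}

% Extrusion: nucleation barrier for a vapour bubble of contact-line length
% \mathcal{L}, corrected by the line tension \tau (found negative here).
\Delta\Omega^{*} = \Delta\Omega^{*}_{\mathrm{cap}} + \tau\,\mathcal{L},
\qquad \tau \sim -10^{-11}\ \mathrm{J\,m^{-1}}
```

A negative $\tau$ lowers the barrier relative to the purely capillary estimate, which is how the line-tension correction reconciles calculated and measured nucleation pressures.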

    STV-based Video Feature Processing for Action Recognition

    In comparison to still image-based processes, video features can provide rich and intuitive information about dynamic events occurring over a period of time, such as human actions, crowd behaviours, and other subject pattern changes. Although substantial progress has been made in the last decade on image processing, with successful applications in face matching and object recognition, video-based event detection still remains one of the most difficult challenges in computer vision research due to its complex continuous or discrete input signals, arbitrary dynamic feature definitions, and often ambiguous analytical methods. In this paper, a Spatio-Temporal Volume (STV) and region intersection (RI) based 3D shape-matching method is proposed to facilitate the definition and recognition of human actions recorded in videos. The distinctive characteristics and the performance gain of the devised approach stem from a coefficient factor-boosted 3D region intersection and matching mechanism developed in this research. This paper also reports an investigation into techniques for efficient STV data filtering to reduce the number of voxels (volumetric pixels) that need to be processed in each operational cycle of the implemented system. The encouraging features and improvements in operational performance registered in the experiments are discussed at the end.
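The region-intersection matching at the core of the method can be illustrated with a voxel overlap score between two binary spatio-temporal volumes, with an optional weight standing in for the paper's coefficient-factor boosting. The shapes, toy volumes, and weighting below are hypothetical, not the authors' implementation:

```python
import numpy as np

def stv_intersection_score(stv_a, stv_b, weight=1.0):
    """Overlap ratio between two binary spatio-temporal volumes.

    stv_a, stv_b: boolean arrays of shape (T, H, W) -- time x space voxels.
    weight: hypothetical coefficient factor boosting the intersection term.
    """
    inter = np.logical_and(stv_a, stv_b).sum()
    union = np.logical_or(stv_a, stv_b).sum()
    return weight * inter / union if union else 0.0

# Two toy action volumes: a block sweeping across the frame over time,
# and the same block shifted by one column.
T, H, W = 4, 8, 8
a = np.zeros((T, H, W), bool)
b = np.zeros((T, H, W), bool)
for t in range(T):
    a[t, 2:5, t:t + 3] = True
    b[t, 2:5, t + 1:t + 4] = True
score = stv_intersection_score(a, b)
```

Filtering out empty or low-information voxels before computing such intersections is where the STV data-filtering techniques mentioned above would cut the per-cycle workload.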

    Visual Perception And Gestalt Grouping In The Landscape: Are Gestalt Grouping Principles Reliable Indicators Of Visual Preference?

    Landscape visual preference research has indicated many potential indicators of preference; however, a comprehensive framework relating visual preference to perception has not been solidified. Gestalt psychology, a predecessor of modern visual perception research, proposes certain visual grouping tendencies to explain how humans perceive the world. This study examines whether Gestalt grouping principles are reliable indicators of preference, and whether they may be used to develop a broad context for visual assessment. Visual preferences for 36 landscape scenes testing the proximity and similarity of landscape elements were ranked one through five by 1,749 Mississippi State University undergraduate, graduate, and faculty members in a web-based preference survey. Using a two-way between-groups analysis of variance (ANOVA) to analyze responses, the results indicate that the proximal and similar configuration of landscape elements within a scene does significantly affect visual preference.
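The two-way between-groups analysis used in the study can be sketched for a balanced design in plain NumPy; the factor labels (proximity x similarity of landscape elements) and the synthetic ratings below are illustrative only, and a real analysis would use a statistics package.

```python
import numpy as np

def two_way_anova(data):
    """F statistics for a balanced two-way between-groups design.

    data: array of shape (a, b, n) -- a levels of factor A (e.g. proximity),
    b levels of factor B (e.g. similarity), n ratings per cell.
    Returns (F_A, F_B, F_interaction).
    """
    a, b, n = data.shape
    grand = data.mean()
    mA = data.mean(axis=(1, 2))            # factor A level means
    mB = data.mean(axis=(0, 2))            # factor B level means
    mAB = data.mean(axis=2)                # cell means
    ss_a = n * b * ((mA - grand) ** 2).sum()
    ss_b = n * a * ((mB - grand) ** 2).sum()
    ss_ab = n * ((mAB - mA[:, None] - mB[None, :] + grand) ** 2).sum()
    ss_err = ((data - mAB[:, :, None]) ** 2).sum()
    ms_err = ss_err / (a * b * (n - 1))    # error mean square
    return (ss_a / (a - 1) / ms_err,
            ss_b / (b - 1) / ms_err,
            ss_ab / ((a - 1) * (b - 1)) / ms_err)

# Synthetic 1-5 preference ratings with a strong main effect of factor A.
rng = np.random.default_rng(1)
ratings = rng.normal(3.0, 0.3, size=(2, 2, 50))
ratings[1] += 1.0
f_a, f_b, f_ab = two_way_anova(ratings)
```

A large F for a factor relative to its critical value is what licenses the study's conclusion that the configuration of elements significantly affects preference.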

    On the evaluation of background subtraction algorithms without ground-truth

    J. C. San Miguel and J. M. Martínez, "On the evaluation of background subtraction algorithms without ground-truth," in 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance, 2013, pp. 180-187. In video-surveillance systems, the moving object segmentation stage (commonly based on background subtraction) has to deal with several issues such as noise, shadows and multimodal backgrounds. Hence, its failure is inevitable, and its automatic evaluation is a desirable requirement for online analysis. In this paper, we propose a hierarchy of existing performance measures not based on ground truth for video object segmentation. Then, four measures based on color and motion are selected and examined in detail with different segmentation algorithms and standard test sequences for video object segmentation. Experimental results show that color-based measures perform better than motion-based measures, and that background multimodality heavily reduces the accuracy of all obtained evaluation results. This work is partially supported by the Spanish Government (TEC2007-65400 SemanticVideo), by Cátedra Infoglobal-UAM for "Nuevas Tecnologías de video aplicadas a la seguridad", by the Consejería de Educación of the Comunidad de Madrid and by the European Social Fund.
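The flavour of a color-based, ground-truth-free measure can be sketched as follows: a good foreground mask should select pixels whose colors genuinely differ from the background model at the same locations. The score below (mean per-pixel color distance over the mask) is a hypothetical stand-in for the measures the paper surveys, not its exact formulation:

```python
import numpy as np

def color_contrast_score(frame, bg_model, mask):
    """Ground-truth-free segmentation quality proxy.

    frame, bg_model: (H, W, 3) float images; mask: (H, W) boolean foreground.
    Returns the mean color distance between the frame and the background
    model over the segmented foreground -- higher suggests the mask covers
    pixels that actually differ from the modelled background.
    """
    if not mask.any():
        return 0.0
    diff = np.linalg.norm(frame[mask] - bg_model[mask], axis=1)
    return float(diff.mean())

# Toy scene: grey background, one red object; score a good vs. a bad mask.
H, W = 16, 16
bg = np.full((H, W, 3), 0.5)
frame = bg.copy()
frame[4:8, 4:8] = [1.0, 0.0, 0.0]              # red foreground object
good = np.zeros((H, W), bool); good[4:8, 4:8] = True
bad = np.zeros((H, W), bool); bad[10:14, 10:14] = True
s_good = color_contrast_score(frame, bg, good)
s_bad = color_contrast_score(frame, bg, bad)
```

Such measures need no annotated ground truth, which is what makes them usable for the online self-evaluation scenario the paper targets, though as the results note they degrade when the background itself is multimodal.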