    Image-Based View Synthesis

    We present a new method for rendering novel images of flexible 3D objects from a small number of example images in correspondence. The strength of the method is its ability to synthesize images whose viewing position lies significantly outside the viewing cone of the example images ("view extrapolation"), without ever modeling the 3D structure of the scene. The method relies on synthesizing a chain of "trilinear tensors" that governs the warping function from the example images to the novel image, together with a multi-dimensional interpolation function that synthesizes the non-rigid motions of the viewed object from the virtual camera position. We show that two closely spaced example images alone are sufficient in practice to synthesize a significant viewing cone, demonstrating that an object can be represented by a relatively small number of model images, for the purpose of cheap and fast viewers that run on standard hardware.
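    The core operation behind such tensor-based warping, transferring a point seen in two example views into a third view via a trilinear (trifocal) tensor, can be sketched as follows. This is a minimal textbook-style illustration, not the paper's full chain-of-tensors pipeline; the tensor construction from known camera matrices and the helper names (`trifocal_tensor`, `transfer_point`) are assumptions for this sketch.

    ```python
    import numpy as np

    def trifocal_tensor(P2, P3):
        """Trifocal tensor T[i, j, k] from camera matrices P2 = [A | a4] and
        P3 = [B | b4], with the first camera assumed canonical, P1 = [I | 0]."""
        T = np.zeros((3, 3, 3))
        for i in range(3):
            # T_i^{jk} = a_i^j b_4^k - a_4^j b_i^k
            T[i] = np.outer(P2[:, i], P3[:, 3]) - np.outer(P2[:, 3], P3[:, i])
        return T

    def transfer_point(T, x1, x2):
        """Transfer homogeneous image points x1 (view 1) and x2 (view 2)
        into view 3. Uses the vertical line through x2; any line through x2
        other than the epipolar line works (the epipolar line is degenerate)."""
        l2 = np.array([x2[2], 0.0, -x2[0]])  # vertical line, satisfies l2 . x2 = 0
        x3 = np.einsum('i,j,ijk->k', x1, l2, T)  # x3^k = x1^i l2_j T[i,j,k]
        return x3 / x3[2]
    ```

    With the tensor in hand, point transfer needs no 3D reconstruction at all: corresponding points in two example images directly determine the point's position in the novel view, which is what makes purely image-based extrapolation possible.
    
    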

    A jigsaw puzzle framework for homogenization of high porosity foams

    An approach to the homogenization of high-porosity metallic foams is explored. The emphasis is on the Alporas foam and its representation by means of two-dimensional wire-frame models. Guaranteed upper and lower bounds on the effective properties are derived by first-order homogenization based on uniform and minimal kinematic boundary conditions. This is combined with the method of Wang tilings to generate sufficiently large material samples along with their finite element discretization. The obtained results are compared to experimental and numerical data available in the literature, and the suitability of the two-dimensional setting itself is discussed.
    Comment: 11 pages, 7 figures, 3 tables
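    The Wang-tiling step mentioned above amounts to filling a large grid with a small set of square tiles whose shared edges must carry matching codes, so that a compact tile set yields arbitrarily large, seamless material samples. A minimal sketch of such a tiler is below; the tile encoding `(N, E, S, W)` and the backtracking search are assumptions for illustration, not the paper's specific tile set or assembly algorithm.

    ```python
    import random

    def wang_tiling(tiles, rows, cols, seed=0):
        """Fill a rows x cols grid with Wang tile indices so that every
        shared edge matches. Each tile is a (N, E, S, W) tuple of edge codes.
        Returns the grid of tile indices, or None if no valid tiling exists."""
        rng = random.Random(seed)
        grid = [[None] * cols for _ in range(rows)]

        def fits(t, r, c):
            # north edge must match the south edge of the tile above,
            # west edge must match the east edge of the tile to the left
            if r > 0 and tiles[grid[r - 1][c]][2] != tiles[t][0]:
                return False
            if c > 0 and tiles[grid[r][c - 1]][1] != tiles[t][3]:
                return False
            return True

        def fill(k):
            if k == rows * cols:
                return True
            r, c = divmod(k, cols)
            order = list(range(len(tiles)))
            rng.shuffle(order)  # randomized order gives non-periodic samples
            for t in order:
                if fits(t, r, c):
                    grid[r][c] = t
                    if fill(k + 1):
                        return True
            grid[r][c] = None
            return False

        return grid if fill(0) else None
    ```

    In the homogenization context, each tile index would map to a pre-meshed microstructural patch, so a valid tiling directly yields a large sample with a conforming finite element discretization.
    
    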

    Image Based Rendering Using Algebraic Techniques

    This paper presents an image-based rendering system using algebraic relations between different views of an object. The system uses pictures of an object taken from known positions. Given three such images, it can generate "virtual" ones as the object would look from any position near the ones the input images were taken from. The extrapolation from the example images can be up to about 60 degrees of rotation. The system is based on the trilinear constraints that bind any three views of an object. As a side result, we propose two new methods for camera calibration; we developed and used one of them. We implemented the system and tested it on real images of objects and faces. We also show experimentally that even when only two images taken from unknown positions are given, the system can be used to render the object from other viewpoints, as long as we have a good estimate of the internal parameters of the camera used and we are able to find good correspondences between the example images. In addition, we present the relation between these algebraic constraints and a factorization method for shape and motion estimation. As a result, we propose a method for motion estimation in the special case of orthographic projection.
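    The factorization method for orthographic shape and motion mentioned above is classically realized as a rank-3 SVD of the registered measurement matrix (in the style of Tomasi-Kanade). A minimal sketch follows; the function name and the even split of singular values between motion and shape are illustrative choices, not the paper's exact formulation, and the result is only determined up to an affine ambiguity.

    ```python
    import numpy as np

    def factor_orthographic(W):
        """Factor a 2F x P measurement matrix of tracked image points
        (x-rows then y-rows per frame) into orthographic motion M (2F x 3)
        and shape S (3 x P), up to an affine ambiguity.
        Registration (subtracting per-row centroids) removes translation."""
        W0 = W - W.mean(axis=1, keepdims=True)      # registered matrix, rank <= 3 (noise-free)
        U, s, Vt = np.linalg.svd(W0, full_matrices=False)
        M = U[:, :3] * np.sqrt(s[:3])               # motion: rows of the cameras
        S = np.sqrt(s[:3])[:, None] * Vt[:3]        # shape: 3D points
        return M, S
    ```

    Under orthographic projection the registered matrix is exactly rank 3, which is why truncating the SVD at three singular values recovers motion and shape jointly; a subsequent metric upgrade (enforcing orthonormal camera rows) would resolve the affine ambiguity.
    
    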