
    Tex2Shape: Detailed Full Human Body Geometry From a Single Image

    We present a simple yet effective method to infer detailed full human body shape from only a single photograph. Our model can infer full-body shape, including face, hair, and clothing with wrinkles, at interactive frame rates. Results feature details even on parts that are occluded in the input image. Our main idea is to turn shape regression into an aligned image-to-image translation problem. The input to our method is a partial texture map of the visible region obtained from off-the-shelf methods. From a partial texture, we estimate detailed normal and vector displacement maps, which can be applied to a low-resolution smooth body model to add detail and clothing. Despite being trained purely with synthetic data, our model generalizes well to real-world photographs. Numerous results demonstrate the versatility and robustness of our method.
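The core output of this approach is a displacement map that adds detail to a smooth body model. A minimal sketch of that final step (function names and the scalar-displacement simplification are mine; the paper predicts vector displacements):

```python
import numpy as np

def apply_displacement(vertices, normals, uvs, disp_map):
    """Offset each vertex of a smooth body mesh along its normal by the
    displacement sampled from a UV-space displacement map (scalar here
    for simplicity; the paper uses vector displacement maps)."""
    h, w = disp_map.shape
    # Sample the map at each vertex's UV coordinate (nearest-neighbour).
    px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    d = disp_map[py, px]                 # per-vertex displacement magnitude
    return vertices + normals * d[:, None]
```

A constant map simply inflates the mesh uniformly along its normals; a learned map adds wrinkles and hair where the network predicts them.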

    Learning to Reconstruct People in Clothing from a Single RGB Camera

    We present a learning-based model to infer the personalized 3D shape of people from a few frames (1-8) of a monocular video in which the person is moving, in less than 10 seconds and with a reconstruction accuracy of 5mm. Our model learns to predict the parameters of a statistical body model and instance displacements that add clothing and hair to the shape. The model achieves fast and accurate predictions based on two key design choices. First, by predicting shape in a canonical T-pose space, the network learns to encode the images of the person into pose-invariant latent codes, where the information is fused. Second, based on the observation that feed-forward predictions are fast but do not always align with the input images, we predict using both bottom-up and top-down streams (one per view), allowing information to flow in both directions. Learning relies only on synthetic 3D data. Once learned, the model can take a variable number of frames as input, and is able to reconstruct shapes even from a single image with an accuracy of 6mm. Results on 3 different datasets demonstrate the efficacy and accuracy of our approach.
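Because the per-frame codes are pose-invariant, fusing them can be as simple as a permutation-invariant mean, which is what makes a variable frame count (1-8) possible. A toy sketch of that fusion step (the function name and the use of a plain mean are my assumptions, not the paper's exact operator):

```python
import numpy as np

def fuse_latent_codes(codes):
    """Fuse per-frame latent codes into a single shape code.
    Since the codes live in a canonical T-pose space, a mean over
    frames is order-independent and works for any number of inputs."""
    codes = np.asarray(codes, dtype=float)   # (num_frames, code_dim)
    return codes.mean(axis=0)                # (code_dim,)
```

With a single frame the fusion degenerates to that frame's code, matching the single-image case reported in the abstract.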

    A flexible and versatile studio for synchronized multi-view video recording

    In recent years, the convergence of Computer Vision and Computer Graphics has put forth new research areas that work on scene reconstruction from and analysis of multi-view video footage. In free-viewpoint video, for example, new views of a scene are generated in real-time from an arbitrary viewpoint, based on a set of real multi-view input video streams. The analysis of real-world scenes from multi-view video to extract motion information or reflection models is another field of research that greatly benefits from high-quality input data. Building a recording setup for multi-view video involves a great effort on the hardware as well as the software side. The amount of image data to be processed is huge, a decent lighting and camera setup is essential for a naturalistic scene appearance and robust background subtraction, and the computing infrastructure has to enable real-time processing of the recorded material. This paper describes a recording setup for multi-view video acquisition that enables the synchronized recording of dynamic scenes from multiple camera positions under controlled conditions. The requirements for the room and their implementation in the separate components of the studio are described in detail. The efficiency and flexibility of the studio are demonstrated on the basis of the results that we obtain with a real-time 3D scene reconstruction system, a system for non-intrusive optical motion capture, and a model-based free-viewpoint video system for human actors.

    Fast Non-Rigid Radiance Fields from Monocularized Data

    3D reconstruction and novel view synthesis of dynamic scenes from collections of single views recently gained increased attention. Existing work shows impressive results for synthetic setups and forward-facing real-world data, but is severely limited in the training speed and angular range for generating novel views. This paper addresses these limitations and proposes a new method for full 360° novel view synthesis of non-rigidly deforming scenes. At the core of our method are: 1) an efficient deformation module that decouples the processing of spatial and temporal information for acceleration at training and inference time; and 2) a static module representing the canonical scene as a fast hash-encoded neural radiance field. We evaluate the proposed approach on the established synthetic D-NeRF benchmark, which enables efficient reconstruction from a single monocular view per time-frame, randomly sampled from a full hemisphere. We refer to this form of inputs as monocularized data. To prove its practicality for real-world scenarios, we recorded twelve challenging sequences with human actors by sampling single frames from a synchronized multi-view rig. In both cases, our method is trained significantly faster than previous methods (minutes instead of days) while achieving higher visual accuracy for generated novel views. Our source code and data is available at our project page: https://graphics.tu-bs.de/publications/kappel2022fast
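The speed-up from decoupling spatial and temporal processing can be illustrated with a toy version (the two "networks" below are random matrices standing in for MLPs; all names are hypothetical, not the paper's architecture): the temporal code is evaluated once per frame and reused for every sampled point, so only the cheap spatial part runs per sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for learned networks: a temporal encoder mapping time t to a
# small code, and a spatial network that offsets a point conditioned on it.
W_time = rng.normal(size=(1, 8))
W_spatial = rng.normal(size=(3 + 8, 3)) * 0.01

def temporal_code(t):
    # Computed ONCE per time step, independent of the sampled points.
    return np.tanh(np.array([[t]]) @ W_time)          # (1, 8)

def deform(points, code):
    # Cheap per-point pass: bend each sample into the canonical space.
    feat = np.concatenate([points, np.repeat(code, len(points), axis=0)], axis=1)
    return points + feat @ W_spatial                  # (N, 3) canonical points

pts = rng.normal(size=(4, 3))
code = temporal_code(0.5)        # amortized over all rays of this frame
canonical = deform(pts, code)    # would then be fed to the static hash-NeRF
```

In the real method the canonical points are looked up in a hash-encoded radiance field; here the point is only the call structure, not the model.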

    A Historical Review of K.H. Moh. Baqir Adelan's Development of Entrepreneurship at Pondok Pesantren Tarbiyatut Tholabah, Kranji, Paciran, Lamongan, 1958-1990

    The problems examined in this thesis are: (1) What is the biography of K.H. Moh. Baqir Adelan? (2) What is the profile of Pondok Pesantren Tarbiyatut Tholabah under the leadership of K.H. Moh. Baqir Adelan? (3) What were K.H. Moh. Baqir Adelan's efforts to develop entrepreneurship at Pondok Pesantren Tarbiyatut Tholabah from 1958 to 1990? To answer these questions, the author uses the historical research method, which consists of several stages: (1) heuristics, the collection of data from material and oral sources as well as from books related to this research; (2) source criticism; (3) interpretation; and (4) historiography. The approach used is a historical approach, which describes events that occurred in the past. The researcher employs narrative history theory, with K.H. Moh. Baqir Adelan as the actor on the stage, together with the theory of continuity and change as cited by Zamakhsyari Dhofier. From this research it can be concluded that: (1) K.H. Moh. Baqir Adelan was born on 30 August 1934. He first studied at Madrasah Salafi Tarbiyatut Tholabah and continued at Pondok Al-Amin Tunggul, which was run by K.H. Amin Musthofa, his own uncle. He then spent six years in Jombang, boarding at Tambakberas for two years and at Denanyar for four years. In 1958 he returned to the pesantren, and in 1976 he became the caretaker of Pondok Pesantren Tarbiyatut Tholabah. He died at the age of 72, on 15 May 2006. (2) Pondok Pesantren Tarbiyatut Tholabah was founded in 1898 CE in the village of Kranji, Paciran, Lamongan, by K.H. Musthofa. It originally consisted of only a dormitory and a mosque, but changing times demanded formal institutions to comply with instructions from the Ministry of Education. Today the pesantren runs several formal institutions, including MI, MTs, MA and STAIDRA. (3) Besides being a kiai, K.H. Moh. Baqir Adelan had an entrepreneurial spirit. This is evidenced by his founding of UD. Barokah Sejati. Moreover, even before running this furniture business (UD. Barokah Sejati), he already ran a business supplying the kitab (religious books) needed by the ma'arif institutions in the Paciran area.

    A framework for natural animation of digitized models

    We present a novel versatile, fast and simple framework to generate high-quality animations of scanned human characters from input motion data. Our method is purely mesh-based and, in contrast to skeleton-based animation, requires only a minimum of manual interaction. The only manual step that is required to create moving virtual people is the placement of a sparse set of correspondences between triangles of an input mesh and triangles of the mesh to be animated. The proposed algorithm implicitly generates realistic body deformations, and can easily transfer motions between human subjects of different shape and proportions. It can process different types of input data, e.g. other animated meshes and motion capture files, in just the same way. Finally, and most importantly, it creates animations at interactive frame rates. We feature two working prototype systems that demonstrate that our method can generate lifelike character animations from both marker-based and marker-less optical motion capture data.
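The quantity such mesh-based transfer schemes typically move across the sparse triangle correspondences is a per-triangle deformation gradient. A small sketch of how one such gradient can be computed from a triangle's rest and deformed poses (the function name is mine; this is the generic construction, not necessarily the paper's exact formulation):

```python
import numpy as np

def deformation_gradient(rest_tri, deformed_tri):
    """3x3 matrix mapping a triangle's rest-pose edge vectors (plus its
    unit normal) onto the deformed ones. Per-triangle transforms like
    this are what a mesh-based scheme copies from the source animation
    to corresponding triangles of the target mesh."""
    def frame(tri):
        e1, e2 = tri[1] - tri[0], tri[2] - tri[0]
        n = np.cross(e1, e2)
        n /= np.linalg.norm(n)
        return np.column_stack([e1, e2, n])
    return frame(deformed_tri) @ np.linalg.inv(frame(rest_tri))
```

For a rigidly rotated triangle this recovers exactly the rotation matrix; stitching the transferred per-triangle transforms back into a consistent mesh is the (omitted) global solve.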

    Depth Augmented Omnidirectional Stereo for 6-DoF VR Photography


    Enhanced dynamic reflectometry for relightable free-viewpoint video

    Free-viewpoint video of human actors allows photo-realistic rendering of real-world people under novel viewing conditions. Dynamic reflectometry extends the concept of free-viewpoint video and additionally allows rendering under novel lighting conditions. In this work, we present an enhanced method for capturing human shape and motion as well as dynamic surface reflectance properties from a sparse set of input video streams. We augment our initial method for model-based relightable free-viewpoint video in several ways. Firstly, a single-skin mesh is introduced for the continuous appearance of the model. Moreover, an algorithm to detect and compensate lateral shifting of textiles is presented in order to improve temporal texture registration. Finally, a structured resampling approach is introduced which enables reliable estimation of spatially varying surface reflectance despite a static recording setup. The new algorithmic ingredients, together with the relightable 3D video framework, enable us to realistically reproduce the appearance of animated virtual actors under different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing. Our contribution can be used to create 3D renditions of real-world people under arbitrary novel lighting conditions on standard graphics hardware.

    An X-ray, Optical and Radio Search for Supernova Remnants in the Nearby Sculptor Group Sd Galaxy NGC 7793

    We present a multi-wavelength study of the properties of supernova remnants (SNRs) in the nearby Sculptor Group Sd galaxy NGC 7793. Using our own Very Large Array radio observations at 6 and 20 cm, as well as archived ROSAT X-ray data, previously published optical results and our own H-alpha image, we have searched for X-ray and radio counterparts to previously known optically-identified SNRs and for new, previously unidentified SNRs in these two wavelength regimes. Only two of the 28 optically-identified SNRs are detected at another wavelength. The most noteworthy source in our study is N7793-S26, the only SNR that is detected at all three wavelengths. It features a long (approximately 450 pc) filamentary morphology that is clearly seen in both the optical and radio images. N7793-S26's radio luminosity exceeds that of the Galactic SNR Cas A, and based on equipartition calculations we determine that an energy of at least 10^52 ergs is required to maintain this source. A second optically-identified SNR, N7793-S11, has detectable radio emission but is not detected in the X-ray. Complementary X-ray and radio searches for SNRs have yielded five new candidate radio SNRs, to be added to the 28 SNRs in this galaxy that have already been detected by optical methods. We find that the density of the ambient interstellar medium (ISM) surrounding these SNRs significantly impacts their spectral characteristics, consistent with surveys of the SNR populations in other galaxies.
    Comment: 32 pages, 25 figures, to appear in the Astrophysical Journal (February 2002)
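The equipartition energy quoted above comes from the standard minimum-energy argument for synchrotron sources. In its classical textbook form (Pacholczyk 1970), the minimum total energy in relativistic particles and magnetic field scales with the source radius and radio luminosity as:

```latex
E_{\min} = c_{13}\,(1+k)^{4/7}\,\phi^{3/7}\,R^{9/7}\,L^{4/7}
```

where $k$ is the assumed ion-to-electron energy ratio, $\phi$ the volume filling factor, $R$ the source radius, $L$ the integrated radio luminosity, and $c_{13}$ a constant depending on the spectral index and frequency limits. The symbols follow the textbook convention and are not necessarily the paper's notation; the steep $R^{9/7}$ dependence is why the very extended (about 450 pc) N7793-S26 requires such a large minimum energy.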