GANFIT: Generative adversarial network fitting for high fidelity 3D face reconstruction
In the past few years, a lot of work has been done towards reconstructing the 3D facial structure from single images by capitalizing on the power of Deep Convolutional Neural Networks (DCNNs). In the most recent works, differentiable renderers were employed in order to learn the relationship between the facial identity features and the parameters of a 3D morphable model for shape and texture. The texture features either correspond to components of a linear texture space or are learned by auto-encoders directly from in-the-wild images. In all cases, the quality of the facial texture reconstruction of the state-of-the-art methods is still not capable of modeling textures in high fidelity. In this paper, we take a radically different approach and harness the power of Generative Adversarial Networks (GANs) and DCNNs in order to reconstruct the facial texture and shape from single images. That is, we utilize GANs to train a very powerful generator of facial texture in UV space. Then, we revisit the original 3D Morphable Models (3DMMs) fitting approaches, making use of non-linear optimization to find the optimal latent parameters that best reconstruct the test image, but under a new perspective. We optimize the parameters with the supervision of pretrained deep identity features through our end-to-end differentiable framework. We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstructions and achieve, for the first time to the best of our knowledge, facial texture reconstruction with high-frequency details.
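As a rough illustration of the fitting loop described above (hypothetical code, not the authors' implementation), the sketch below optimizes a GAN latent texture code and 3DMM shape coefficients under a deep identity-feature loss. Here `texture_gan`, `id_net`, and `render` are toy stand-ins for the pretrained texture generator, identity network, and differentiable renderer named in the abstract, reduced to the smallest modules that let gradients reach the latent parameters.

```python
# Minimal sketch of GAN-based 3DMM fitting by latent optimization.
# All components below are placeholders for the pretrained networks
# described in the abstract, not the authors' actual models.
import torch

torch.manual_seed(0)

# --- stand-in components (hypothetical) ---
texture_gan = torch.nn.Sequential(            # latent code -> flat UV texture
    torch.nn.Linear(128, 3 * 64 * 64), torch.nn.Tanh())
id_net = torch.nn.Sequential(                 # image -> identity embedding
    torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 256))

def render(texture_flat, shape_params):
    """Placeholder differentiable renderer: a real one would rasterize the
    3DMM mesh with the UV texture; here we just mix texture and shape so
    gradients flow to both parameter sets."""
    return texture_flat + 0.01 * shape_params.sum()

target = torch.rand(1, 3 * 64 * 64)           # the single input image

# Latent texture code and 3DMM shape coefficients, both optimized directly.
z_tex = torch.zeros(1, 128, requires_grad=True)
p_shape = torch.zeros(1, 50, requires_grad=True)
opt = torch.optim.Adam([z_tex, p_shape], lr=0.05)

for step in range(200):
    opt.zero_grad()
    rendered = render(texture_gan(z_tex), p_shape)
    # Identity loss: cosine distance between deep features of render and target.
    f_r, f_t = id_net(rendered), id_net(target)
    loss = 1 - torch.nn.functional.cosine_similarity(f_r, f_t).mean()
    loss.backward()
    opt.step()
```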
Test of 6-kVA three-phase flux transfer-type current-limiting transformer
A 6-kVA three-phase model of the flux transfer-type current-limiting transformer was developed and tested. In this device, winding loops of YBCO superconducting tape magnetically couple two independent iron cores: the primary-side core, which carries the copper primary winding, and the secondary-side core, which carries the copper secondary winding. Because the magnetic flux linked to the superconducting winding loops must remain constant, the YBCO loops transfer flux between the two iron cores and thereby couple the primary and secondary coils magnetically. While the YBCO loops are superconducting, 100% of the magnetic flux is transferred and the device functions like a conventional transformer. Once a fault current in either winding drives the YBCO loops normal, the power transfer between the two iron cores is limited, and the current in the secondary winding is limited naturally as a result of the decoupling of the iron cores.
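To make the limiting behaviour concrete, here is a minimal numerical sketch under an assumed ideal-transformer model with an effective coupling factor k. All values are illustrative assumptions, not measurements from the 6-kVA device.

```python
# Illustrative model (not device data): full flux transfer while the YBCO
# loops are superconducting (k = 1), reduced effective coupling once they
# quench, which naturally limits the secondary fault current.

V1, N1, N2 = 230.0, 100, 100      # assumed primary voltage and turns
Z_LOAD_NORMAL = 10.0              # ohms, normal operation (assumed)
Z_FAULT = 0.1                     # ohms, short-circuit fault (assumed)

def secondary_current(k, z_load):
    """Ideal-transformer estimate with coupling factor k: only the
    fraction k of the primary flux reaches the secondary core, so the
    effective secondary EMF scales with k."""
    e2 = k * V1 * (N2 / N1)
    return e2 / z_load

i_normal = secondary_current(k=1.0, z_load=Z_LOAD_NORMAL)    # loops superconducting
i_fault_unlimited = secondary_current(k=1.0, z_load=Z_FAULT)
i_fault_limited = secondary_current(k=0.1, z_load=Z_FAULT)   # loops quenched

print(f"normal load current:    {i_normal:7.1f} A")
print(f"fault, no limiting:     {i_fault_unlimited:7.1f} A")
print(f"fault, decoupled cores: {i_fault_limited:7.1f} A")
```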
Organic acids, sugars, vitamin C, antioxidant capacity and phenolic compounds in fruits of white (Morus alba L.) and black (Morus nigra L.) mulberry genotypes
Mulberries (Morus spp.) are historically grown in particular microclimatic regions of Eastern Anatolia, including the Aras valley. In the valley, mulberries are among the oldest crops and are used for several purposes by local people. The aim of the present study was to evaluate, for the first time, the organic acids, sugars, vitamin C, antioxidant capacity (TEAC, Trolox Equivalent Antioxidant Capacity assay), and phenolic compounds of the historical black and white mulberry genotypes growing in the Aras valley in Turkey. Results showed that species and genotype strongly influenced the chemical content and antioxidant capacity (p<0.05). Malic acid was the main organic acid in all genotypes and ranged from 1.130 to 3.040 g/100 g. Among sugars, fructose and glucose were predominant, ranging from 4.177 to 7.700 g/100 g and from 5.337 to 8.573 g/100 g across the mulberry genotypes, respectively. The black mulberry genotypes showed remarkably higher antioxidant capacity by the TEAC assay (10.167 to 14.400 µmol TE/g) compared to the white mulberry genotypes (6.170 to 9.273 µmol TE/g). Chlorogenic acid and rutin were the main phenolic compounds.
Synthesizing Coupled 3D Face Modalities by Trunk-Branch Generative Adversarial Networks
Generating realistic 3D faces is of high importance for computer graphics and computer vision applications. Generally, research on 3D face generation revolves around linear statistical models of the facial surface. Nevertheless, these models cannot faithfully represent either the facial texture or the normals of the face, which are crucial for photo-realistic face synthesis. Recently, it was demonstrated that Generative Adversarial Networks (GANs) can be used for generating high-quality textures of faces. Nevertheless, the generation process either omits the geometry and normals, or independent processes are used to produce 3D shape information. In this paper, we present the first methodology that generates high-quality texture, shape, and normals jointly, which can be used for photo-realistic synthesis. To do so, we propose a novel GAN that can generate data from different modalities while exploiting their correlations. Furthermore, we demonstrate how we can condition the generation on the expression and create faces with various facial expressions. The qualitative results shown in this paper are compressed due to size limitations; full-resolution results and the accompanying video can be found in the supplementary documents. The code and models are available at the project page: https://github.com/barisgecer/TBGAN.
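A minimal sketch of what such a trunk-branch generator could look like (an assumed toy architecture, not the released TBGAN model): a shared trunk consumes the latent code together with an expression condition, and separate branches decode the correlated texture, shape, and normal maps.

```python
# Assumed trunk-branch generator sketch, not the authors' architecture:
# one shared trunk produces a common feature from latent + expression,
# and per-modality branches decode texture, shape, and normals from it.
import torch
import torch.nn as nn

class TrunkBranchGenerator(nn.Module):
    def __init__(self, z_dim=128, expr_dim=10, feat_dim=256, out_hw=32):
        super().__init__()
        self.trunk = nn.Sequential(              # shared trunk
            nn.Linear(z_dim + expr_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU())

        def branch(channels):                    # one decoder head per modality
            return nn.Sequential(
                nn.Linear(feat_dim, channels * out_hw * out_hw), nn.Tanh(),
                nn.Unflatten(1, (channels, out_hw, out_hw)))

        self.texture = branch(3)                 # RGB UV texture
        self.shape = branch(3)                   # XYZ position map
        self.normals = branch(3)                 # surface normal map

    def forward(self, z, expr):
        h = self.trunk(torch.cat([z, expr], dim=1))
        return self.texture(h), self.shape(h), self.normals(h)

g = TrunkBranchGenerator()
tex, shp, nrm = g(torch.randn(4, 128), torch.randn(4, 10))
print(tex.shape, shp.shape, nrm.shape)  # three aligned 3x32x32 maps per sample
```

Because all three branches share the trunk feature, the modalities stay correlated per sample, which is the property the paper exploits for coherent photo-realistic synthesis.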
OSTeC: one-shot texture completion
The last few years have witnessed the great success of non-linear generative models in synthesizing high-quality photorealistic face images. Many recent approaches to 3D facial texture reconstruction and pose manipulation from a single image still rely on large and clean face datasets to train image-to-image Generative Adversarial Networks (GANs). Yet collecting such a large-scale, high-resolution 3D texture dataset remains very costly, and it is difficult to maintain age/ethnicity balance. Moreover, regression-based approaches generalize poorly to in-the-wild conditions and cannot be fine-tuned to a target image. In this work, we propose an unsupervised approach for one-shot 3D facial texture completion that does not require large-scale texture datasets, but rather harnesses the knowledge stored in 2D face generators. The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator, based on the visible parts. Finally, we stitch the most visible textures at different angles in the UV image-plane. Further, we frontalize the target image by projecting the completed texture into the generator. Qualitative and quantitative experiments demonstrate that the completed UV textures and frontalized images are of high quality, resemble the original identity, can be used to train a texture GAN model for 3DMM fitting, and improve pose-invariant face recognition.
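The pipeline can be summarized in a short loop. In the sketch below every function (`rotate_in_3d`, `invert_in_2d_gan`, `to_uv`) is a hypothetical placeholder for the corresponding stage in the abstract, not the authors' actual API.

```python
# High-level sketch of one-shot texture completion: sweep poses, complete
# each rotated view with a 2D face generator, stitch the best-seen texels
# in UV space. All functions are placeholders for the stages in the abstract.
import numpy as np

def rotate_in_3d(image, yaw):
    """Placeholder: re-render the fitted 3D face at a new yaw angle,
    returning the rotated image and a per-pixel visibility weight."""
    visible = np.ones_like(image) * (1.0 - abs(yaw) / 180.0)
    return image, visible

def invert_in_2d_gan(image, visible_mask):
    """Placeholder: optimize a 2D face-GAN latent so the generated image
    matches `image` on visible pixels; the generator fills in the rest."""
    return image * visible_mask + 0.5 * (1 - visible_mask)

def to_uv(image):
    """Placeholder: sample the image into the UV texture plane."""
    return image

target = np.random.rand(64, 64, 3)        # the single input photograph
uv_acc = np.zeros((64, 64, 3))            # accumulated UV texture
w_acc = np.zeros((64, 64, 3))             # accumulated visibility weights

for yaw in (-60, -30, 0, 30, 60):         # sweep poses
    rotated, vis = rotate_in_3d(target, yaw)
    completed = invert_in_2d_gan(rotated, vis)
    uv, w = to_uv(completed), to_uv(vis)
    take = w > w_acc                       # stitch: prefer the best-seen view
    uv_acc[take], w_acc[take] = uv[take], w[take]
```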
AvatarMe: realistically renderable 3D facial reconstruction "in-the-wild"
Over the last years, with the advent of Generative Adversarial Networks (GANs), many face analysis tasks have accomplished astounding performance, with applications including, but not limited to, face generation and 3D face reconstruction from a single "in-the-wild" image. Nevertheless, to the best of our knowledge, there is no method which can produce high-resolution photorealistic 3D faces from "in-the-wild" images, and this can be attributed to: (a) the scarcity of available data for training, and (b) the lack of robust methodologies that can successfully be applied to very high-resolution data. In this paper, we introduce AvatarMe, the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail. To achieve this, we capture a large dataset of facial shape and reflectance, build on a state-of-the-art 3D texture and shape reconstruction method, and successively refine its results while generating the per-pixel diffuse and specular components that are required for realistic rendering. As we demonstrate in a series of qualitative and quantitative experiments, AvatarMe outperforms existing methods by a significant margin and reconstructs authentic, 4K-by-6K-resolution 3D faces from a single low-resolution image that, for the first time, bridges the uncanny valley.
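To see why separate per-pixel diffuse and specular maps matter for realistic rendering, here is a minimal shading sketch using an assumed Lambert plus Blinn-Phong model; this is an illustration of the general idea, not AvatarMe's actual reflectance representation or renderer.

```python
# Assumed Lambert + Blinn-Phong shading: per-pixel diffuse and specular
# maps are combined under a single directional light. Illustrative only.
import numpy as np

h, w = 64, 64
diffuse_albedo = np.random.rand(h, w, 3)          # per-pixel diffuse map
specular_albedo = np.random.rand(h, w, 1) * 0.3   # per-pixel specular map
normals = np.dstack([np.zeros((h, w, 2)), np.ones((h, w, 1))])  # facing camera

light = np.array([0.3, 0.3, 0.9]); light /= np.linalg.norm(light)
view = np.array([0.0, 0.0, 1.0])
half = light + view; half /= np.linalg.norm(half)  # Blinn-Phong half vector

n_dot_l = np.clip(normals @ light, 0, None)[..., None]
n_dot_h = np.clip(normals @ half, 0, None)[..., None]

# Diffuse term scales albedo by incidence angle; specular adds a tight
# highlight controlled by the shininess exponent.
shaded = diffuse_albedo * n_dot_l + specular_albedo * n_dot_h ** 32
print(shaded.shape)  # (64, 64, 3) rendered RGB
```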