
    Age Progression/Regression by Conditional Adversarial Autoencoder

    "If I give you a face image of mine (without telling you the age at which the picture was taken) and a large collection of face images that I crawled (containing labeled faces of different ages, but not necessarily paired), can you show me what I will look like when I am 80, or what I looked like when I was 5?" The answer is probably "No." Most existing face aging works attempt to learn the transformation between age groups and thus require paired samples as well as a labeled query image. In this paper, we look at the problem from a generative modeling perspective, so that no paired samples are required. In addition, given an unlabeled image, the generative model can directly produce an image with the desired age attribute. We propose a conditional adversarial autoencoder (CAAE) that learns a face manifold; traversing this manifold, smooth age progression and regression can be realized simultaneously. In CAAE, the face is first mapped to a latent vector through a convolutional encoder, and the vector is then projected onto the face manifold, conditioned on age, through a deconvolutional generator. The latent vector preserves personalized face features (i.e., personality), and the age condition controls progression vs. regression. Two adversarial networks are imposed on the encoder and the generator, respectively, forcing the model to generate more photo-realistic faces. Experimental results demonstrate the appealing performance and flexibility of the proposed framework in comparison with the state of the art and ground truth. Comment: Accepted by the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017).
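    The encode-then-condition structure described in the abstract can be sketched with toy linear layers standing in for the convolutional encoder and deconvolutional generator; all dimensions, weights, and function names below are illustrative assumptions, not the paper's implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    IMG_DIM = 64 * 64      # flattened face image (toy size)
    Z_DIM = 50             # latent "personality" vector
    N_AGES = 10            # number of age groups, used as a one-hot condition

    # Toy linear stand-ins for CAAE's convolutional encoder E and
    # deconvolutional generator G (the real model uses conv/deconv stacks).
    W_enc = rng.normal(0, 0.01, (Z_DIM, IMG_DIM))
    W_gen = rng.normal(0, 0.01, (IMG_DIM, Z_DIM + N_AGES))

    def encode(x):
        """Map a face image to the age-free latent vector z."""
        return np.tanh(W_enc @ x)

    def generate(z, age_group):
        """Project z back to the face manifold, conditioned on age."""
        age = np.zeros(N_AGES)
        age[age_group] = 1.0                 # one-hot age condition
        return np.tanh(W_gen @ np.concatenate([z, age]))

    x = rng.normal(size=IMG_DIM)             # a query face, age unlabeled
    z = encode(x)                            # identity is preserved in z
    young = generate(z, age_group=0)         # regression: same person, youngest group
    old = generate(z, age_group=N_AGES - 1)  # progression: same person, oldest group
    ```

    The key point the sketch captures is that the same latent vector z is reused for every age: only the one-hot condition changes, so identity stays fixed while age varies. The two adversarial discriminators of the paper (on z and on the output image) are omitted here.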

    r-BTN: Cross-domain Face Composite and Synthesis from Limited Facial Patches

    We start with an interesting yet challenging question: "If an eyewitness can only recall the eye features of the suspect, so that the forensic artist can only produce a sketch of the eyes (e.g., the top-left sketch shown in Fig. 1), can advanced computer vision techniques help generate the whole face image?" A more general question is: if a large proportion (e.g., more than 50%) of the face/sketch is missing, can a realistic whole-face sketch/image still be estimated? Existing face completion and generation methods either do not perform domain transfer learning or cannot handle large missing areas. For example, inpainting approaches tend to blur the generated region when the missing area is large (i.e., more than 50%). In this paper, we exploit the potential of deep learning networks in filling large missing regions (e.g., as much as 95% missing) and generating realistic, high-fidelity faces across domains. We propose recursive generation by bidirectional transformation networks (r-BTN), which recursively generates a whole face/sketch from a small sketch/face patch. The large missing area and the cross-domain challenge make it difficult to generate satisfactory results with a unidirectional cross-domain learning structure. In contrast, forward and backward bidirectional learning between the face and sketch domains enables recursive estimation of the missing region in an incremental manner (Fig. 1) and yields appealing results. r-BTN also adopts an adversarial constraint to encourage the generation of realistic faces/sketches. Extensive experiments demonstrate the superior performance of r-BTN compared to existing potential solutions. Comment: Accepted by AAAI 201
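    The recursive bidirectional loop can be sketched as follows, with toy linear maps standing in for the two directional networks; the function names, dimensions, and update rule are our illustrative assumptions, not the paper's architecture:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    D = 256  # flattened image/sketch dimension (toy)

    # Toy linear stand-ins for the two directional networks of r-BTN
    # (the paper uses deep encoder-decoder networks).
    W_s2f = rng.normal(0, 0.05, (D, D))   # sketch -> face
    W_f2s = rng.normal(0, 0.05, (D, D))   # face  -> sketch

    def recursive_btn(sketch_patch, known_mask, steps=4):
        """Grow the estimate by bouncing between the sketch and face
        domains, re-imposing the observed patch at every iteration."""
        sketch = sketch_patch.copy()
        for _ in range(steps):
            face = np.tanh(W_s2f @ sketch)        # forward: sketch -> face
            sketch_hat = np.tanh(W_f2s @ face)    # backward: face -> sketch
            # keep the known pixels fixed so the patch is never overwritten
            sketch = np.where(known_mask, sketch_patch, sketch_hat)
        return face, sketch

    mask = np.zeros(D, dtype=bool)
    mask[:D // 20] = True                 # only ~5% of the sketch observed
    patch = np.where(mask, rng.normal(size=D), 0.0)
    face, full_sketch = recursive_btn(patch, mask)
    ```

    The design point the sketch illustrates is why the scheme is bidirectional: each round trip lets information from the observed patch propagate a bit further into the missing region, while the mask keeps the evidence anchored.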

    Expression and Promoter Analysis of Six Heat Stress-Inducible Genes in Rice

    Over the long course of evolution, plants have gradually formed a series of strategies and mechanisms to cope with environmental stresses such as drought, heat, cold, and high salinity. Six highly heat-responsive genes were identified in rice by microarray data analysis. qRT-PCR analysis confirmed that the expression of these six genes was highly heat inducible and responded moderately to salt stress, polyethylene glycol, and abscisic acid treatments, but was little affected by cold treatment. The promoters of the three most heat-inducible genes (OsHsfB2cp, PM19p, and Hsp90p) were used to drive GUS gene expression in rice. The results of GUS gene expression, histochemical staining, and GUS activity assays in panicles and flag leaves of the transgenic rice plants confirmed high heat-induced and moderate drought-induced activities. The three promoters exhibited similarly high activity levels in rice leaves under heat, but OsHsfB2cp and PM19p showed much higher activities in panicles under heat stress. Our work confirmed that the OsHsfB2c and PM19 promoters are highly heat inducible, and further characterization and reconstruction of the cis-elements in these promoters could lead to the development of highly effective heat-inducible promoters for plant genetic engineering.

    Noninvasive Two-Dimensional Strain Imaging of Atherosclerosis: A Preliminary Study in Carotid Arteries In Vivo

    Atherosclerosis remains a major cause of mortality worldwide, and the sudden rupture of atherosclerotic plaque is its most dangerous consequence. Vascular ultrasound elastography has shown promise in estimating elastic properties to evaluate plaque vulnerability. In contrast to intravascular elastography, noninvasive approaches use a transcutaneous ultrasound transducer that is inexpensive, reusable, and convenient. To estimate the strain map, we employ a cross-correlation method in the complex field to extract both the magnitude and phase information of the ultrasound RF echo signal. Two-dimensional noninvasive carotid elastography was studied in atherosclerotic rats and New Zealand rabbits as well as in a healthy volunteer, and the results indicate great potential for diagnosing the vulnerability of atheromatous plaques.
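    The phase side of a complex-field cross-correlation can be sketched on synthetic data: the phase of the conjugate product between pre- and post-compression complex echo signals yields a sub-sample time delay, from which tissue displacement follows. All signal parameters below are illustrative assumptions, and the real method estimates a full 2-D strain map rather than a single delay:

    ```python
    import numpy as np

    fc = 5e6                 # ultrasound center frequency (Hz), assumed
    fs = 40e6                # sampling rate (Hz), assumed
    c = 1540.0               # speed of sound in soft tissue (m/s)
    n = 400
    t = np.arange(n) / fs

    # Simulate complex (IQ) echo signals before and after a small tissue
    # motion: the post-compression echo is a delayed copy of the first.
    delay = 2e-8             # 20 ns true delay between frames
    pre = np.exp(1j * 2 * np.pi * fc * t)
    post = np.exp(1j * 2 * np.pi * fc * (t - delay))

    # The phase of the complex cross-correlation at zero lag gives the
    # sub-sample time delay (phase-shift estimator).
    phase = np.angle(np.vdot(pre, post))   # sum of conj(pre) * post
    tau = -phase / (2 * np.pi * fc)        # estimated delay (s)
    displacement = c * tau / 2             # pulse-echo: halve the round trip
    ```

    In a full elastogram this estimate is repeated in overlapping windows along depth, and strain is then the spatial gradient of the resulting displacement profile; phase wrapping limits the unambiguous delay to half a carrier period.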