
    Interactions of cationic formulations on human hair: Effects on cuticle texture and cortex porosity

    During daily care routines such as cleansing, bleaching, and ironing, the hair fiber becomes damaged through alterations in surface charge, cuticle breakage, and protein loss from the cuticle and cortex (1,2). Cationic formulations are used to treat the damaged fiber; cuticle and cortex properties change through the adsorption and diffusion of formulation ingredients (1,2). The adsorption of quaternary-ammonium cationic surfactants onto the hair surface has been attributed mainly to electrostatic interactions between the cuticle, which is negatively charged after shampooing, and the positive charge of these compounds (1). The uniformity of the adsorbed layer directly influences the texture of damaged hair (3,4). Recent advances in computational image analysis and very-low-angle illumination in light microscopy allowed us to develop a new approach to hair texture evaluation (5). The diffusion of ingredients into the cortex, in turn, has been attributed mainly to molecular size, concentration, and the affinity between the molecule and the hair. Cationic surfactants play an important role in the diffusion process owing to deposition and charge-charge interactions on the hair surface (1). The aim of this study was to obtain quantitative data on cuticle texture and cortex porosity for healthy and damaged hair. Images of the hair surface (texture) obtained by light microscopy were categorized using grey-tone spatial dependence matrices (GTSDM). Bleached hair samples were treated with different conditioning formulations applied in leave-on and rinse-off processes. After bleaching, the hair texture was significantly different from that of the healthy control hair. When a cationic formulation was applied in a rinse-off process, the bleached hair texture improved, with the texture energy falling below the value measured for the healthy control hair. The cationic formulation applied in a leave-on process did not achieve the same result, but still improved the texture of the bleached hair. Quantitative data and images of cortex porosity were obtained using X-ray micro-computed tomography (micro-CT), a fast-growing method in scientific research that provides non-destructive imaging of morphological structures (6,7). The influence of cationic compounds on cortex porosity after treatment was observed: two leave-on formulations were applied to bleached hair, the first containing cationic ingredients and the second without them. The formulation without cationic ingredients showed a significant reduction in cortex porosity, about 70% compared to the untreated bleached hair, whereas the other formulation did not show the same performance. The data obtained indicate that conditioner formulations can improve hair texture and decrease cortex porosity, and that charge-charge interactions enable the adsorption and diffusion of formulation ingredients into the hair.
    References: 1. Schueller, R. and Romanowski, P., Conditioning Agents for Hair and Skin, Cosmetic Science and Technology vol. 21, 1999. 2. Johnson, D.H., Hair and Hair Care, Cosmetic Science and Technology vol. 17, 1997. 3. Aita, Y. and Nonomura, Y., J. Oleo Sci. 65(6), 493-498, 2016. 4. La Torre, C., J. Cosmet. Sci. 57, 37-56 (January/February), 2006. 5. Scanavez, C. and Arcuri, H., Proceedings of the 20th International Hair-Science Symposium, Dresden, Germany, September 2017. 6. du Plessis, A. et al., Laboratory x-ray micro-computed tomography: a user guideline for biological samples, GigaScience 6, 1-11, 2017. 7. Santini, M., Journal of Physics: Conference Series 655, 2015.
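    The GTSDM categorization used here is, in essence, grey-level co-occurrence (Haralick) texture analysis. The following is a minimal sketch of how a texture energy value can be computed from a micrograph, assuming scikit-image is available; the file names and the distance, angle, and grey-level settings are illustrative assumptions, not the study's protocol.

```python
# Sketch: GTSDM (grey-level co-occurrence) texture energy for a hair
# micrograph. Parameters (distance, angles, grey levels) are illustrative,
# not the study's actual settings.
import numpy as np
from skimage import io, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops

def texture_energy(image_path, levels=64):
    """Return the mean GTSDM energy of a grayscale micrograph."""
    img = img_as_ubyte(io.imread(image_path, as_gray=True))
    img = (img // (256 // levels)).astype(np.uint8)  # quantize grey tones
    # Co-occurrence matrices for four directions at pixel distance 1.
    glcm = graycomatrix(img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    # "Energy" (sqrt of the angular second moment): higher = more uniform.
    return graycoprops(glcm, "energy").mean()

# Example comparison (hypothetical file names):
# print(texture_energy("healthy.tif"), texture_energy("bleached.tif"))
```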

    Recovering Faces from Portraits with Auxiliary Facial Attributes

    Recovering a photorealistic face from an artistic portrait is a challenging task, since crucial facial details are often distorted or completely lost in artistic compositions. To handle this loss, we propose Attribute-guided Face Recovery from Portraits (AFRP), which utilizes a Face Recovery Network (FRN) and a Discriminative Network (DN). FRN consists of an autoencoder with residual-block-embedded skip connections and incorporates facial attribute vectors into the feature maps of input portraits at the bottleneck of the autoencoder. DN has multiple convolutional and fully-connected layers, and its role is to enforce FRN to generate authentic face images with the facial attributes dictated by the input attribute vectors. Leveraging spatial transformer networks, FRN automatically compensates for misalignments of portraits and generates aligned face images. For the preservation of identity, we require the recovered and ground-truth faces to share similar visual features. Specifically, DN determines whether the recovered image looks like a real face and checks whether the facial attributes extracted from the recovered image are consistent with the given attributes. Our method can recover photorealistic, identity-preserving faces with desired attributes from unseen stylized portraits, artistic paintings, and hand-drawn sketches. On large-scale synthesized and sketch datasets, we demonstrate that our face recovery method achieves state-of-the-art results. Comment: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
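    As a hedged illustration of the attribute-injection step described above (a sketch, not the authors' released code): one common way to fuse an attribute vector into an autoencoder bottleneck is to tile it spatially and concatenate it with the bottleneck feature maps. All layer sizes below are invented for the example.

```python
# Sketch: fusing a facial-attribute vector into an autoencoder bottleneck,
# in the spirit of AFRP's FRN. Shapes and layer sizes are assumptions.
import torch
import torch.nn as nn

class AttributeBottleneck(nn.Module):
    def __init__(self, feat_channels=256, num_attrs=40):
        super().__init__()
        # Project the concatenated tensor back to the original channel count.
        self.fuse = nn.Conv2d(feat_channels + num_attrs, feat_channels, 1)

    def forward(self, feats, attrs):
        # feats: (B, C, H, W) bottleneck maps; attrs: (B, A) attribute vector.
        b, _, h, w = feats.shape
        tiled = attrs.view(b, -1, 1, 1).expand(b, attrs.size(1), h, w)
        return self.fuse(torch.cat([feats, tiled], dim=1))

# Usage with dummy tensors:
block = AttributeBottleneck()
out = block(torch.randn(2, 256, 8, 8), torch.rand(2, 40))
print(out.shape)  # torch.Size([2, 256, 8, 8])
```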

    Universal in vivo Textural Model for Human Skin based on Optical Coherence Tomograms

    Currently, diagnosis of skin diseases is based primarily on the visual pattern recognition skills and expertise of the physician observing the lesion. Even though dermatologists are trained to recognize patterns of morphology, this is still a subjective visual assessment. Tools for automated pattern recognition can provide objective information to support clinical decision-making, and noninvasive skin imaging techniques provide complementary information to the clinician. In recent years, optical coherence tomography (OCT) has become a powerful skin imaging technique. According to specific functional needs, skin architecture varies across different parts of the body, as do the textural characteristics of OCT images. There is, therefore, a critical need to systematically analyze OCT images from different body sites to identify their significant qualitative and quantitative differences. Sixty-three optical and textural features extracted from OCT images of healthy and diseased skin are analyzed and, in conjunction with decision-theoretic approaches, used to create computational models of the diseases. We demonstrate that these models provide objective information to the clinician to assist in the diagnosis of abnormalities of cutaneous microstructure and, hence, aid in the determination of treatment. Specifically, we demonstrate the performance of this methodology in differentiating basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) from healthy tissue.
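    The abstract does not name the specific decision-theoretic classifier; as a minimal sketch under that caveat, the following trains a standard discriminative model on precomputed per-image feature vectors, with a placeholder 63-feature matrix and placeholder labels.

```python
# Sketch: classifying tissue from per-image OCT feature vectors.
# The classifier choice and the 63-feature matrix are assumptions;
# the paper's actual decision-theoretic models may differ.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 63))    # 120 images x 63 optical/textural features
y = rng.integers(0, 3, size=120)  # 0=healthy, 1=BCC, 2=SCC (placeholder labels)

model = make_pipeline(StandardScaler(),
                      LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```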

    Unsupervised Person Image Synthesis in Arbitrary Poses

    We present a novel approach for synthesizing photo-realistic images of people in arbitrary poses using generative adversarial learning. Given an input image of a person and a desired pose represented by a 2D skeleton, our model renders the image of the same person under the new pose, synthesizing novel views of the parts visible in the input image and hallucinating those that are not seen. This problem has recently been addressed in a supervised manner, i.e., during training the ground-truth images under the new poses are given to the network. We go beyond these approaches by proposing a fully unsupervised strategy. We tackle this challenging scenario by splitting the problem into two principal subtasks. First, we consider a pose-conditioned bidirectional generator that maps the initially rendered image back to the original pose, making it directly comparable to the input image without the need to resort to any training image. Second, we devise a novel loss function that incorporates content and style terms and aims at producing images of high perceptual quality. Extensive experiments conducted on the DeepFashion dataset demonstrate that the images rendered by our model are very close in appearance to those obtained by fully supervised approaches. Comment: Accepted as Spotlight at CVPR 2018
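    As a hedged sketch of the bidirectional idea described above (not the paper's implementation): the generator renders the person in the target pose, maps the result back to the original pose, and the round trip is penalized against the input image, so no paired ground truth is needed. All modules and shapes below are placeholders.

```python
# Sketch of the pose-conditioned cycle: render to a new pose, map back,
# and penalize the round-trip error. G is a stand-in generator, not the
# paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyPoseGenerator(nn.Module):
    """Stand-in generator: image + pose heatmaps -> image."""
    def __init__(self, img_ch=3, pose_ch=18):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + pose_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, img_ch, 3, padding=1), nn.Tanh())

    def forward(self, img, pose):
        return self.net(torch.cat([img, pose], dim=1))

G = TinyPoseGenerator()
img = torch.randn(1, 3, 64, 64)
pose_src, pose_tgt = torch.rand(1, 18, 64, 64), torch.rand(1, 18, 64, 64)

fake = G(img, pose_tgt)             # person rendered in the desired pose
recon = G(fake, pose_src)           # mapped back to the original pose
cycle_loss = F.l1_loss(recon, img)  # comparable to the input, no paired GT
cycle_loss.backward()
```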

    MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction

    In this work we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a 3D human face from a single in-the-wild color image. To this end, we combine a convolutional encoder network with an expert-designed generative model that serves as decoder. The core innovation is our new differentiable parametric decoder, which encapsulates image formation analytically based on a generative model. Our decoder takes as input a code vector with exactly defined semantic meaning that encodes detailed face pose, shape, expression, skin reflectance, and scene illumination. Due to this new way of combining CNN-based and model-based face reconstruction, the CNN-based encoder learns to extract semantically meaningful parameters from a single monocular input image. For the first time, a CNN encoder and an expert-designed generative model can be trained end-to-end in an unsupervised manner, which renders training on very large (unlabeled) real-world data feasible. The obtained reconstructions compare favorably to current state-of-the-art approaches in terms of quality and richness of representation. Comment: International Conference on Computer Vision (ICCV) 2017 (Oral), 13 pages
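    To make the "code vector with exactly defined semantic meaning" concrete, here is a hedged sketch of how such a vector is commonly partitioned before being passed to a differentiable model-based decoder; the group names and sizes follow typical 3DMM conventions and are assumptions, not MoFA's exact layout.

```python
# Sketch: splitting a semantic code vector into named parameter groups, as
# a model-based decoder would consume it. Sizes are illustrative
# (3DMM-style), not MoFA's exact configuration.
import torch

CODE_LAYOUT = {
    "pose": 6,           # rotation + translation (assumed)
    "shape": 80,         # identity shape coefficients (assumed)
    "expression": 64,    # expression coefficients (assumed)
    "reflectance": 80,   # albedo basis coefficients (assumed)
    "illumination": 27,  # 3-band spherical harmonics, RGB (assumed)
}

def split_code(code):
    """Slice the encoder output into named parameter groups."""
    parts, offset = {}, 0
    for name, size in CODE_LAYOUT.items():
        parts[name] = code[:, offset:offset + size]
        offset += size
    return parts

code = torch.randn(4, sum(CODE_LAYOUT.values()))  # batch of encoder outputs
params = split_code(code)
print({k: tuple(v.shape) for k, v in params.items()})
# A differentiable renderer would then map `params` to an image, letting a
# photometric loss backpropagate into the encoder end-to-end.
```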