
    BareSkinNet: De-makeup and De-lighting via 3D Face Reconstruction

    We propose BareSkinNet, a novel method that simultaneously removes makeup and lighting influences from a face image. Our method leverages a 3D morphable model and does not require a reference clean face image or a specified lighting condition. By incorporating 3D face reconstruction, we can easily obtain the 3D geometry and coarse 3D textures. Using this information, we infer normalized 3D face texture maps (diffuse, normal, roughness, and specular) with an image-translation network. The reconstructed 3D face textures, free of undesirable information, significantly benefit subsequent processes such as re-lighting or re-makeup. In experiments, we show that BareSkinNet outperforms state-of-the-art makeup removal methods. In addition, our method is remarkably helpful in removing makeup to generate consistent high-fidelity texture maps, which makes it extendable to many realistic face generation applications. It can also automatically build graphics assets of before-and-after face makeup images with corresponding 3D data, assisting artists in accelerating work such as 3D makeup avatar creation.
    Comment: accepted at PG202

    BlendFace: Re-designing Identity Encoders for Face-Swapping

    The great advancements of generative adversarial networks and face recognition models in computer vision have made it possible to swap identities on images from single sources. Although many studies seem to have proposed almost satisfactory solutions, we notice that previous methods still suffer from an identity-attribute entanglement that causes undesired attribute swapping, because widely used identity encoders, e.g., ArcFace, have crucial attribute biases owing to their pretraining on face recognition tasks. To address this issue, we design BlendFace, a novel identity encoder for face-swapping. The key idea behind BlendFace is that training face recognition models on blended images, whose attributes are replaced with those of another person, mitigates inter-personal biases such as hairstyles. BlendFace feeds disentangled identity features into generators and guides them properly as an identity loss function. Extensive experiments demonstrate that BlendFace improves identity-attribute disentanglement in face-swapping models while maintaining quantitative performance comparable to previous methods.
    Comment: ICCV2023. Code: https://github.com/mapooon/BlendFace, Webpage: https://mapooon.github.io/BlendFacePage
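    The blending idea behind the training data can be sketched as follows. This is a minimal illustration under our own assumptions (a precomputed attribute mask, e.g. for hair or background, and the hypothetical function name `blend_attributes`), not the authors' implementation:

    ```python
    import numpy as np

    def blend_attributes(identity_img, attribute_img, mask):
        """Replace the masked attribute regions (e.g. hair, background) of an
        identity image with those of another person, producing a blended
        training sample. `mask` is an (H, W) array in [0, 1]; images are
        (H, W, C) float arrays."""
        mask = mask.astype(np.float32)[..., None]  # broadcast over channels
        return (1.0 - mask) * identity_img + mask * attribute_img
    ```

    Training an identity encoder on such blended samples is what, per the abstract, discourages it from relying on attribute cues like hairstyle.
    
    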

    Visual SLAM algorithms: a survey from 2010 to 2016

    SLAM is an abbreviation for simultaneous localization and mapping, a technique for estimating sensor motion and reconstructing structure in an unknown environment. SLAM using cameras is referred to as visual SLAM (vSLAM) because it is based on visual information only. vSLAM can be used as a fundamental technology for various types of applications and has been discussed in the literature on computer vision, augmented reality, and robotics. This paper aims to categorize and summarize recent vSLAM algorithms proposed in different research communities from both technical and historical points of view. In particular, we focus on vSLAM algorithms proposed mainly from 2010 to 2016, because major advances occurred in that period. The technical categories are summarized as follows: feature-based, direct, and RGB-D camera-based approaches.

    Handheld Guides in Inspection Tasks: Augmented Reality versus Picture

    Inspection tasks focus on observation of the environment and are required in many industrial domains. Inspectors usually execute these tasks by using a guide such as a paper manual, and directly observing the environment. The effort required to match the information in a guide with the information in the environment, and the constant gaze shifts required between the two, can severely lower the work efficiency of inspectors in performing their tasks. Augmented reality (AR) allows the information in a guide to be overlaid directly on the environment. This can decrease the amount of effort required for information matching, thus increasing work efficiency. AR guides on head-mounted displays (HMDs) have been shown to increase efficiency. Handheld AR (HAR) is not as efficient as HMD-AR in terms of manipulability, but is more practical and features better information input and sharing capabilities. In this study, we compared two handheld guides: an AR interface that shows 3D-registered annotations, that is, annotations having a fixed 3D position in the AR environment, and a non-AR picture interface that displays non-registered annotations on static images. We focused on inspection tasks that involve high information density and require the user to move, as well as to perform several viewpoint alignments. The results of our comparative evaluation showed that use of the AR interface resulted in lower task completion times, fewer errors, fewer gaze shifts, and a lower subjective workload. We are the first to present findings of a comparative study of an HAR interface and a picture interface used in tasks that require the user to move and execute viewpoint alignments, focusing only on direct observation. Our findings can be useful for AR practitioners and psychology researchers.

    Tumor Marker Levels Before and After Curative Treatment of Hepatocellular Carcinoma as Predictors of Patient Survival.

    BACKGROUND: α-fetoprotein (AFP) is used as a marker for hepatocellular carcinoma (HCC), but it is influenced by hepatitis. Protein induced by vitamin K absence or antagonist-II (PIVKA-II) is a sensitive diagnostic marker. Changes in these markers after treatment may reflect curability and predict outcome. METHODS: We analyzed prognosis in 470 HCC patients who received curative treatments, and examined the relationship between changes in AFP and PIVKA-II levels at 1 month after treatment in 156 patients. Subjects were divided into three groups according to changes in each marker: (1) normal level before treatment (L), (2) normalization after treatment (N), or (3) decreased but still above the normal level, or unchanged, after treatment (ANU). RESULTS: High AFP and PIVKA-II levels were significantly associated with poor tumor-free and overall survival. Large tumor size and advanced stage were significantly associated with prevalence of the ANU group. Overall survival in the AFP-L group was significantly better than in the other groups, and overall survival in the PIVKA-II-L and PIVKA-II-N groups was significantly better than in the PIVKA-II-ANU group. The combination of AFP-ANU and PIVKA-II-ANU showed the worst tumor-free and overall survival. Multivariate analysis identified high pre-treatment levels of AFP and PIVKA-II, and the combination of AFP-ANU and PIVKA-II-ANU, as significant determinants of poor tumor-free and overall survival, particularly in patients who underwent hepatectomy. CONCLUSION: We conclude that high levels of AFP or PIVKA-II after treatment for HCC did not sufficiently reflect the curative efficacy of treatment and were predictors of poor prognosis in HCC patients.

    FOCAL LENGTH CHANGE COMPENSATION FOR MONOCULAR SLAM

    2015 IEEE International Conference on Image Processing (ICIP), 27-30 Sept. 2015, Quebec City, QC, Canada.
    In this paper, we propose a method for handling focal length changes in a SLAM algorithm. Our method is designed as a pre-processing step that first estimates the change of the camera focal length and then compensates for the zooming effects before running the actual SLAM algorithm. With our method, camera zooming can be used in existing SLAM algorithms with minor modifications. In the experiments, the effectiveness of the proposed method was quantitatively evaluated. The results indicate that the method can successfully deal with abrupt changes of the camera focal length.

    Zoom Factor Compensation for Monocular SLAM

    2015 IEEE Virtual Reality (VR), Arles, France.
    SLAM algorithms are widely used in augmented reality applications for registering virtual objects. Most SLAM algorithms estimate camera poses and 3D positions of feature points using known intrinsic camera parameters that are calibrated and fixed in advance. This assumption means that the algorithm does not allow the intrinsic camera parameters to change during runtime. We propose a method for handling focal length changes in the SLAM algorithm. Our method is designed as a pre-processing step for the SLAM algorithm input: the change of the focal length is estimated before the tracking process, and camera zooming effects in the input images are compensated for using the estimated change. With our method, camera zooming can be used in existing SLAM algorithms such as PTAM [4] with minor modifications. In the experiment, the effectiveness of the proposed method was quantitatively evaluated. The results indicate that the method can successfully deal with abrupt changes of the camera focal length.
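    The pre-processing step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes matched feature points between a reference frame and the current frame, models a zoom as a pure radial scaling about the principal point, and uses hypothetical function names (`estimate_zoom_factor`, `compensate_points`):

    ```python
    import numpy as np

    def estimate_zoom_factor(pts_prev, pts_curr, principal_point):
        """Estimate the relative focal-length (zoom) change between two frames
        from matched feature points: under a pure zoom, every point moves
        radially, so the ratio of distances from the principal point
        approximates f_curr / f_prev. The median makes the estimate robust
        to a few bad matches."""
        c = np.asarray(principal_point, dtype=np.float64)
        r_prev = np.linalg.norm(pts_prev - c, axis=1)
        r_curr = np.linalg.norm(pts_curr - c, axis=1)
        return float(np.median(r_curr / r_prev))

    def compensate_points(pts_curr, principal_point, zoom):
        """Undo the zoom by scaling coordinates back about the principal
        point, so a fixed-intrinsics SLAM front end can keep tracking."""
        c = np.asarray(principal_point, dtype=np.float64)
        return c + (pts_curr - c) / zoom
    ```

    In a full pipeline the same scale factor would be applied to the image (or to the intrinsic matrix) before each frame is passed to the unmodified SLAM tracker.
    
    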