
    Leukoencephalopathy upon disruption of the chloride channel ClC-2

    ClC-2 is a broadly expressed plasma membrane chloride channel that is modulated by voltage, cell swelling, and pH. A human mutation leading to a heterozygous loss of ClC-2 has previously been reported to be associated with epilepsy, whereas the disruption of Clcn2 in mice led to testicular and retinal degeneration. We now show that the white matter of the brain and spinal cord of ClC-2 knock-out mice developed widespread vacuolation that progressed with age. Fluid-filled spaces appeared between myelin sheaths of the central but not the peripheral nervous system. Neuronal morphology, in contrast, seemed normal. Except for the previously reported blindness, neurological deficits were mild and included a decreased conduction velocity in neurons of the central auditory pathway. The heterozygous loss of ClC-2 had no detectable functional or morphological consequences. Neither heterozygous nor homozygous ClC-2 knock-out mice had lowered seizure thresholds. Sequencing of a large collection of human DNA and electrophysiological analysis showed that several ClC-2 sequence abnormalities previously found in patients with epilepsy most likely represent innocuous polymorphisms.

    Efficient illumination independent appearance-based face tracking

    One of the major challenges that visual tracking algorithms face nowadays is coping with changes in the appearance of the target during tracking. Linear subspace models have been extensively studied and are possibly the most popular way of modelling target appearance. We introduce a linear subspace representation in which the appearance of a face is represented by the addition of two approximately independent linear subspaces modelling facial expressions and illumination respectively. This model is more compact than previous bilinear or multilinear approaches. The independence assumption notably simplifies system training. We only require two image sequences: in one, a single facial expression is subject to all possible illuminations; in the other, the face adopts all facial expressions under one particular illumination. This simple model enables us to train the system with no manual intervention. We also revisit the problem of efficiently fitting a linear subspace-based model to a target image and introduce an additive procedure for solving this problem. We prove that Matthews and Baker’s Inverse Compositional Approach makes a smoothness assumption on the subspace basis that is equivalent to Hager and Belhumeur’s, which worsens convergence. Our approach differs from Hager and Belhumeur’s additive and Matthews and Baker’s compositional approaches in that we make no smoothness assumptions on the subspace basis. In the experiments conducted we show that the model introduced accurately represents the appearance variations caused by illumination changes and facial expressions. We also verify experimentally that our fitting procedure is more accurate and has a better convergence rate than the other related approaches, albeit at the expense of a slight increase in computational cost. Our approach can be used for tracking a human face at standard video frame rates on an average personal computer.
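    The additive two-subspace model described in this abstract can be illustrated as a joint least-squares fit over the concatenated expression and illumination bases. The following is a minimal sketch on synthetic data, not the authors' implementation; all names and dimensions are assumptions.

```python
import numpy as np

# Minimal sketch of the additive appearance model: an image is approximated
# as mean + B_expr @ a + B_illum @ b, with approximately independent
# expression and illumination subspaces. Dimensions are illustrative.
rng = np.random.default_rng(0)
n_pixels, k_expr, k_illum = 100, 5, 3

mean = rng.normal(size=n_pixels)
B_expr, _ = np.linalg.qr(rng.normal(size=(n_pixels, k_expr)))   # orthonormal columns
B_illum, _ = np.linalg.qr(rng.normal(size=(n_pixels, k_illum)))

def fit_coefficients(image, mean, B_expr, B_illum):
    """Jointly solve for expression and illumination coefficients by
    least squares over the concatenated basis [B_expr | B_illum]."""
    B = np.hstack([B_expr, B_illum])
    coeffs, *_ = np.linalg.lstsq(B, image - mean, rcond=None)
    return coeffs[:B_expr.shape[1]], coeffs[B_expr.shape[1]:]

# Synthesize an image from known coefficients and recover them.
a_true = rng.normal(size=k_expr)
b_true = rng.normal(size=k_illum)
image = mean + B_expr @ a_true + B_illum @ b_true
a_est, b_est = fit_coefficients(image, mean, B_expr, B_illum)
```

    Because the joint solve places no smoothness assumption on either basis, it mirrors the additive fitting idea the abstract contrasts with compositional approaches.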

    Fitting a 3D Morphable Model to Edges: A Comparison Between Hard and Soft Correspondences

    We propose a fully automatic method for fitting a 3D morphable model to single face images in arbitrary pose and lighting. Our approach relies on geometric features (edges and landmarks) and, inspired by the iterated closest point algorithm, is based on computing hard correspondences between model vertices and edge pixels. We demonstrate that this is superior to previous work that uses soft correspondences to form an edge-derived cost surface that is minimised by nonlinear optimisation. Comment: To appear in ACCV 2016 Workshop on Facial Informatics.
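    The hard-correspondence step can be sketched in a few lines: each projected model vertex is simply assigned its single nearest edge pixel, ICP-style, rather than being softly weighted over all edge pixels. This is a schematic illustration, not the paper's code.

```python
import numpy as np

# Schematic hard-correspondence step: every projected model vertex is
# matched to its single nearest edge pixel (as in iterated closest points).
def hard_correspondences(projected_vertices, edge_pixels):
    """Return, for each 2D projected vertex, the index of its nearest edge pixel."""
    # Pairwise squared distances, shape (n_vertices, n_edge_pixels).
    d2 = ((projected_vertices[:, None, :] - edge_pixels[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

vertices = np.array([[0.0, 0.0], [5.0, 5.0]])
edges = np.array([[0.1, -0.1], [4.8, 5.2], [10.0, 10.0]])
matches = hard_correspondences(vertices, edges)
```

    In a full fitting loop, these matches would be recomputed after every pose/shape update, exactly as ICP alternates correspondence and transform estimation.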

    Reflectance from images: a model-based approach for human faces

    In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the face is not required. Based on a morphable face model, the system estimates the 3D shape, and establishes point-to-point correspondence across images taken from different viewpoints, and across different individuals' faces. This provides a common parameterization of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from images compensates for deformations of the face during the measurement process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be defined a priori. We apply analytical BRDF models to express the reflectance properties of each region, and we estimate their parameters in a least-squares fit from the image data. For each of the surface points, the diffuse component of the BRDF is locally refined, which provides high detail. We present results for multiple analytical BRDF models, rendered at novel orientations and lighting conditions.
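    For a single homogeneous region and a purely Lambertian term, the per-region least-squares parameter fit reduces to a one-dimensional projection. The following is a deliberately simplified, hypothetical sketch; the paper fits richer analytical BRDF models with more parameters per region.

```python
import numpy as np

# Simplified least-squares fit of a diffuse (Lambertian) albedo for one
# homogeneous region: observed intensity ~= albedo * max(n . l, 0).
# The actual system fits full analytical BRDF models per region.
def fit_lambertian_albedo(normals, light_dir, intensities):
    """Closed-form least-squares albedo minimizing sum (I - albedo * s)^2."""
    shading = np.clip(normals @ light_dir, 0.0, None)
    return (shading @ intensities) / (shading @ shading)

rng = np.random.default_rng(0)
normals = rng.normal(size=(50, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light = np.array([0.0, 0.0, 1.0])
albedo_true = 0.7
observed = albedo_true * np.clip(normals @ light, 0.0, None)
albedo_est = fit_lambertian_albedo(normals, light, observed)
```

    With specular lobes added, the fit becomes nonlinear in the lobe parameters, but the same residual (observed minus predicted reflectance) is minimized in the least-squares sense.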

    Surface and sub-surface multi-proxy reconstruction of middle to late Holocene palaeoceanographic changes in Disko Bugt, West Greenland

    We present new surface water proxy records of meltwater production (alkenone derived), relative sea surface temperature (diatom, alkenones) and sea ice (diatoms) changes from the Disko Bugt area off central West Greenland. We combine these new surface water reconstructions with published proxy records (benthic foraminifera - bottom water proxy; dinocyst assemblages – surface water proxy), along with atmospheric temperature from Greenland ice core and Greenland lake records. This multi-proxy approach allows us to reconstruct centennial scale middle to late Holocene palaeoenvironmental evolution of Disko Bugt and the Western Greenland coastal region with more detail than previously available. Combining surface and bottom water proxies identifies the coupling between ocean circulation (West Greenland Current conditions), the atmosphere and the Greenland Ice Sheet. Centennial to millennial scale changes in the wider North Atlantic region were accompanied by variations in the West Greenland Current (WGC). During periods of relatively warm WGC, increased surface air temperature over western Greenland led to ice sheet retreat and significant meltwater flux. In contrast, during periods of cold WGC, atmospheric cooling resulted in glacier advances. We also identify potential linkages between the palaeoceanography of the Disko Bugt region and key changes in the history of human occupation. Cooler oceanographic conditions at 3.5 ka BP support the view that the Saqqaq culture left Disko Bugt due to deteriorating climatic conditions. The cause of the disappearance of the Dorset culture is unclear, but the new data presented here indicate that it may be linked to a significant increase in meltwater flux, which caused cold and unstable coastal conditions at ca. 2 ka BP. 
The subsequent settlement of the Norse occurred at the same time as climatic amelioration during the Medieval Climate Anomaly, and their disappearance may be related to harsher conditions at the beginning of the Little Ice Age.

    Automatic 3D facial model and texture reconstruction from range scans

    This paper presents a fully automatic approach to fitting a generic facial model to detailed range scans of human faces to reconstruct 3D facial models and textures with no manual intervention (such as specifying landmarks). A Scaling Iterative Closest Points (SICP) algorithm is introduced to compute the optimal rigid registrations between the generic model and range scans of different sizes. A new template-fitting method, formulated in an optimization framework that minimizes the physically based elastic energy derived from thin shells, then faithfully reconstructs the surfaces and textures from the range scans and yields dense point correspondences across the reconstructed facial models. Finally, we demonstrate a facial expression transfer method that clones facial expressions from the generic model onto the reconstructed facial models using the deformation transfer technique.
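    Each SICP iteration estimates a scaled rigid transform from the current correspondences. A closed-form (Umeyama-style) solve for scale, rotation, and translation might look like the sketch below; it is an illustration of the inner solve, not the authors' implementation.

```python
import numpy as np

# Closed-form least-squares similarity transform (scale, rotation,
# translation) between corresponded 3D point sets: the core solve inside
# a Scaling ICP iteration. Umeyama-style sketch, not the paper's code.
def similarity_transform(src, dst):
    """Return (s, R, t) minimizing sum ||dst_i - (s * R @ src_i + t)||^2."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    Xs, Xd = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(Xd.T @ Xs)          # cross-covariance SVD
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))     # guard against reflections
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (Xs ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Recover a known similarity transform from exact correspondences.
rng = np.random.default_rng(0)
src = rng.normal(size=(20, 3))
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
s_true, t_true = 2.0, np.array([1.0, -2.0, 3.0])
dst = s_true * src @ R_true.T + t_true
s_est, R_est, t_est = similarity_transform(src, dst)
```

    In the full algorithm, this solve alternates with nearest-neighbour correspondence updates; the scale term is what lets the generic template register against scans of different sizes.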

    3D Morphable Face Models -- Past, Present and Future

    In this paper, we provide a detailed survey of 3D Morphable Face Models over the 20 years since they were first proposed. The challenges in building and applying these models, namely capture, modeling, image formation, and image analysis, are still active research topics, and we review the state-of-the-art in each of these areas. We also look ahead, identifying unsolved challenges, proposing directions for future research and highlighting the broad range of current and future applications.

    FSNet: An Identity-Aware Generative Model for Image-based Face Swapping

    This paper presents FSNet, a deep generative model for image-based face swapping. Traditionally, face-swapping methods are based on three-dimensional morphable models (3DMMs): facial textures are exchanged between the three-dimensional (3D) geometries estimated from two images of different individuals. However, estimating 3D geometry under varying lighting conditions with 3DMMs remains difficult. We instead represent the face region with a latent variable computed by the proposed deep neural network (DNN), rather than with facial textures. The proposed DNN synthesizes a face-swapped image from the latent variable of one face region and the non-face region of another image. The proposed method does not require fitting a 3DMM; it performs face swapping simply by feeding two face images to the network. Consequently, our DNN-based face swapping handles challenging inputs with differing face orientations and lighting conditions better than previous approaches. Through several experiments, we demonstrate that the proposed method performs face swapping more stably than the state-of-the-art method, while producing results of comparable quality. Comment: 20 pages, Asian Conference on Computer Vision 201
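    The data flow described in this abstract, encoding one image's face region into a latent variable and decoding it together with a second image's non-face region, can be sketched with stand-in components. Everything below (weights, sizes, function names) is a hypothetical illustration, not FSNet's actual architecture.

```python
import numpy as np

# Hypothetical stand-ins for the described pipeline: a face-region encoder
# that produces a latent variable, and a decoder that combines that latent
# with the non-face region of a second image. Sizes are illustrative.
rng = np.random.default_rng(0)
FACE_DIM, LATENT_DIM, NONFACE_DIM = 64, 16, 32
W_enc = rng.normal(size=(LATENT_DIM, FACE_DIM)) * 0.1
W_dec = rng.normal(size=(FACE_DIM, LATENT_DIM + NONFACE_DIM)) * 0.1

def encode_face(face_region):
    """Map a flattened face region to its latent variable (stand-in layer)."""
    return np.tanh(W_enc @ face_region)

def swap_face(face_a, nonface_b):
    """Synthesize a swapped face from A's latent and B's non-face context."""
    z = encode_face(face_a)
    return W_dec @ np.concatenate([z, nonface_b])

face_a = rng.normal(size=FACE_DIM)
nonface_b = rng.normal(size=NONFACE_DIM)
swapped = swap_face(face_a, nonface_b)
```

    The point of the design, as the abstract argues, is that no 3DMM fitting step appears anywhere in this flow: the two inputs go straight through the network.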

    Enabling Viewpoint Learning through Dynamic Label Generation

    Optimal viewpoint prediction is an essential task in many computer graphics applications. Unfortunately, common viewpoint qualities suffer from two major drawbacks: dependency on clean surface meshes, which are not always available, and the lack of closed-form expressions, which requires a costly search involving rendering. To overcome these limitations we propose to separate viewpoint selection from rendering through an end-to-end learning approach, whereby we reduce the influence of the mesh quality by predicting viewpoints from unstructured point clouds instead of polygonal meshes. While this makes our approach insensitive to the mesh discretization during evaluation, it only becomes possible when resolving label ambiguities that arise in this context. Therefore, we additionally propose to incorporate label generation into the training procedure, making the label decision adaptive to the current network predictions. We show how our proposed approach allows learning viewpoint predictions for models from different object categories and for different viewpoint qualities. Additionally, we show that prediction times are reduced from several minutes to a fraction of a second compared to state-of-the-art (SOTA) viewpoint quality evaluation. We will further release the code and training data, which, to our knowledge, will form the largest viewpoint quality dataset available.
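    The dynamic-label idea, choosing among several equally valid viewpoint labels the one closest to the network's current prediction, can be sketched as follows. The function name and the L2 distance measure are assumptions for illustration, not the paper's code.

```python
import numpy as np

# Sketch of adaptive label selection: when multiple candidate viewpoints
# are equally good for a shape (e.g. due to symmetry), the training target
# becomes the candidate nearest to the current network prediction,
# resolving the label ambiguity instead of averaging conflicting targets.
def dynamic_label(prediction, candidates):
    """Return the candidate label closest (in L2) to the current prediction."""
    d2 = ((candidates - prediction) ** 2).sum(axis=1)
    return candidates[d2.argmin()]

# A symmetric object viewed from +x or -x may be equally good; the chosen
# target follows whichever side the network currently predicts.
candidates = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
prediction = np.array([0.9, 0.1, 0.0])
target = dynamic_label(prediction, candidates)
```

    Re-evaluating this selection each training step keeps the loss consistent with the network's own convention, rather than forcing an arbitrary fixed choice among ambiguous labels.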