
    Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual Reality

    Social presence, the feeling of being there with a real person, will fuel the next generation of communication systems driven by digital humans in virtual reality (VR). The best 3D video-realistic VR avatars that minimize the uncanny effect rely on person-specific (PS) models. However, these PS models are time-consuming to build and are typically trained with limited data variability, which results in poor generalization and robustness. Major sources of variability that affect the accuracy of facial expression transfer algorithms include different VR headsets (e.g., camera configuration, slop of the headset), facial appearance changes over time (e.g., beard, make-up), and environmental factors (e.g., lighting, backgrounds). This is a major drawback for the scalability of these models in VR. This paper makes progress toward overcoming these limitations by proposing an end-to-end multi-identity architecture (MIA) trained with specialized augmentation strategies. MIA drives the shape component of the avatar from three cameras in the VR headset (two eyes, one mouth) for untrained subjects, using minimal personalized information (i.e., a neutral 3D mesh shape). Similarly, if the PS texture decoder is available, MIA is able to drive the full avatar (shape + texture) robustly, outperforming PS models in challenging scenarios. Our key contribution to improved robustness and generalization is that our method implicitly decouples, in an unsupervised manner, the facial expression from nuisance factors (e.g., headset, environment, facial appearance). We demonstrate the superior performance and robustness of the proposed method versus state-of-the-art PS approaches in a variety of experiments.

    Multiface: A Dataset for Neural Face Rendering

    Photorealistic avatars of human faces have come a long way in recent years, yet research in this area is limited by a lack of publicly available, high-quality datasets covering both dense multi-view camera captures and rich facial expressions of the captured subjects. In this work, we present Multiface, a new multi-view, high-resolution human face dataset collected from 13 identities at Reality Labs Research for neural face rendering. We introduce Mugsy, a large-scale multi-camera apparatus that captures high-resolution synchronized videos of a facial performance. The goal of Multiface is to close the gap in accessibility to high-quality data in the academic community and to enable research in VR telepresence. Along with the release of the dataset, we conduct ablation studies on the influence of different model architectures on the model's capacity to interpolate novel viewpoints and expressions. With a conditional VAE model serving as our baseline, we found that adding a spatial bias, a texture warp field, and residual connections improves performance on novel view synthesis. Our code and data are available at: https://github.com/facebookresearch/multifac

    LSST Science Book, Version 2.0

    A survey that can cover the sky in optical bands over wide fields to faint magnitudes with a fast cadence will enable many of the exciting science opportunities of the next decade. The Large Synoptic Survey Telescope (LSST) will have an effective aperture of 6.7 meters and an imaging camera with a field of view of 9.6 deg^2, and will be devoted to a ten-year imaging survey over 20,000 deg^2 south of +15 deg. Each pointing will be imaged 2000 times with fifteen-second exposures in six broad bands from 0.35 to 1.1 microns, to a total point-source depth of r~27.5. The LSST Science Book describes the basic parameters of the LSST hardware, software, and observing plans. The book discusses educational and outreach opportunities, then goes on to describe a broad range of science that LSST will revolutionize: mapping the inner and outer Solar System, stellar populations in the Milky Way and nearby galaxies, the structure of the Milky Way disk and halo and other objects in the Local Volume, transient and variable objects both at low and high redshift, and the properties of normal and active galaxies at low and high redshift. It then turns to far-field cosmological topics, exploring properties of supernovae to z~1, strong and weak lensing, the large-scale distribution of galaxies and baryon oscillations, and how these different probes may be combined to constrain cosmological models and the physics of dark energy. Comment: 596 pages. Also available at full resolution at http://www.lsst.org/lsst/sciboo
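    The survey parameters quoted above imply some useful back-of-the-envelope numbers; a minimal sketch (assuming no overlap between pointings and ignoring weather and overheads) of the total integration time per pointing and the minimum number of distinct pointings:

    ```python
    # Back-of-the-envelope check of the LSST survey parameters quoted above.
    # Assumption: pointings tile the footprint with no overlap or overheads.
    VISITS_PER_POINTING = 2000
    EXPOSURE_S = 15          # seconds per exposure
    FOV_DEG2 = 9.6           # camera field of view per pointing, deg^2
    SURVEY_DEG2 = 20_000     # total survey footprint, deg^2

    total_exposure_h = VISITS_PER_POINTING * EXPOSURE_S / 3600
    pointings = SURVEY_DEG2 / FOV_DEG2   # minimum number of tilings

    print(f"integration per pointing: {total_exposure_h:.1f} h")
    print(f"distinct pointings needed: {pointings:.0f}")
    ```

    Roughly 8.3 hours of accumulated exposure per pointing over the ten-year survey, spread across the six bands.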

    The khmer software package: enabling efficient nucleotide sequence analysis [version 1; referees: 2 approved, 1 approved with reservations]

    The khmer package is a freely available software library for working efficiently with fixed-length DNA words, or k-mers. khmer provides implementations of a probabilistic k-mer counting data structure, a compressible De Bruijn graph representation, De Bruijn graph partitioning, and digital normalization. khmer is implemented in C++ and Python, and is freely available under the BSD license at https://github.com/dib-lab/khmer/
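    The probabilistic k-mer counting idea can be sketched with a small count-min-style structure: several hash tables, each incremented per k-mer, with the minimum across tables as the count estimate (over-estimates are possible, under-estimates are not). This is an illustration of the technique only, not khmer's actual API, and the table sizes are toy values:

    ```python
    # Toy count-min-style k-mer counter illustrating the probabilistic
    # counting idea behind khmer; NOT khmer's API or implementation.
    import hashlib

    class KmerCountSketch:
        def __init__(self, ksize, table_sizes=(1009, 1013, 1019)):
            self.ksize = ksize
            self.tables = [[0] * size for size in table_sizes]

        def _slots(self, kmer):
            # One independent hash per table (salted with the table index).
            for i, table in enumerate(self.tables):
                h = int(hashlib.sha256(f"{i}:{kmer}".encode()).hexdigest(), 16)
                yield i, h % len(table)

        def count(self, kmer):
            for i, slot in self._slots(kmer):
                self.tables[i][slot] += 1

        def get(self, kmer):
            # Minimum across tables: never below the true count.
            return min(self.tables[i][slot] for i, slot in self._slots(kmer))

        def consume(self, seq):
            for j in range(len(seq) - self.ksize + 1):
                self.count(seq[j : j + self.ksize])

    sketch = KmerCountSketch(ksize=4)
    sketch.consume("ACGTACGTACGT")   # ACGT occurs 3 times as a 4-mer
    print(sketch.get("ACGT"))
    ```

    The fixed-size tables are what make the memory footprint independent of the number of distinct k-mers, at the cost of occasional over-counting from hash collisions.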

    Audiometer Machine Designed for Developing Countries

    Team Flying Papaya has developed a portable, inexpensive device that measures hearing impairment of individuals in developing countries who may not have access to a noise-controlled setting. An audiometer machine is the all-encompassing name for our device, which includes a control box, a patient response system, and sound-proof headphones. An Arduino Uno controls the components and is housed as the brain of the control box. The circuits for generating audio stimuli and controlling volume are also in the control box. The layout of the control box is easy for an untrained operator to use and requires far less power than market audiometer competitors. The patient response system is our method of informing the operator whether the patient has heard the audio. The system provides the patient with a left and a right button that, when pressed, light a green LED of the same name; should no button be pressed, a red LED lights to indicate an error requiring an operator response. This communicates patient responses to an untrained operator more reliably by removing discrepancies due to patient position and operator judgment. The design of our sound-proof headphones is a double-cup system combining earmuffs and headphones. These sound-proof headphones block outside sound interference while transmitting the test audio. Headphones typically used with audiometers require a quiet testing environment, whereas these sound-proof headphones are suitable in natural ambient noise. This solution acknowledges the constraints by pursuing a low-cost, easy-to-use system. We utilized cheap prototyping methods with breadboards and 3D printers.
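    The patient-response logic described above (button press lights the matching green LED; no press before the timeout lights the red error LED) can be sketched as follows. This is an illustrative model only, not the team's actual firmware, and the function name is hypothetical:

    ```python
    # Illustrative sketch of the patient-response logic described above:
    # a left/right press lights the matching green LED, and no press
    # before the timeout lights the red error LED for the operator.
    def classify_response(pressed, timeout_expired):
        """pressed: 'left', 'right', or None. Returns the LED to light."""
        if pressed == "left":
            return "green_left"
        if pressed == "right":
            return "green_right"
        if timeout_expired:
            return "red_error"   # error: operator must respond
        return None              # still waiting for the patient

    print(classify_response("left", False))   # green_left
    print(classify_response(None, True))      # red_error
    ```

    Encoding the decision this way removes operator judgment from the loop: the LEDs, not the operator's interpretation of the patient, record whether a tone was heard.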

    Intraperitoneal chemotherapy among women in the Medicare population with epithelial ovarian cancer.

    BACKGROUND: Intraperitoneal combined with intravenous chemotherapy (IV/IP) for primary treatment of epithelial ovarian cancer results in a substantial survival advantage for women who are optimally debulked surgically, compared with standard IV-only therapy (IV). Little is known about the use of this therapy in the Medicare population. METHODS: We used the Surveillance, Epidemiology, and End Results (SEER)-Medicare database to identify 4665 women aged 66 and older with epithelial ovarian cancer diagnosed between 2005 and 2009, together with their Medicare claims. We defined receipt of IV/IP chemotherapy as any claims evidence of such treatment within 12 months of the date of diagnosis. We used descriptive statistics to examine factors associated with treatment and health services use. RESULTS: Among 3561 women with Stage III or IV epithelial ovarian cancer who received any chemotherapy, only 124 (3.5%) received IV/IP chemotherapy. The use of IV/IP chemotherapy did not increase over the period of the study. In this cohort, younger women, those with fewer comorbidities, whites, and those living in Census tracts with higher income were more likely to receive IV/IP chemotherapy. Among women who received any IV/IP chemotherapy, we did not find an increase in acute care services (hospitalizations, emergency department visits, or ICU stays). CONCLUSION: During the period between 2005 and 2009, few women in the Medicare population living within observed SEER areas received IV/IP chemotherapy, and the use of this therapy did not increase. We observed marked racial and sociodemographic differences in access to this therapy.
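    The headline proportion above follows directly from the reported counts (assuming it is the simple ratio of the two figures, rounded to one decimal place):

    ```python
    # Check of the proportion reported above: 124 of 3561 women who
    # received any chemotherapy got IV/IP chemotherapy.
    any_chemo = 3561   # Stage III/IV patients receiving any chemotherapy
    iv_ip = 124        # of whom received IV/IP chemotherapy

    pct = 100 * iv_ip / any_chemo
    print(f"{pct:.1f}%")   # 3.5%
    ```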