Building BROOK: A multi-modal and facial video database for Human-Vehicle Interaction research
With the growing popularity of Autonomous Vehicles, more opportunities have emerged in the context of Human-Vehicle Interaction. However, the lack of comprehensive and concrete database support for this specific use case limits relevant studies across the whole design space. In this paper, we present our work-in-progress BROOK, a public multi-modal database with facial video records, which can be used to characterise drivers' affective states and driving styles. We first explain in detail how we engineered the database and what we learned from a ten-month study. We then showcase a Neural Network-based predictor, built on BROOK, which infers multi-modal signals (including physiological data, namely heart rate and skin conductance, and driving-status data, namely speed) from facial videos. Finally, we discuss issues encountered while building such a database and our future directions for BROOK. We believe BROOK is an essential building block for future Human-Vehicle Interaction research. More details and updates about the BROOK project are available at https://unnc-idl-ucc.github.io/BROOK/
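To make the multi-modal prediction concrete, below is a minimal PyTorch sketch of the kind of facial-video regressor the abstract describes. The architecture, layer sizes, and names (FacialVideoPredictor, feat_dim) are illustrative assumptions, not the authors' actual model.

```python
# A minimal sketch (hypothetical architecture, not the BROOK authors' model)
# of a facial-video predictor emitting heart rate, skin conductance, and speed.
import torch
import torch.nn as nn

class FacialVideoPredictor(nn.Module):
    """Per-frame CNN features pooled over time, then one regression head per signal."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Small per-frame encoder; a real system would likely use a pretrained backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # One scalar head per predicted signal.
        self.heads = nn.ModuleDict({
            "heart_rate": nn.Linear(feat_dim, 1),
            "skin_conductance": nn.Linear(feat_dim, 1),
            "speed": nn.Linear(feat_dim, 1),
        })

    def forward(self, video: torch.Tensor) -> dict:
        # video: (batch, time, channels, height, width)
        b, t, c, h, w = video.shape
        feats = self.encoder(video.view(b * t, c, h, w)).view(b, t, -1)
        pooled = feats.mean(dim=1)  # average features over frames
        return {name: head(pooled).squeeze(-1) for name, head in self.heads.items()}

# Usage on a dummy clip: batch of 2 videos, 4 frames of 64x64 RGB each.
model = FacialVideoPredictor()
out = model(torch.randn(2, 4, 3, 64, 64))
print({k: v.shape for k, v in out.items()})
```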
Thermal Fluctuations and Rubber Elasticity
The effects of thermal elastic fluctuations in rubber materials are examined.
It is shown that, due to an interplay with the incompressibility constraint,
these fluctuations qualitatively modify the large-deformation stress-strain
relation, compared to that of classical rubber elasticity. To leading order,
this mechanism provides a simple and generic explanation for the peak structure
of the Mooney-Rivlin stress-strain relation, and shows good agreement with
experiments. It also leads to the prediction of a phonon correlation function
that depends on the external deformation.
Comment: 4 RevTeX pages, 1 figure, submitted to PR
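For background on the peak structure mentioned above: the Mooney-Rivlin analysis plots the reduced stress against the inverse stretch 1/λ. The classical form below is a standard textbook result quoted only as context; the paper's fluctuation-corrected relation is not reproduced here.

```latex
% Classical Mooney-Rivlin reduced stress for uniaxial stretch \lambda
% (standard background; the fluctuation-corrected curve with a peak is
% the paper's result and is not reproduced here):
\[
  f^{*}(\lambda) \;=\; \frac{f}{\lambda - \lambda^{-2}}
  \;=\; 2C_{1} + \frac{2C_{2}}{\lambda},
\]
% so classically f^* is linear in 1/\lambda; a peak in the measured
% f^* versus 1/\lambda curve signals a departure from this form.
```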
3DPortraitGAN: Learning One-Quarter Headshot 3D GANs from a Single-View Portrait Dataset with Diverse Body Poses
3D-aware face generators are typically trained on 2D real-life face image
datasets that primarily consist of near-frontal face data, and as such, they
are unable to construct one-quarter headshot 3D portraits with complete head,
neck, and shoulder geometry. Two reasons account for this issue: First,
existing facial recognition methods struggle with extracting facial data
captured from large camera angles or back views. Second, it is challenging to
learn a distribution of 3D portraits covering the one-quarter headshot region
from single-view data due to significant geometric deformation caused by
diverse body poses. To this end, we first create the dataset
360°-Portrait-HQ (360°PHQ for short), which consists of high-quality
single-view real portraits annotated with a variety of camera parameters (the
yaw angles span the entire 360° range) and body poses. We then propose
3DPortraitGAN, the first 3D-aware one-quarter headshot portrait generator that
learns a canonical 3D avatar distribution from the 360°PHQ dataset with
body pose self-learning. Our model can generate view-consistent portrait images
from all camera angles with a canonical one-quarter headshot 3D representation.
Our experiments show that the proposed framework can accurately predict
portrait body poses and generate view-consistent, realistic portrait images
with complete geometry from all camera angles.
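As a small illustration of the full-yaw camera sampling that distinguishes 360°PHQ from near-frontal datasets, here is a hedged Python sketch; the function name, pitch range, and six-dimensional body-pose vector are assumptions for illustration, not the paper's actual parameterisation.

```python
# Hypothetical sketch (not the authors' code) of sampling camera yaws that
# span the full 360° range, plus a placeholder body-pose conditioning vector.
import math
import random

def sample_camera_and_pose():
    """Sample a yaw over the full turn, a modest pitch, and an
    illustrative body-pose vector (dimensionality is an assumption)."""
    yaw = random.uniform(0.0, 2.0 * math.pi)   # full 360°, unlike near-frontal datasets
    pitch = random.uniform(-0.2, 0.2)          # small elevation variation, in radians
    body_pose = [random.gauss(0.0, 0.1) for _ in range(6)]  # e.g. neck/shoulder joints
    return {"yaw": yaw, "pitch": pitch, "body_pose": body_pose}

if __name__ == "__main__":
    for _ in range(3):
        print(sample_camera_and_pose())
```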
- …