Shape Animation with Combined Captured and Simulated Dynamics
We present a novel volumetric animation generation framework to create new
types of animations from raw 3D surface or point cloud sequence of captured
real performances. The framework takes as input temporally incoherent 3D
observations of a moving shape, and is thus particularly suitable for the
output of performance capture platforms. In our system, a virtual
representation of the actor is built from real captures that allows seamless
combination and simulation with virtual external forces and objects, so that
the original captured actor can be reshaped, disassembled or reassembled under
user-specified virtual physics. Instead of using the dominant surface-based
geometric representation of the capture, which is less suitable for volumetric
effects, our pipeline exploits Centroidal Voronoi tessellation decompositions
as a unified volumetric representation of the real captured actor, which we show
can be used seamlessly as a building block for all processing stages, from
capture and tracking to virtual physics simulation. The representation makes no
human-specific assumptions and can be used to capture and re-simulate the actor
with props or other moving scenery elements. We demonstrate the potential of
this pipeline for virtual reanimation of a real captured event with various
unprecedented volumetric visual effects, such as volumetric distortion,
erosion, morphing, gravity pull, or collisions.
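The Centroidal Voronoi tessellation (CVT) at the core of this pipeline can be approximated with Lloyd relaxation: alternately assign samples to their nearest site and move each site to the centroid of its assigned samples. The sketch below is a minimal illustration of that idea on a raw point cloud, not the paper's actual decomposition; the function name, shapes, and brute-force assignment are our own simplifications.

```python
import numpy as np

def lloyd_cvt(points, n_sites, iters=50, seed=0):
    """Approximate a Centroidal Voronoi Tessellation of a point set by
    Lloyd relaxation: assign each sample to its nearest site, then move
    each site to the centroid of its assigned samples, and repeat."""
    rng = np.random.default_rng(seed)
    # initialise sites from random input points
    sites = points[rng.choice(len(points), n_sites, replace=False)]
    for _ in range(iters):
        # nearest-site assignment (brute force for clarity)
        d = np.linalg.norm(points[:, None, :] - sites[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # centroid update; empty cells keep their previous site
        for k in range(n_sites):
            members = points[labels == k]
            if len(members):
                sites[k] = members.mean(axis=0)
    return sites, labels
```

In practice a spatial index replaces the brute-force distance matrix, and the cells themselves (not just the sites) serve as the volumetric building blocks.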
Drivable 3D Gaussian Avatars
We present Drivable 3D Gaussian Avatars (D3GA), the first 3D controllable
model for human bodies rendered with Gaussian splats. Current photorealistic
drivable avatars require either accurate 3D registrations during training,
dense input images during testing, or both. The ones based on neural radiance
fields also tend to be prohibitively slow for telepresence applications. This
work uses the recently presented 3D Gaussian Splatting (3DGS) technique to
render realistic humans at real-time framerates, using dense calibrated
multi-view videos as input. To deform those primitives, we depart from the
commonly used point deformation method of linear blend skinning (LBS) and use a
classic volumetric deformation method: cage deformations. Given their smaller
size, we drive these deformations with joint angles and keypoints, which are
more suitable for communication applications. Our experiments on nine subjects
with varied body shapes, clothes, and motions yield higher-quality results
than state-of-the-art methods when using the same training and test data.
Comment: Website: https://zielon.github.io/d3ga
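For context on the baseline the abstract departs from, linear blend skinning (LBS) deforms each vertex by a convex combination of per-bone rigid transforms, v'_i = sum_j w_ij (R_j v_i + t_j). The sketch below shows this generic formulation only; the function name and array shapes are our own, and this is not the paper's cage-based method.

```python
import numpy as np

def linear_blend_skinning(verts, weights, rotations, translations):
    """Classic LBS: blend per-bone rigid transforms with per-vertex
    skinning weights.
    verts: (N,3), weights: (N,J) rows summing to 1,
    rotations: (J,3,3), translations: (J,3)."""
    # transform every vertex by every bone: (J,N,3)
    per_bone = (np.einsum('jab,nb->jna', rotations, verts)
                + translations[:, None, :])
    # weighted blend of the per-bone results: (N,3)
    return np.einsum('nj,jna->na', weights, per_bone)
```

Cage deformation instead expresses each vertex as generalized barycentric coordinates of a coarse enclosing cage, which is what makes the small control structure drivable from joint angles and keypoints.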
Beaming into the Rat World: Enabling Real-Time Interaction between Rat and Human Each at Their Own Scale
Immersive virtual reality (IVR) typically generates the illusion in participants that they are in the displayed virtual scene, where they can experience and interact with events as if they were really happening. Teleoperator (TO) systems place people at a remote physical destination, embodied as a robotic device, where participants typically have the sensation of being at the destination, with the ability to interact with entities there. In this paper, we show how to combine IVR and TO to allow a new class of application. The participant in the IVR is represented at the destination by a physical robot (TO), and simultaneously the remote place and the entities within it are represented to the participant in the IVR. Hence, the IVR participant has a normal virtual reality experience, but his or her actions and behaviour control the remote robot and can therefore have physical consequences. Here, we show how such a system can be deployed to allow a human and a rat to operate together, with the human interacting with the rat on a human scale, and the rat interacting with the human on the rat scale. The human is represented in a rat arena by a small robot that is slaved to the human's movements, whereas the tracked rat is represented to the human in the virtual reality by a humanoid avatar. We describe the system and also a study that was designed to test whether humans can successfully play a game with the rat. The results show that the system functioned well and that the humans were able to interact with the rat to fulfil the tasks of the game. This system opens up the possibility of new applications in the life sciences involving participant observation of and interaction with animals, but at human scale.
VolTeMorph: Realtime, Controllable and Generalisable Animation of Volumetric Representations
The recent increase in popularity of volumetric representations for scene
reconstruction and novel view synthesis has put renewed focus on animating
volumetric content at high visual quality and in real-time. While implicit
deformation methods based on learned functions can produce impressive results,
they are 'black boxes' to artists and content creators, they require large
amounts of training data to generalise meaningfully, and they do not produce
realistic extrapolations outside the training data. In this work we solve these
issues by introducing a volume deformation method which is real-time, easy to
edit with off-the-shelf software and can extrapolate convincingly. To
demonstrate the versatility of our method, we apply it in two scenarios:
physics-based object deformation and telepresence where avatars are controlled
using blendshapes. We also perform thorough experiments showing that our method
compares favourably to both volumetric approaches combined with implicit
deformation and methods based on mesh deformation.
Comment: 18 pages, 21 figures
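The blendshape control mentioned for the telepresence scenario follows the standard formulation: the animated shape is a neutral base plus a weighted sum of per-target offsets, v = base + sum_k c_k (target_k - base). The sketch below shows only this generic blend; names and shapes are our own, and it is not the paper's volumetric deformation itself.

```python
import numpy as np

def apply_blendshapes(base, targets, coeffs):
    """Standard blendshape evaluation: add weighted delta shapes
    (target minus base) to the neutral geometry.
    base: (N,3), targets: (K,N,3), coeffs: (K,)."""
    offsets = targets - base[None, :, :]        # (K,N,3) delta shapes
    return base + np.einsum('k,kna->na', coeffs, offsets)
```

With all coefficients zero the neutral shape is returned; a coefficient of 1 on a single target reproduces that target exactly.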
A Typographic Dilemma: Reconciling the old with the new using a new cross-disciplinary typographic framework
Current theory and vocabulary used to describe typographic practice and scholarship are based on a historically print-derived framework. As yet, no new paradigm has emerged to address the divergent path that screen-based typography is taking from its traditional print medium. Screen-based typography is becoming as common and widely used as its print counterpart. It is now timely to re-evaluate current typographic references and practices in these environments, which introduce a new visual language and form.
This paper will attempt to present an alternate typographic framework to address these growing changes by appropriating concepts and knowledge from different disciplines. This alternate typographic framework has been informed by a study conducted as part of a research Doctorate in the School of Design at Northumbria University, UK. This paper posits that the current typographic framework derived from the print medium is no longer sufficient to address the growing differences between the print and screen media. In its place, an alternate cross-disciplinary typographic framework should be adopted for the successful integration and application of typography in screen-based interactive media. The development of this framework will focus mainly on three key characteristics of screen-based interactive media (hypertext, interactivity and time-based motion) and will draw influences from disciplines such as film, computer gaming, interactive digital arts and hypertext fictions.