45,871 research outputs found
Image Sequence Stabilization Through Model Based Registration
Acquiring an image series with a digital still camera makes it possible to obtain animation of much higher resolution and quality than with a digital camcorder. This approach, however, raises several problems. In particular, if the motion involves changes in the observer's position and spatial orientation, the resulting animation may look choppy and unsmooth. When no hardware-based stabilization of the camera is available during the motion, image processing methods must be developed to obtain smooth animation. In this work we deal with an image sequence acquired, without stabilization, while moving around an object. We propose a method that enables the creation of smooth animation using the registration paradigm.
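The abstract does not specify the registration model, but the general idea can be sketched in miniature: estimate how each frame is displaced relative to the previous one, then smooth the accumulated camera path. The sketch below (a hypothetical illustration, not the paper's model-based method) uses an exhaustive sum-of-squared-differences search for an integer translation and a moving-average path smoother; the function names and window size are our own assumptions.

```python
# Sketch of registration-based stabilization: estimate integer
# translations between consecutive frames by exhaustive SSD search,
# then smooth the accumulated trajectory with a moving average.
# (Illustrative only; the paper's model-based registration differs.)

def estimate_shift(ref, cur, max_shift=2):
    """Return (dy, dx) minimizing the sum of squared differences
    between ref[y][x] and cur[y + dy][x + dx] over the overlap."""
    h, w = len(ref), len(ref[0])
    best, best_shift = float("inf"), (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            ssd = 0
            for y in range(max(0, -dy), min(h, h - dy)):
                for x in range(max(0, -dx), min(w, w - dx)):
                    d = ref[y][x] - cur[y + dy][x + dx]
                    ssd += d * d
            if ssd < best:
                best, best_shift = ssd, (dy, dx)
    return best_shift

def smooth_trajectory(shifts, window=3):
    """Accumulate per-frame shifts into a camera path, then
    moving-average the path to remove high-frequency jitter."""
    path, acc = [], (0, 0)
    for dy, dx in shifts:
        acc = (acc[0] + dy, acc[1] + dx)
        path.append(acc)
    out = []
    for i in range(len(path)):
        lo = max(0, i - window // 2)
        hi = min(len(path), i + window // 2 + 1)
        ys = sum(p[0] for p in path[lo:hi]) / (hi - lo)
        xs = sum(p[1] for p in path[lo:hi]) / (hi - lo)
        out.append((ys, xs))
    return out
```

In practice each frame would be warped by the difference between its raw position and the smoothed path, which is what removes the visible choppiness.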
View-dependent adaptive cloth simulation
This paper describes a method for view-dependent cloth simulation using dynamically adaptive mesh refinement and coarsening. Given a prescribed camera motion, the method adjusts the criteria controlling refinement to account for visibility and apparent size in the camera's view. Objectionable dynamic artifacts are avoided by anticipative refinement and smoothed coarsening. This approach preserves the appearance of detailed cloth throughout the animation while avoiding the wasted effort of simulating details that would not be discernible to the viewer. The computational savings realized by this method increase as scene complexity grows, producing a 2× speed-up for a single character and more than 4× for a small group.
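The core of such a view-dependent criterion can be illustrated with a pinhole-camera size test: an element is refined only if it is potentially visible and its projected size on screen exceeds a pixel threshold. The sketch below is our own minimal reading of that idea, not the authors' implementation; the focal length and threshold values are assumptions.

```python
# Hypothetical view-dependent refinement test: split an edge only if
# it is visible and its projected length exceeds a pixel threshold,
# so off-screen or distant cloth stays coarse.

def apparent_size_px(world_len, depth, focal_px):
    """Projected length in pixels under a pinhole camera model:
    screen size scales with focal length and inversely with depth."""
    return focal_px * world_len / depth

def should_refine(world_len, depth, visible,
                  focal_px=800.0, threshold_px=4.0):
    """Refine only visible elements whose apparent size is large
    enough to be discernible to the viewer."""
    if not visible or depth <= 0:
        return False
    return apparent_size_px(world_len, depth, focal_px) > threshold_px
```

For example, with the assumed 800-pixel focal length a 5 cm edge at 2 m projects to 20 px and would be refined, while the same edge at 20 m projects to 2 px and would not.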
Agile thinking in motion graphics practice and its potential for design education
Motion Graphics is a relatively new subject and its methodologies are still being developed. There are useful lessons to be learnt from practice in early cinema, from the 1890s to the 1930s, where Agile thinking was used by a number of practitioners including Fritz Lang. Recent studies in MA Motion Graphics have drawn on this thinking, incorporating it in a series of Motion Graphics tests and experiments culminating in a two-minute animation, “1896 Olympic Marathon”. This paper demonstrates how the project and its design methodology can contribute new knowledge to the practice and teaching of this relatively new and expanding area of Motion Graphic Design. This would be invaluable not only to the international community of Motion Graphics practitioners, educators and researchers in their development of this maturing field, but also to the broader multidisciplinary fields within design education. These methodologies have been arrived at by drawing on creative and reflective practice as defined by Carole Gray and Julien Malins in Visualizing Research (2004) and reflective practice as defined by Donald Schön (1983). Central to the investigation has been the approach of Agile thinking drawn from the methodology of "bricolage" in Claude Lévi-Strauss's The Savage Mind (1966).
Visual Importance-Biased Image Synthesis Animation
Present ray tracing algorithms are computationally intensive, requiring hours of computing time for complex scenes. Our previous work dealt with the development of an overall approach to applying visual attention to progressive and adaptive ray-tracing techniques. The approach yields large computational savings by modulating the supersampling rate in an image according to the visual importance of the region being rendered. This paper extends the approach by incorporating temporal changes into the models and techniques developed, as further efficiency savings are expected for animated scenes. Applications for this approach include entertainment, visualisation and simulation.
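The modulation idea can be sketched very simply: map a per-pixel visual-importance value to a supersampling rate between a floor and a ceiling, so salient regions receive more rays. The following is our own toy rendering of the abstract's description, with assumed minimum and maximum rates, not the authors' code.

```python
# Toy importance-modulated supersampling: importance in [0, 1] is
# mapped linearly to a samples-per-pixel budget, clamped to a range.
# (Illustrative assumption; the paper's modulation may differ.)

def samples_per_pixel(importance, min_spp=1, max_spp=16):
    """Linear ramp from min_spp (unimportant) to max_spp (salient)."""
    imp = min(1.0, max(0.0, importance))
    return min_spp + round(imp * (max_spp - min_spp))

def sample_budget(importance_map, min_spp=1, max_spp=16):
    """Total ray count for an importance map given as a list of rows."""
    return sum(samples_per_pixel(v, min_spp, max_spp)
               for row in importance_map for v in row)
```

With these assumed bounds, a region of zero importance gets 1 sample per pixel and a maximally salient region gets 16, which is where the claimed computational savings come from.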
EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment
Face performance capture and reenactment techniques use multiple cameras and sensors, positioned at a distance from the face or mounted on heavy wearable devices. This limits their applications in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments, and lends itself to telepresence applications such as video-conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters, and through careful adversarial training of the parameter-space synthetic rendering, a videorealistic animation is produced. Our problem is challenging because the human visual system is sensitive to the smallest face irregularities that could occur in the final results, and this sensitivity is even stronger for video results. Our solution is trained in a pre-processing stage, in a supervised manner without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetrical expressions. It works under varying illumination, backgrounds and movements, handles people of different ethnicities, and can operate in real time.
Transport-Based Neural Style Transfer for Smoke Simulations
Artistically controlling fluids has always been a challenging task. Optimization techniques rely on approximating simulation states towards target velocity or density field configurations, which are often handcrafted by artists to indirectly control smoke dynamics. Patch synthesis techniques transfer image textures or simulation features to a target flow field. However, these are either limited to adding structural patterns or augmenting coarse flows with turbulent structures, and hence cannot capture the full spectrum of different styles and semantically complex structures. In this paper, we propose the first Transport-based Neural Style Transfer (TNST) algorithm for volumetric smoke data. Our method is able to transfer features from natural images to smoke simulations, enabling general content-aware manipulations ranging from simple patterns to intricate motifs. The proposed algorithm is physically inspired, since it computes the density transport from a source input smoke to a desired target configuration. Our transport-based approach allows direct control over the divergence of the stylization velocity field by optimizing incompressible and irrotational potentials that transport smoke towards stylization. Temporal consistency is ensured by transporting and aligning subsequent stylized velocities, and 3D reconstructions are computed by seamlessly merging stylizations from different camera viewpoints.
Comment: ACM Transactions on Graphics (SIGGRAPH ASIA 2019), additional materials: http://www.byungsoo.me/project/neural-flow-styl
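The decomposition into incompressible and irrotational parts can be illustrated in 2D: a velocity field derived from a stream function is divergence-free by construction, while a gradient of a scalar potential is curl-free. The toy check below (our own 2D illustration with an analytic stream function, not the paper's 3D solver) verifies the divergence-free property numerically.

```python
import math

# Toy 2D illustration of the potential decomposition: velocity built
# from a stream function psi is incompressible (div v = 0), while a
# gradient field grad(phi) is irrotational. TNST optimizes such
# potentials to transport smoke towards the stylized target.

def velocity_from_stream(x, y):
    """u = d(psi)/dy, v = -d(psi)/dx for psi(x, y) = sin(x) sin(y);
    this construction is divergence-free by design."""
    u = math.sin(x) * math.cos(y)
    v = -math.cos(x) * math.sin(y)
    return u, v

def divergence(field, x, y, h=1e-5):
    """Central-difference estimate of du/dx + dv/dy at (x, y)."""
    du = (field(x + h, y)[0] - field(x - h, y)[0]) / (2 * h)
    dv = (field(x, y + h)[1] - field(x, y - h)[1]) / (2 * h)
    return du + dv
```

Evaluating the divergence anywhere in the domain returns a value near machine precision, which is the property that lets the method control the divergence of the stylization velocity directly through the potentials.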