433 research outputs found
Lagrangian Neural Style Transfer for Fluids
Artistically controlling the shape, motion and appearance of fluid
simulations poses major challenges in visual effects production. In this paper,
we present a neural style transfer approach from images to 3D fluids formulated
in a Lagrangian viewpoint. Using particles for style transfer has unique
benefits compared to grid-based techniques. Attributes are stored on the
particles and hence are trivially transported by the particle motion. This
intrinsically ensures temporal consistency of the optimized stylized structure
and notably improves the resulting quality. Simultaneously, the expensive,
recursive alignment of stylization velocity fields of grid approaches is
unnecessary, reducing the computation time to less than an hour and rendering
neural flow stylization practical in production settings. Moreover, the
Lagrangian representation improves artistic control as it allows for
multi-fluid stylization and consistent color transfer from images, and the
generality of the method enables stylization of smoke and liquids alike.
Comment: ACM Transactions on Graphics (SIGGRAPH 2020), additional materials:
http://www.byungsoo.me/project/lnst/index.htm
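The Lagrangian transport argument above can be sketched in a few lines. This is a hedged toy illustration, not the paper's implementation: stylization attributes (here, a per-particle color) live on the particles, so advecting the particles transports the style automatically and temporal consistency is intrinsic. All names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
pos = rng.random((n, 2))        # particle positions
vel = np.full((n, 2), 0.1)      # toy constant velocity field
color = rng.random((n, 3))      # optimized per-particle style attribute

def advect(pos, vel, dt):
    """Move particles forward; per-particle attributes need no update."""
    return pos + dt * vel

pos_next = advect(pos, vel, dt=0.5)
# `color` is untouched by the motion: no recursive velocity-field
# alignment step (as in grid-based stylization) is needed.
```

Because the attribute array never has to be resampled onto a grid, the stylized structure follows the flow exactly, which is the temporal-consistency benefit the abstract describes.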
Implicit Brushes for Stylized Line-based Rendering
We introduce a new technique called Implicit Brushes to render animated 3D scenes with stylized lines in real-time with temporal coherence. An Implicit Brush is defined at a given pixel by the convolution of a brush footprint along a feature skeleton; the skeleton itself is obtained by locating surface features in the pixel neighborhood. Features are identified via image-space fitting techniques that not only extract their location, but also their profile, which permits distinguishing between sharp and smooth features. Profile parameters are then mapped to stylistic parameters such as brush orientation, size or opacity to give rise to a wide range of line-based styles.
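The core rendering step can be sketched as follows. This is a hedged simplification: the stylized line is the convolution of a brush footprint along a feature skeleton. Here the skeleton is a given binary mask; the paper instead extracts it (and a feature profile driving brush size and opacity) per pixel in image space, which this sketch does not reproduce.

```python
import numpy as np

def splat_footprint(skeleton, footprint):
    """Accumulate a brush footprint at every skeleton pixel."""
    h, w = skeleton.shape
    fh, fw = footprint.shape
    out = np.zeros((h + fh - 1, w + fw - 1))
    for y, x in zip(*np.nonzero(skeleton)):
        out[y:y + fh, x:x + fw] += footprint
    return out

skeleton = np.zeros((5, 5))
skeleton[2, 1:4] = 1                  # a short horizontal stroke
footprint = np.ones((3, 3)) / 9.0     # soft square brush, unit mass
image = splat_footprint(skeleton, footprint)
```

Varying the footprint per skeleton pixel (orientation, size, opacity) according to the extracted feature profile is what produces the range of line styles the abstract mentions.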
Comparing parameterizations of pitch register and its discontinuities at prosodic boundaries for Hungarian
We examined how well prosodic boundary strength can be captured by two declination stylization methods as well as by four different representations of pitch register. In the stylization proposed by Liebermann et al. (1985), base- and topline are fitted to peaks and valleys of the pitch contour, whereas in Reichel & Mády (2013) these lines are fitted to medians below and above certain pitch percentiles. From each of the stylizations four feature pools were induced representing different aspects of register discontinuity at word boundaries: discontinuities related to the base-, mid-, and topline, as well as to the range between base- and topline. Concerning stylization, the median-based fitting approach turned out to be more robust with respect to declination line crossing errors and yielded base-, topline and range-related discontinuity characteristics with higher correlations to perceived boundary strength. Concerning register representation, for the peak/valley fitting approach the base- and topline patterns showed weaker correspondences to boundary strength than the other feature pools. We furthermore trained generalized linear regression models for boundary strength prediction on each feature pool. It turned out that neither the stylization method nor the register representation had a significant influence on the overall good prediction performance.
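The median-based stylization described above can be sketched roughly like this: base- and toplines are fitted to per-window F0 medians taken below and above chosen percentiles, rather than to raw valleys and peaks. The percentile values, window count, and function name are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def register_line(t, f0, band, nwin=5):
    """Fit a declination line through windowed band medians."""
    xs, ys = [], []
    for tw, fw in zip(np.array_split(t, nwin), np.array_split(f0, nwin)):
        if band == "base":
            sel = fw <= np.percentile(fw, 10)   # lowest decile per window
        else:
            sel = fw >= np.percentile(fw, 90)   # highest decile per window
        xs.append(tw.mean())
        ys.append(np.median(fw[sel]))
    return np.polyfit(xs, ys, 1)                # (slope, intercept)

t = np.linspace(0.0, 1.0, 200)
f0 = 200 - 40 * t + 10 * np.sin(20 * t)         # declining toy F0 contour
base_slope, _ = register_line(t, f0, "base")
top_slope, _ = register_line(t, f0, "top")
```

Fitting to windowed medians rather than to individual peaks and valleys is what makes the approach less sensitive to outliers, and hence to the base/topline crossing errors the abstract reports.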
SelectionConv: Convolutional Neural Networks for Non-rectilinear Image Data
Convolutional Neural Networks have revolutionized vision applications. There
are image domains and representations, however, that cannot be handled by
standard CNNs (e.g., spherical images, superpixels). Such data are usually
processed using networks and algorithms specialized for each type. In this
work, we show that it may not always be necessary to use specialized neural
networks to operate on such spaces. Instead, we introduce a new structured
graph convolution operator that can copy 2D convolution weights, transferring
the capabilities of already trained traditional CNNs to our new graph network.
This network can then operate on any data that can be represented as a
positional graph. By converting non-rectilinear data to a graph, we can apply
these convolutions on these irregular image domains without requiring training
on large domain-specific datasets. Results of transferring pre-trained image
networks for segmentation, stylization, and depth prediction are demonstrated
for a variety of such data forms.
Comment: To be presented at ECCV 202
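The weight-transfer idea can be sketched on a regular grid. This is a hedged illustration, not the paper's operator: a 3x3 convolution becomes a graph operation where each edge carries one of nine "selections" (relative offsets), and each selection reuses the matching 2D kernel weight. On a grid this reproduces ordinary (correlation-style) convolution with zero padding; the same per-selection weights can then be applied on any positional graph. Names here are illustrative.

```python
import numpy as np

OFFSETS = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def selection_conv_on_grid(img, kernel):
    """Apply a 3x3 kernel by gathering one neighbor per selection."""
    h, w = img.shape
    out = np.zeros((h, w))
    for dy, dx in OFFSETS:
        weight = kernel[dy + 1, dx + 1]          # shared weight per selection
        for y in range(h):
            for x in range(w):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:  # zero padding at the border
                    out[y, x] += weight * img[ny, nx]
    return out

img = np.ones((4, 4))
kernel = np.ones((3, 3))
out = selection_conv_on_grid(img, kernel)
```

Because the per-selection weights are exactly the entries of a trained 2D kernel, weights copied from a pre-trained CNN behave identically on grid-shaped graphs, which is what lets the method reuse existing image networks on spherical images or superpixel graphs without retraining.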
- …