8 research outputs found

    Deep Point Correlation Design

    Designing point patterns with desired properties can require substantial effort, both in hand-crafted code and in mathematical derivation. Retaining these properties in multiple dimensions or for a substantial number of points can be challenging and computationally expensive. Tackling both issues, we suggest automatically generating scalable point patterns from design goals using deep learning. We phrase pattern generation as a deep composition of weighted distance-based unstructured filters. Deep point pattern design then means optimizing over the space of all such compositions according to a user-provided point correlation loss: a small program which measures a pattern’s fidelity with respect to its spatial or spectral statistics, linear or non-linear (e.g., radial) projections, or any arbitrary combination thereof. Our analysis shows that we can emulate a large set of existing patterns (blue, green, step, projective, stair, etc.-noise), generalize them to countless new combinations in a systematic way, and leverage existing error estimation formulations to generate novel point patterns for a user-provided class of integrand functions. Our point patterns scale favorably in dimensionality and number of points: we demonstrate nearly 10k points in 10-D produced in one second on one GPU. All resources (source code and the pre-trained networks) can be found at https://sampling.mpi-inf.mpg.de/deepsampling.html
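
    As a rough illustration of what such a point correlation loss could look like (a minimal sketch, not the paper's implementation), the following scores a candidate point set by comparing its pair correlation histogram against a hypothetical target profile `target_pcf`; the function names and bin choices below are illustrative assumptions:

```python
# Sketch of a "point correlation loss": compare a pattern's pair correlation
# function (PCF) against a user-provided target profile (e.g. blue noise).
import numpy as np

def pair_correlation(points, r_bins):
    """Histogram of pairwise distances per point, on the unit torus."""
    n = len(points)
    d = points[:, None, :] - points[None, :, :]
    d -= np.round(d)                       # wrap differences on the torus
    dist = np.sqrt((d ** 2).sum(-1))
    dist = dist[~np.eye(n, dtype=bool)]    # drop self-distances
    hist, _ = np.histogram(dist, bins=r_bins)
    return hist / n

def correlation_loss(points, target_pcf, r_bins):
    """L2 distance between the pattern's PCF and the target PCF."""
    return float(((pair_correlation(points, r_bins) - target_pcf) ** 2).mean())

# Usage: score random points against a placeholder target profile.
rng = np.random.default_rng(0)
pts = rng.random((256, 2))
r_bins = np.linspace(0.0, 0.25, 33)
target_pcf = pair_correlation(rng.random((256, 2)), r_bins)  # stand-in target
print(correlation_loss(pts, target_pcf, r_bins))
```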

    Patternshop: Editing Point Patterns by Image Manipulation

    Point patterns are characterized by their density and correlation. While spatial variation of density is well understood, analysis and synthesis of spatially varying correlation is an open challenge. No tools are available to intuitively edit such point patterns, primarily due to the lack of a compact representation for spatially varying correlation. We propose a low-dimensional perceptual embedding for point correlations. This embedding can map point patterns to common three-channel raster images, enabling manipulation with off-the-shelf image editing software. To synthesize the point pattern back, we propose a novel edge-aware objective that carefully handles sharp variations in density and correlation. The resulting framework allows intuitive and backward-compatible manipulation of point patterns, such as recoloring, relighting, and even texture synthesis, which have not been available to 2D point pattern design before. The effectiveness of our approach is tested in several user experiments. Comment: 14 pages, 16 figures
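
    To illustrate the raster-image idea (a minimal sketch under strong assumptions, not the Patternshop pipeline), suppose each point already carries a three-dimensional correlation embedding; it can then be splatted into an RGB image, edited with any image tool, and sampled back per point. The helpers `embedding_to_raster` and `raster_to_embedding` below are hypothetical:

```python
# Round-trip a hypothetical per-point 3-D embedding through an RGB raster.
import numpy as np

def embedding_to_raster(points, embeddings, res=64):
    """Average per-point 3-D embeddings into the cells of an RGB raster."""
    img = np.zeros((res, res, 3))
    count = np.zeros((res, res, 1))
    ij = np.clip((points * res).astype(int), 0, res - 1)
    for (x, y), e in zip(ij, embeddings):
        img[y, x] += e
        count[y, x] += 1
    return img / np.maximum(count, 1)

def raster_to_embedding(points, img):
    """Read (possibly edited) per-point embeddings back from the raster."""
    res = img.shape[0]
    ij = np.clip((points * res).astype(int), 0, res - 1)
    return img[ij[:, 1], ij[:, 0]]

# Usage: rasterize, (edit the image externally), then sample back.
rng = np.random.default_rng(1)
pts, emb = rng.random((500, 2)), rng.random((500, 3))
img = embedding_to_raster(pts, emb)        # edit `img` with any image editor
emb_edited = raster_to_embedding(pts, img)
```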

    Blue Noise Plots

    We propose Blue Noise Plots, two-dimensional dot plots that depict data points of univariate data sets. While one-dimensional strip plots are often used to depict such data, one of their main problems is visual clutter resulting from overlap. To reduce this overlap, jitter plots were introduced, which add a non-encoding plot dimension along which the dots representing the data points are randomly perturbed. Unfortunately, this randomness can suggest non-existent clusters and often leads to visually unappealing plots in which overlap may still occur. To overcome these shortcomings, we introduce Blue Noise Plots, where random jitter along the non-encoding plot dimension is replaced by optimizing all dots to keep a minimum distance in 2D, i.e., Blue Noise. We evaluate the effectiveness as well as the aesthetics of Blue Noise Plots through both a quantitative and a qualitative user study.
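
    A minimal sketch of the underlying idea (not the paper's optimizer): keep the data-encoding coordinate fixed and repeatedly push dots apart along the jitter axis until pairs respect an approximate minimum 2D distance. The function `blue_noise_jitter`, its step size, and its iteration count are illustrative choices:

```python
# Replace random jitter with a simple repulsion pass along the jitter axis.
import numpy as np

def blue_noise_jitter(values, d_min=0.03, iters=200, step=0.5, seed=0):
    """Place 1-D data on the x-axis and optimize y-jitter for 2-D separation."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float)
    y = rng.uniform(-0.1, 0.1, size=len(x))         # initial random jitter
    for _ in range(iters):
        dx = x[:, None] - x[None, :]
        dy = y[:, None] - y[None, :]
        dist = np.hypot(dx, dy) + 1e-9
        too_close = (dist < d_min) & ~np.eye(len(x), dtype=bool)
        # Push each dot away from close neighbours, only along the jitter
        # axis, so the data-encoding x-coordinate stays untouched.
        push = np.where(too_close, (d_min - dist) * dy / dist, 0.0).sum(axis=1)
        y += step * push
    return x, y

# Usage: jitter a small univariate sample.
xs, ys = blue_noise_jitter(np.random.default_rng(2).normal(size=200) * 0.1 + 0.5)
```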

    Discovering Pattern Structure Using Differentiable Compositing

    Patterns, collections of elements in regular or near-regular arrangements, are an important graphic art form and are widely used due to their elegant simplicity and aesthetic appeal. When a pattern is encoded as a flat image without the underlying structure, manually editing the pattern is tedious and challenging, as one has to preserve both the individual element shapes and their original relative arrangements. State-of-the-art deep learning frameworks that operate at the pixel level are unsuitable for manipulating such patterns. Specifically, these methods can easily disturb the shapes of the individual elements or their arrangement, and thus fail to preserve the latent structures of the input patterns. We present a novel differentiable compositing operator using pattern elements and use it to discover structures, in the form of a layered representation of graphical objects, directly from raw pattern images. This operator allows us to adapt current deep-learning-based image methods to effectively handle patterns. We evaluate our method on a range of patterns and demonstrate superiority in the context of pattern manipulation when compared against state-of-the-art pixel- or point-based alternatives.
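
    A minimal sketch of how a differentiable compositing operator enables this kind of structure discovery (this is not the paper's operator; elements are reduced to Gaussian splats and layering is ignored): render elements softly, compare against a target raster, and back-propagate the image loss to the element positions:

```python
# Differentiable soft compositing of element "splats", optimized by gradients.
import torch

def splat_elements(centers, res=64, sigma=0.05):
    """Render elements as soft Gaussian blobs composited onto one canvas."""
    ys, xs = torch.meshgrid(torch.linspace(0, 1, res),
                            torch.linspace(0, 1, res), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1)                  # (res, res, 2)
    d2 = ((grid[None] - centers[:, None, None, :]) ** 2).sum(-1)
    blobs = torch.exp(-d2 / (2 * sigma ** 2))             # (n, res, res)
    return 1.0 - torch.prod(1.0 - blobs, dim=0)           # soft "over" union

# Usage: nudge element positions toward a toy target raster.
target = torch.zeros(64, 64)
target[16:48, 16:48] = 1.0
centers = torch.rand(8, 2, requires_grad=True)
opt = torch.optim.Adam([centers], lr=0.02)
for _ in range(100):
    opt.zero_grad()
    loss = ((splat_elements(centers) - target) ** 2).mean()
    loss.backward()
    opt.step()
```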
