
    Self-Sampling for Neural Point Cloud Consolidation

    In this paper, we introduce a deep learning technique for consolidating point clouds and generating sharp features using only the input point cloud itself. Rather than explicitly defining a prior that describes typical shape characteristics (e.g., piecewise smoothness) or a heuristic policy for generating novel sharp points, we opt to learn both using a neural network with shared weights. Instead of relying on a large collection of manually annotated data, we use the self-supervision present within a single shape, i.e., a self-prior, to train the network and learn the underlying distribution of sharp features specific to the given input point cloud. By learning to map a low-curvature subset of the input point cloud to a disjoint high-curvature subset, the network formalizes the shape-specific characteristics and infers how to generate sharp points. During test time, the network is repeatedly fed a random subset of points from the input and displaces them to generate an arbitrarily large set of novel sharp feature points. The local shared weights are optimized over the entire shape, learning non-local statistics and exploiting the recurrence of local-scale geometries. We demonstrate the ability to generate coherent sets of sharp feature points on a variety of shapes, while eliminating outliers and noise.
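    No code accompanies the abstract here; below is a minimal PyTorch sketch of the described train/test loop, assuming a precomputed per-point curvature estimate. All names (DisplacementNet, chamfer, train_self_prior, consolidate) are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DisplacementNet(nn.Module):
    """Shared-weight MLP that displaces input points toward sharp regions."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pts):            # pts: (N, 3)
        return pts + self.mlp(pts)     # predict a residual displacement

def chamfer(a, b):
    """Symmetric Chamfer distance between two point sets."""
    d = torch.cdist(a, b)              # (Na, Nb) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def train_self_prior(points, curvature, steps=5000, subset=512):
    """points: (N, 3) input cloud; curvature: (N,) per-point estimate."""
    median = curvature.median()
    low = points[curvature < median]    # source pool (flat regions)
    high = points[curvature >= median]  # target pool (sharp regions)
    net = DisplacementNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        src = low[torch.randint(len(low), (subset,))]
        tgt = high[torch.randint(len(high), (subset,))]
        loss = chamfer(net(src), tgt)   # map flat subset onto sharp subset
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net

@torch.no_grad()
def consolidate(net, points, rounds=20, subset=512):
    """Test time: repeatedly displace random subsets of the input to
    emit an arbitrarily large set of novel sharp feature points."""
    out = [net(points[torch.randint(len(points), (subset,))])
           for _ in range(rounds)]
    return torch.cat(out)
```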

    A Neural Space-Time Representation for Text-to-Image Personalization

    A key aspect of text-to-image personalization methods is the manner in which the target concept is represented within the generative process. This choice greatly affects the visual fidelity, downstream editability, and disk space needed to store the learned concept. In this paper, we explore a new text-conditioning space that is dependent on both the denoising process timestep (time) and the denoising U-Net layers (space) and showcase its compelling properties. A single concept in the space-time representation is composed of hundreds of vectors, one for each combination of time and space, making this space challenging to optimize directly. Instead, we propose to implicitly represent a concept in this space by optimizing a small neural mapper that receives the current time and space parameters and outputs the matching token embedding. In doing so, the entire personalized concept is represented by the parameters of the learned mapper, resulting in a compact, yet expressive, representation. Similarly to other personalization methods, the output of our neural mapper resides in the input space of the text encoder. We observe that one can significantly improve the convergence and visual fidelity of the concept by introducing a textual bypass, where our neural mapper additionally outputs a residual that is added to the output of the text encoder. Finally, we show how one can impose an importance-based ordering over our implicit representation, providing users control over the reconstruction and editability of the learned concept using a single trained model. We demonstrate the effectiveness of our approach over a range of concepts and prompts, showing our method's ability to generate high-quality and controllable compositions without fine-tuning any parameters of the generative model itself.

    Comment: Project page available at https://neuraltextualinversion.github.io/NeTI
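    As a rough illustration of the space-time mapper idea, here is a hedged PyTorch sketch assuming a Stable-Diffusion-like setup (token embedding dimension 768, 16 conditioned U-Net layers); these sizes and all names are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SpaceTimeMapper(nn.Module):
    """Implicitly represents one concept: maps (denoising timestep,
    U-Net layer index) to a token embedding, plus a 'textual bypass'
    residual to be added to the text encoder's output."""
    def __init__(self, embed_dim=768, num_layers=16, hidden=256):
        super().__init__()
        self.layer_emb = nn.Embedding(num_layers, 64)    # "space" input
        self.net = nn.Sequential(
            nn.Linear(1 + 64, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.token_head = nn.Linear(hidden, embed_dim)   # token embedding
        self.bypass_head = nn.Linear(hidden, embed_dim)  # bypass residual

    def forward(self, t, layer_idx):
        # t: (B,) timestep normalized to [0, 1]; layer_idx: (B,) long
        h = torch.cat([t.unsqueeze(-1), self.layer_emb(layer_idx)], dim=-1)
        h = self.net(h)
        return self.token_head(h), self.bypass_head(h)

# For each denoising step t and conditioned layer l: the token embedding
# stands in for the concept's placeholder token at the text-encoder input,
# and the bypass residual is added to the encoder output for that token.
mapper = SpaceTimeMapper()
token_emb, bypass = mapper(torch.rand(4), torch.randint(16, (4,)))
```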

    NeuralMLS: Geometry-Aware Control Point Deformation

    We introduce NeuralMLS, a space-based deformation technique guided by a set of displaced control points. We leverage the power of neural networks to inject the underlying shape geometry into the deformation parameters. The goal of our technique is to enable realistic and intuitive shape deformation. Our method is built upon moving least-squares (MLS), since it minimizes a weighted sum of the given control point displacements. Traditionally, the influence of each control point on every point in space (i.e., the weighting function) is defined using inverse distance heuristics. In this work, we opt to learn the weighting function, by training a neural network on the control points from a single input shape, and exploit the innate smoothness of neural networks. Our geometry-aware control point deformation is agnostic to the surface representation and quality; it can be applied to point clouds or meshes, including non-manifold and disconnected surface soups. We show that our technique facilitates intuitive piecewise smooth deformations, which are well suited for manufactured objects. We show the advantages of our approach compared to existing surface and space-based deformation techniques, both quantitatively and qualitatively.

    Comment: Eurographics 2022 Short Paper
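    For intuition, a simplified translational-MLS sketch in PyTorch contrasting the inverse-distance heuristic with a learned weighting network; the fitting step noted in the final comment is an assumption about how such a network could be trained on one shape's control points, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

def inverse_distance_weights(x, controls, p=2.0, eps=1e-8):
    """Classic MLS heuristic: w_i(x) = 1 / ||x - c_i||^p."""
    return 1.0 / (torch.cdist(x, controls).pow(p) + eps)   # (Q, K)

class WeightNet(nn.Module):
    """Predicts, per query point, soft weights over the K control
    points. Fit on a single shape, it can respect the geometry
    (e.g., part boundaries) rather than raw Euclidean distance, and
    its innate smoothness yields piecewise-smooth deformations."""
    def __init__(self, num_controls, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_controls),
        )

    def forward(self, x):                   # x: (Q, 3)
        return self.mlp(x).softmax(dim=-1)  # (Q, K) weights

def deform(x, displacements, weights):
    """Translational MLS: each query point moves by the weighted
    average of the control-point displacements."""
    w = weights / weights.sum(dim=-1, keepdim=True)
    return x + w @ displacements            # (Q, 3)

# Hypothetical fitting step (illustration only): encourage near-one-hot
# weights at the control points themselves, then let the smooth network
# interpolate the weighting function everywhere else in space.
```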