
    Automated pebble mosaic stylization of images

    Get PDF
    Digital mosaics have typically used regular tiles, simulating the historical "tessellated" mosaics. In this paper, we present a method for synthesizing pebble mosaics, a historical mosaic style in which the tiles are rounded pebbles. We address both the tiling problem, distributing pebbles over the image plane so as to approximate the input image content, and the geometry problem, creating a smooth rounded shape for each pebble. We adapt SLIC, simple linear iterative clustering, to obtain elongated tiles conforming to image content, and smooth the resulting irregular shapes into shapes resembling pebble cross-sections. Then, we create an interior and exterior contour for each pebble and solve a Laplace equation over the region between them to obtain height-field geometry. The resulting pebble set approximates the input image while providing full geometry that can be rendered and textured for a highly detailed representation of a pebble mosaic.
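
    The height-field step lends itself to a simple grid solver. The sketch below is illustrative rather than the paper's implementation: the masks, the fixed peak height, and the Jacobi relaxation are assumptions. It clamps the interior-contour region to a constant height and everything outside the exterior contour to zero, then relaxes the band in between until it approximately satisfies Laplace's equation.

```python
import numpy as np

def laplace_height_field(exterior_mask, interior_mask, peak_height=1.0, iters=2000):
    """Height field for one pebble: clamp the interior contour region to
    peak_height and everything outside the exterior contour to 0, then
    relax the band in between with Jacobi iterations so it approximately
    satisfies Laplace's equation."""
    h = np.zeros(exterior_mask.shape, dtype=float)
    h[interior_mask] = peak_height
    band = exterior_mask & ~interior_mask        # unknown region between contours
    for _ in range(iters):
        # Average of the four axis-aligned neighbours (Jacobi update).
        avg = 0.25 * (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
                      np.roll(h, 1, 1) + np.roll(h, -1, 1))
        h[band] = avg[band]                      # boundary values stay fixed
    return h
```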

    Computational Design and Optimization of Non-Circular Gears

    Get PDF
    We study a general form of gears known as non-circular gears, which can transfer periodic motion with variable speed through their irregular shapes and eccentric rotation centers. Designing functional non-circular gears is nontrivial, since the gear pair must have compatible shapes to remain in contact during motion, so that the driver gear can push the follower to rotate with a torque bounded by what the motor can exert. To address this challenge, we model the geometry, kinematics, and dynamics of non-circular gears, formulate the design problem as a shape optimization, and identify the necessary independent variables in the optimization search. Taking a pair of 2D shapes as input, our method optimizes them into gears by locating the rotation center on each shape, minimally modifying each shape to form the gear's boundary, and constructing appropriate teeth for gear meshing. Our optimized gears not only resemble the inputs but can also drive the motion with relatively small torque. We demonstrate our method's usability by generating a rich variety of non-circular gears from various inputs and 3D printing several of the resulting gears.
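
    At the pitch-curve level, the contact compatibility mentioned above reduces to the classical rolling condition for non-circular gears: with a fixed center distance a, the follower radius is r2 = a - r1 and r1 dθ1 = r2 dθ2. The following sketch of that kinematic relation is illustrative only; the paper's optimization additionally handles tooth construction, torque bounds, and shape modification.

```python
import numpy as np

def follower_profile(r1, center_dist):
    """Given driver pitch-curve radii r1 sampled uniformly over one turn
    of the driver angle, compute the follower pitch radii and angles
    under rolling contact at a fixed center distance."""
    n = len(r1)
    r2 = center_dist - r1                 # contact point lies on the center line
    # Rolling without slipping: r1 * dtheta1 = r2 * dtheta2.
    dtheta1 = 2.0 * np.pi / n
    dtheta2 = (r1 / r2) * dtheta1
    theta2 = np.cumsum(dtheta2)           # follower angle accumulated per sample
    # For the pair to mesh periodically, theta2[-1] should reach 2*pi / k
    # for some integer k; this is one of the compatibility constraints the
    # optimization must maintain.
    return r2, theta2
```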

    Learning Gradient Fields for Scalable and Generalizable Irregular Packing

    Full text link
    The packing problem, also known as cutting or nesting, has diverse applications in logistics, manufacturing, layout design, and atlas generation. It involves arranging irregularly shaped pieces to minimize waste while avoiding overlap. Recent advances in machine learning, particularly reinforcement learning, have shown promise in addressing the packing problem. In this work, we delve deeper into a novel machine-learning-based approach that formulates the packing problem as conditional generative modeling. To tackle the challenges of irregular packing, including object validity constraints and collision avoidance, our method employs a score-based diffusion model to learn a series of gradient fields. These gradient fields encode the correlations between constraint satisfaction and the spatial relationships of polygons, learned from teacher examples. During the testing phase, packing solutions are generated using a coarse-to-fine refinement mechanism guided by the learned gradient fields. To enhance packing feasibility and optimality, we introduce two key architectural designs: multi-scale feature extraction and coarse-to-fine relation extraction. We conduct experiments on two typical industrial packing domains, considering translations only. Empirically, our approach achieves spatial utilization rates comparable to, or even surpassing, those of the teacher algorithm used to generate the training data. It also exhibits some generalization to shape variations. We hope this method paves the way for new approaches to solving the packing problem.
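
    The test-time refinement described above can be pictured as annealed, gradient-guided updates of the piece translations. The sketch below only illustrates such a coarse-to-fine loop; score_fn, the noise schedule, and the step size are placeholders, and the paper's actual refinement mechanism may differ.

```python
import numpy as np

def refine_packing(positions, score_fn, noise_levels=(0.2, 0.1, 0.05),
                   steps_per_level=50, base_step=0.01, seed=0):
    """Coarse-to-fine refinement of piece translations guided by a learned
    gradient (score) field, in the spirit of annealed Langevin dynamics.
    score_fn(x, sigma) stands in for the trained network and should return
    the estimated gradient at noise level sigma."""
    rng = np.random.default_rng(seed)
    x = positions.astype(float).copy()
    for sigma in noise_levels:                       # coarse -> fine
        alpha = base_step * (sigma / noise_levels[-1]) ** 2
        for _ in range(steps_per_level):
            grad = score_fn(x, sigma)
            x += 0.5 * alpha * grad + np.sqrt(alpha) * rng.standard_normal(x.shape)
    return x
```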

    PAVEL: Decorative Patterns with Packed Volumetric Elements

    Full text link
    Many real-world hand-crafted objects are decorated with elements that are packed onto the object's surface and deformed to cover it as much as possible; examples include artisanal ceramics and metal jewelry. Inspired by these objects, we present a method to enrich surfaces with packed volumetric decorations. Our algorithm works by first determining the locations at which to add the decorative elements and then removing the non-physical overlap between them while preserving the decoration volume. For the placement, we support several strategies depending on the desired overall motif. To remove the overlap, we use an approach based on implicit deformable models that creates the qualitative effect of plastic warping while avoiding expensive and hard-to-control physical simulations. Our decorative elements can be used to enhance virtual surfaces as well as 3D-printed pieces, by assembling the decorations onto real surfaces to obtain tangible reproductions.
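
    One way to picture the overlap-removal idea is with signed distance functions: sample points of a decoration that fall inside a neighboring decoration are pushed back to its surface along the neighbor's SDF gradient. The sketch below idealizes the neighbor as a sphere and omits volume preservation and the actual implicit deformable model, so it is only a rough illustration.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere (negative inside)."""
    return np.linalg.norm(points - center, axis=1) - radius

def push_out_of_neighbor(samples, neighbor_center, neighbor_radius, eps=1e-3):
    """Project surface samples of one decoration out of a neighbouring
    decoration, idealized here as a sphere: points that fall inside the
    neighbour are moved back to its surface along the SDF gradient."""
    d = sphere_sdf(samples, neighbor_center, neighbor_radius)
    inside = d < 0
    if not np.any(inside):
        return samples
    # The sphere SDF gradient points radially away from the centre.
    dirs = samples[inside] - neighbor_center
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    moved = samples.copy()
    moved[inside] += (eps - d[inside])[:, None] * dirs   # d < 0, so this pushes outward
    return moved
```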

    Autocomplete element fields and interactive synthesis system development for aggregate applications.

    Get PDF
    Aggregate elements are ubiquitous in natural and man-made objects and play an important role in graphics, design, and visualization applications. However, efficiently arranging aggregate elements with varying anisotropy and deformability remains challenging, particularly in 3D environments. To address this, we introduce autocomplete element fields, comprising an element distribution formulation that handles diverse output compositions with controllable element distributions at high quality and efficiency, and an element field formulation that smoothly orients all synthesized elements to follow given inputs, such as scalar or direction fields. The proposed formulations not only synthesize distinct types of aggregate elements across various domain spaces without extra processing, but also compute complete element fields directly from partial specifications, without requiring fully specified inputs at any algorithmic step. To further reduce input workload and enhance output quality for better usability and interactivity, we develop an interactive synthesis system, centered on autocomplete element fields, that facilitates the creation of element aggregations in different output domains. Analogous to conventional painting workflows, users interactively mix and place a few aggregate elements on a brushing canvas through a palette-based brushing interface, and the system automatically populates the rest of the outcome with aggregate elements at the intended orientations and scales. The system lets users iteratively design a variety of novel mixtures with reduced workload and enhanced quality under an intuitive, user-friendly brushing workflow, without requiring extensive manual labor or technical expertise. We validate our prototype system with a pilot user study and demonstrate its application to 2D graphic design, 3D surface collage, and 3D aggregate modeling.
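
    The element field formulation can be illustrated with a small orientation example: once element positions are synthesized, each element is assigned an orientation sampled from a direction field. The sketch below is a toy 2D version under assumed interfaces; the function names and the example field are not from the paper.

```python
import numpy as np

def orient_elements(positions, direction_field):
    """Orient each synthesized element along a direction field: returns a
    rotation angle (radians) per element that aligns the element's
    reference axis with the field sampled at its position."""
    d = direction_field(positions)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    return np.arctan2(d[:, 1], d[:, 0])

# Example: orient elements along concentric circles around the origin.
pts = np.random.default_rng(1).uniform(-1.0, 1.0, size=(8, 2))
angles = orient_elements(pts, lambda p: np.stack([-p[:, 1], p[:, 0]], axis=1))
```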

    Learning Visual Appearance: Perception, Modeling and Editing.

    Get PDF
    Visual appearance determines how we understand an object or image and is therefore a fundamental aspect of digital content creation. It is a broad term, encompassing others such as material appearance, defined as the impression we have of a material, which involves a physical interaction between light and matter and the way our visual system perceives it. However, computationally modeling the behavior of our visual system is a difficult task, among other reasons because there is no definitive, unified theory of human visual perception. Moreover, although we have developed algorithms capable of faithfully modeling the interaction between light and matter, there is a disconnect between the physical parameters these algorithms use and the perceptual parameters the human visual system understands. This makes manipulating these physical representations, and their interactions, a tedious and costly task, even for expert users. This thesis seeks to improve our understanding of the perception of material appearance and to use that knowledge to improve existing algorithms for visual content generation. Specifically, the thesis makes contributions in three areas: proposing new computational models for measuring appearance similarity; investigating the interaction between illumination and geometry; and developing intuitive applications for appearance manipulation, in particular for relighting humans and for editing material appearance.

    The first part of the thesis explores methods for measuring appearance similarity. Measuring how similar two materials, or two images, are is a classic problem in visual computing fields such as computer vision and computer graphics. We first address material appearance similarity, proposing a deep-learning-based method that combines images with subjective judgments of material similarity collected through user studies. We then explore similarity between icons; in this second case we use siamese neural networks, in which the style and identity imparted by artists play a key role in the similarity measure.

    The second part advances our understanding of how confounding factors affect our perception of material appearance. Two key confounding factors are object geometry and scene illumination. We begin by investigating the effect of these factors on material recognition through various experiments and statistical studies. We also investigate the effect of object motion on the perception of material appearance.

    The third part explores intuitive applications for manipulating visual appearance. First, we address the problem of relighting humans: we propose a new formulation of the problem and, based on it, design and train a deep-neural-network model to relight a scene. Finally, we address intuitive material editing: we collect human judgments of the perception of different attributes and present a model, based on deep neural networks, capable of realistically editing materials simply by varying the values of the collected attributes.
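
    For the icon-similarity part, the abstract mentions siamese neural networks trained on human similarity judgments. The general pattern is two weight-sharing encoders with a contrastive objective; the sketch below (PyTorch) is a generic example of that pattern, not the thesis's actual architecture or loss.

```python
import torch
import torch.nn as nn

class SiameseEncoder(nn.Module):
    """Small convolutional encoder shared by both branches of a siamese
    similarity model (illustrative only)."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z1, z2, same_label, margin=1.0):
    """Pull embeddings of images judged similar together and push
    dissimilar ones at least `margin` apart."""
    dist = torch.norm(z1 - z2, dim=1)
    pos = same_label * dist.pow(2)
    neg = (1 - same_label) * torch.clamp(margin - dist, min=0).pow(2)
    return (pos + neg).mean()
```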