141 research outputs found

    Repeatable texture sampling with interchangeable patches

    Get PDF
    Rendering textures in real-time environments is a key task in computer graphics. This paper presents a new parallel patch-based method that allows repeatable sampling without a cache and does not create visible repetitions. Interchangeable patches of arbitrary shape are prepared in a preprocessing step, such that patches may lie over the boundary of other patches in a repeating tile. This compresses the example texture into an infinite texture map with small memory requirements, suitable for GPU and ray-tracing applications. The quality of textures rendered with this method can be tuned in the offline preprocessing step, and they can then be rendered in times comparable to Wang tiles. Experimental results demonstrate combined benefits in speed, memory requirements, and quality of randomisation when compared to previous methods.
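
    As a rough illustration of how cache-free repeatable sampling can work, consider the Python sketch below. It is not the paper's exact patch-selection scheme (the hash function and all names are illustrative assumptions): a patch index is derived deterministically from the tile coordinate, so the same coordinate always yields the same patch and the virtually infinite texture can be evaluated lazily at any point, with no per-tile state to cache.

    ```python
    import hashlib

    def patch_index(tx, ty, num_patches, seed=0):
        """Deterministically map a tile coordinate to a patch index.
        Hypothetical stand-in for the paper's patch selection."""
        h = hashlib.blake2b(f"{seed}:{tx}:{ty}".encode(), digest_size=4)
        return int.from_bytes(h.digest(), "little") % num_patches

    def sample(patches, u, v, tile_size):
        """Sample the virtually infinite texture at continuous (u, v)."""
        tx, ty = int(u // tile_size), int(v // tile_size)  # enclosing tile
        patch = patches[patch_index(tx, ty, len(patches))]
        # Local texel coordinates inside the chosen patch.
        return patch[int(v % tile_size)][int(u % tile_size)]
    ```

    Because `patch_index` is a pure function of the coordinate, the same lookup can run independently per pixel on a GPU or inside a ray tracer, which is what makes the cache-free property attractive.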

    Computing layouts with deformable templates

    Get PDF

    Modular-topology optimization of structures and mechanisms with free material design and clustering

    Full text link
    Topology optimization of modular structures and mechanisms enables balancing the performance of automatically-generated individualized designs, as required by Industry 4.0, with enhanced sustainability by means of component reuse. For optimal modular design, two key questions must be answered: (i) what should the topology of individual modules be, and (ii) how should modules be arranged at the product scale? We address these challenges by proposing a bi-level sequential strategy that combines free material design, clustering techniques, and topology optimization. First, using free material optimization enhanced with post-processing for checkerboard suppression, we determine the distribution of elasticity tensors at the product scale. To extract the sought-after modular arrangement, we partition the obtained elasticity tensors with a novel deterministic clustering algorithm and interpret its outputs within the Wang tiling formalism. Finally, we design the interiors of individual modules by solving a single-scale topology optimization problem with the design space reduced by modular mapping, conveniently starting from an initial guess provided by free material optimization. We illustrate these developments with three benchmarks, covering compliance minimization of modular structures and, for the first time, the design of non-periodic compliant modular mechanisms. Furthermore, we design a set of modules reusable in both an inverter and a gripper mechanism, ultimately paving the way towards the rational design of modular architectured (meta)materials.
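
    The clustering step can be pictured with the following sketch: a generic deterministic k-means with farthest-point seeding over vectorized elasticity tensors. This is a stand-in under stated assumptions, not a reproduction of the paper's novel algorithm.

    ```python
    import numpy as np

    def cluster_elasticity_tensors(C, k, iters=100):
        """Partition per-element elasticity tensors into k module types.

        C: (n, d) array of elasticity tensors flattened to vectors
        (e.g. d = 21 independent entries in 3D). Deterministic:
        farthest-point seeding instead of random initialisation."""
        centers = [C[0]]
        for _ in range(1, k):
            d = np.min([np.linalg.norm(C - c, axis=1) for c in centers], axis=0)
            centers.append(C[int(np.argmax(d))])  # farthest point so far
        centers = np.asarray(centers)
        for _ in range(iters):  # standard Lloyd iterations
            labels = np.argmin(
                np.linalg.norm(C[:, None] - centers[None], axis=2), axis=1)
            new = np.array([C[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        return labels, centers
    ```

    Each resulting label identifies a module type; interpreting the label field within the Wang tiling formalism then fixes which module occupies which product-scale cell.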

    Constrained random sampling and gap filling technique for near-regular texture synthesis

    Get PDF
    Project carried out through a mobility programme at TECHNISCHE UNIVERSITÄT BERLIN, FAKULTÄT ELEKTROTECHNIK UND INFORMATIK, INSTITUT FÜR TECHNISCHE INFORMATIK UND MIKROELEKTRONIK, COMPUTER VISION AND REMOTE SENSING. This thesis addresses the synthesis of near-regular textures, i.e. textures that consist of a regular global structure plus subtle yet very characteristic stochastic irregularities, from a small exemplar image. Such textures are difficult to synthesize due to the complementary characteristics of these structures. The main purpose of this thesis is to present a novel method, which we call Random Sampling and Gap Filling (RSGF), to synthesize near-regular textures. The synthesis is guided by a lattice of the global structure, estimated from a generalized normalized autocorrelation of the sample image. This lattice constrains a random sampling process to maintain the global regular structure while ensuring the characteristic randomness of the irregular structures. An alternative method, which finds the piece of texture within the input sample whose simple tiling produces the fewest visible seams, is also presented to illustrate the quality enhancement. Results presented in this work show that our method produces convincing results not only for regular or near-regular textures but also for irregular textures.
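
    A minimal sketch of the lattice-estimation idea, assuming a grayscale exemplar and a plain FFT autocorrelation (the thesis uses a generalized normalized variant, which is not reproduced here):

    ```python
    import numpy as np

    def autocorrelation(img):
        """Autocorrelation of a grayscale exemplar via FFT (Wiener-Khinchin)."""
        f = np.fft.fft2(img - img.mean())
        ac = np.fft.fftshift(np.fft.ifft2(f * np.conj(f)).real)
        return ac / ac.max()

    def lattice_vectors(ac, min_dist=4):
        """Pick the two strongest off-centre peaks in one half-plane.
        The autocorrelation is symmetric, so each peak has a mirror at -v;
        a full implementation must also reject collinear peaks such as 2v."""
        h, w = ac.shape
        cy, cx = h // 2, w // 2
        peaks = []
        for idx in np.argsort(ac, axis=None)[::-1]:    # strongest first
            y, x = np.unravel_index(idx, ac.shape)
            dy, dx = int(y - cy), int(x - cx)
            if (dy, dx) <= (0, 0):                     # keep one half-plane
                continue
            if abs(dy) + abs(dx) < min_dist:           # too close to centre
                continue
            if all(abs(dy - py) + abs(dx - px) >= min_dist for py, px in peaks):
                peaks.append((dy, dx))
            if len(peaks) == 2:
                break
        return peaks
    ```

    The recovered vectors span the regular grid; the random sampling is then constrained so that copied patches stay aligned with this grid while their interiors vary.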

    High quality texture synthesis

    Get PDF
    Texture synthesis is a core process in computer graphics and design. It is used extensively in a wide range of applications, including computer games, virtual environments, manufacturing, and rendering. This thesis investigates a novel approach to texture synthesis in order to significantly improve speed, memory requirements, and quality. An analysis of texture properties is created to enable the gathering of a representative dataset and a qualitative evaluation of texture synthesis algorithms. A new algorithm that makes non-repeating, on-the-fly texture synthesis possible is developed, tested, and evaluated. This parallel patch-based method allows repeatable sampling without a cache and without creating visually noticeable repetitions, as confirmed by a perceptual study of quality. In order to quantify the quality of existing algorithms and to facilitate further development in the field, desired texture properties are classified and analysed, and a minimal set of textures is created according to these properties to allow subjective evaluation of texture synthesis algorithms. This dataset is then used in a user study which evaluates the quality of texture synthesis algorithms. For the first time in the field of texture synthesis, statistically significant findings quantify the quality of selected repeatable algorithms and make it possible to evaluate new, improved methods. Finally, in an effort to make these findings applicable in the British tile manufacturing industry, the developed texture synthesis technology is made available to Johnson Tiles.

    By-example texture synthesis for interactive applications

    Get PDF
    Millions of individuals explore virtual worlds every day, for entertainment, training, or to plan business trips and vacations. Video games such as Eve Online, World of Warcraft, and many others popularized their existence. Sandboxes such as Minecraft and Second Life illustrated how they can serve as a medium, letting people create, share, and even sell their virtual productions. Navigation and exploration software such as Google Earth and Virtual Earth lets us explore a virtual version of the real world and enrich it with information shared between the millions of users of these services.

    Virtual environments are massive, dynamic 3D scenes that are explored and manipulated interactively by thousands of users simultaneously. Many challenges have to be solved to achieve this, and among them lies the key question of content management. Rich virtual environments require a massive amount of varied graphical content in order to represent an immersive, convincing, and coherent world. Even when we can produce this data, how can we store the terabytes it represents and transfer it for display to each individual user? Creating this content is extremely time consuming for computer artists and requires a specific set of technical skills. Capturing the data from the real world can simplify this task, but it requires a large quantity of storage, expensive hardware, and long capture campaigns. While this is acceptable for important landmarks (e.g. the Statue of Liberty in New York, the Eiffel Tower in Paris), it is wasteful for generic or anonymous landscapes. In addition, in many cases capture is not an option, either because an imaginary scenery is required or because the scene to be represented no longer exists.

    Therefore, researchers have proposed methods to generate new content programmatically, using captured data as an example. Typically, building blocks are extracted from the example content and re-assembled to form new assets. Such approaches have been at the center of my research for the past ten years. However, algorithms for generating data programmatically only partially address the content management challenge: the algorithm generates content as a (slow) pre-process, and its output has to be stored for later use. Instead, I have focused on proposing models and algorithms which can produce graphical content while minimizing storage. The content is either generated when it is needed for the current viewpoint, or produced in a very compact form that can later be used for rendering. With such approaches developers gain time during content creation, and the distribution of content is simplified by the reduced data bandwidth it requires.

    In addition to the core problem of content synthesis, my approaches required the development of new data structures able to store sparse data generated during display while enabling efficient access. These data structures are specialized for the massive parallelism of graphics processors. I contributed early in this domain and have kept a constant focus on this area. The originality of my approach has thus been to consider simultaneously the problems of generating, storing, and displaying the graphical content. As we shall see, each of these areas involves different theoretical and technical backgrounds, which nicely complement each other in providing elegant solutions to content generation, management, and display.
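
    A toy illustration of the generate-on-demand principle in Python, with hypothetical names; the actual data structures in this work are GPU-resident spatial hashes, which this sketch only gestures at:

    ```python
    class SparseTileStore:
        """Sparse store for content generated lazily during display.

        Tiles are produced only when the current viewpoint first needs
        them and kept in a hash map keyed by grid coordinates, so storage
        grows with what has been seen rather than with world size."""

        def __init__(self, generate):
            self._tiles = {}           # (x, y) -> tile payload
            self._generate = generate  # callback synthesizing one tile

        def get(self, x, y):
            key = (x, y)
            if key not in self._tiles:   # synthesize on first access only
                self._tiles[key] = self._generate(x, y)
            return self._tiles[key]
    ```

    On a GPU the hash map is replaced by a parallel spatial hash so that thousands of pixels can query and insert concurrently, but the storage-follows-visibility principle is the same.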

    Synthetic Data in Quantitative Scanning Probe Microscopy

    Get PDF
    Synthetic data are of increasing importance in nanometrology. They can be used for the development of data processing methods, the analysis of uncertainties, and the estimation of various measurement artefacts. In this paper we review methods used for their generation and the applications of synthetic data in scanning probe microscopy, focusing on their principles, performance, and applicability. We illustrate the benefits of using synthetic data on tasks related to the development of better scanning approaches and to estimating the reliability of data processing methods. We also demonstrate how synthetic data can be used to analyse systematic errors common to scanning probe microscopy methods, whether related to the measurement principle or to typical data processing paths.
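
    As one concrete example of such generation, the sketch below (illustrative parameters, not tied to any specific tool) synthesizes a self-affine rough surface by spectral filtering of random phases, a common starting point for synthetic SPM topographies, and adds per-scan-line offsets mimicking a typical line artefact:

    ```python
    import numpy as np

    def synthetic_surface(n=256, hurst=0.7, seed=1):
        """Random self-affine surface via spectral synthesis, plus a
        simple scan-line artefact (constant offset per fast-scan line)."""
        rng = np.random.default_rng(seed)
        fy = np.fft.fftfreq(n)[:, None]
        fx = np.fft.fftfreq(n)[None, :]
        f = np.hypot(fx, fy)
        f[0, 0] = 1.0                          # avoid division by zero at DC
        amplitude = f ** (-(hurst + 1.0))      # power-law spectral falloff
        phase = np.exp(2j * np.pi * rng.random((n, n)))
        z = np.fft.ifft2(amplitude * phase).real
        z -= z.mean()
        z += rng.normal(0.0, 0.05 * z.std(), (n, 1))  # per-line offsets
        return z
    ```

    Because the ground truth is known exactly, running a levelling or roughness-analysis routine on such a surface directly measures the bias that routine introduces.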