An output-sensitive algorithm for the minimization of 2-dimensional String Covers
String covers are a powerful tool for analyzing the quasi-periodicity of
1-dimensional data and find applications in automata theory, computational
biology, coding and the analysis of transactional data. A \emph{cover} of a
string $S$ is a string $C$ for which every letter of $S$ lies within some
occurrence of $C$. String covers have been generalized in many ways, leading to
\emph{k-covers}, \emph{λ-covers} and \emph{approximate covers}, and were
studied in different contexts such as \emph{indeterminate strings}.
In this paper we generalize string covers to the context of 2-dimensional
data, such as images. We show how they can be used for the extraction of
textures from images and the identification of primitive cells in lattice data.
This has interesting applications in image compression, procedural terrain
generation and crystallography.
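The 1-dimensional definition above is easy to check directly. The sketch below is a naive illustration of that definition, not the paper's output-sensitive algorithm; the helper names are hypothetical:

```python
def occurrences(s, c):
    """Return all start indices where c occurs in s (naive scan)."""
    return [i for i in range(len(s) - len(c) + 1) if s[i:i + len(c)] == c]

def is_cover(c, s):
    """True if every letter of s lies within some occurrence of c."""
    covered = [False] * len(s)
    for start in occurrences(c and s, c) if False else occurrences(s, c):
        for j in range(start, start + len(c)):
            covered[j] = True
    return all(covered)

# "aba" covers "ababaaba": occurrences at 0, 2 and 5 touch every position.
print(is_cover("aba", "ababaaba"))  # True
# "ab" does not: positions 4 and 7 stay uncovered.
print(is_cover("ab", "ababaaba"))   # False
```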
Foundry: Hierarchical Material Design for Multi-Material Fabrication
We demonstrate a new approach for designing functional material definitions for multi-material fabrication using our system called Foundry. Foundry provides an interactive and visual process for hierarchically designing spatially-varying material properties (e.g., appearance, mechanical, optical). The resulting meta-materials exhibit structure at the micro and macro level and can surpass the qualities of traditional composites. The material definitions are created by composing a set of operators into an operator graph. Each operator performs a volume decomposition operation, remaps space, or constructs and assigns a material composition. The operators are implemented using a domain-specific language for multi-material fabrication; users can easily extend the library by writing their own operators. Foundry can be used to build operator graphs that describe complex, parameterized, resolution-independent, and reusable material definitions. We also describe how to stage the evaluation of the final material definition, which, in conjunction with progressive refinement, allows for interactive material evaluation even for complex designs. We show sophisticated and functional parts designed with our system. Funding: National Science Foundation (U.S.) (1138967); National Science Foundation (U.S.) (1409310); National Science Foundation (U.S.) (1547088); National Science Foundation (U.S.) Graduate Research Fellowship Program; Massachusetts Institute of Technology Undergraduate Research Opportunities Program.
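The operator-graph idea can be illustrated with a minimal sketch: operators are composable functions from a query point to a material composition. The names below (`assign`, `remap`, `decompose`) are hypothetical stand-ins, not Foundry's actual DSL:

```python
def assign(material):
    """Leaf operator: assign a fixed material composition everywhere."""
    return lambda p: material

def remap(transform, child):
    """Remap space before evaluating the child operator."""
    return lambda p: child(transform(p))

def decompose(predicate, inside, outside):
    """Volume decomposition: route the query point to one sub-operator."""
    return lambda p: inside(p) if predicate(p) else outside(p)

# A layered composite alternating two base materials along z; the remap
# controls the layer period, keeping the definition resolution-independent.
layered = decompose(lambda p: int(p[2]) % 2 == 0,
                    assign({"rigid": 1.0}),
                    assign({"soft": 1.0}))
scaled = remap(lambda p: (p[0], p[1], p[2] / 0.5), layered)

print(scaled((0.0, 0.0, 0.2)))  # {'rigid': 1.0}
print(scaled((0.0, 0.0, 0.6)))  # {'soft': 1.0}
```

Because each node only sees a point query, graphs built this way stay parameterized and reusable: swapping the transform or the predicate yields a new material definition without re-sampling any volume.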
Example-Based Texture Synthesis for Interactive Applications
Millions of individuals explore virtual worlds every day, for entertainment, training, or to plan business trips and vacations. Video games such as Eve Online, World of Warcraft, and many others popularized their existence. Sandboxes such as Minecraft and Second Life illustrated how they can serve as a medium, letting people create, share and even sell their virtual productions. Navigation and exploration software such as Google Earth and Virtual Earth lets us explore a virtual version of the real world, and lets us enrich it with information shared between the millions of users of these services every day. Virtual environments are massive, dynamic 3D scenes that are explored and manipulated interactively by thousands of users simultaneously. Many challenges have to be solved to achieve these goals. Among them lies the key question of content management: how can we create enough detailed graphical content to represent an immersive, convincing and coherent world? Even if we can produce this data, how can we then store the terabytes it represents, and transfer it for display to each individual user? Creating this content is extremely time-consuming for computer artists and requires a specific set of technical skills. Capturing the data from the real world can simplify this task, but then requires a large quantity of storage, expensive hardware and long capture campaigns. While this is acceptable for important landmarks (e.g. the Statue of Liberty in New York, the Eiffel Tower in Paris), it is wasteful for generic or anonymous landscapes. In addition, in many cases capture is not an option, either because an imaginary scenery is required or because the scene to be represented no longer exists. Therefore, researchers have proposed methods to generate new content programmatically, using captured data as an example.
Typically, building blocks are extracted from the example content and re-assembled to form new assets. Such approaches have been at the center of my research for the past ten years. However, algorithms for generating data programmatically only partially address the content management challenge: the algorithm generates content as a (slow) pre-process and its output has to be stored for later use. Instead, I have focused on proposing models and algorithms which can produce graphical content while minimizing storage. The content is either generated when it is needed for the current viewpoint, or is produced in a very compact form that can later be used for rendering. Thanks to such approaches developers save time during content creation, and distribution of the content is simplified by reducing the required data bandwidth. In addition to the core problem of content synthesis, my approaches required the development of new data structures able to store sparse data generated during display while enabling efficient access. These data structures are specialized for the massive parallelism of graphics processors. I contributed early in this domain and have kept a constant focus on this area. The originality of my approach has thus been to consider simultaneously the problems of generating, storing and displaying the graphical content. As we shall see, each of these areas involves a different theoretical and technical background, and they nicely complement each other in providing elegant solutions to content generation, management and display.
Variational Texture Synthesis with Sparsity and Spectrum Constraints
This paper introduces a new approach for texture synthesis. We propose a unified framework that imposes both first-order statistical constraints on the use of atoms from an adaptive dictionary and second-order constraints on pixel values. This is achieved through a variational approach, the minimization of which yields local extrema, each one being a possible texture synthesis. On the one hand, the adaptive dictionary is created using a sparse image representation rationale, and a global constraint is imposed on the maximal number of uses of each atom from this dictionary. On the other hand, a constraint on second-order pixel statistics is enforced through the power spectrum of images. An advantage of the proposed method is its ability to truly synthesize textures, without verbatim copying of small pieces from the exemplar. In an extensive experimental section, we show that the resulting synthesis achieves state-of-the-art results, both for structured and small-scale textures.
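The power-spectrum constraint mentioned above is a standard operation in texture synthesis: replace an image's Fourier magnitudes with those of the exemplar while keeping its phases. The sketch below illustrates that projection with NumPy; it is an illustrative building block under the stated assumption, not the paper's full variational algorithm:

```python
import numpy as np

def impose_spectrum(img, ref):
    """Give img the power spectrum of ref while keeping img's Fourier
    phases, enforcing ref's second-order (autocorrelation) statistics."""
    phase = np.exp(1j * np.angle(np.fft.fft2(img)))
    magnitude = np.abs(np.fft.fft2(ref))
    # Both factors are Hermitian-symmetric for real inputs, so the
    # inverse transform is real up to floating-point error.
    return np.fft.ifft2(magnitude * phase).real

rng = np.random.default_rng(0)
ref = rng.standard_normal((64, 64))   # stands in for the exemplar texture
img = rng.standard_normal((64, 64))   # stands in for the current synthesis
out = impose_spectrum(img, ref)

# The projected image now shares ref's power spectrum (up to numerical error).
err = np.abs(np.abs(np.fft.fft2(out)) - np.abs(np.fft.fft2(ref))).max()
print(err < 1e-6)  # True
```

In a variational scheme of this kind, such a projection would alternate with the dictionary-based (first-order) constraint until a local extremum is reached.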
Genetic selection of parametric scenes
Using a modelling package such as Alias Maya or SoftImage XSi to create a natural scene is too tedious to be practical. Procedural generation techniques reduce the amount of work involved, but there may still be too many parameters to be selected manually. We propose a new method of generating natural scenes, using a genetic algorithm (GA) to infer the user's preferences from user feedback. In order to reach the goal in a reasonable time, the GA must converge quickly. The scene generation and display preprocessing must also be efficient. We present techniques that attain these goals while still producing reasonable-quality output and interactive frame rates. We also compare this approach to having a user manually select parameters.
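A generic GA loop over scene-parameter vectors can be sketched as follows. This is a minimal textbook sketch, not the paper's method; in particular, the `rate` callable here is a hypothetical stand-in for the user-feedback score the abstract describes:

```python
import random

def evolve(population, rate, rng, mutation=0.1):
    """One GA generation over lists of scene parameters; rate() scores a
    parameter vector (in the paper this score comes from user feedback)."""
    scored = sorted(population, key=rate, reverse=True)
    parents = scored[: len(scored) // 2]                 # truncation selection
    children = []
    while len(children) < len(population):
        a, b = rng.sample(parents, 2)
        cut = rng.randrange(1, len(a))                   # one-point crossover
        child = a[:cut] + b[cut:]
        child = [g + rng.gauss(0, mutation) for g in child]  # mutation
        children.append(child)
    return children

# Hypothetical stand-in for user feedback: prefer parameters near 0.5.
rate = lambda p: -sum((g - 0.5) ** 2 for g in p)
rng = random.Random(0)
pop = [[rng.random() for _ in range(4)] for _ in range(20)]
for _ in range(30):
    pop = evolve(pop, rate, rng)
best = max(pop, key=rate)
```

The convergence-speed concern raised in the abstract shows up here as the number of generations (i.e. rounds of user feedback) needed before `best` is acceptable.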
State of the Art in Example-based Texture Synthesis
Recent years have witnessed significant progress in example-based texture synthesis algorithms. Given an example texture, these methods produce a larger texture that is tailored to the user's needs. In this state-of-the-art report, we aim to achieve three goals: (1) provide a tutorial that is easy to follow for readers who are not already familiar with the subject, (2) make a comprehensive survey and comparison of different methods, and (3) sketch a vision for future work that can help motivate and guide readers who are interested in texture synthesis research. We cover fundamental algorithms as well as extensions and applications of texture synthesis.