
    SyntheWorld: A Large-Scale Synthetic Dataset for Land Cover Mapping and Building Change Detection

    Synthetic datasets, recognized for their cost-effectiveness, play a pivotal role in advancing computer vision tasks and techniques. However, for remote sensing image processing, creating synthetic datasets is challenging because it demands larger-scale and more diverse 3D models. This complexity is compounded by the difficulties associated with real remote sensing datasets, including limited data acquisition and high annotation costs, which amplify the need for high-quality synthetic alternatives. To address this, we present SyntheWorld, a synthetic dataset unparalleled in quality, diversity, and scale. It includes 40,000 images with submeter-level pixels and fine-grained land cover annotations covering eight categories, and it also provides 40,000 bitemporal image pairs with building change annotations for the building change detection task. We conduct experiments on multiple benchmark remote sensing datasets to verify the effectiveness of SyntheWorld and to investigate the conditions under which our synthetic data yield advantages. We will release SyntheWorld to facilitate remote sensing image processing research. (Comment: Accepted by WACV 2024)

    Towards Predictive Rendering in Virtual Reality

    Generating predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research. This thesis also contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies in real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations for spatially varying surface materials. The techniques proposed by this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying them to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF covered objects in real-time. Further approaches proposed in this thesis target inclusion of real-time global illumination effects or more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the remaining problems to be solved to achieve truly predictive image generation.
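    The central data structure here is the bidirectional texture function, which stores measured appearance per texel and per view/light direction. As a rough illustration only (not the thesis's actual pipeline), the sketch below performs a nearest-neighbour BTF lookup over a discretised set of directions; the array layout and names are hypothetical.

```python
import numpy as np

def btf_lookup(btf, view_dirs, light_dirs, u, v, view, light):
    """Nearest-neighbour lookup into a discretised BTF.

    btf        : array of shape (n_views, n_lights, H, W, 3), measured RGB per texel
    view_dirs  : (n_views, 3) unit vectors of the sampled viewing directions
    light_dirs : (n_lights, 3) unit vectors of the sampled light directions
    u, v       : texture coordinates in [0, 1)
    view, light: unit vectors for the query directions
    """
    n_views, n_lights, height, width, _ = btf.shape
    # Pick the closest sampled view and light direction (largest dot product).
    vi = int(np.argmax(view_dirs @ view))
    li = int(np.argmax(light_dirs @ light))
    # Map texture coordinates to a texel.
    x = int(u * width) % width
    y = int(v * height) % height
    return btf[vi, li, y, x]

# Tiny synthetic example: 4 views, 4 lights, 8x8 texels.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(4, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
btf = rng.random((4, 4, 8, 8, 3))
print(btf_lookup(btf, dirs, dirs, 0.25, 0.75, dirs[0], dirs[2]))
```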

    High quality texture synthesis

    Texture synthesis is a core process in Computer Graphics and design. It is used extensively in a wide range of applications, including computer games, virtual environments, manufacturing, and rendering. This thesis investigates a novel approach to texture synthesis in order to significantly improve speed, memory requirements, and quality. An analysis of texture properties is created, to enable the gathering of a representative dataset and a qualitative evaluation of texture synthesis algorithms. A new algorithm to make non-repeating texture synthesis on-the-fly possible is developed, tested, and evaluated. This parallel patch-based method allows repeatable sampling without a cache, without creating visually noticeable repetitions, as confirmed by an objective perceptual study of quality. In order to quantify the quality of existing algorithms and to facilitate further development in the field, desired texture properties are classified and analysed, and a minimal set of textures is created according to these properties to allow subjective evaluation of texture synthesis algorithms. This dataset is then used in a user study which evaluates the quality of texture synthesis algorithms. For the first time in the field of texture synthesis, statistically significant findings quantify the quality of selected repeatable algorithms, and make it possible to evaluate new improved methods. Finally, in an effort to make these findings applicable in the British tile manufacturing industry, the developed texture synthesis technology is made available to Johnson Tiles.
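    The key property claimed above is repeatable sampling without a cache: the patch used at any output location can be recomputed on demand rather than stored. A minimal way to obtain that property (my illustration under that assumption, not the thesis's algorithm) is to drive patch selection with a hash of the output tile coordinates:

```python
import hashlib

def patch_index_for_tile(tile_x, tile_y, num_patches, seed=42):
    """Deterministically choose an exemplar patch for an output tile.

    Because the choice depends only on (tile_x, tile_y, seed), any tile can be
    re-synthesised later without storing the previously generated texture.
    """
    key = f"{seed}:{tile_x}:{tile_y}".encode()
    digest = hashlib.blake2b(key, digest_size=8).digest()
    return int.from_bytes(digest, "little") % num_patches

# The same tile always maps to the same patch, with no cache needed.
assert patch_index_for_tile(3, 7, 64) == patch_index_for_tile(3, 7, 64)
print([patch_index_for_tile(x, 0, 64) for x in range(8)])
```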

    Real-time transition texture synthesis for terrains.

    Depicting the transitions where differing material textures meet on a terrain surface presents a unique set of challenges in the field of real-time rendering. Natural landscapes are inherently irregular and composed of complex interactions between many different material types of effectively endless detail and variation. Although consumer-grade graphics hardware becomes increasingly powerful with each successive generation, terrain texturing remains a trade-off between realism and the computational resources available. Technological constraints aside, there is still the challenge of generating the texture resources to represent terrain surfaces, which can often span many hundreds or even thousands of square kilometres. To produce such textures by hand is often impractical when operating on a restricted budget of time and funding. This thesis presents two novel algorithms for generating texture transitions in real time using automated processes. The first algorithm, Feature-Based Probability Blending (FBPB), automates the task of generating transitions between material textures containing salient features. As such features protrude through the terrain surface, FBPB ensures that the topography of these features is maintained at transitions in a realistic manner. The transitions themselves are generated using a probabilistic process that also dynamically adds wear and tear to introduce high frequency detail and irregularity at the transition contour. The second algorithm, Dynamic Patch Transitions (DPT), extends FBPB by applying the probabilistic transition approach to material textures that contain no salient features. By breaking up texture space into a series of layered patches that are either rendered or discarded on a probabilistic basis, the contour of the transition is greatly increased in resolution and irregularity. When used in conjunction with high frequency detail techniques, such as alpha masking, DPT is capable of producing endless, detailed, irregular transitions without the need for artistic input.
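    As described, DPT keeps or discards each patch with a probability that falls off across the transition region, producing an irregular contour. The following toy sketch shows that idea on a 2D grid; the sigmoid falloff and all names are my assumptions, not the paper's exact formulation.

```python
import numpy as np

def transition_mask(width, height, boundary_x, falloff=8.0, seed=0):
    """Probabilistically keep or discard patches near a vertical material boundary.

    Patches well left of boundary_x are almost always kept, patches well right
    are almost always discarded, and the contour in between becomes irregular
    because each patch draws its own random number.
    """
    rng = np.random.default_rng(seed)
    xs = np.arange(width)[None, :].repeat(height, axis=0)
    # Keep probability decays smoothly with distance past the boundary.
    keep_prob = 1.0 / (1.0 + np.exp((xs - boundary_x) / falloff))
    return rng.random((height, width)) < keep_prob

mask = transition_mask(64, 16, boundary_x=32)
for row in mask[:4]:
    print("".join("#" if keep else "." for keep in row))
```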

    A framework for local terrain deformation based on diffusion theory

    Terrains have a key role in making outdoor virtual scenes believable and immersive, as they form the support for every other natural element in the scene. Although important, terrains are often given limited interactivity in real-time applications. However, in nature, terrains are dynamic and interact with the rest of the environment, changing shape on different levels, from tracks left by a person running on a gravel soil (micro-scale) to avalanches on the side of a mountain (macro-scale). The challenge in representing dynamic terrains correctly is that the soil that forms them is vastly heterogeneous and behaves differently depending on its composition. This heterogeneity introduces difficulties at different levels in dynamic terrain simulations, from modelling the large number of different elements that compose the soil to simulating their dynamic behaviour. This work presents a novel framework to simulate multi-material dynamic terrains by taking into account the soil composition and its heterogeneity. In the proposed framework, soil information is obtained from a material description map applied to the terrain mesh. This information is used to compute deformations in the area of interaction using a novel mathematical model based on diffusion theory. The deformations are applied to the terrain mesh in different ways depending on the distance of the area of interaction from the camera and the soil material. Deformations away from the camera are simulated by dynamically displacing normals, while deformations in a neighbourhood of the camera are represented by displacing the terrain mesh, which is locally tessellated to better fit the displacement. For gravel based soils, terrain details are added near the camera by reconstructing the meshes of the small rocks from the texture image, thus simulating both the micro- and macro-structure of the terrain. The outcome of the framework is a realistic, interactive dynamic terrain animation in real time.
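    The abstract's deformation model is diffusion-based. As a loose illustration of how a diffusion step can redistribute displaced material on a heightmap, the sketch below presses a footprint into the terrain and then runs a generic explicit finite-difference diffusion step; this is my simplification under that assumption, not the paper's specific model.

```python
import numpy as np

def deform_and_diffuse(height, footprint, depth=0.2, rate=0.2, steps=10):
    """Press a footprint into a heightmap, then diffuse the displaced material.

    height    : 2D array of terrain heights
    footprint : boolean mask of the contact area (e.g. under a foot or wheel)
    depth     : how far the contact area is pushed down
    rate      : diffusion coefficient per step (kept small for stability)
    steps     : number of explicit diffusion iterations
    """
    h = height.astype(float).copy()
    h[footprint] -= depth
    for _ in range(steps):
        # 5-point Laplacian (wrap-around boundary for brevity);
        # material flows from high to low regions each iteration.
        lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
               np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4.0 * h)
        h += rate * lap
    return h

terrain = np.zeros((32, 32))
contact = np.zeros_like(terrain, dtype=bool)
contact[14:18, 14:18] = True
print(deform_and_diffuse(terrain, contact)[16, 12:20].round(3))
```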

    State of the Art in Example-based Texture Synthesis

    Recent years have witnessed significant progress in example-based texture synthesis algorithms. Given an example texture, these methods produce a larger texture that is tailored to the user's needs. In this state-of-the-art report, we aim to achieve three goals: (1) provide a tutorial that is easy to follow for readers who are not already familiar with the subject, (2) make a comprehensive survey and comparisons of different methods, and (3) sketch a vision for future work that can help motivate and guide readers that are interested in texture synthesis research. We cover fundamental algorithms as well as extensions and applications of texture synthesis.

    Autocomplete element fields and interactive synthesis system development for aggregate applications.

    Aggregate elements are ubiquitous in natural and man-made objects and have played an important role in graphics, design, and visualization applications. However, efficiently arranging these aggregate elements with varying anisotropy and deformability remains challenging, particularly in 3D environments. To overcome this issue, we introduce autocomplete element fields, including an element distribution formulation that can effectively cope with diverse output compositions with controllable element distributions at high production quality and efficiency, as well as an element field formulation that can smoothly orient all the synthesized elements following given inputs, such as scalar or direction fields. The proposed formulations can not only properly synthesize distinct types of aggregate elements across various domain spaces without incorporating any extra process, but also directly compute complete element fields from partial specifications without requiring fully specified inputs in any algorithmic step. Furthermore, in order to reduce input workload and enhance output quality for better usability and interactivity, we further develop an interactive synthesis system, centered on the idea of our autocomplete element fields, to facilitate the creation of element aggregations within different output domains. Analogous to conventional painting workflows, through a palette-based brushing interface, users can interactively mix and place a few aggregate elements over a brushing canvas and let our system automatically populate more aggregate elements with intended orientations and scales for the rest of the outcome. The developed system can empower users to iteratively design a variety of novel mixtures with reduced workload and enhanced quality under an intuitive and user-friendly brushing workflow, without requiring a great deal of manual labor or technical expertise. We validate our prototype system with a pilot user study and exhibit its application in 2D graphic design, 3D surface collage, and 3D aggregate modeling.
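    A central claim above is that a complete element field can be computed from partial user specifications. As a very rough stand-in for that idea (my illustration, not the paper's formulation), the sketch below fills in missing 2D orientations by inverse-distance weighting of a few user-placed constraints, blending them as unit vectors so that angles average sensibly.

```python
import numpy as np

def complete_orientation_field(grid_shape, constraints, power=2.0):
    """Fill a 2D orientation field from sparse user constraints.

    constraints : list of ((row, col), angle_in_radians) pairs placed by the user
    Returns an array of angles; each unconstrained cell is an inverse-distance
    weighted blend of the constraint directions.
    """
    rows, cols = np.indices(grid_shape)
    vec = np.zeros(grid_shape + (2,))
    for (r, c), angle in constraints:
        dist = np.hypot(rows - r, cols - c) + 1e-6   # avoid division by zero
        w = dist ** -power
        vec[..., 0] += w * np.cos(angle)
        vec[..., 1] += w * np.sin(angle)
    return np.arctan2(vec[..., 1], vec[..., 0])

# Two brush strokes: horizontal at the top-left corner, vertical at the bottom-right.
field = complete_orientation_field((8, 8), [((0, 0), 0.0), ((7, 7), np.pi / 2)])
print(np.degrees(field).round(0))
```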

    A Framework for Dynamic Terrain with Application in Off-road Ground Vehicle Simulations

    The dissertation develops a framework for the visualization of dynamic terrains for use in interactive real-time 3D systems. Terrain visualization techniques may be classified as either static or dynamic. Static terrain solutions simulate rigid surface types exclusively, whereas dynamic solutions can also represent non-rigid surfaces. Systems that employ a static terrain approach lack realism due to their rigid nature. Disregarding accurate representation of terrain surface interaction is often rationalized by the inherent difficulties of providing runtime dynamism. Nonetheless, dynamic terrain systems are a more correct solution because they allow the terrain database to be modified at run time to deform the surface. Many established techniques in terrain visualization rely on invalid assumptions and weak computational models that hinder the use of dynamic terrain. Moreover, many existing techniques do not exploit the capabilities offered by current computer hardware. In this research, we present a component framework for terrain visualization that is useful in research, entertainment, and simulation systems. In addition, we present a novel method for deforming the terrain that can be used in real-time, interactive systems. The development of a component framework unifies disparate works under a single architecture. The high-level nature of the framework makes it flexible and adaptable for developing a variety of systems, independent of the static or dynamic nature of the solution. Currently, there are only a handful of documented deformation techniques and, in particular, none make explicit use of graphics hardware. The approach developed by this research offloads extra work to the graphics processing unit in an effort to alleviate the overhead associated with deforming the terrain. Off-road ground vehicle simulation is used as an application domain to demonstrate the practical nature of the framework and the deformation technique. In order to realistically simulate terrain surface interactivity with the vehicle, the solution balances visual fidelity and speed. Accurately depicting terrain surface interactivity in off-road ground vehicle simulations improves visual realism, thereby increasing the significance and worth of the application. Systems in academia, government, and commercial institutes can make use of the research findings to achieve the real-time display of interactive terrain surfaces.
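    The deformation technique described above pushes the heightmap update onto graphics hardware. The shader itself is beyond a short sketch, but the data flow it relies on, a displacement map accumulated where the vehicle has passed and subtracted from the base heightmap, can be illustrated in a few lines. This is my simplification of that general approach, not the dissertation's implementation.

```python
import numpy as np

def stamp_track(displacement, center, radius, depth):
    """Accumulate a circular rut into a displacement map.

    In a GPU implementation this accumulation would be rendered into a texture
    and applied per vertex in a shader; here it is done directly in NumPy.
    """
    ys, xs = np.indices(displacement.shape)
    dist = np.hypot(ys - center[0], xs - center[1])
    # Smooth, bounded stamp: full depth at the centre, zero at the rim.
    stamp = depth * np.clip(1.0 - dist / radius, 0.0, 1.0)
    np.maximum(displacement, stamp, out=displacement)  # ruts do not stack without limit

base_height = np.full((64, 64), 5.0)
ruts = np.zeros_like(base_height)
for x in range(8, 56, 4):                  # a wheel rolling across the terrain
    stamp_track(ruts, center=(32, x), radius=3.0, depth=0.3)
deformed = base_height - ruts
print(deformed[32, 6:14].round(2))
```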

    PatchTable: efficient patch queries for large datasets and applications

    This paper presents a data structure that reduces approximate nearest neighbor query times for image patches in large datasets. Previous work in texture synthesis has demonstrated real-time synthesis from small exemplar textures. However, high performance has proved elusive for modern patch-based optimization techniques, which frequently use many exemplar images in the tens of megapixels or above. Our new algorithm, PatchTable, offloads as much of the computation as possible to a pre-computation stage that takes modest time, so patch queries can be as efficient as possible. There are three key insights behind our algorithm: (1) a lookup table similar to locality-sensitive hashing can be precomputed and used to seed sufficiently good initial patch correspondences during querying, (2) missing entries in the table can be filled during pre-computation with our fast Voronoi transform, and (3) the initially seeded correspondences can be improved with a precomputed k-nearest neighbors mapping. We show experimentally that this accelerates the patch query operation by up to 9x over k-coherence, up to 12x over TreeCANN, and up to 200x over PatchMatch. Our fast algorithm allows us to explore efficient and practical imaging and computational photography applications. We show results for artistic video stylization, light field super-resolution, and multi-image inpainting.
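    Of the three insights listed above, the first one, a precomputed lookup table that seeds patch correspondences, is easy to illustrate in miniature. The sketch below quantises randomly projected patch descriptors into grid cells at build time and scans only one bucket at query time; the bucket layout and projection choice are mine, not PatchTable's actual construction, and the table-filling and k-NN refinement steps are omitted.

```python
import numpy as np

class TinyPatchTable:
    """Toy version of a precomputed patch lookup table.

    Patches are projected to a few dimensions, quantised into grid cells, and
    stored in a dict; a query hashes into the same cell and scans only the
    candidates in that bucket (falling back to brute force on an empty cell).
    """
    def __init__(self, patches, dims=4, bins=8, seed=0):
        rng = np.random.default_rng(seed)
        self.patches = patches
        self.proj = rng.normal(size=(patches.shape[1], dims))
        coords = patches @ self.proj
        self.lo, self.hi = coords.min(0), coords.max(0)
        self.bins = bins
        self.table = {}
        for idx, key in enumerate(map(tuple, self._cell(coords))):
            self.table.setdefault(key, []).append(idx)

    def _cell(self, coords):
        scaled = (coords - self.lo) / (self.hi - self.lo + 1e-9)
        return np.clip((scaled * self.bins).astype(int), 0, self.bins - 1)

    def query(self, patch):
        key = tuple(self._cell((patch @ self.proj)[None])[0])
        candidates = list(self.table.get(key, range(len(self.patches))))
        dists = np.linalg.norm(self.patches[candidates] - patch, axis=1)
        return candidates[int(np.argmin(dists))]

rng = np.random.default_rng(1)
data = rng.random((5000, 75))          # 5x5 RGB patches flattened to 75 dims
table = TinyPatchTable(data)
print(table.query(data[1234]))          # returns 1234: the query is its own nearest patch
```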