921 research outputs found
Utilizing a 3D game engine to develop a virtual design review system
A design review process is one in which information is exchanged between designers and design reviewers to resolve potential design-related issues and to ensure that the interests and goals of the owner are met. Effective execution of design review minimizes potential errors and conflicts, reduces review time, shortens the project life-cycle, allows earlier occupancy, and ultimately translates into significant total project savings for the owner. However, current design review methods still rely heavily on 2D paper-based formats, are sequential, and lack a central, integrated information base for efficient exchange and flow of information. There is thus a need for a new medium that allows 3D visualization of designs, collaboration among designers and design reviewers, and early and easy access to design review information. This paper documents the innovative use of a 3D game engine, the Torque Game Engine, as the underlying tool and enabling technology for a design review system: the Virtual Design Review System (VDRS) for architectural designs. Two major elements are incorporated: 1) a 3D game engine as the driving tool for the development and implementation of design review processes, and 2) a virtual environment as the medium for design review, where visualization of design and design review information is based on sound principles of GUI design. The development of the VDRS involves two major phases: first, the creation of the assets and the assembly of the virtual environment, and second, the modification of existing functions or the introduction of new functionality through programming of the 3D game engine in order to support design review in a virtual environment. The features included in the VDRS are database support, real-time collaboration across a network, viewing and navigation modes, 3D object manipulation, parametric input, a GUI, and organization of 3D objects.
Compression, Modeling, and Real-Time Rendering of Realistic Materials and Objects
The realism of a scene basically depends on the quality of the geometry, the
illumination and the materials that are used. Whereas many sources for
the creation of three-dimensional geometry exist and numerous algorithms
for the approximation of global illumination were presented, the acquisition
and rendering of realistic materials remains a challenging problem.
Realistic materials are very important in computer graphics, because
they describe the reflectance properties of surfaces, which are based on the
interaction of light and matter. In the real world, an enormous diversity of
materials can be found, comprising very different properties. One important
objective in computer graphics is to understand these processes, to formalize
them and to finally simulate them.
For this purpose various analytical models already exist, but their
parameterization remains difficult, as the number of parameters is usually
very high. They also fail for very complex materials that occur in the real
world. Measured materials, on the other hand, suffer from long acquisition
times and huge input data sizes. Although very efficient statistical
compression algorithms have been presented, most of them do not allow for
editability, such as altering the diffuse color or mesostructure. In this
thesis, a material representation is introduced that makes it possible to edit
these features, so that the acquisition results can be re-used to easily and
quickly create variations of the original material. These variations may be
subtle or substantial, allowing for a wide spectrum of material appearances.
The approach presented in this thesis is not based on compression, but on
a decomposition of the surface into several materials with different reflection
properties. Based on a microfacet model, the light-matter interaction is
represented by a function that can be stored in an ordinary two-dimensional
texture. Additionally, depth information, local rotations, and the diffuse
color are stored in these textures. Because some of the original information
is inevitably lost in the decomposition, an algorithm for the efficient
simulation of subsurface scattering is presented as well.
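In the microfacet family of models mentioned above, the per-texel light-matter function is typically a product of a normal distribution, a Fresnel term, and a shadowing/masking term. As a hedged illustration, a generic isotropic GGX-style lobe (not the thesis's actual formulation) might be evaluated like this:

```python
import math

def ggx_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0):
    """Generic isotropic GGX-style specular lobe (illustrative only)."""
    alpha = roughness * roughness
    a2 = alpha * alpha
    # GGX (Trowbridge-Reitz) normal distribution term
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    d = a2 / (math.pi * denom * denom)
    # Schlick approximation of the Fresnel term
    f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5
    # Schlick-GGX shadowing/masking term for light and view directions
    k = alpha / 2.0
    g_l = n_dot_l / (n_dot_l * (1.0 - k) + k)
    g_v = n_dot_v / (n_dot_v * (1.0 - k) + k)
    return d * f * g_l * g_v / (4.0 * n_dot_l * n_dot_v)
```

Storing the inputs of such a function (roughness, base reflectance, local rotation) in ordinary 2D textures is what makes the decomposition editable.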
Another contribution of this work is a novel perception-based simplification
metric that takes the material of an object into account. This metric incorporates
features of the human visual system, for example trichromatic color
perception and reduced resolution. The proposed metric allows for more
aggressive simplification in regions where geometric metrics do not simplify.
Multi-view Inverse Rendering for Large-scale Real-world Indoor Scenes
We present a multi-view inverse rendering method for large-scale real-world
indoor scenes that reconstructs global illumination and physically-reasonable
SVBRDFs. Unlike previous representations, where the global illumination of
large scenes is simplified as multiple environment maps, we propose a compact
representation called Texture-based Lighting (TBL). It consists of 3D meshes and
HDR textures, and efficiently models direct and infinite-bounce indirect
lighting of the entire large scene. Based on TBL, we further propose a hybrid
lighting representation with precomputed irradiance, which significantly
improves efficiency and alleviates rendering noise during material
optimization. To physically disentangle the ambiguity between materials, we
propose a three-stage material optimization strategy based on the priors of
semantic segmentation and room segmentation. Extensive experiments show that
the proposed method outperforms the state of the art quantitatively and
qualitatively, and enables physically-reasonable mixed-reality applications
such as material editing, editable novel view synthesis and relighting. The
project page is at https://lzleejean.github.io/TexIR.
Application of 3ds Max for 3D Modelling and Rendering
In this article, the application of 3ds Max for 3D modelling and rendering of a car model is described. The process of creating the 3D car model is explained, including setting up the references, working with editable poly, detailing the car interior, and using the TurboSmooth and Symmetry modifiers. The manner in which materials are applied to the model is described, as well as lighting the scene and setting up the render. The rendering methods and techniques are described, too. Final render results from several rendering plugins, such as V-Ray, Mental Ray, Iray, Scanline, Maxwell, Corona, Octane and LuxRender, are presented and compared.
Dynamic Voxel Based Terrain Generation
This project is an implementation of an editable terrain system. By maintaining an octree of volumetric data and performing mesh creation on the GPU, the program allows free editing of the surroundings, which is then reflected in real time. This enables real-time applications to have terrain that changes depending on how the user interacts with it.
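The core idea of such a system is that edits are writes into a volumetric density field, from which the mesh is re-extracted. A minimal dense-grid sketch in Python (hypothetical; the project itself uses an octree and GPU meshing) might look like:

```python
class VoxelTerrain:
    """Minimal sparse voxel terrain with destructive sphere edits
    (illustrative sketch, not the project's octree/GPU implementation)."""

    def __init__(self, size):
        self.size = size
        # density > 0 means solid; start with a flat slab half the height
        self.density = {(x, y, z): 1.0 if y < size // 2 else -1.0
                        for x in range(size)
                        for y in range(size)
                        for z in range(size)}

    def carve_sphere(self, cx, cy, cz, r):
        """Subtract a sphere of material, as a player digging would."""
        for (x, y, z) in self.density:
            d2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
            if d2 <= r * r:
                self.density[(x, y, z)] = -1.0

    def solid_count(self):
        """Number of solid voxels; a real system would re-mesh instead."""
        return sum(1 for v in self.density.values() if v > 0.0)
```

After each `carve_sphere` call, only the octree nodes overlapping the edited sphere need their mesh regenerated, which is what makes real-time feedback feasible.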
A Precomputed Polynomial Representation for Interactive BRDF Editing with Global Illumination
The ability to interactively edit BRDFs in their final placement within a computer graphics scene is vital to making informed choices for material properties. We significantly extend previous work on BRDF editing for static scenes (with fixed lighting and view) by developing a precomputed polynomial representation that enables interactive BRDF editing with global illumination. Unlike previous precomputation-based rendering techniques, the image is not linear in the BRDF when considering interreflections. We introduce a framework for precomputing a multi-bounce tensor of polynomial coefficients that encapsulates the nonlinear nature of the task. Significant reductions in complexity are achieved by leveraging the low-frequency nature of indirect light. We use a high-quality representation for the BRDFs at the first bounce from the eye, and lower-frequency (often diffuse) versions for further bounces. This approximation correctly captures the general global illumination in a scene, including color bleeding, near-field object reflections, and even caustics. We adapt Monte Carlo path tracing to precompute the tensor of coefficients for the BRDF basis functions. At runtime, the high-dimensional tensors reduce to a simple dot product at each pixel for rendering. We present a number of examples of editing BRDFs in complex scenes, with interactive feedback rendered with global illumination.
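The runtime step described above can be pictured as evaluating, per pixel, a polynomial in the user's current BRDF-basis weights, whose coefficients were precomputed offline. A toy sketch (the data layout and names here are hypothetical, not the paper's):

```python
import itertools

def render_pixel(tensor, basis_weights, max_bounce):
    """Evaluate one pixel of a precomputed polynomial representation.

    tensor maps an n-tuple of BRDF-basis indices to a precomputed transport
    coefficient (from offline path tracing); basis_weights is the user's
    current BRDF edit expressed in that basis. Illustrative layout only.
    """
    value = 0.0
    num_basis = len(basis_weights)
    for n in range(1, max_bounce + 1):
        # Degree-n terms model light that interacted with n surfaces
        for idx in itertools.product(range(num_basis), repeat=n):
            coeff = tensor.get(idx, 0.0)
            w = 1.0
            for k in idx:
                w *= basis_weights[k]
            value += coeff * w
    return value
```

Editing a material only changes `basis_weights`, so the expensive `tensor` stays fixed and each pixel update is a cheap weighted sum, which is what makes interactive feedback possible.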
Unsupervised three-dimensional reconstruction of small rocks from a single two-dimensional image
Surfaces covered with pebbles and small rocks can often be found in nature and in human-shaped environments.
Generating an accurate three-dimensional model of such surfaces from a reference image can be challenging,
especially if one wants to be able to animate each pebble individually. Undertaking this kind of job manually
is time-consuming and impractical for dynamic terrain animations.
The method described in this paper allows unsupervised automatic generation of three-dimensional textured rocks
from a two-dimensional image, aiming to match the original image as closely as possible.
Cyclical Flow: Spatial Synthesis Sound Toy as Multichannel Composition Tool
This paper outlines and discusses an interactive system designed as a playful 'sound toy' for spatial composition. Proposed models of composition and design in this context are discussed. The design, functionality and application of the software system are then outlined and summarised. The paper concludes with observations from use and a discussion of future developments.
Character customization: Animated hair and clothing
Final degree project in Video Game Design and Development. Code: VJ1241. Academic year: 2018/2019. This project consists of designing and implementing a 3D female character editor.
It focuses on modelling and animating the female character, her hairstyle and clothes. The
editor will be developed using the Unity 3D game engine. It will consist of an interface
that allows changing skin and eye colour, hair style and colour and, lastly, the clothes the
character is to wear, chosen from a catalogue of predefined models. With each change, the
character will respond with an animation in order to improve the experience of perceiving
the final style of the character.
- …