
    Polygons, points, or voxels?: Stimuli selection for crowdsourcing aesthetics preferences of 3D shape pairs

    Visual aesthetics is one of the fundamental perceptual properties of 3D shapes. Since the perception of shape aesthetics can be subjective, we take a data-driven approach and consider human preferences of shape aesthetics. Previous work has considered a pairwise data collection approach, in which pairs of 3D shapes are shown to human participants, who are asked to choose from each pair the shape they perceive to be more aesthetic. In this research, we study whether the 3D modeling representation (e.g. polygons, points, or voxels) affects how people perceive the aesthetics of shape pairs. We find surprising results: for example, single-view and multi-view presentations of shape pairs lead to similar user aesthetics choices, and relatively low-resolution points or voxels are comparable to polygon meshes, as they do not lead to significantly different user aesthetics choices. Our results have implications for the data collection process of pairwise aesthetics data and the further use of such data in shape modeling problems.
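Pairwise choices like those described above are typically aggregated into per-shape aesthetics scores. A minimal sketch, assuming a standard Bradley-Terry preference model fitted with the usual minorization-maximization updates (the paper itself may use a different aggregation):

```python
def bradley_terry(pairs, n_items, iters=100):
    """Estimate latent aesthetics scores from pairwise choices.

    pairs: list of (winner, loser) index tuples, one per crowd judgment.
    Returns one non-negative score per item, normalized to sum to n_items.
    """
    wins = [0] * n_items
    for winner, _ in pairs:
        wins[winner] += 1

    p = [1.0] * n_items
    for _ in range(iters):
        new_p = []
        for i in range(n_items):
            # Sum 1 / (p_i + p_j) over every comparison involving item i.
            denom = 0.0
            for w, l in pairs:
                if i in (w, l):
                    j = l if i == w else w
                    denom += 1.0 / (p[i] + p[j])
            new_p.append(wins[i] / denom if denom else p[i])
        # Renormalize so scores stay on a comparable scale.
        s = sum(new_p)
        p = [x * n_items / s for x in new_p]
    return p

# Toy data: shape 0 beats shape 1 twice, shape 1 beats shape 2 once.
scores = bradley_terry([(0, 1), (0, 1), (1, 2)], 3)
```

With this data the scores come out ordered `scores[0] > scores[1] > scores[2]`, matching the observed preferences.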

    Towards Predictive Rendering in Virtual Reality

    The generation of predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding goal in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery remains an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research. This thesis also contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies in the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The techniques proposed by this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved before truly predictive image generation is achieved.

    Effects of Rendering on Shape Perception in Automobile Design

    The goal of this project was to determine whether advanced rendering methods such as global illumination allow more accurate discrimination of shape differences than standard rendering methods such as OpenGL. To address these questions, we conducted two psychophysical experiments to measure observers' sensitivity to shape differences between a physical model and rendered images of the model. Two results stand out:
    • The rendering method used has a significant effect on the ability to discriminate shape. In particular, under the conditions tested, global illumination rendering improves sensitivity to shape differences.
    • Viewpoint also appears to have an effect on the ability to discriminate shape. In most of the cases studied, sensitivity to small shape variations was poorer when the rendering and model viewpoints differed.
    The results of this work have important implications for our understanding of human shape perception and for the development of rendering tools for computer-aided design.