    Bas-relief modeling from normal images with intuitive styles

    Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper is to generate a discontinuity-free depth field that strongly compresses the depth range while preserving, or even enhancing, fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is typically generated from 3D models or through other techniques described in this paper. The bas-relief style is controlled by choosing a parameter and a target height for each layer. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Unlike previous work, our method allows bas-reliefs to be designed freely in normal-image space rather than in object space, which makes it possible to use any popular image editing tool for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.
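
    A minimal sketch of the kind of reconstruction this abstract describes: recovering a depth field from a normal image by solving a sparse linear (Poisson) system over the gradients the normals imply. The gradient-attenuation step and the compress parameter below are illustrative stand-ins for the paper's per-layer style controls, not its actual formulation.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def depth_from_normals(normals, compress=0.2):
            """normals: (H, W, 3) unit normal image -> (H, W) depth field."""
            h, w, _ = normals.shape
            nz = np.clip(normals[..., 2], 1e-3, None)   # guard against n_z ~ 0
            p = -normals[..., 0] / nz                   # dz/dx implied by normals
            q = -normals[..., 1] / nz                   # dz/dy implied by normals
            # Attenuate large gradients to compress the overall depth range
            # while leaving small (detail) gradients relatively untouched.
            mag = np.hypot(p, q) + 1e-8
            att = (1.0 + mag) ** (-compress)
            p, q = p * att, q * att
            # Divergence of the target gradient field (backward differences).
            div = np.zeros((h, w))
            div[:, 1:] += p[:, 1:] - p[:, :-1]
            div[1:, :] += q[1:, :] - q[:-1, :]
            # Sparse 5-point Laplacian with Neumann-like boundary handling.
            idx = np.arange(h * w).reshape(h, w)
            ys, xs = np.mgrid[0:h, 0:w]
            rows, cols, vals = [], [], []
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                yn, xn = ys + dy, xs + dx
                ok = (yn >= 0) & (yn < h) & (xn >= 0) & (xn < w)
                rows += [idx[ok], idx[ok]]
                cols += [idx[ok], idx[yn[ok], xn[ok]]]
                vals += [np.ones(ok.sum()), -np.ones(ok.sum())]
            A = sp.csr_matrix((np.concatenate(vals),
                               (np.concatenate(rows), np.concatenate(cols))),
                              shape=(h * w, h * w))
            A = (A + 1e-6 * sp.eye(h * w)).tocsc()   # pin the free constant
            z = spla.spsolve(A, -div.ravel())
            return z.reshape(h, w)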

    Analysis of Bas-Relief Generation Techniques

    Simplifying the process of generating relief sculptures has been an active topic of research over the past decade. A relief is a type of sculpture that does not fully extend into three-dimensional space. Instead, its details are carved into a flat surface, such as wood or stone, so that slight elevations from the flat plane define the subject of the sculpture. When viewed straight on, a relief can look like a full sculpture or statue, in that a full sense of depth can be perceived. Creating such a model manually is a tedious and difficult process, akin to the challenges a painter faces when designing a convincing painting. As with painting, certain digital tools (most commonly 3D modeling programs) can make the process easier, but obtaining sufficient detail can still take a great deal of time. To further simplify relief generation, a sizable amount of research has gone into semi-automated processes for creating reliefs from different types of models. These methods vary in many ways, including the type of input used, the computational time required, and the quality of the resulting model. Performance typically depends on the operations applied to the input model and on user-specified parameters that modify its appearance. In this thesis, we address several related goals. First, we analyze previous work in the field and briefly summarize the procedures to highlight the variety of ways the problem has been solved. We then examine specific algorithms for generating reliefs from 2D and 3D models. After explaining two of each type, a "basic" approach and a more sophisticated one, we compare the algorithms by implementation difficulty, quality of results, and processing time. The final section presents further sample results from these algorithms and suggests possible enhancements that could be applied in continuing research on the topic.

    Digital relief generation from 3D models

    It is difficult to extend image-based relief generation to high-relief generation, as images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, it is necessary to extract height fields from the model, but this alone can produce only bas-reliefs. To overcome this problem, an efficient method is proposed to generate both bas-reliefs and high-reliefs directly from 3D meshes. To produce visually appropriate relief features, the 3D meshes are first scaled. 3D unsharp masking is used to enhance the visual features of the mesh, and average smoothing and Laplacian smoothing are applied to achieve better smoothing results. A nonlinear variable scaling scheme is then employed to generate the final bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions, with different poses and combinations of multiple 3D models, and the generated relief models can be printed on 3D printers. The proposed method thus provides an efficient and effective means of generating both high-reliefs and bas-reliefs under appropriate scaling factors.
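
    The two key operations, unsharp masking for feature enhancement and nonlinear variable scaling for depth compression, can be illustrated on a height field stand-in for the mesh. A rough sketch; the synthetic height_map, the gain k, and the exponent alpha are assumed placeholders, not the paper's notation.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def unsharp_enhance(height, k=1.5, size=5):
            """Boost fine features: add back the detail removed by smoothing."""
            smooth = uniform_filter(height, size=size)   # average smoothing
            return smooth + k * (height - smooth)

        def nonlinear_scale(height, alpha=0.7, target=1.0):
            """Compress large heights more than small ones (alpha < 1)."""
            hmin, hmax = height.min(), height.max()
            h = (height - hmin) / (hmax - hmin + 1e-8)
            return target * h ** alpha               # target: final relief height

        height_map = np.random.rand(256, 256)  # stand-in for a rendered height field
        relief = nonlinear_scale(unsharp_enhance(height_map))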

    Photo2Relief: Let Human in the Photograph Stand Out

    In this paper, we propose a technique for making humans in photographs protrude like reliefs. Unlike previous methods, which mostly focus on the face and head, our method aims to generate artworks that depict the whole-body activity of the subject. One challenge is that there is no ground truth for supervised deep learning. We introduce a sigmoid variant function to manipulate gradients tactfully, and train our neural networks with a loss function defined in the gradient domain. The second challenge is that real photographs are taken under widely varying lighting conditions. We address this with an image-based rendering technique, acquiring rendered images and depth data under different lighting conditions. To create a clear division of labor among network modules, a two-scale architecture is proposed to create high-quality reliefs from a single photograph. Extensive experimental results on a variety of scenes show that our method is a highly effective solution for generating digital 2.5D artwork from photographs.
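
    The gradient-domain loss and the sigmoid-based gradient manipulation can be sketched as follows; the exact "sigmoid variant function" is not given in the abstract, so squash below is an illustrative stand-in, written with PyTorch.

        import torch
        import torch.nn.functional as F

        def image_grads(x):
            """Forward-difference gradients of a (B, 1, H, W) depth map."""
            gx = x[..., :, 1:] - x[..., :, :-1]
            gy = x[..., 1:, :] - x[..., :-1, :]
            return gx, gy

        def squash(g, a=5.0):
            """Odd sigmoid-style remap: compresses large gradients, keeps sign."""
            return 2.0 / (1.0 + torch.exp(-a * g)) - 1.0

        def gradient_domain_loss(pred_depth, target_depth):
            """Supervise predicted gradients against squashed target gradients,
            pushing the network toward a depth-compressed, relief-like output."""
            pgx, pgy = image_grads(pred_depth)
            tgx, tgy = image_grads(target_depth)
            return F.l1_loss(pgx, squash(tgx)) + F.l1_loss(pgy, squash(tgy))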

    Real-time bas-relief generation from depth-and-normal maps on GPU

    Designing a bas-relief from a 3D scene is an inherently interactive task in many scenarios: the user normally needs instant feedback to select a proper viewpoint, yet current methods are too slow to support this interaction. This paper proposes a two-scale bas-relief modeling method that is computationally efficient and makes it easy to produce different styles of bas-relief. The input 3D scene is first rendered into two textures, one recording depth information and the other recording normal information. The depth map is then compressed to produce a base surface with level-of-depth, and the normal map is used to extract local details with two different schemes: one provides the freedom to design bas-reliefs with different visual appearances, while the other controls the level of detail. Finally, the local feature details are added to the base surface to produce the final result. Because it is implemented on graphics hardware, our approach runs in real time. Experiments with a wide range of 3D models and scenes show that our approach can effectively generate digital bas-reliefs in real time.
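
    The two-scale split can be mimicked on the CPU in a few lines; the paper performs the equivalent steps in shaders for real-time rates. depth_map and normal_map stand for the two rendered textures, and the quantized base plus high-passed detail below are simplified stand-ins for the paper's two detail schemes.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def compress_depth(depth, levels=8):
            """Quantize normalized depth into a level-of-depth base surface."""
            d = (depth - depth.min()) / (np.ptp(depth) + 1e-8)
            return np.round(d * levels) / levels

        def detail_from_normals(normals, gain=0.05, sigma=8):
            """Crude detail layer: integrate normal-implied x-gradients, then
            high-pass so only local features (not global shape) remain."""
            nz = np.clip(normals[..., 2], 1e-3, None)
            p = -normals[..., 0] / nz
            raw = np.cumsum(p, axis=1)                 # rough 1-D integration
            return gain * (raw - gaussian_filter(raw, sigma=sigma))

        # depth_map: (H, W); normal_map: (H, W, 3) rendered textures, e.g.:
        # bas_relief = compress_depth(depth_map) + detail_from_normals(normal_map)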

    Bas-relief modelling from enriched detail and geometry with deep normal transfer

    Richness of detail and geometry is essential to bas-relief modelling. However, existing image-based and model-based bas-relief modelling techniques commonly suffer from monotonous detail or geometry loss. In this paper, we introduce a new bas-relief modelling framework that provides detail abundance, with visual-attention-based mask generation, and geometry preservation, building on our two key contributions. For detail richness, we propose a novel semantic neural network for normal transfer that enriches the texture styles on bas-reliefs. For geometry preservation, we introduce a normal decomposition scheme based on the Domain Transform Recursive Filter (DTRF). Experimental results demonstrate that our approach is advantageous in producing bas-relief models with both fine details and preserved geometry.
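
    The DTRF-based normal decomposition can be illustrated with a single-pass, per-row domain transform recursive filter (after Gastal and Oliveira, 2011); a full implementation alternates horizontal and vertical passes over several iterations. normal_map is an assumed (H, W, 3) input.

        import numpy as np

        def dtrf_rows(img, sigma_s=16.0, sigma_r=0.3):
            """Edge-aware smoothing of an (H, W, C) image along each row."""
            # Domain transform: distances grow where the signal changes
            # sharply, so the recursive filter stops smoothing across edges.
            diff = np.abs(np.diff(img, axis=1)).sum(axis=2)      # (H, W-1)
            dist = 1.0 + (sigma_s / sigma_r) * diff
            a = np.exp(-np.sqrt(2.0) / sigma_s)
            out = img.astype(float).copy()
            for x in range(1, img.shape[1]):                     # left-to-right
                w = (a ** dist[:, x - 1])[:, None]
                out[:, x] = (1 - w) * img[:, x] + w * out[:, x - 1]
            for x in range(img.shape[1] - 2, -1, -1):            # right-to-left
                w = (a ** dist[:, x])[:, None]
                out[:, x] = (1 - w) * out[:, x] + w * out[:, x + 1]
            return out

        # base   = dtrf_rows(normal_map)    # low-frequency geometry component
        # detail = normal_map - base        # high-frequency texture component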

    State of the Art on Stylized Fabrication

    Digital fabrication devices are powerful tools for creating tangible reproductions of 3D digital models. Most available printing technologies aim at producing an accurate copy of a three-dimensional shape. However, fabrication technologies can also be used to create a stylistic representation of a digital shape; we refer to this class of methods as ‘stylized fabrication methods’. These methods abstract geometric and physical features of a given shape to create an unconventional representation, to produce an optical illusion, or to devise a particular interaction with the fabricated model. In this state-of-the-art report, we classify and survey this broad and emerging class of approaches and propose possible directions for future research.

    High Relief from Brush Painting

    Relief is an art form partway between 3D sculpture and 2D painting. We present a novel approach for generating a texture-mapped high-relief model from a single brush painting. Our aim is to extract the brushstrokes from a painting and generate an individual relief proxy for each, rather than recovering an exact depth map from the painting, which is a difficult computer vision problem requiring assumptions that are rarely satisfied. The relief proxies of the brushstrokes are then combined to form a 2.5D high-relief model. To extract brushstrokes from 2D paintings, we apply layer decomposition and stroke segmentation with imposed boundary constraints; the segmented brushstrokes preserve the style of the input painting. By inflating each brushstroke and applying a displacement map to it, the resulting high-relief model of the painting preserves the brushstroke features. We demonstrate that our approach can produce convincing high-reliefs from a variety of paintings (with humans, animals, flowers, etc.). As a secondary application, we show how our brushstroke extraction algorithm can be used for image editing. Our brushstroke extraction algorithm is specifically geared towards paintings in which each brushstroke is drawn purposefully, such as Chinese paintings and rosemaling paintings.
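
    The inflation step, turning a segmented brushstroke into a relief proxy, can be sketched with a distance transform; mask, displacement, and the rounded profile below are assumed placeholders, not the paper's exact construction.

        import numpy as np
        from scipy.ndimage import distance_transform_edt, gaussian_filter

        def inflate_stroke(mask, height=1.0, displacement=None, d_gain=0.1):
            """mask: boolean (H, W) stroke region -> (H, W) relief proxy."""
            d = distance_transform_edt(mask)         # distance to stroke edge
            d = gaussian_filter(d, sigma=2.0)        # soften the ridge line
            proxy = height * np.sqrt(d / (d.max() + 1e-8))   # rounded profile
            if displacement is not None:             # brush-texture detail
                proxy = proxy + d_gain * displacement
            return proxy * mask

        # Per-stroke proxies composite back-to-front into the 2.5D model, e.g.:
        # relief = np.maximum.reduce([inflate_stroke(m) for m in stroke_masks])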