
    A framework for digital sunken relief generation based on 3D geometric models

    Sunken relief is a special art form of sculpture in which the depicted shapes are sunk into a given surface. It is traditionally created by laboriously carving materials such as stone. Sunken reliefs often use engraved lines or strokes to strengthen the impression of a 3D presence and to highlight features that would otherwise remain hidden. In other types of relief, smooth surfaces and their shadows convey such information in a coherent manner. Existing methods for relief generation focus on forming a smooth surface with shallow depth that conveys the presence of 3D figures. Unfortunately, such methods do not serve the art form of sunken relief, as they omit the feature lines. We propose a framework to produce sunken reliefs from a known 3D geometry, which transforms the 3D objects into three layers of input so that contour lines are incorporated seamlessly with the smooth surfaces. The three input layers take advantage of both geometric information and visual cues to assist the relief generation. The framework adapts existing techniques in line drawing and relief generation and combines them organically for this particular purpose.
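
    The three-layer idea can be illustrated with a toy sketch. The Python/numpy code below is only an illustration of the general recipe, not the paper's pipeline: it derives a contour-line mask from depth discontinuities and engraves it into a flattened depth map; the layer definitions, thresholds, and weights are assumptions made for the example.

        import numpy as np

        def sunken_relief_sketch(depth, line_depth=0.05, flatten=0.2, edge_thresh=0.02):
            """Toy sunken-relief construction from a depth map (illustrative only).

            depth       : (H, W) float array, larger = closer to the viewer
            line_depth  : how deep engraved contour lines are cut into the surface
            flatten     : compression factor for the smooth relief layer
            edge_thresh : gradient magnitude above which a pixel counts as a contour
            """
            # Layer 1: a flattened (compressed) version of the depth -> smooth relief
            d = depth - depth.min()
            if d.max() > 0:
                d = d / d.max()
            base = flatten * d

            # Layer 2: contour lines taken from depth discontinuities
            gy, gx = np.gradient(d)
            edges = np.hypot(gx, gy) > edge_thresh

            # Layer 3: combine -- sink the contour lines below the smooth surface
            relief = base.copy()
            relief[edges] -= line_depth
            return relief

        # Usage: a hemisphere bulging out of a flat plate
        h, w = 256, 256
        yy, xx = np.mgrid[0:h, 0:w]
        r2 = (xx - w / 2) ** 2 + (yy - h / 2) ** 2
        depth = np.sqrt(np.clip(80.0 ** 2 - r2, 0.0, None))
        relief = sunken_relief_sketch(depth)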

    Photo2Relief: Let Human in the Photograph Stand Out

    In this paper, we propose a technique for making humans in photographs protrude like reliefs. Unlike previous methods, which mostly focus on the face and head, our method aims to generate artworks that depict the whole-body activity of the character. One challenge is that there is no ground truth for supervised deep learning. We introduce a sigmoid variant function to manipulate gradients tactfully and train our neural networks with a loss function defined in the gradient domain. The second challenge is that actual photographs often span different lighting conditions. We use an image-based rendering technique to address this challenge and acquire rendered images and depth data under different lighting conditions. To make a clear division of labor among the network modules, a two-scale architecture is proposed to create a high-quality relief from a single photograph. Extensive experimental results on a variety of scenes show that our method is a highly effective solution for generating digital 2.5D artwork from photographs.
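
    The gradient-domain training objective can be sketched as follows. This is a hedged numpy illustration rather than the paper's network code: the squash() function stands in for the sigmoid variant used to manipulate gradients, and its parameter values are invented for the example.

        import numpy as np

        def squash(g, alpha=5.0):
            """Sigmoid-like compression of gradient values (illustrative stand-in).

            Maps unbounded gradients into (-1, 1) so that steep depth
            discontinuities do not dominate the loss.
            """
            return 2.0 / (1.0 + np.exp(-alpha * g)) - 1.0

        def gradient_domain_loss(pred_depth, target_depth, alpha=5.0):
            """L2 loss between squashed gradients of predicted and target depth maps."""
            py, px = np.gradient(pred_depth)
            ty, tx = np.gradient(target_depth)
            diff_x = squash(px, alpha) - squash(tx, alpha)
            diff_y = squash(py, alpha) - squash(ty, alpha)
            return float(np.mean(diff_x ** 2 + diff_y ** 2))

        # Usage with random stand-in depth maps
        rng = np.random.default_rng(0)
        pred = rng.normal(size=(64, 64))
        target = rng.normal(size=(64, 64))
        print(gradient_domain_loss(pred, target))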

    Computer Assisted Relief Generation - a Survey

    In this paper we present an overview of the achievements accomplished to date in the field of computer-aided relief generation. We delineate the problem, classify the different solutions, analyze their similarities, trace the development of the field, and review the approaches according to their particular strengths and weaknesses. This survey is thus addressed to both researchers and artists: it provides insight into the theory behind the different concepts in the field and clarifies the options the presented methods offer for practical application.

    Making bas-reliefs from photographs of human faces

    Bas-reliefs are a form of flattened artwork, part-way between 3D sculpture and 2D painting. Recent research has considered automatic bas-relief generation from 3D scenes. However, little work has addressed the generation of bas-reliefs from 2D images. In this paper, we propose a method to automatically generate bas-relief surfaces from frontal photographs of human faces, with potential applications to e.g. coinage and commemorative medals. Our method has two steps. Starting from a photograph of a human face, we first generate a plausible image of a bas-relief of the same face. Secondly, we apply shape-from-shading to this generated bas-relief image to determine the 3D shape of the final bas-relief. To model the mapping from an input photograph to the image of a corresponding bas-relief, we use a feedforward network. The training data comprises images generated from an input 3D model of a face, and images generated from a corresponding bas-relief; the latter is produced by an existing 3D model-to-bas-relief algorithm. A saliency map of the face controls both model building and bas-relief generation. Our experimental results demonstrate that the generated bas-relief surfaces are smooth and plausible, with a correct global geometric structure, the latter giving them a stable appearance under changes of viewing direction and illumination.
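
    To give a flavor of the second step, the sketch below integrates a surface gradient field into a height map using the Frankot-Chellappa FFT projection. This is a generic stand-in, assuming the gradients have already been estimated from the generated bas-relief image; it is not the shape-from-shading solver used in the paper.

        import numpy as np

        def integrate_gradients(p, q):
            """Frankot-Chellappa integration: recover a height map z such that
            dz/dx ~ p and dz/dy ~ q, in a least-squares sense, via the FFT."""
            h, w = p.shape
            wx = np.fft.fftfreq(w) * 2.0 * np.pi   # angular frequencies along x (columns)
            wy = np.fft.fftfreq(h) * 2.0 * np.pi   # angular frequencies along y (rows)
            u, v = np.meshgrid(wx, wy)

            P = np.fft.fft2(p)
            Q = np.fft.fft2(q)

            denom = u ** 2 + v ** 2
            denom[0, 0] = 1.0                      # avoid division by zero at DC
            Z = (-1j * u * P - 1j * v * Q) / denom
            Z[0, 0] = 0.0                          # height is defined only up to a constant
            return np.real(np.fft.ifft2(Z))

        # Usage: gradients of a synthetic dome, then reconstruction
        h, w = 128, 128
        yy, xx = np.mgrid[0:h, 0:w]
        z_true = np.exp(-((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / (2 * 20.0 ** 2))
        q, p = np.gradient(z_true)                 # np.gradient returns (d/dy, d/dx)
        z_rec = integrate_gradients(p, q)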

    Analysis of Bas-Relief Generation Techniques

    Simplifying the process of generating relief sculptures has been an interesting topic of research in the past decade. A relief is a type of sculpture that does not entirely extend into three-dimensional space. Instead, it has details carved into a flat surface, like wood or stone, such that slight elevations from the flat plane define the subject of the sculpture. When viewed straight on, a relief can look like a full sculpture or statue, in that a full sense of depth can be perceived. Creating such a model manually is a tedious and difficult process, akin to the challenges a painter faces when designing a convincing painting. As with painting, certain digital tools (most commonly 3D modeling programs) can make the process a little easier, but it can still take a lot of time to obtain sufficient detail. To further simplify the process of relief generation, a sizable amount of research has gone into developing semi-automated processes for creating reliefs from different types of models. These methods vary in many ways, including the type of input used, the computational time required, and the quality of the resulting model. The performance typically depends on the type of operations applied to the input model and on user-specified parameters that modify its appearance. In this thesis, we address a few related topics. First, we analyze previous work in the field and briefly summarize the procedures to emphasize the variety of ways the problem can be solved. We then look at specific algorithms for generating reliefs from 2D and 3D models. After explaining two of each type, a “basic” approach and a more sophisticated one, we compare the algorithms based on their difficulty of implementation, the quality of their results, and their processing time. The final section includes further sample results from the previous algorithms and suggests possible ideas to enhance their results, which could be applied in continuing research on the topic.

    Real-time bas-relief generation from depth-and-normal maps on GPU

    Designing a bas-relief from a 3D scene is an inherently interactive task in many scenarios. The user normally needs instant feedback to select a proper viewpoint. However, current methods are too slow to facilitate this interaction. This paper proposes a two-scale bas-relief modeling method that is computationally efficient and makes it easy to produce different styles of bas-relief. The input 3D scene is first rendered into two textures, one recording depth information and the other recording normal information. The depth map is then compressed to produce a base surface with level-of-depth, and the normal map is used to extract local details with two different schemes. One scheme provides a certain freedom to design bas-reliefs with different visual appearances, and the other provides control over the level of detail. Finally, the local feature details are added to the base surface to produce the final result. Our approach allows for real-time computation due to its implementation on graphics hardware. Experiments with a wide range of 3D models and scenes show that our approach can effectively generate digital bas-reliefs in real time.
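
    The two-texture pipeline can be illustrated with a small numpy sketch: the depth map is nonlinearly compressed into a base surface, and the normal map contributes a high-frequency detail layer. The function names, the logarithmic compression, and the crude emboss-style detail term are assumptions for illustration; the paper's GPU implementation and its two detail-extraction schemes differ.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def compress_depth(depth, levels=8):
            """Nonlinear (logarithmic) depth compression into a shallow base surface."""
            d = depth - depth.min()
            if d.max() > 0:
                d = d / d.max()
            return np.log1p(levels * d) / np.log1p(levels)

        def detail_from_normals(normals, sigma=3.0, weight=0.1):
            """Extract a high-frequency detail layer from a normal map.

            normals : (H, W, 3) array of unit normals; n_x/n_z and n_y/n_z give the
                      surface gradients, whose low frequencies are removed.
            """
            nz = np.clip(normals[..., 2], 1e-3, None)
            gx = normals[..., 0] / nz
            gy = normals[..., 1] / nz
            # keep only the high-frequency part of each gradient channel
            hx = gx - gaussian_filter(gx, sigma)
            hy = gy - gaussian_filter(gy, sigma)
            # crude emboss of the high-pass gradients; a stand-in for a proper
            # gradient-integration step, used here only to modulate the base
            return weight * (hx + hy)

        def two_scale_relief(depth, normals):
            return compress_depth(depth) + detail_from_normals(normals)

        # Usage with a synthetic hemisphere and normals derived from its gradients
        h, w = 128, 128
        yy, xx = np.mgrid[0:h, 0:w]
        depth = np.sqrt(np.clip(50.0 ** 2 - (xx - w / 2) ** 2 - (yy - h / 2) ** 2, 0.0, None))
        gy, gx = np.gradient(depth)
        n = np.dstack([-gx, -gy, np.ones_like(depth)])
        normals = n / np.linalg.norm(n, axis=2, keepdims=True)
        relief = two_scale_relief(depth, normals)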

    Bas-relief modelling from enriched detail and geometry with deep normal transfer

    Detail-and-geometry richness is essential to bas-relief modelling. However, existing image-based and model-based bas-relief modelling techniques commonly suffer from detail monotony or geometry loss. In this paper, we introduce a new bas-relief modelling framework that achieves detail abundance with visual-attention-based mask generation and geometry preservation, benefiting from our two key contributions. For detail richness, we propose a novel semantic neural network for normal transfer to enrich the texture styles on bas-reliefs. For geometry preservation, we introduce a normal decomposition scheme based on the Domain Transfer Recursive Filter (DTRF). Experimental results demonstrate that our approach is advantageous in producing bas-relief models with both fine details and preserved geometry.
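
    The normal decomposition step can be sketched as a split into a smoothed base normal field and a residual detail field. The sketch below uses a Gaussian blur as a stand-in for the edge-preserving recursive filter named in the abstract; the parameter values and the renormalisation step are assumptions made for the example.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def decompose_normals(normals, sigma=4.0):
            """Split a normal map into a smooth base layer and a detail residual.

            normals : (H, W, 3) array of unit normals.
            Returns (base, detail) where base is renormalised to unit length and
            detail = normals - base.  A Gaussian blur stands in for the DTRF here.
            """
            base = np.stack(
                [gaussian_filter(normals[..., c], sigma) for c in range(3)], axis=-1
            )
            norm = np.linalg.norm(base, axis=-1, keepdims=True)
            base = base / np.clip(norm, 1e-6, None)
            detail = normals - base
            return base, detail

        # Usage: coarse geometry stays in the base layer, texture detail in the residual
        h, w = 96, 96
        yy, xx = np.mgrid[0:h, 0:w]
        height = 0.5 * np.sin(xx / 6.0) + 0.05 * np.sin(xx * 1.3) * np.sin(yy * 1.7)
        gy, gx = np.gradient(height)
        n = np.dstack([-gx, -gy, np.ones_like(height)])
        normals = n / np.linalg.norm(n, axis=2, keepdims=True)
        base, detail = decompose_normals(normals)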