16 research outputs found

    Perimeter detection in sketched drawings of polyhedral shapes

    Paper presented at STAG17: Smart Tools and Applications in Graphics, held in Catania (Italy), 11-12 September 2017. This paper describes a new “envelope” approach for detecting object perimeters in line drawings vectorised from sketches of polyhedral objects. Existing approaches for extracting contours from digital images are unsuitable for Sketch-Based Modelling, as they calculate where the contour is, but not which elements of the line drawing belong to it. In our approach, the perimeter is described in terms of the lines and junctions (including intersections and T-junctions) of the original line drawing.
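The abstract describes the perimeter in terms of lines and junctions. As an illustration only (the function and its labels are an assumption, not the paper's actual algorithm), T-junctions in a vectorised drawing can be distinguished from shared-endpoint intersections by testing whether an endpoint lies in the interior of another segment:

```python
from collections import defaultdict

def classify_junctions(segments, eps=1e-6):
    """Classify junction points of a 2D line drawing.

    segments: list of ((x1, y1), (x2, y2)) endpoint pairs.
    Returns {point: 'T-junction' | 'intersection' | 'endpoint'}.
    """
    degree = defaultdict(int)          # number of segments ending at each point
    for a, b in segments:
        degree[a] += 1
        degree[b] += 1

    def on_interior(p, a, b):
        # True if p lies strictly inside segment ab (collinear and between ends).
        (px, py), (ax, ay), (bx, by) = p, a, b
        cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
        if abs(cross) > eps:
            return False
        dot = (px - ax) * (bx - ax) + (py - ay) * (by - ay)
        length2 = (bx - ax) ** 2 + (by - ay) ** 2
        return eps < dot < length2 - eps

    kinds = {}
    for p, d in degree.items():
        # An endpoint touching the interior of another line is a T-junction.
        if any(on_interior(p, a, b) for a, b in segments):
            kinds[p] = 'T-junction'
        elif d >= 2:
            kinds[p] = 'intersection'
        else:
            kinds[p] = 'endpoint'
    return kinds
```

For example, a vertical stroke meeting the middle of a horizontal one produces a T-junction at the meeting point, while the strokes' free ends remain plain endpoints.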

    Interactive non-photorealistic rendering

    Given the increasing demand for artistic styles rendered at interactive rates, we propose this review as a starting point for anyone interested in researching interactive non-photorealistic rendering. As a simple yet effective means of visual communication, interactive non-photorealistic rendering generates images that are closer to human-drawn imagery than those created by traditional computer graphics techniques, while expressing more meaningful visual information. This paper presents a taxonomy of the interactive non-photorealistic rendering techniques developed over the past two decades, structured according to the design characteristics and behaviour of each technique. It also covers the most important algorithms in interactive stylised shading and line drawing, discussing their advantages and disadvantages separately. The review then concludes with a discussion of the main issues and technical challenges for interactive non-photorealistic rendering techniques. In addition, this paper discusses the effect of a modified Phong shading model used to create a toon-shaded appearance.
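The modified Phong shading mentioned above can be sketched as follows: quantizing the smooth diffuse term into flat bands and hard-thresholding the specular term produces the characteristic toon appearance. This is a hedged illustration of the general idea (the band count and shininess values are arbitrary assumptions, not the paper's specific model):

```python
import math

def toon_shade(normal, light_dir, view_dir, bands=4, shininess=32):
    """Toon variant of Phong shading: quantized diffuse plus an
    on/off specular highlight. Vectors are 3-tuples."""
    def dot(u, v): return sum(a * b for a, b in zip(u, v))
    def normalize(v):
        n = math.sqrt(dot(v, v))
        return tuple(a / n for a in v)

    n, l, v = map(normalize, (normal, light_dir, view_dir))
    diffuse = max(dot(n, l), 0.0)
    # Quantize the smooth diffuse falloff into flat 'cel' bands.
    diffuse = math.floor(diffuse * bands) / bands
    # Phong reflection vector r = 2(n.l)n - l, hard-thresholded highlight.
    r = tuple(2 * dot(n, l) * a - b for a, b in zip(n, l))
    specular = 1.0 if max(dot(r, v), 0.0) ** shininess > 0.5 else 0.0
    return min(diffuse + specular, 1.0)
```

A surface facing the light head-on lands in the brightest band and receives the highlight; a grazing surface drops abruptly to a darker band instead of fading smoothly.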

    Efficient shadow algorithms on graphics hardware

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2005. Includes bibliographical references (p. 85-92). Shadows are important to computer graphics because they add realism and help the viewer identify spatial relationships. Shadows are also useful story-telling devices. For instance, artists carefully choose the shape, softness, and placement of shadows to establish mood or character. Many shadow generation techniques developed over the years have been used successfully in offline movie production. It is still challenging, however, to compute high-quality shadows in real-time for dynamic scenes. This thesis presents two efficient shadow algorithms. Although these algorithms are designed to run in real-time on graphics hardware, they are also well-suited to offline rendering systems. First, we describe a hybrid algorithm for rendering hard shadows accurately and efficiently. Our method combines the strengths of two existing techniques, shadow maps and shadow volumes. We first use a shadow map to identify the pixels in the image that lie near shadow discontinuities. Then, we perform the shadow-volume computation only at these pixels to ensure accurate shadow edges. This approach simultaneously avoids the edge aliasing artifacts of standard shadow maps and the high fillrate consumption of standard shadow volumes. The algorithm relies on a hardware mechanism that we call a computation mask for rapidly rejecting non-silhouette pixels during rasterization. Second, we present a method for the real-time rendering of soft shadows. Our approach builds on the shadow map algorithm by attaching geometric primitives that we call smoothies to the objects' silhouettes. The smoothies give rise to fake shadows that appear qualitatively like soft shadows, without the cost of densely sampling an area light source.
    In particular, the softness of the shadow edges depends on the ratio of distances between the light source, the blockers, and the receivers. The soft shadow edges hide objectionable aliasing artifacts that are noticeable with ordinary shadow maps. Our algorithm computes shadows efficiently in image space and maps well to programmable graphics hardware. By Eric Chan. S.M.
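The hybrid algorithm's first step, locating shadow discontinuities in the shadow map, can be sketched as a neighbour-difference test over the depth buffer; texels flagged this way would then receive the exact shadow-volume computation. The code below is an illustrative approximation of that idea on plain Python lists, not the thesis's hardware computation mask:

```python
def discontinuity_mask(shadow_map, threshold=0.1):
    """Flag shadow-map texels whose depth differs sharply from any
    4-neighbour: these are the silhouette regions where a hybrid
    algorithm would fall back to exact shadow-volume tests.

    shadow_map: 2D list of depths; returns a same-shape 2D list of bools.
    """
    h, w = len(shadow_map), len(shadow_map[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = shadow_map[y][x]
            for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    # A large depth jump marks a shadow silhouette edge.
                    if abs(shadow_map[ny][nx] - d) > threshold:
                        mask[y][x] = True
                        break
    return mask
```

On hardware, the same mask would drive early rejection during rasterization so that unmasked pixels skip the expensive pass entirely.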

    Sparse Volumetric Deformation

    Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure, and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, and therefore massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently only supports static scenes, because it is difficult to accurately deform massive numbers of volume elements and reconstruct the scene hierarchy in real-time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location and, similarly, gaps occur where deformation stretches the elements further apart than one discrete location. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example a cage or skeleton) that maps to a set of features to accelerate the deformation process. The difficulty with this technique is that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; the control structure must therefore also hierarchically capture features according to the varying volumetric resolution. This thesis investigates the area of deforming and rendering massive amounts of dynamic volumetric content.
    The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in the fields of real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real-time. This is achieved by automatically generating a control skeleton which is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
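The progressive-resolution tree described above can be sketched as a small octree whose sampler descends only as deep as the region of interest requires, returning the coarse averaged value elsewhere. The node layout and depth parameter here are illustrative assumptions, not the thesis's actual data structure:

```python
class OctreeNode:
    """Hierarchical volume: each node stores a coarse value and, for
    non-leaf nodes, eight children covering half-size sub-cubes."""
    def __init__(self, value, children=None):
        self.value = value          # averaged density for this cube
        self.children = children    # list of 8 OctreeNode, or None (leaf)

def sample(root, x, y, z, size, max_depth):
    """Sample at (x, y, z) in a cube of side `size`, descending at most
    `max_depth` levels; coarser regions return their averaged value."""
    node = root
    while node.children is not None and max_depth > 0:
        half = size / 2
        # Pick the child octant containing (x, y, z).
        index = (x >= half) * 1 + (y >= half) * 2 + (z >= half) * 4
        node = node.children[index]
        x, y, z, size = x % half, y % half, z % half, half
        max_depth -= 1
    return node.value
```

Capping `max_depth` per region is what lets only the part of the hierarchy near the viewer (or the deforming feature) be touched while the rest stays coarse.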

    Méthodes de rendu à base de vidéos et applications à la réalité virtuelle (Video-based rendering methods and applications to virtual reality)

    Given a set of images of the same scene, the goal of video-based rendering methods is to compute new views of this scene from new viewpoints. The user of such a system controls the virtual camera's movement through the scene; nevertheless, the virtual images are computed from static cameras. A first family of approaches is based on a reconstruction of the scene and can provide accurate models, but often requires lengthy computation before visualization. Other methods aim at real-time rendering. Our main contribution to video-based rendering concerns the plane sweep method, which belongs to the latter family. The plane sweep method divides space into parallel planes. Each point of each plane is processed independently in order to decide whether it lies on the surface of an object in the scene. This information is used to compute a new view of the scene from a new viewpoint. The method is well suited to an implementation on graphics hardware and can thus reach real-time rendering. Our main contribution to this method concerns the way of deciding whether a point of a plane lies on the surface of an object in the scene. We first propose a new scoring method that increases the visual quality of the new images. Compared with previous approaches, this method places fewer constraints on the position of the virtual camera; in particular, this camera does not need to lie within the area spanned by the input cameras. We also present an adaptation of the plane sweep algorithm that handles partial occlusions. Motivated by practical virtual-reality applications of video-based rendering, we propose an improvement of the plane sweep method for computing stereoscopic image pairs, providing a visualization of the virtual scene in relief. Our enhancement produces the second view with only a small amount of additional computation, whereas most other techniques must render the scene twice. This improvement is based on sharing the information common to the two stereoscopic views.
    Finally, we propose a method that removes pseudoscopic movements in a virtual-reality application. These pseudoscopic movements appear when the observer moves in front of the stereoscopic screen: the scene proportions seem distorted and the observer sees the objects of the scene moving in an abnormal way. The method we propose applies both to classical stereoscopic rendering methods and to the plane sweep algorithm. Every method we propose makes extensive use of graphics hardware through shader programs and provides real-time rendering. These methods only require a standard computer, a video acquisition device and a sufficiently powerful graphics card. The plane sweep method has many practical applications, especially in fields such as virtual reality, video games, 3D television and security.
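The plane sweep scoring step described above, deciding at which candidate depth plane a virtual-view ray meets an object, can be sketched as picking the plane where the input cameras' colours agree best. The variance score below is a common baseline used purely for illustration; the thesis proposes its own improved scoring method:

```python
def plane_sweep_depth(sample_colors, depths):
    """For a single virtual-view ray, pick the sweep plane whose
    projections into the input cameras agree best.

    sample_colors: dict mapping depth -> list of per-camera grey values
    at the point where the ray crosses that plane (projection assumed
    already done). Returns the depth minimising colour variance.
    """
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    # Low variance means the cameras see the same colour there, so the
    # point likely lies on an object surface at that depth.
    return min(depths, key=lambda d: variance(sample_colors[d]))
```

Because each plane point is scored independently, the real algorithm maps naturally onto per-pixel shader programs, which is what makes real-time rates reachable.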

    Parametric BIM-based Design Review

    This research addressed the need for a new design review technology and method to express the tangible and intangible qualities of the architectural experience of parametric BIM-based design projects. The research produced an innovative presentation tool by which parametric design is presented systematically. Focus groups provided assessments of the tool to reveal the usefulness of a parametric BIM-based design review method. The way in which we visualize architecture affects the way we design and perceive architectural form and performance. Contemporary architectural forms and systems are very complex, yet most architects who use Building Information Modeling (BIM) and generative design methods still embrace the two-dimensional, 15th-century Albertian representational methods to express and review design projects. However, architecture cannot be fully perceived through a set of drawings that mediate our perception and evaluation of the built environment. The systematic and conventional approach of traditional architectural representation, in paper-based and slide-based design reviews, can visualize neither phenomenal experience nor the inherent variation and versioning of parametric models. Pre-recorded walk-throughs with high-quality rendering and imaging have been in use for decades, but high-verisimilitude interactive walk-throughs are not commonly used in architectural presentations. The new generations of parametric and BIM systems allow for the quick production of design variations by varying design parameters and their relationships. However, there is a lack of tools capable of conducting design reviews that engage the advantages of parametric and BIM design projects. Given the multitude of possibilities of in-game interface design, game engines provide an opportunity for the creation of an interactive, parametric, and performance-oriented experience of architectural projects with multiple design options.
    This research has produced a concept for a dynamic presentation and review tool and method intended to meet the needs of parametric design, performance-based evaluation, and optimization of multi-objective design options. The concept is illustrated and tested using a prototype (Parametric Design Review, or PDR) based upon an interactive gaming environment equipped with a novel user interface that simultaneously engages the parametric framework, object parameters, multi-objective optimized design options and their performances with diagrammatic, perspectival, and orthographic representations. The prototype was presented to representative users in multiple focus group sessions. Focus group discussion data reveal that the proposed PDR interface was perceived to be useful for design reviews in both academic and professional practice settings.

    Implementing non-photorealistic rendering enhancements with real-time performance

    We describe quality and performance enhancements, which work in real-time, to all well-known Non-Photorealistic Rendering (NPR) styles for use in an interactive context. These include comic rendering, sketch rendering, hatching and painterly rendering, but we also attempt and justify a widening of the established definition of what is considered NPR. In the individual chapters, we identify typical stylistic elements of the different NPR styles and list the problems that need to be solved in order to implement the various renderers. Standard solutions available in the literature are introduced and in all cases extended and optimised. In particular, we extend the lighting model of the comic renderer to include a specular component and introduce multiple inter-related but independent geometric approximations which greatly improve rendering performance. We implement two completely different solutions to random perturbation sketching, solve temporal coherence issues for coal sketching, and find an unexpected use for 3D textures to implement hatch-shading. The textured brushes of painterly rendering are extended with properties such as stroke direction and texture, motion, paint capacity, opacity and emission, making them more flexible and versatile. Brushes are also provided with a minimal amount of intelligence, so that they can help in maximising their screen coverage. We furthermore devise a completely new NPR style, which we call super-realistic, and show how sample images can be tweened in real-time to produce an image-based six-degree-of-freedom renderer performing at roughly 450 frames per second. Performance values for our other renderers all lie between 10 and over 400 frames per second on home PC hardware, justifying our real-time claim. A large number of sample screenshots, illustrations and animations demonstrate the visual fidelity of our rendered images.
    In essence, we successfully achieve our goals of increasing the creative, expressive and communicative potential of individual NPR styles, increasing the performance of most of them, adding original and interesting visual qualities, and exploring new techniques or existing ones in novel ways.
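Random perturbation sketching, one of the styles discussed above, can be illustrated by jittering polyline vertices with a seeded random generator; re-seeding per stroke is one simple way to keep the jitter stable between frames so lines do not "boil" during animation. This is an illustrative sketch under those assumptions, not the thesis's implementation:

```python
import random

def perturb_polyline(points, magnitude=2.0, seed=0):
    """Jitter the vertices of a clean polyline so repeated strokes look
    hand-drawn. Seeding the generator per stroke keeps the jitter
    identical across frames (temporal coherence)."""
    rng = random.Random(seed)
    return [(x + rng.uniform(-magnitude, magnitude),
             y + rng.uniform(-magnitude, magnitude))
            for x, y in points]
```

Calling the function twice with the same seed reproduces the exact same wobble, which is the property that keeps an animated sketch line coherent over time.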

    Individual and group dynamic behaviour patterns in bound spaces

    The behaviour analysis of individual and group dynamics in closed spaces is a subject of extensive research in both academia and industry. However, despite recent technological advancements, the problem of implementing the existing methods for visual behaviour data analysis in production systems remains difficult, and applications are available only in special cases where resourcing is not a problem. Most approaches concentrate on direct extraction and classification of visual features from the video footage, recognising the dynamic behaviour straight from the source. Adopting such an approach allows the elementary actions of moving objects to be recognised directly, which is a difficult task in its own right. The major factor that impacts the performance of video analytics methods is the necessity to combine the processing of enormous volumes of video data with complex analysis of this data using computationally resource-demanding analytical algorithms. This is not feasible for many applications, which must work in real time. In this research, an alternative simulation-based approach to behaviour analysis has been adopted. It can potentially reduce the requirements for extracting information from real video footage for the purpose of analysing dynamic behaviour. This is achieved by combining only limited data extracted from the original video footage with symbolic data about the events registered on the scene, generated by a 3D simulation synchronised with the original footage. Additionally, by incorporating some physical laws and the logic of dynamic behaviour directly in the 3D model of the visual scene, this framework allows the behavioural patterns to be captured using simple syntactic pattern recognition methods.
    The extensive experiments with the prototype implementation prove convincingly that the 3D simulation generates sufficiently rich data to analyse the dynamic behaviour in real time with sufficient adequacy, without the need for precise physical data, using only limited data about the objects on the scene, their location and their dynamic characteristics. This research has wide applicability in areas where video analytics is needed, ranging from public safety and video surveillance to marketing research, computer games and animation. Its limitations are linked to a dependence on some preliminary processing of the video footage, which is nevertheless less detailed and less computationally demanding than methods that use the video frames of the original footage directly.
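The simple syntactic pattern recognition mentioned above can be illustrated by encoding the simulation's registered events as a symbol string and matching it against regular-expression behaviour patterns. The event alphabet and patterns below are hypothetical examples for illustration, not the research's actual grammar:

```python
import re

# Hypothetical event alphabet emitted by the synchronised 3D simulation:
# E = enter zone, M = move, S = stop, X = exit zone.
BEHAVIOUR_PATTERNS = {
    # Stopping repeatedly inside the zone before leaving: loitering.
    "loitering": re.compile(r"E(M*S){2,}M*X"),
    # Entering and leaving without stopping: passing through.
    "passing": re.compile(r"EM*X"),
}

def classify_track(events):
    """Match a track's event string against the behaviour grammar;
    the first pattern that matches the whole string wins."""
    for name, pattern in BEHAVIOUR_PATTERNS.items():
        if pattern.fullmatch(events):
            return name
    return "unknown"
```

Because the simulation emits discrete, symbolic events rather than raw pixels, this kind of cheap string matching can run in real time where frame-level classifiers could not.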

    A Qualification of 3D Geovisualisation
