290 research outputs found

    Context Preserving Focal Probes for Exploration of Volumetric Medical Datasets

    During real-time medical data exploration using volume rendering, it is often difficult to enhance a particular region of interest without losing context information. In this paper, we present a new illustrative technique for focusing on a user-driven region of interest while preserving context information. Our focal probes define a region of interest using a distance function that controls the opacity of the voxels within the probe, exploit silhouette enhancement, and use non-photorealistic shading techniques to improve shape depiction.
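    The opacity control described above can be pictured as a per-voxel weight driven by the probe's distance function: full opacity inside the probe and a smooth falloff to a dimmed context outside it. The sketch below is a minimal illustration of that idea, assuming a spherical probe and a smoothstep falloff; focal_probe_opacity, its parameters, and the NumPy formulation are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: per-voxel opacity weights from a spherical probe's distance
# function. The falloff shape and parameter names are illustrative assumptions.
import numpy as np

def focal_probe_opacity(volume_shape, center, radius, context_opacity=0.15):
    """Return a per-voxel opacity multiplier: 1.0 inside the probe, smoothly
    falling off to a reduced 'context' opacity outside it."""
    zz, yy, xx = np.indices(volume_shape)
    # Distance of every voxel from the probe centre (center given as (z, y, x)).
    dist = np.sqrt((zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2)
    # Smoothstep falloff over a transition band one radius wide.
    t = np.clip((dist - radius) / radius, 0.0, 1.0)
    falloff = t * t * (3.0 - 2.0 * t)
    return 1.0 - falloff * (1.0 - context_opacity)

# Example: a 64^3 volume with a probe of radius 10 at its centre.
weights = focal_probe_opacity((64, 64, 64), center=(32, 32, 32), radius=10)
```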

    Illustrative interactive stipple rendering

    Simulating hand-drawn illustration can succinctly express information in a manner that is communicative and informative. We present a framework for interactive direct stipple rendering of volume- and surface-based objects. By combining the principles of artistic and scientific illustration, we explore several feature enhancement techniques to create effective, interactive visualizations of scientific and medical data sets. We also introduce a rendering mechanism that generates appropriate point lists at all resolutions during an automatic preprocess and modifies rendering styles through different combinations of these feature enhancements. The new system is an effective way to interactively preview large, complex volume and surface data sets in a concise, meaningful, and illustrative manner. Stippling is effective for many applications and provides a quick and efficient method to investigate both volume and surface models.
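    The multi-resolution point lists mentioned above can be read as a preprocess that stores, per cell, an ordered list of candidate stipples so that drawing any prefix of the list yields a coarser rendering without re-running the preprocess. The following is a hedged sketch of that idea; build_stipple_lists, points_to_draw, and the jittered placement are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch: per-cell stipple point lists generated once, ordered so that
# any prefix is a valid lower-density rendering. Names are illustrative.
import random

def build_stipple_lists(density_grid, max_points_per_cell=64, seed=0):
    """density_grid: dict mapping (i, j, k) -> density in [0, 1].
    Returns a dict mapping each cell to a list of jittered points inside it."""
    rng = random.Random(seed)
    lists = {}
    for (i, j, k), density in density_grid.items():
        n = int(round(density * max_points_per_cell))
        # Points are kept in generation order; drawing the first m <= n of them
        # later gives a coarser stipple of the same cell.
        lists[(i, j, k)] = [(i + rng.random(), j + rng.random(), k + rng.random())
                            for _ in range(n)]
    return lists

def points_to_draw(cell_points, t):
    """t in [0, 1] is a rendering-style / enhancement factor selecting a prefix."""
    return cell_points[:int(len(cell_points) * t)]

stipples = build_stipple_lists({(0, 0, 0): 0.9, (1, 0, 0): 0.3})
visible = points_to_draw(stipples[(0, 0, 0)], t=0.5)
```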

    Non-photorealistic volume rendering using stippling techniques

    Simulating hand-drawn illustration techniques can succinctly express information in a manner that is communicative and informative. We present a framework for an interactive direct volume illustration system that simulates traditional stipple drawing. By combining the principles of artistic and scientific illustration, we explore several feature enhancement techniques to create effective, interactive visualizations of scientific and medical datasets. We also introduce a rendering mechanism that generates appropriate point lists at all resolutions during an automatic preprocess and modifies rendering styles through different combinations of these feature enhancements. The new system is an effective way to interactively preview large, complex volume datasets in a concise, meaningful, and illustrative manner. Volume stippling is effective for many applications and provides a quick and efficient method to investigate volume models.
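    One way to picture the feature enhancements mentioned above is as a per-voxel multiplier on stipple density that grows near material boundaries (high gradient magnitude) and silhouettes (gradient nearly perpendicular to the view direction). The sketch below is a hedged illustration; the weighting formula, exponents, and names are assumptions, not the paper's exact enhancement operators.

```python
# Hedged sketch: gradient-based boundary and silhouette enhancement of stipple
# density. The weighting formula and constants are illustrative assumptions.
import numpy as np

def enhancement_factor(gradient, view_dir, k_boundary=1.0, k_silhouette=2.0):
    """gradient: (3,) voxel gradient; view_dir: (3,) unit view direction.
    Returns a multiplier >= 1 that raises stipple density at boundaries and
    silhouettes."""
    g_mag = np.linalg.norm(gradient)
    if g_mag < 1e-8:
        return 1.0                                   # homogeneous region: no enhancement
    n = gradient / g_mag
    boundary = k_boundary * g_mag                    # stronger on material interfaces
    silhouette = k_silhouette * (1.0 - abs(np.dot(n, view_dir))) ** 2
    return 1.0 + boundary + silhouette

factor = enhancement_factor(np.array([0.0, 0.8, 0.1]), np.array([0.0, 0.0, 1.0]))
```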

    MeshAdv: Adversarial Meshes for Visual Recognition

    Highly expressive models such as deep neural networks (DNNs) have been widely applied to various applications. However, recent studies show that DNNs are vulnerable to adversarial examples, which are carefully crafted inputs aiming to mislead the predictions. Currently, the majority of these studies have focused on perturbations added to image pixels, although such manipulation is not physically realistic. Some works have tried to overcome this limitation by attaching printable 2D patches or painting patterns onto surfaces, but these attacks can potentially be defended against because the 3D shape features remain intact. In this paper, we propose meshAdv to generate "adversarial 3D meshes" from objects that have rich shape features but minimal textural variation. To manipulate the shape or texture of the objects, we make use of a differentiable renderer to compute accurate shading on the shape and propagate the gradient. Extensive experiments show that the generated 3D meshes are effective in attacking both classifiers and object detectors, and we evaluate the attack under different viewpoints. In addition, we design a pipeline to perform a black-box attack on a photorealistic renderer with unknown rendering parameters. Published in IEEE CVPR 2019.
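    The optimization implied by the abstract, with gradients flowing from a classifier's loss back through the renderer's shading to the mesh vertices, can be sketched as follows. This assumes PyTorch-style autograd; diff_render and classifier are hypothetical callables standing in for a differentiable renderer and a target DNN, and the loop is not the authors' meshAdv implementation.

```python
# Hedged sketch of a gradient-based vertex perturbation loop. `diff_render`
# and `classifier` are hypothetical placeholders, not a real library API.
import torch
import torch.nn.functional as F

def perturb_mesh(diff_render, classifier, vertices, faces, target_label,
                 steps=200, lr=1e-3, reg_weight=1e-2):
    """vertices: (V, 3) float tensor; faces: (F, 3) long tensor.
    Optimizes a small vertex displacement so the rendered image is classified
    as target_label, with an L2 penalty keeping the shape close to the original."""
    delta = torch.zeros_like(vertices, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_label])
    for _ in range(steps):
        image = diff_render(vertices + delta, faces)   # (1, C, H, W), differentiable
        logits = classifier(image)                     # (1, num_classes)
        loss = F.cross_entropy(logits, target) + reg_weight * delta.pow(2).sum()
        optimizer.zero_grad()
        loss.backward()                                # gradient propagates through shading
        optimizer.step()
    return (vertices + delta).detach()
```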

    Shadow Generation in Augmented Reality: A Complete Survey

    This paper provides an overview of the issues and techniques involved in shadow generation in mixed reality environments. Shadow generation techniques in virtual environments are explained briefly. The key factors characterizing the well-known techniques are described in detail, and the pros and cons of each technique are discussed. The conceptual perspective, the improvements, and future techniques are also investigated, summarized, and analysed in depth. This paper aims to provide researchers with a solid background on the state-of-the-art implementation of shadows in mixed reality, making it easier to choose the most appropriate method for a given aim. It is also hoped that this analysis will help researchers find solutions to the problems facing each technique.

    Rendering of light shaft and shadow for indoor environments enhancing technique

    Ray marching has become one of the most attractive methods for realistically rendering light-scattering effects in participating media, and it has attracted significant attention from the scientific community. Full-resolution (up-sampled) ray marching is suitable for evaluating light-scattering effects such as volumetric shadows and light shafts in realistic scenes, but it is costly to render. Encouraging results have therefore been achieved by down-sampling the ray marching to accelerate rendering. However, these methods are inherently prone to artifacts, aliasing, and incorrect boundaries because of the reduced number of sample points along the view rays. This study proposes a new enhancement technique for rendering light shafts and shadows that integrates light shafts, volumetric shadows, and surface shadows for indoor environments. The research has three major phases covering the effects addressed in this thesis. The first phase introduces a soft volumetric shadow technique called Soft Bilateral Filtering Volumetric Shadows (SoftBiF-VS). Soft shadows are created with a new algorithm called Soft Bilateral Filtering Shadow (SBFS), which starts from an Imperfect Multi-View Soft Shadows (IMVSSs) algorithm based on down-sampled multiple point lights (DMPLs) and multiple depth maps processed with bilateral filtering. A down-sampled light-scattering model is then combined with SBFS to create volumetric shadows, which are refined with a cross-bilateral filter to obtain soft volumetric shadows. In the second phase, soft light shafts are generated with a new technique called Realistic Real-Time Soft Bilateral Filtering Light Shafts (realTiSoftLS), which computes the light shafts from a down-sampled volumetric light model and a depth test and interpolates them with bilateral filtering. Finally, the third phase integrates all of these effects into a single enhancement technique. The performance of the enhanced technique was evaluated quantitatively and qualitatively on a standard dataset. In the experiments, 63% of the participants gave strongly positive responses regarding the improvement in realism. The quantitative evaluation showed that the technique substantially outperforms state-of-the-art techniques, achieving 74 fps for indoor environments.
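    The core down-sampled ray march behind light shafts and volumetric shadows can be sketched as a single-scattering accumulation with a reduced number of samples along the view ray. The code below is a hedged illustration only; the shadow test, scattering model, and constants are assumptions and do not reproduce the thesis' SoftBiF-VS or realTiSoftLS algorithms.

```python
# Hedged sketch: single-scattering ray march for a light shaft with a
# down-sampled number of view-ray samples. Constants are illustrative.
import numpy as np

def march_light_shaft(ray_origin, ray_dir, max_dist, in_shadow,
                      num_samples=16, sigma_t=0.05, light_intensity=1.0):
    """in_shadow(point) -> bool reports whether the light is occluded at `point`.
    Returns the in-scattered radiance accumulated along the view ray."""
    step = max_dist / num_samples
    transmittance = 1.0
    radiance = 0.0
    for i in range(num_samples):
        p = ray_origin + (i + 0.5) * step * ray_dir   # midpoint of each segment
        if not in_shadow(p):
            # Light reaches this sample: add in-scattering, attenuated by the
            # transmittance of the medium traversed so far.
            radiance += transmittance * sigma_t * light_intensity * step
        transmittance *= np.exp(-sigma_t * step)      # extinction along the view ray
    return radiance

# Example: a ray along +x through a medium whose region x > 2 is shadowed.
L = march_light_shaft(np.zeros(3), np.array([1.0, 0.0, 0.0]), 10.0,
                      in_shadow=lambda p: p[0] > 2.0)
```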

    Calipso: Physics-based Image and Video Editing through CAD Model Proxies

    We present Calipso, an interactive method for editing images and videos in a physically coherent manner. Our main idea is to realize physics-based manipulations by running a full physics simulation on proxy geometries given by non-rigidly aligned CAD models. Running these simulations allows us to apply new, unseen forces to move or deform selected objects, change physical parameters such as mass or elasticity, or even add entirely new objects that interact with the rest of the underlying scene. In Calipso, the user makes edits directly in 3D; these edits are processed by the simulation and then transferred to the target 2D content using shape-to-image correspondences in a photo-realistic rendering process. To align the CAD models, we introduce an efficient CAD-to-image alignment procedure that jointly optimizes rigid and non-rigid alignment while preserving the high-level structure of the input shape. Moreover, the user can choose to exploit image flow to estimate scene motion, producing coherent physical behavior with ambient dynamics. We demonstrate Calipso's physics-based editing on a wide range of examples, producing myriad physical behaviors while preserving geometric and visual consistency.
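    Read as a pipeline, the abstract describes four stages: align a CAD proxy to the image, optionally estimate ambient motion from image flow, simulate the user's edit on the proxy, and transfer the result back to the 2D content. The sketch below only restates that flow; every helper (align_cad_to_image, estimate_image_flow, simulate, render_composite) is a hypothetical placeholder, not the authors' API.

```python
# Hedged pipeline sketch of the editing flow described in the abstract.
# All helper callables are hypothetical placeholders passed in by the caller.
def physics_based_edit(image, cad_model, user_edit,
                       align_cad_to_image, simulate, render_composite,
                       estimate_image_flow=None):
    # 1. Joint rigid + non-rigid alignment of the CAD proxy to the 2D content.
    proxy, correspondences = align_cad_to_image(cad_model, image)

    # 2. Optionally estimate scene motion from image flow so the simulation
    #    responds to ambient dynamics in a video.
    ambient = estimate_image_flow(image) if estimate_image_flow else None

    # 3. Run a full physics simulation on the proxy geometry with the new
    #    forces, material parameters, or added objects chosen by the user.
    simulated_proxy = simulate(proxy, user_edit, ambient)

    # 4. Transfer the simulated 3D result to the target 2D content via the
    #    shape-to-image correspondences in a photorealistic rendering pass.
    return render_composite(image, simulated_proxy, correspondences)
```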

    Art Directed Shader for Real Time Rendering - Interactive 3D Painting

    In this work, I develop an approach to include Global Illumination (GI) effects in non-photorealistic real-time rendering; real-time rendering is one of the main areas of focus in the gaming industry and in the booming virtual reality (VR) and augmented reality (AR) industries. My approach is based on adapting the Barycentric shader to create a wide variety of painting effects. This shader helps achieve the look of a 2D painting in an interactively rendered 3D scene, and it accommodates robust computation of artistic reflection and refraction. My contributions can be summarized as follows: the development of a generalized Barycentric shader that provides artistic control, the integration of this generalized Barycentric shader into an interactive ray tracer, and the interactive rendering of a 3D scene that closely represents the reference painting.
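    The Barycentric-shading idea the thesis builds on can be pictured as blending artist-chosen control colors with non-negative weights that sum to one, where the weights are driven by a computed shading term such as the diffuse factor. The sketch below is a minimal illustration using quadratic Bernstein weights; the weight construction and names are assumptions, not the thesis' generalized shader.

```python
# Minimal sketch: blend artist-painted control colors with barycentric-style
# weights derived from a diffuse shading term. Weights are illustrative.
import numpy as np

def barycentric_shade(diffuse, dark_color, mid_color, light_color):
    """diffuse: scalar in [0, 1], e.g. max(0, dot(N, L)).
    Returns a color interpolated from three artist-painted controls."""
    t = np.clip(diffuse, 0.0, 1.0)
    # Quadratic Bernstein weights: non-negative and summing to 1, so the result
    # stays inside the convex hull of the artist's control colors.
    w = np.array([(1 - t) ** 2, 2 * t * (1 - t), t ** 2])
    controls = np.array([dark_color, mid_color, light_color], dtype=float)
    return w @ controls

color = barycentric_shade(0.7,
                          dark_color=(0.1, 0.1, 0.3),
                          mid_color=(0.6, 0.3, 0.4),
                          light_color=(1.0, 0.9, 0.7))
```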