
    Hybrid visibility compositing and masking for illustrative rendering

    In this paper, we introduce a novel framework for the compositing of interactively rendered 3D layers tailored to the needs of scientific illustration. Currently, traditional scientific illustrations are produced in a series of composition stages, combining different pictorial elements using 2D digital layering. Our approach extends the layer metaphor into 3D without giving up the advantages of 2D methods. The new compositing approach allows for effects such as selective transparency, occlusion overrides, and soft depth buffering. Furthermore, we show how common manipulation techniques such as masking can be integrated into this concept. These tools behave just like in 2D, but their influence extends beyond a single viewpoint. Since the presented approach makes no assumptions about the underlying rendering algorithms, layers can be generated based on polygonal geometry, volumetric data, point-based representations, or others. Our implementation exploits current graphics hardware and permits real-time interaction and rendering.
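
    A minimal sketch of the soft depth buffering effect mentioned above: instead of a hard z-test, two rendered layers are cross-faded wherever their depths lie within a small tolerance band. The array layout, the over operator for premultiplied RGBA, and the softness parameter are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def over(front, back):
    """'Over' compositing for premultiplied-RGBA images (alpha in channel 3)."""
    return front + (1.0 - front[..., 3:4]) * back

def soft_depth_composite(layer_a, depth_a, layer_b, depth_b, softness=0.05):
    """Composite two premultiplied-RGBA layers with a soft depth test.

    Pixels whose depths differ by less than `softness` are blended smoothly
    instead of hard-switched, avoiding seams where the layers interpenetrate.
    """
    # w -> 1 where layer A is clearly in front, 0 where layer B is in front,
    # with a smooth ramp inside the softness band.
    w = np.clip(0.5 + (depth_b - depth_a) / (2.0 * softness), 0.0, 1.0)[..., None]
    front = w * layer_a + (1.0 - w) * layer_b
    back = w * layer_b + (1.0 - w) * layer_a
    return over(front, back)
```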

    Plausible Shading Decomposition For Layered Photo Retouching

    Photographers routinely compose multiple manipulated photos of the same scene (layers) into a single image that is better than any individual photo could be alone. Similarly, 3D artists set up rendering systems to produce layered images, each containing only individual aspects of the light transport, which are composited into the final result in post-production. Regrettably, both approaches either take considerable time to capture or remain limited to synthetic scenes. In this paper, we present a system that decomposes a single image into a plausible shading decomposition (PSD) approximating effects such as shadow, diffuse illumination, albedo, and specular shading. This decomposition can then be manipulated in any off-the-shelf image manipulation software and recomposited. We perform the decomposition with a convolutional neural network trained on synthetic data. We demonstrate the effectiveness of our decomposition on synthetic (i.e., rendered) and real data (i.e., photographs), and use it for common photo manipulations that are nearly impossible to perform otherwise from a single image.
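
    As a hedged illustration of the recompositing step, the sketch below assumes one common layered shading model (image ≈ albedo · diffuse · shadow + specular); the paper's exact decomposition and layer semantics may differ, and the network call is hypothetical.

```python
import numpy as np

def recomposite(albedo, diffuse, shadow, specular):
    """Recombine edited shading layers into an RGB image.

    Assumes the simple model image ≈ albedo * diffuse * shadow + specular,
    with all inputs as HxWx3 float arrays in [0, 1].
    """
    return np.clip(albedo * diffuse * shadow + specular, 0.0, 1.0)

# Typical retouching loop (names hypothetical):
# layers = shading_cnn(photo)          # predict the decomposition
# layers["specular"] *= 0.5            # e.g. tone down highlights in an editor
# result = recomposite(**layers)
```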

    Digital imaging techniques for recording and analysing prehistoric rock art panels in Galicia (NW Iberia)

    Several works have highlighted the relevance of 3D modelling techniques for the study of rock art, especially when the panels are in a poor state of preservation. This paper presents a methodological approach to accurately document two Bronze Age rock art panels in Galicia (Spain) using Structure-from-Motion (SfM) photogrammetry. The main aim is to show the application of digital enhancement techniques that have allowed the accurate depiction of the motifs and the correction of previous tracings (calques), focusing on the application of exaggerated shading as a novel analytical method.
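
    The exaggerated shading analysis could be approximated, very loosely, by multi-scale relief enhancement on a normal map derived from the photogrammetric model: at each scale the detail normals are shaded relative to a smoothed base surface so that faint carvings stay visible. This is a toy sketch in the spirit of that technique, not the published algorithm; all parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def relief_enhance(normals, light, scales=(1, 2, 4, 8), gain=4.0):
    """Toy multi-scale shading exaggeration on an HxWx3 unit-normal map."""
    light = np.asarray(light, dtype=float)
    light /= np.linalg.norm(light)
    out = np.zeros(normals.shape[:2])
    for s in scales:
        # Smooth and renormalise the normal field at this scale.
        base = np.stack([gaussian_filter(normals[..., c], s) for c in range(3)], axis=-1)
        base /= np.linalg.norm(base, axis=-1, keepdims=True) + 1e-8
        # Shade the detail (fine minus smoothed normals) with an amplified,
        # clamped cosine term so shallow engravings remain visible.
        detail = np.sum((normals - base) * light, axis=-1)
        out += np.clip(0.5 + gain * detail, 0.0, 1.0)
    return out / len(scales)
```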

    Revealing the Invisible: On the Extraction of Latent Information from Generalized Image Data

    The desire to reveal the invisible in order to explain the world around us has been a source of impetus for technological and scientific progress throughout human history. Many of the phenomena that directly affect us cannot be sufficiently explained based on observations using our primary senses alone. Often this is because their originating cause is too small, too far away, or otherwise obstructed: in other words, it is invisible to us. Without careful observation and experimentation, our models of the world remain inaccurate, and research has to be conducted in order to improve our understanding of even the most basic effects. In this thesis, we present our solutions to three challenging problems in visual computing, where a surprising amount of information is hidden in generalized image data and cannot easily be extracted by human observation or existing methods. We extract the latent information using non-linear and discrete optimization methods based on physically motivated models and computer graphics methodology, such as ray tracing, real-time transient rendering, and image-based rendering.

    Investigation and Validation of Imaging Techniques for Mitral Valve Disease Diagnosis and Intervention

    Mitral Valve Disease (MVD) describes a variety of pathologies that result in regurgitation of blood during the systolic phase of the cardiac cycle. Decisions in valvular disease management rely heavily on non-invasive imaging. Transesophageal echocardiography (TEE) is widely recognized as the key evaluation technique, where backflow of high-velocity blood can be visualized under Doppler. In most cases, TEE imaging is adequate for identifying mitral valve pathology, though the modality is often limited by signal dropout, artifacts, and a restricted field of view. Quantitative analysis is an integral part of the overall assessment of valve morphology and gives objective evidence both for classification and for guiding intervention for regurgitation. In addition, patient-specific models derived from diagnostic TEE images allow clinicians to gain insight into uniquely intricate anatomy prior to surgery. However, the heavy reliance on TEE segmentation for diagnosis and modelling necessitates an evaluation of the accuracy of this oft-used mitral valve imaging modality. Dynamic cardiac 4D Computed Tomography (4D-CT) is emerging as a valuable tool for diagnosis, quantification, and assessment of cardiac diseases. This modality has the potential to provide a high-quality rendering of the mitral valve and subvalvular apparatus, giving a more complete picture of the underlying morphology. However, application of dynamic CT to mitral valve imaging is especially challenging due to the large and rapid motion of the valve leaflets. It is therefore necessary to investigate the accuracy and precision with which dynamic CT captures mitral valve motion throughout the cardiac cycle. To do this, we design and construct a silicone and bovine quasi-static mitral valve phantom which can simulate a range of ECG-gated heart rates and reproduce physiologic valve motion over the cardiac cycle. In this study, we found that dynamic CT accurately captures the underlying valve movement, but with a higher prevalence of image artifacts as leaflet and chordae motion increases at elevated heart rates. In a subsequent study, we acquire simultaneous CT and TEE images of both a silicone mitral valve phantom and an iodine-stained bovine mitral valve. We propose a pipeline that uses CT as the ground truth to study the relationship between TEE intensities and the underlying valve morphology. Preliminary results demonstrate that, with an optimized threshold selection based solely on TEE pixel intensities, only 40% of pixels are correctly classified as part of the valve. In addition, we show that emphasizing the centre-line rather than the boundaries of high-intensity TEE image regions provides a better representation and segmentation of the valve morphology. This work has the potential to inform and augment the use of TEE for diagnosis and modelling of the mitral valve in the clinical workflow for MVD.
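
    The threshold-selection experiment described above could be prototyped along these lines: classify TEE pixels by intensity and score them against a co-registered CT-derived valve mask. Both the array names and the simple per-valve-pixel accuracy metric are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np

def valve_pixel_accuracy(tee, ct_valve_mask, threshold):
    """Fraction of CT-ground-truth valve pixels whose TEE intensity
    exceeds the threshold (i.e. correctly classified as valve)."""
    predicted = tee >= threshold
    hits = np.logical_and(predicted, ct_valve_mask).sum()
    return hits / max(int(ct_valve_mask.sum()), 1)

def best_threshold(tee, ct_valve_mask, candidates=np.linspace(0.1, 0.9, 81)):
    """Pick the intensity threshold that maximises the accuracy above,
    mirroring the 'optimized threshold selection' idea."""
    scores = [valve_pixel_accuracy(tee, ct_valve_mask, t) for t in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]
```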

    A Data-Driven Appearance Model for Human Fatigue

    Humans become visibly tired during physical activity. After a set of squats, jumping jacks, or a walk up a flight of stairs, individuals start to pant, sweat, lose their balance, and flush. Simulating these physiological changes due to exertion and exhaustion on an animated character greatly enhances a motion’s realism. These fatigue factors depend on the mechanical, physical, and biochemical functional states of the human body. The difficulty of simulating fatigue for character animation is due in part to the complex anatomy of the human body. We present a multi-modal capture technique for acquiring synchronized biosignal data and motion capture data to enhance character animation. The fatigue model utilizes an anatomically derived model of the human body that includes a torso, organs, face, and rigged body. This model is then driven by biosignal output. Our animations show the wide range of exhaustion behaviors synthesized from real biological data. We demonstrate the fatigue model by augmenting standard motion capture with exhaustion effects to produce more realistic appearance changes during three exercise examples. We compare the fatigue model with both simple procedural methods and a dense-marker-set capture of exercise motions.
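
    A minimal sketch of one piece of such a pipeline, synchronising a biosignal stream with motion-capture frame times and mapping it to an appearance parameter; the function and variable names are hypothetical and the mapping is only illustrative.

```python
import numpy as np

def resample_to_frames(signal_t, signal_v, frame_t):
    """Resample an irregularly sampled biosignal onto motion-capture frame
    times so both streams share one timeline."""
    return np.interp(frame_t, signal_t, signal_v)

# Hypothetical use: drive a per-frame appearance parameter (e.g. sweat or
# flushing intensity) from a normalised exertion signal.
# sweat_amount = np.clip(resample_to_frames(ecg_t, exertion, frame_t), 0.0, 1.0)
```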

    Multispectral RTI Analysis of Heterogeneous Artworks

    We propose a novel multi-spectral reflectance transformation imaging (MS-RTI) framework for the acquisition and direct analysis of the reflectance behavior of heterogeneous artworks. Starting from free-form acquisitions, we compute per-pixel calibrated multi-spectral appearance profiles, which associate a reflectance value to each sampled light direction and frequency. Visualization, relighting, and feature extraction are performed directly on appearance profile data, applying scattered data interpolation based on Radial Basis Functions to estimate per-pixel reflectance from novel lighting directions. We demonstrate how the proposed solution can convey more insight into the object materials and geometric details than classical multi-light methods that rely on low-frequency analytical model fitting, possibly mixed with a separate handling of high-frequency components, and hence require constraining priors on material behavior. The flexibility of our approach is illustrated on two heterogeneous case studies, a painting and a dark shiny metallic sculpture, that showcase feature extraction, visualization, and analysis of high-frequency properties of artworks using multi-light, multi-spectral (visible, UV, and IR) acquisitions.
    Funding: European Union Horizon 2020 (action H2020-EU.3.6.3, Reflective societies - cultural heritage and European identity), project Scan4Reco, grant number 665091; the DSURF (PRIN 2015) project funded by the Italian Ministry of University and Research; Sardinian Regional Authorities under projects VIGEC and Vis&VideoLa.
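
    The per-pixel relighting step, estimating reflectance for an unsampled light direction from the appearance profile with Radial Basis Functions, could look roughly like this SciPy sketch; the array layout and kernel choice are assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def relight_pixel(light_dirs, reflectances, novel_dir):
    """Estimate one pixel's reflectance under a novel light direction.

    `light_dirs` is an (N, 3) array of unit light directions sampled during
    acquisition; `reflectances` holds the pixel's measured values for one
    spectral band. Scattered-data RBF interpolation fills in the gaps.
    """
    rbf = RBFInterpolator(light_dirs, reflectances, kernel="thin_plate_spline")
    return rbf(np.asarray(novel_dir, dtype=float).reshape(1, -1))[0]
```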
