
    VolumeEVM: A new surface/volume integrated model

    Volume visualization is a very active research area in the field of scientific visualization. The Extreme Vertices Model (EVM) has proven to be a complete intermediate model for visualizing and manipulating volume data using a surface rendering approach. However, integrating the advantages of surface rendering with the superior visual exploration offered by volume rendering would produce a very complete visualization and editing system for volume data. We therefore define an enhanced EVM-based model that incorporates the volumetric information required to achieve a nearly direct volume visualization technique. VolumeEVM maintains the same EVM-based data structure plus a sorted list of density values corresponding to the interior voxels of the EVM-based VOIs. A function relating the interior voxels of the EVM to the set of densities had to be defined. This report presents the definition of this new surface/volume integrated model based on the well-known EVM encoding and proposes implementations of the main software-based direct volume rendering techniques through the proposed model.
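    The pairing between interior voxels and the sorted density list described above can be illustrated with a small sketch. This is a hypothetical stand-in, assuming a lexicographic voxel traversal order; the function name and data layout are illustrative, not the report's actual implementation:

```python
import numpy as np

# Sketch: the EVM encodes the boundary, while interior-voxel densities
# live in a separate list sorted by traversal order. The voxel->density
# function is then a positional pairing between the two (assumption:
# lexicographic (z, y, x) traversal).

def voxel_to_density(interior_voxels, densities):
    """Map each interior voxel (z, y, x) to its density value."""
    # lexsort uses the LAST key as the primary sort key, so pass x, y, z
    order = np.lexsort((interior_voxels[:, 2],
                        interior_voxels[:, 1],
                        interior_voxels[:, 0]))
    mapping = {}
    for rank, idx in enumerate(order):
        mapping[tuple(interior_voxels[idx])] = densities[rank]
    return mapping

voxels = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]])
dens = [0.2, 0.5, 0.9]  # sorted to match the lexicographic voxel order
lookup = voxel_to_density(voxels, dens)
```

    With this pairing, a renderer can fetch the density of any interior voxel in O(1) after the one-time sort.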

    10411 Abstracts Collection -- Computational Video

    From 10.10.2010 to 15.10.2010, the Dagstuhl Seminar 10411 "Computational Video" was held in Schloss Dagstuhl -- Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Ambient point clouds for view interpolation


    Model for volume lighting and modeling

    Direct volume rendering is a commonly used technique in visualization applications. Many of these applications require sophisticated shading models to capture subtle lighting effects and characteristics of volumetric data and materials. For many volumes, homogeneous regions pose problems for typical gradient-based surface shading. Many common objects and natural phenomena exhibit visual qualities that cannot be captured using simple lighting models, or cannot be rendered at interactive rates using more sophisticated methods. We present a simple yet effective interactive shading model that captures volumetric light attenuation effects, incorporating volumetric shadows, an approximation to phase functions, an approximation to forward scattering, and chromatic attenuation that provides the subtle appearance of translucency. We also present a technique for volume displacement or perturbation that allows realistic interactive modeling of high-frequency detail for both real and synthetic volumetric data.
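    The chromatic attenuation idea can be sketched as front-to-back compositing with a per-channel extinction coefficient, so each color channel is attenuated at a different rate. This is a minimal illustration of the general principle, not the paper's exact shading model; `composite_ray` and its parameters are assumptions:

```python
import numpy as np

# Front-to-back compositing along one ray. `samples` is a list of
# (rgb_color, extinction_rgb) pairs at equal step size. Keeping the
# transmittance per channel (rather than scalar) is what produces
# chromatic attenuation and the translucent appearance.

def composite_ray(samples, step=0.1):
    color = np.zeros(3)
    transmittance = np.ones(3)  # per-channel -> chromatic attenuation
    for rgb, sigma in samples:
        alpha = 1.0 - np.exp(-np.asarray(sigma) * step)  # per channel
        color += transmittance * alpha * np.asarray(rgb)
        transmittance *= 1.0 - alpha
    return color, transmittance

# A single dense white sample absorbs almost all light behind it:
c, t = composite_ray([((1.0, 1.0, 1.0), (50.0, 50.0, 50.0))], step=0.1)
```

    A material given a wavelength-dependent extinction (e.g. higher in red than blue) leaves a colored residual transmittance, which is the subtle translucency effect the abstract refers to.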

    Management and display of four-dimensional environmental data sets using McIDAS

    Over the past four years, great strides have been made in the management and display of 4-D meteorological data sets. A survey was conducted of available and planned 4-D meteorological data sources, and the data types were evaluated for their impact on the data management and display system. The data base management requirements generated by the 4-D data display system were analyzed, and the suitability of the existing data base management procedures and file structure was evaluated in light of the new requirements. Where needed, new data base management tools and file procedures were designed and implemented. The quality of the basic 4-D data sets was assured, and interpolation and extrapolation techniques for the 4-D data were investigated. The 4-D data from various sources were combined to make a uniform and consistent data set for display purposes. Data display software was designed to create abstract line-graphic 3-D displays, and realistic shaded 3-D displays were created. Animation routines for these displays were developed in order to produce a dynamic 4-D presentation, and a prototype dynamic color stereo workstation was implemented. A computer functional design specification was produced based on interactive studies and user feedback.

    Exploiting Sparsity in Automotive Radar Object Detection Networks

    Precise perception of the environment is crucial for the secure and reliable functioning of autonomous driving systems, and radar object detection networks are one fundamental part of such systems. CNN-based object detectors have shown good performance in this context, but they require large compute resources. This paper investigates sparse convolutional object detection networks, which combine powerful grid-based detection with low compute cost. We examine radar-specific challenges and propose sparse kernel point pillars (SKPP) and dual voxel point convolutions (DVPC) as remedies for the grid rendering and sparse backbone architectures. We evaluate our SKPP-DPVCN architecture on nuScenes, where it outperforms the baseline by 5.89% and the previous state of the art by 4.19% in Car AP4.0. Moreover, SKPP-DPVCN reduces the average scale error (ASE) by 21.41% over the baseline.
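    The compute saving behind sparse convolution can be illustrated with a toy example: contributions are scattered only from occupied grid cells, so cost scales with the number of occupied cells rather than with grid size. This is a schematic sketch, not the SKPP/DVPC architecture from the paper; the dictionary-based layout is an illustrative simplification:

```python
# Toy sparse 2-D convolution: each occupied cell scatters its value
# through the kernel, so empty cells of the (conceptually huge) radar
# grid are never visited. Dense convolution would cost O(H * W * K);
# this costs O(occupied * K).

def sparse_conv2d(features, kernel):
    # features: dict {(y, x): value}; kernel: dict {(dy, dx): weight}
    out = {}
    for (y, x), v in features.items():
        for (dy, dx), w in kernel.items():
            key = (y + dy, x + dx)
            out[key] = out.get(key, 0.0) + v * w
    return out

feats = {(0, 0): 1.0, (5, 5): 2.0}   # two occupied cells in a sparse grid
kern = {(0, 0): 1.0, (0, 1): 0.5}    # identity tap + right-neighbor tap
out = sparse_conv2d(feats, kern)
```

    Only four output cells are touched here, regardless of how large the surrounding grid is; this is the property that makes sparse backbones attractive for mostly-empty radar grids.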

    Quantifying Membrane Topology at the Nanoscale

    Changes in the shape of cellular membranes are linked with viral replication, Alzheimer's disease, heart disease, and an abundance of other maladies. Some membranous organelles, such as the endoplasmic reticulum and the Golgi, are only 50 nm in diameter. As such, membrane shape changes are conventionally studied with electron microscopy (EM), which preserves cellular ultrastructure and achieves a resolution of 2 nm or better. However, immunolabeling in EM is challenging and often destroys the cell, making it difficult to study interactions between membranes and other proteins. Additionally, cells must be fixed for EM imaging, making it impossible to study mechanisms of disease. To address these problems, this thesis advances nanoscale imaging and analysis of membrane shape changes and their associated proteins using super-resolution single-molecule localization microscopy. This thesis is divided into three parts. In the first, a novel correlative orientation-independent differential interference contrast (OI-DIC) and single-molecule localization microscopy (SMLM) instrument is designed to address challenges with live-cell imaging of membrane nanostructure. SMLM super-resolution fluorescence techniques image with ~20 nm resolution and are compatible with live-cell imaging. However, due to SMLM's slow imaging speeds, most cell movement is under-sampled. OI-DIC images quickly, is gentle enough to be used with living cells, and can image cellular structure without labelling, but is diffraction-limited. Combining SMLM with OI-DIC allows imaging of cellular context that can supplement sparse super-resolution data in real time. The second part of the thesis describes an open-source software package for visualizing and analyzing SMLM data. SMLM imaging yields localization point clouds, which require non-standard visualization and analysis techniques. Existing techniques are described, and necessary new ones are implemented. These tools are designed to interpret data collected from the OI-DIC/SMLM microscope, as well as from other optical setups. Finally, a tool for extracting membrane structure from SMLM point clouds is described. SMLM data are often noisy, containing multiple localizations per fluorophore and many non-specific localizations. SMLM's resolution reveals labelling discontinuities, which exacerbate the sparsity of localizations. It is non-trivial to reconstruct the continuous shape of a membrane from a discrete set of points, and even more difficult in the presence of the noise profile characteristic of most SMLM point clouds. To address this, a surface reconstruction algorithm for extracting continuous surfaces from SMLM data is implemented. This method employs biophysical curvature constraints to improve the accuracy of the surface.
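    The curvature-constrained fitting idea can be illustrated in one dimension: recover a smooth profile from noisy samples by penalizing the discrete second difference, a simple curvature proxy. This is a toy analogue under stated assumptions, not the thesis's 3-D surface algorithm; the function name and penalty weight are illustrative:

```python
import numpy as np

# Fit z to noisy samples y by minimizing ||z - y||^2 + lam * ||D2 z||^2,
# where D2 is the discrete second-difference operator (curvature proxy).
# The closed-form minimizer comes from the normal equations.

def fit_with_curvature_penalty(y, lam=10.0):
    n = len(y)
    d2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        d2[i, i:i + 3] = [1.0, -2.0, 1.0]   # second-difference stencil
    a = np.eye(n) + lam * d2.T @ d2
    return np.linalg.solve(a, np.asarray(y, dtype=float))

noisy = np.array([0.0, 1.2, 1.8, 3.1, 4.0, 4.9])
smooth = fit_with_curvature_penalty(noisy)
```

    The penalty suppresses high-curvature wiggles caused by localization noise while keeping the fit close to the data, which is the same trade-off a biophysically motivated curvature constraint makes on a membrane surface.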

    Ghost on the Shell: An Expressive Representation of General 3D Shapes

    The creation of photorealistic virtual worlds requires the accurate modeling of 3D surface geometry for a wide range of objects. For this, meshes are appealing since they 1) enable fast physics-based rendering with realistic material and lighting, 2) support physical simulation, and 3) are memory-efficient for modern graphics pipelines. Recent work on reconstructing and statistically modeling 3D shape, however, has critiqued meshes as being topologically inflexible. To capture a wide range of object shapes, any 3D representation must be able to model solid, watertight shapes as well as thin, open surfaces. Recent work has focused on the former, and methods for reconstructing open surfaces do not support fast reconstruction with material and lighting or unconditional generative modelling. Inspired by the observation that open surfaces can be seen as islands floating on watertight surfaces, we parameterize open surfaces by defining a manifold signed distance field on watertight templates. With this parameterization, we further develop a grid-based and differentiable representation that parameterizes both watertight and non-watertight meshes of arbitrary topology. Our new representation, called Ghost-on-the-Shell (G-Shell), enables two important applications: differentiable rasterization-based reconstruction from multiview images and generative modelling of non-watertight meshes. We empirically demonstrate that G-Shell achieves state-of-the-art performance on non-watertight mesh reconstruction and generation tasks, while also performing effectively for watertight meshes. Comment: Technical Report (26 pages, 16 figures). Project page: https://gshell3d.github.io/
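    The islands-on-a-watertight-template idea can be sketched very simply: assign each template vertex a scalar field value and keep only the faces where the field is non-positive. This is a hedged simplification (the actual G-Shell representation is grid-based and differentiable); the all-vertices-inside rule and the names here are assumptions for illustration:

```python
import numpy as np

# An open surface carved from a watertight template: a per-vertex
# scalar (standing in for the manifold signed distance field) selects
# which faces belong to the visible "island"; the rest are ghosted.

def extract_open_surface(faces, vertex_field):
    field = np.asarray(vertex_field)
    keep = np.all(field[np.asarray(faces)] <= 0.0, axis=1)
    return [f for f, k in zip(faces, keep) if k]

# Tiny template: two triangles sharing an edge. One lies inside the
# island (all vertex values <= 0); the other touches a vertex with a
# positive value and is dropped.
faces = [(0, 1, 2), (1, 2, 3)]
field = [-1.0, -0.5, -0.2, 0.7]
island = extract_open_surface(faces, field)
```

    Because the field lives on a watertight template, the same machinery handles fully watertight meshes (field non-positive everywhere) and open ones, which is the unification the abstract describes.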

    Marine Heritage Monitoring with High Resolution Survey Tools: ScapaMAP 2001-2006

    Archaeologically, marine sites can be just as significant as those on land. Until recently, however, they were not protected in the UK to the same degree, leading to degradation of sites; the difficulty of investigating such sites still makes it problematic and expensive to properly describe, schedule and monitor them. Use of conventional high-resolution survey tools in an archaeological context is changing the economic structure of such investigations, however, and it is now possible to remotely but routinely monitor the state of submerged cultural artifacts. Use of such data to optimize expenditure of expensive and rare assets (e.g., divers and on-bottom dive time) is an added bonus. We present here the results of an investigation into methods for monitoring of marine heritage sites, using the remains of the Imperial German Navy (scuttled 1919) in Scapa Flow, Orkney as a case study. Using a baseline bathymetric survey in 2001 and a repeat bathymetric and volumetric survey in 2006, we illustrate the requirements for such surveys over and above normal hydrographic protocols and outline strategies for effective imaging of large wrecks. Suggested methods for manipulation of such data (including processing and visualization) are outlined, and we draw the distinction between products for scientific investigation and those for outreach and education, which have very different requirements. We then describe the use of backscatter and volumetric acoustic data in the investigation of wrecks, focusing on the extra information to be gained from them that is not evident in the traditional bathymetric DTM models or sounding point-cloud representations of data. Finally, we consider the utility of high-resolution survey as part of an integrated site management policy, with particular reference to the economics of marine heritage monitoring and preservation.

    MM-PCQA: Multi-Modal Learning for No-reference Point Cloud Quality Assessment

    The visual quality of point clouds has been greatly emphasized since ever-increasing 3D vision applications are expected to provide cost-effective and high-quality experiences for users. Throughout the development of point cloud quality assessment (PCQA) methods, visual quality has usually been evaluated using single-modal information, i.e., extracted either from 2D projections or from the 3D point cloud. The 2D projections contain rich texture and semantic information but are highly dependent on viewpoint, while the 3D point clouds are more sensitive to geometry distortions and invariant to viewpoint. Therefore, to leverage the advantages of both the point cloud and projected image modalities, we propose a novel no-reference point cloud quality assessment (NR-PCQA) metric in a multi-modal fashion. Specifically, we split the point clouds into sub-models to represent local geometry distortions such as point shift and down-sampling, and we render the point clouds into 2D image projections for texture feature extraction. The sub-models and projected images are encoded with point-based and image-based neural networks, respectively. Finally, symmetric cross-modal attention is employed to fuse the multi-modal quality-aware information. Experimental results show that our approach outperforms all compared state-of-the-art methods and is far ahead of previous NR-PCQA methods, which highlights its effectiveness. The code is available at https://github.com/zzc-1998/MM-PCQA
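    The symmetric cross-modal attention step can be sketched with plain NumPy: point features attend to image features, image features attend to point features, and the pooled results are concatenated. This is a single-head, weight-free illustration of the tensor shapes only, not the paper's trained network; all names are assumptions:

```python
import numpy as np

# Scaled dot-product attention between two feature sets, applied in
# both directions ("symmetric"), then mean-pooled and concatenated
# into one fused quality descriptor.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(query, context):
    attn = softmax(query @ context.T / np.sqrt(query.shape[1]))
    return attn @ context  # each query row becomes a context mixture

def symmetric_fusion(point_feat, image_feat):
    p2i = cross_attend(point_feat, image_feat)  # points query images
    i2p = cross_attend(image_feat, point_feat)  # images query points
    return np.concatenate([p2i.mean(axis=0), i2p.mean(axis=0)])

rng = np.random.default_rng(0)
point_feat = rng.normal(size=(5, 8))   # 5 sub-model tokens, dim 8
image_feat = rng.normal(size=(4, 8))   # 4 projection tokens, dim 8
fused = symmetric_fusion(point_feat, image_feat)
```

    The fused vector would then feed a small regression head that predicts the quality score; the symmetric direction lets each modality reweight the other's tokens.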