61,528 research outputs found

    Visualization-specific compression of large volume data


    A predictive approach for a real-time remote visualization of large meshes

    Remote access to large meshes has been the subject of study for several years. In this paper we propose a contribution to the problem of remote mesh viewing, working with triangular meshes. After a review of existing remote-viewing methods, we propose a visualization approach based on a client-server architecture in which almost all operations are performed on the server. Our approach includes three main steps. The first step partitions the original mesh, generating several fragments small enough to fit within the assumed Transmission Control Protocol (TCP) window size of the network. The second step, called pre-simplification of the partitioned mesh, generates simplified models of the fragments at different levels of detail, which accelerates the visualization process when a client (also called a remote user) requests a specific area of interest. The final step performs the actual visualization of the area that interests the client, who can view the area of interest more accurately and the out-of-context areas less accurately. In this step, the object must be reconstructed with the connectivity between fragments taken into account before a fragment is simplified. Pestiv-3D project
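
    The abstract above outlines a pipeline: partition the mesh to fit the network's TCP window, pre-simplify each fragment at several levels of detail, then serve the client's area of interest. The sketch below is only an illustration of that idea under stated assumptions; the fragment budget, the per-triangle byte estimate, and the decimation-style pre-simplification are placeholders, not the authors' method.

        # Minimal sketch (not the paper's implementation): partition a triangular mesh
        # into fragments whose serialized size stays under an assumed TCP-window budget,
        # then pre-simplify each fragment at a few levels of detail.
        # All names (Fragment, WINDOW_BYTES, pre_simplify) are illustrative assumptions.

        from dataclasses import dataclass, field

        WINDOW_BYTES = 64 * 1024          # assumed TCP window budget per fragment
        BYTES_PER_TRIANGLE = 3 * 3 * 4    # 3 vertices x 3 float32 coordinates

        @dataclass
        class Fragment:
            triangles: list = field(default_factory=list)   # list of 3x3 coordinate tuples
            lods: dict = field(default_factory=dict)        # level -> simplified triangle list

        def partition(triangles, window_bytes=WINDOW_BYTES):
            """Greedy split: add triangles until the fragment would exceed the budget."""
            per_fragment = max(1, window_bytes // BYTES_PER_TRIANGLE)
            return [Fragment(triangles[i:i + per_fragment])
                    for i in range(0, len(triangles), per_fragment)]

        def pre_simplify(fragment, levels=(1, 2, 4)):
            """Placeholder simplification: keep every n-th triangle per level.
            A real pipeline would use a simplification that preserves fragment-boundary
            connectivity, as the abstract's final step requires."""
            for n in levels:
                fragment.lods[n] = fragment.triangles[::n]

        if __name__ == "__main__":
            # Toy mesh: 10,000 degenerate triangles, just to exercise the pipeline.
            mesh = [((i, 0, 0), (i, 1, 0), (i, 0, 1)) for i in range(10_000)]
            fragments = partition(mesh)
            for f in fragments:
                pre_simplify(f)
            print(len(fragments), "fragments,", len(fragments[0].lods), "LODs each")

    A production implementation would replace the every-n-th-triangle placeholder with a connectivity-preserving simplification such as edge collapse, so that neighbouring fragments still stitch together after reconstruction.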

    Adaptive transfer functions: improved multiresolution visualization of medical models

    The final publication is available at Springer via http://dx.doi.org/10.1007/s00371-016-1253-9. Medical datasets are continuously increasing in size. Although larger models may be available for certain research purposes, in common clinical practice the models are usually up to 512x512x2000 voxels. These resolutions exceed the capabilities of conventional GPUs, the ones usually found in medical doctors' desktop PCs. Commercial solutions typically reduce the data by downsampling the dataset iteratively until it fits the available target specifications. The data loss reduces visualization quality, and this is not commonly compensated by other actions that might alleviate its effects. In this paper, we propose adaptive transfer functions, an algorithm that improves the transfer function in downsampled multiresolution models so that the quality of renderings is greatly improved. The technique is simple and lightweight, and it is suitable not only for visualizing huge models that would not fit in a GPU, but also for rendering not-so-large models on mobile GPUs, which are less capable than their desktop counterparts. Moreover, it can also be used to accelerate rendering frame rates by using lower levels of the multiresolution hierarchy while still maintaining high-quality results in a focus-and-context approach. We also present an evaluation of these results based on perceptual metrics.
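
    The abstract's premise is that downsampling a volume and then classifying it with the original transfer function misrepresents blended voxels. The toy sketch below only illustrates that order-of-operations effect (classify-then-average versus average-then-classify); it is not the adaptive-transfer-function algorithm from the paper, and the volume, transfer function, and downsampling factor are invented for the example.

        # Hedged sketch (not the paper's algorithm): when a volume is downsampled by
        # block averaging, applying the original transfer function (TF) to the averaged
        # values washes out thin, high-value structures. Averaging the classified values
        # instead preserves their contribution. Shapes and names are assumptions.

        import numpy as np

        def downsample(volume, factor=2):
            """Block-average downsampling of a 3D scalar volume."""
            z, y, x = (s // factor * factor for s in volume.shape)
            v = volume[:z, :y, :x].reshape(z // factor, factor,
                                           y // factor, factor,
                                           x // factor, factor)
            return v.mean(axis=(1, 3, 5))

        def naive_classification(volume, tf, factor=2):
            """Downsample first, then classify: TF(mean(block))."""
            return tf(downsample(volume, factor))

        def adapted_classification(volume, tf, factor=2):
            """Classify first, then downsample: mean(TF(block))."""
            return downsample(tf(volume), factor)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            vol = rng.random((64, 64, 64)).astype(np.float32)
            tf = lambda v: (v > 0.9).astype(np.float32)   # toy opacity TF: highlight bright voxels
            print("naive mean opacity:  ", naive_classification(vol, tf).mean())
            print("adapted mean opacity:", adapted_classification(vol, tf).mean())

    Running the toy example shows the naive path losing nearly all opacity from the bright voxels, which is the kind of quality loss an adapted transfer function aims to compensate at the coarser resolution levels.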

    Prioritized Data Compression using Wavelets

    The volume of data and the velocity with which it is being generated by computational experiments on high performance computing (HPC) systems is quickly outpacing our ability to effectively store this information in its full fidelity. Therefore, it is critically important to identify and study compression methodologies that retain as much information as possible, particularly in the most salient regions of the simulation space. In this paper, we cast this in terms of a general decision-theoretic problem and discuss a wavelet-based compression strategy for its solution. We provide a heuristic argument as justification and illustrate our methodology on several examples. Finally, we discuss how our proposed methodology may be utilized in an HPC environment on large-scale computational experiments.
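
    As a rough illustration of prioritized wavelet compression, the sketch below applies one level of a Haar transform to a 1D signal and keeps a larger fraction of detail coefficients inside a "salient" region than outside it. The priority mask, keep ratios, and test signal are assumptions for the example, not the paper's decision-theoretic formulation.

        # Minimal sketch (assumptions, not the paper's method): one-level Haar wavelet
        # transform plus prioritized thresholding that retains more detail coefficients
        # in a salient region than elsewhere.

        import numpy as np

        def haar_forward(signal):
            """One-level orthonormal Haar transform: approximation and detail bands."""
            even, odd = signal[0::2], signal[1::2]
            return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

        def haar_inverse(approx, detail):
            out = np.empty(approx.size * 2)
            out[0::2] = (approx + detail) / np.sqrt(2)
            out[1::2] = (approx - detail) / np.sqrt(2)
            return out

        def prioritized_threshold(detail, salient_mask, keep_salient=0.9, keep_rest=0.1):
            """Zero out small detail coefficients, retaining more where priority is high."""
            kept = np.zeros_like(detail)
            for mask, ratio in ((salient_mask, keep_salient), (~salient_mask, keep_rest)):
                idx = np.where(mask)[0]
                if idx.size == 0:
                    continue
                n_keep = max(1, int(ratio * idx.size))
                order = idx[np.argsort(np.abs(detail[idx]))[::-1][:n_keep]]
                kept[order] = detail[order]
            return kept

        if __name__ == "__main__":
            x = np.linspace(0, 8 * np.pi, 1024)
            signal = np.sin(x) + 0.1 * np.random.default_rng(1).standard_normal(x.size)
            approx, detail = haar_forward(signal)
            salient = x[0::2] > 4 * np.pi            # pretend the right half is salient
            recon = haar_inverse(approx, prioritized_threshold(detail, salient))
            err = signal - recon
            print("RMSE salient half:", np.sqrt(np.mean(err[x > 4 * np.pi] ** 2)))
            print("RMSE other half:  ", np.sqrt(np.mean(err[x <= 4 * np.pi] ** 2)))

    The reconstruction error is markedly lower in the prioritized region, which is the qualitative behaviour a salience-aware compression scheme targets; a full scheme would use multiple decomposition levels and an explicit loss model to choose the keep ratios.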

    Non uniformity: structural strategy for optimizing functionality in skeletal ligaments

    Ligaments serve as compliant connectors between hard tissues. In that role, they function under various load regimes and directions. However, the 3D structure of ligaments is still considered uniform. The periodontal ligament (PDL) connects the tooth to the bone and, like other ligaments, sustains different types of loads in various directions. Using the PDL as a model, and employing a fabricated motorized set-up in a microCT instrument, automated morphological segmentation methods, and second-harmonic-generation imaging, we demonstrate that the fibrous network structure within the PDL is not uniform, even before the tooth becomes functional. We find that areas sustaining compressive loads are pre-structured with sparse collagenous networks and large blood vessels, whereas other areas contain dense collagen networks with few blood vessels. Therefore, the PDL develops as a non-uniform structure, with an architecture designed to sustain specific types of load in different areas. Based on these findings, we propose that ligaments in general should be regarded as non-uniform entities structured for optimal functioning under variable load regimes.
