21 research outputs found

    A

    This paper describes the concept of A-space, the space in which visualization algorithms reside. Every visualization algorithm is a unique point in A-space, and integrated visualizations can be interpreted as interpolations between known algorithms. The void between algorithms can be seen as a visualization opportunity: a new point in A-space can be reconstructed there, yielding new integrated visualizations.
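As a toy illustration of interpolating between two points in A-space, the sketch below blends the outputs of two ray projection algorithms, maximum- and average-intensity projection. Interpolating algorithm outputs is a simplification of the paper's idea (A-space interpolates the algorithms themselves), and all names here are hypothetical.

```python
import numpy as np

def mip(ray_samples):
    # Maximum intensity projection: one known point in A-space.
    return ray_samples.max()

def aip(ray_samples):
    # Average intensity projection: another known point.
    return ray_samples.mean()

def integrated_projection(ray_samples, t):
    # A point on the segment between AIP (t=0) and MIP (t=1):
    # intermediate t values reconstruct new, integrated projections
    # from the void between the two known algorithms.
    return (1.0 - t) * aip(ray_samples) + t * mip(ray_samples)

ray = np.array([0.25, 0.75, 0.5, 0.5])
print(integrated_projection(ray, 0.0))  # 0.5  (pure AIP)
print(integrated_projection(ray, 1.0))  # 0.75 (pure MIP)
```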

    Similarity-based Exploded Views

    Exploded views are often used in illustration to overcome occlusion when depicting complex structures. In this paper, we propose a volume visualization technique inspired by exploded views that partitions the volume into a number of parallel slabs and shows them apart from each other. The thickness of the slabs is driven by the similarity between partitions. We use an information-theoretic technique to generate the exploded views. First, the algorithm identifies the viewpoint from which the amount of structural information is highest. Then, the partition of the volume into the most informative slabs for exploding is obtained using two complementary similarity-based strategies. The number of slabs and the similarity parameter are freely adjustable by the user.
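The partition step can be sketched with a greedy rule: walk through the slices along the view direction and start a new slab whenever consecutive slices become too dissimilar. The similarity measure and the greedy strategy below are simplified stand-ins for the paper's information-theoretic approach, and all names are hypothetical.

```python
import numpy as np

def partition_into_slabs(slices, similarity, threshold):
    # Greedy slab partition: start a new slab whenever the similarity
    # between consecutive slices drops below `threshold`.
    slabs = [[0]]
    for i in range(1, len(slices)):
        if similarity(slices[i - 1], slices[i]) >= threshold:
            slabs[-1].append(i)   # similar enough: extend current slab
        else:
            slabs.append([i])     # dissimilar: explode here
    return slabs

def toy_similarity(a, b):
    # Toy measure: 1 minus the mean absolute difference of
    # normalized slices, in [0, 1] for values in [0, 1].
    return 1.0 - np.abs(a - b).mean()

vol = [np.zeros((4, 4)), np.zeros((4, 4)), np.ones((4, 4)), np.ones((4, 4))]
print(partition_into_slabs(vol, toy_similarity, 0.5))
# [[0, 1], [2, 3]]
```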

    Towards Advanced Interactive Visualization for Virtual Atlases

    Under embargo until: 2020-07-24

    An atlas is generally defined as a bound collection of tables, charts, or illustrations describing a phenomenon. In an anatomical atlas, for example, a collection of representative illustrations and text describes anatomy for the purpose of communicating anatomical knowledge. The atlas serves as a reference frame for comparing and integrating data from different sources by spatially or semantically relating collections of drawings, imaging data, and/or text. In the field of medical image processing, atlas information is often constructed from a collection of regions of interest, which are based on medical images annotated by domain experts. Such an atlas may be employed, for example, for the automatic segmentation of medical imaging data. The combination of interactive visualization techniques with atlas information opens up new possibilities for content creation, curation, and navigation in virtual atlases. With interactive visualization of atlas information, students are able to inspect and explore anatomical atlases in ways that were not possible with the traditional book format, such as viewing the illustrations from other viewpoints. With advanced interaction techniques, it becomes possible to query the data that forms the basis of the atlas, empowering researchers to access a wealth of information in new ways. So far, atlas-based visualization has been employed mainly for medical education and biological research. In this survey, we provide an overview of current digital biomedical atlas tasks and applications and summarize relevant visualization techniques. We discuss recent approaches for providing next-generation visual interfaces to navigate atlas data that go beyond common text-based search and hierarchical lists. Finally, we reflect on open challenges and opportunities for the next steps in interactive atlas visualization.

    Slice and Dice: A Physicalization Workflow for Anatomical Edutainment

    During the last decades, anatomy has become an interesting topic in education, even for laypeople and schoolchildren. As medical imaging techniques become increasingly sophisticated, virtual anatomical education applications have emerged. Still, physical anatomical models are often preferred, as they facilitate the 3D localization of anatomical structures. Recently, data physicalizations (i.e., physical visualizations) have proven to be effective and engaging, sometimes even more so than their virtual counterparts. So far, medical data physicalizations have involved mainly 3D printing, which is still expensive and cumbersome. We investigate alternative forms of physicalization that use readily available technologies (home printers) and inexpensive materials (paper or semi-transparent films) to generate crafts for anatomical edutainment. To the best of our knowledge, this is the first computer-generated crafting approach within an anatomical edutainment context. Our approach follows a cost-effective, simple, and easy-to-employ workflow, resulting in assemblable data sculptures (i.e., semi-transparent sliceforms). It primarily supports volumetric data (such as CT or MRI), but mesh data can also be imported. An octree slices the imported volume, and an optimization step simplifies the slice configuration, proposing the optimal order for easy assembly. A packing algorithm places the resulting slices, with their labels, annotations, and assembly instructions, on paper or transparent film of a user-selected size, to be printed, assembled into a sliceform, and explored. We conducted two user studies to assess our approach, demonstrating that it is an initial positive step towards the successful creation of interactive and engaging anatomical physicalizations.
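A minimal sketch of the workflow's slicing and packing stages, assuming a regular slicing step in place of the paper's octree and a naive grid packer in place of its optimized one; all function names and parameters are hypothetical.

```python
import numpy as np

def sliceform_slices(volume, step):
    # Extract two orthogonal families of cross-sections, as in an
    # interlocking sliceform; `step` loosely plays the role of the
    # octree-driven simplification level.
    xs = [volume[i, :, :] for i in range(0, volume.shape[0], step)]
    ys = [volume[:, j, :] for j in range(0, volume.shape[1], step)]
    return xs, ys

def pack_on_pages(n_slices, slice_w, slice_h, page_w, page_h):
    # Naive grid packing: how many printed pages the slices need.
    per_row = max(1, page_w // slice_w)
    per_col = max(1, page_h // slice_h)
    per_page = per_row * per_col
    return -(-n_slices // per_page)  # ceiling division

vol = np.zeros((16, 16, 16))
xs, ys = sliceform_slices(vol, 4)
print(len(xs) + len(ys))                   # 8
print(pack_on_pages(8, 60, 60, 210, 297))  # 1 (fits on one A4-sized page, mm units)
```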

    Structural focus+context rendering of multiclassified volume data

    We present an F+C volume rendering system aimed at outlining the structural relationships between the different classification criteria of a multiclassified voxel model. We cluster the voxel model into subsets of voxels sharing the same classification criteria and construct an auxiliary voxel model that stores, for each voxel, an identifier of its associated cluster. We represent the logical structure of the model as a directed graph whose nodes are the classification criteria and whose edges are the inclusion relationships, and we define a mapping between graph nodes and clusters. The rendering process consists of two steps. First, given a user query defined as a boolean expression of classification criteria, a parser computes a set of transfer functions on the cluster domain according to structural F+C rules. Then, we simultaneously render the original voxel model and the labelled one using multimodal 3D texture mapping, such that the fragment shader applies structural F+C shading through the computed transfer functions. The user interface of our system, based on Tulip, provides visual feedback on the structure and the selection. We demonstrate the utility of our approach on several datasets.
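The query step can be sketched as follows: each cluster carries a set of classification criteria, and a boolean query selects the focus clusters, which receive a high opacity while the rest become context. A Python predicate stands in for the paper's boolean-expression parser, and the opacity values and criteria names are illustrative assumptions.

```python
# Assumed focus-vs-context opacities for the sketch.
FOCUS_OPACITY, CONTEXT_OPACITY = 0.9, 0.05

# Each cluster id maps to the set of criteria its voxels share.
clusters = {
    1: {"bone", "left"},
    2: {"bone", "right"},
    3: {"vessel", "left"},
}

def opacities(clusters, query):
    # Map each cluster id to an opacity according to F+C rules:
    # clusters matching the query are focus, the rest are context.
    return {
        cid: (FOCUS_OPACITY if query(criteria) else CONTEXT_OPACITY)
        for cid, criteria in clusters.items()
    }

# Query: "bone AND left"
result = opacities(clusters, lambda c: "bone" in c and "left" in c)
print(result)  # {1: 0.9, 2: 0.05, 3: 0.05}
```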

    Augmented Reality for Construction Site Monitoring and Documentation (Proceedings of the IEEE, Special Issue on Applications of Augmented Reality Environments)

    Augmented Reality allows for an on-site presentation of information that is registered to the physical environment. Applications from civil engineering, which require users to process complex information, are among those that can benefit particularly from such a presentation. In this paper, we describe how to use Augmented Reality (AR) to support the monitoring and documentation of construction site progress. For these tasks, the responsible staff usually requires fast and comprehensible access to progress information to enable comparison of the as-built status with the as-planned data. Instead of tediously searching for related information and mapping it onto the actual construction site environment, our AR system provides access to information right where it is needed, by superimposing progress as well as as-planned information onto the user's view of the physical environment. To this end, we present an approach that uses aerial 3D reconstruction to automatically capture progress information, together with a mobile AR client for on-site visualization. We describe in detail how to capture 3D data, how to register the AR system within the physical outdoor environment, how to visualize progress information comprehensibly in an AR overlay, and how to interact with this kind of information. With such an AR system, we are able to provide an overview of the possibilities and future applications of AR in the construction industry.

    Feature-driven Volume Visualization of Medical Imaging Data

    Direct volume rendering (DVR) is a volume visualization technique that has proven to be a very powerful tool in many scientific visualization domains. Diagnostic medical imaging is one such domain, in which DVR provides new capabilities for the analysis of complex cases and improves the efficiency of image interpretation workflows. However, the full potential of DVR in the medical domain has not yet been realized. A major obstacle to a better integration of DVR in the medical domain is the time-consuming process of optimizing the rendering parameters needed to generate diagnostically relevant visualizations in which the important features hidden in image volumes are clearly displayed, such as the shape and spatial localization of tumors, their relationships with adjacent structures, and temporal changes in the tumors. In current workflows, clinicians must manually specify the transfer function (TF), viewpoint (camera), clipping planes, and other visual parameters. Another obstacle to the adoption of DVR in the medical domain is the ever-increasing volume of imaging data. The advancement of image acquisition techniques has led to a rapid expansion in the size of the data, in the form of higher resolutions, temporal acquisitions to track treatment responses over time, and an increase in the number of imaging modalities used for a single procedure. Manual specification of the rendering parameters under these circumstances is very challenging. This thesis proposes a set of innovative methods that visualize important features in multi-dimensional and multi-modality medical images by automatically or semi-automatically optimizing the rendering parameters.
Our methods enable the visualizations necessary for the diagnostic procedure: a 2D slice of interest (SOI) can be augmented with 3D anatomical contextual information to provide accurate spatial localization of 2D features in the SOI; the rendering parameters are automatically computed to guarantee the visibility of 3D features; and changes in 3D features can be tracked in temporal data under the constraint of consistent contextual information. We also present a method for the efficient computation of visibility histograms (VHs) using adaptive binning, which allows our optimal DVR to be automated and visualized in real time. We evaluated our methods by producing visualizations for a variety of clinically relevant scenarios and imaging data sets, and examined the computational performance of our methods in these scenarios.
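A visibility histogram records how much each data value actually contributes to the final image under front-to-back alpha compositing. The sketch below computes one for a single ray with uniform bins; the thesis's contribution is the adaptive binning that makes this efficient, which is omitted here, and the opacity transfer function is an arbitrary example.

```python
import numpy as np

def visibility_histogram(samples, opacity_tf, n_bins=8):
    # Accumulate each sample's visibility (its compositing weight under
    # front-to-back alpha blending) into a histogram over data values
    # in [0, 1), using uniform bins.
    hist = np.zeros(n_bins)
    transparency = 1.0                      # accumulated transparency so far
    for v in samples:
        alpha = opacity_tf(v)
        visibility = transparency * alpha   # what this sample contributes
        b = min(int(v * n_bins), n_bins - 1)
        hist[b] += visibility
        transparency *= (1.0 - alpha)
    return hist

ray = np.array([0.2, 0.2, 0.9])             # scalar values, front to back
hist = visibility_histogram(ray, lambda v: 0.5 if v > 0.5 else 0.1)
print(hist.round(3))
```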

    A Study on Transfer Function Design for Direct Volume Rendering

    Doctoral dissertation, Graduate School of Seoul National University, Department of Electrical and Computer Engineering, February 2017. Advisor: Yeong-Gil Shin.

    Although direct volume rendering (DVR) has become a commodity, the design of transfer functions remains a challenge. Transfer functions, which map data values to optical properties (i.e., colors and opacities), highlight features of interest and hide unimportant regions, dramatically impacting the quality of the visualization. Therefore, for the effective rendering of interesting features, the design of transfer functions is a very important and challenging task. Furthermore, the manipulation of these transfer functions is tedious and time-consuming. In this dissertation, we propose a 3D spatial field for accurately identifying and visually distinguishing interesting features, as well as a mechanism for data exploration using a multi-dimensional transfer function. First, we introduce a 3D spatial field for the effective visualization of constricted tubular structures, called a stenosis map, which stores the degree of constriction at each voxel. Constrictions within tubular structures are quantified using newly proposed measures (a line similarity measure and a constriction measure) based on localized structure analysis, and classified with a proposed transfer function that maps the degree of constriction to color and opacity. We show the results of applying our method to the visualization of coronary artery stenoses, and present performance evaluations on twenty-eight clinical datasets, demonstrating the high accuracy and efficacy of the proposed method. Second, we propose a new multi-dimensional transfer function that incorporates texture features calculated from statistically homogeneous regions. This approach employs parallel coordinates to provide an intuitive interface for exploring the new multi-dimensional transfer function space. Three specific ways of using the new transfer function based on parallel coordinates enable the effective exploration of large and complex datasets.
We present a mechanism for data exploration with the new transfer function space, demonstrating the practical efficacy of the proposed method. Through this study on transfer function design for DVR, we propose two useful approaches: the first, which saliently visualizes constrictions within tubular structures and interactively adjusts their visual appearance, delivers a substantial aid in radiologic practice; the second, which classifies objects with an intuitive interface based on parallel coordinates, proves to be a powerful tool for complex data exploration.

Contents: Chapter 1, Introduction (background on volume rendering, computer-aided diagnosis, and parallel coordinates; problem statement; main contribution; organization of the dissertation). Chapter 2, Related Work (transfer functions based on spatial characteristics, opacity modulation techniques, multi-dimensional transfer functions, and manipulation mechanisms for transfer functions; coronary artery stenosis; parallel coordinates). Chapter 3, Volume Visualization of Constricted Tubular Structures (localized structure analysis; stenosis map: detection of tubular structures and stenosis map computation; stenosis-based classification: constriction-encoded volume rendering and opacity modulation based on constriction; GPU implementation; experimental results: clinical data preparation, qualitative and quantitative evaluation, comparison with previous methods, and a parameter study). Chapter 4, Interactive Multi-Dimensional Transfer Function Using Adaptive Block-Based Feature Analysis (extraction of statistical and texture features; multi-dimensional transfer function design using parallel coordinates; experimental results). Chapter 5, Conclusion. Bibliography. Abstract in Korean.
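The stenosis-based classification can be sketched as a transfer function over the stenosis map: the stored degree of constriction modulates each voxel's color and opacity so that constricted regions stand out. The linear opacity boost and the blue-to-red color ramp below are illustrative assumptions, not the dissertation's exact mapping.

```python
import numpy as np

def constriction_opacity(base_alpha, constriction, boost=4.0):
    # Opacity modulation in the spirit of the stenosis map: voxels with
    # a higher degree of constriction (in [0, 1]) are made more opaque
    # so that stenoses stand out. The linear boost rule is an assumption.
    return np.clip(base_alpha * (1.0 + boost * constriction), 0.0, 1.0)

def constriction_color(constriction):
    # Map constriction to a blue-to-red ramp (healthy -> constricted).
    return np.array([constriction, 0.0, 1.0 - constriction])

print(constriction_opacity(0.1, 0.0))  # 0.1 (healthy lumen)
print(constriction_opacity(0.1, 1.0))  # 0.5 (severe stenosis)
```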