
    Slice and Dice: A Physicalization Workflow for Anatomical Edutainment

    Over the last decades, anatomy has become an interesting topic in education---even for laymen and schoolchildren. As medical imaging techniques become increasingly sophisticated, virtual anatomical education applications have emerged. Still, physical anatomical models are often preferred, as they facilitate 3D localization of anatomical structures. Recently, data physicalizations (i.e., physical visualizations) have proven to be effective and engaging---sometimes even more than their virtual counterparts. So far, medical data physicalizations mainly involve 3D printing, which remains expensive and cumbersome. We investigate alternative forms of physicalization that use readily available technologies (home printers) and inexpensive materials (paper or semi-transparent films) to generate crafts for anatomical edutainment. To the best of our knowledge, this is the first computer-generated crafting approach within an anatomical edutainment context. Our approach follows a cost-effective, simple, and easy-to-employ workflow, resulting in assemblable data sculptures (i.e., semi-transparent sliceforms). It primarily supports volumetric data (such as CT or MRI), but mesh data can also be imported. An octree slices the imported volume, and an optimization step simplifies the slice configuration, proposing an optimal order for easy assembly. A packing algorithm places the resulting slices with their labels, annotations, and assembly instructions on paper or transparent film of user-selected size, to be printed, assembled into a sliceform, and explored. We conducted two user studies to assess our approach, demonstrating that it is an initial positive step towards the successful creation of interactive and engaging anatomical physicalizations.
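
    As a rough illustration of the workflow's two core steps, the sketch below extracts evenly spaced orthogonal slices from a volume and shelf-packs them onto a printable page. It is a minimal sketch in Python: the function names, the fixed slice spacing, and the greedy packer are illustrative assumptions, not the authors' octree-based implementation.

```python
# Hypothetical sketch of slice extraction and page packing for a paper
# sliceform; names and parameters are illustrative, not the paper's code.
import numpy as np

def extract_slices(volume: np.ndarray, n_per_axis: int = 4):
    """Take evenly spaced orthogonal slices along two axes so the printed
    sheets can interlock into a sliceform."""
    slices = []
    for axis in (0, 1):  # two interlocking slice directions
        positions = np.linspace(0, volume.shape[axis] - 1, n_per_axis, dtype=int)
        for p in positions:
            slices.append((axis, int(p), np.take(volume, p, axis=axis)))
    return slices

def pack_on_page(slices, page_w=2100, page_h=2970, margin=50):
    """Greedy shelf packing: place slice images left to right, starting a
    new row when the page width is exceeded. Returns (axis, pos, x, y)."""
    placements, x, y, row_h = [], margin, margin, 0
    for axis, pos, img in slices:
        h, w = img.shape
        if x + w + margin > page_w:          # start a new shelf
            x, y, row_h = margin, y + row_h + margin, 0
        placements.append((axis, pos, x, y))
        x, row_h = x + w + margin, max(row_h, h)
    return placements
```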

    ScaleTrotter: Illustrative Visual Travels Across Negative Scales

    We present ScaleTrotter, a conceptual framework for an interactive, multi-scale visualization of biological mesoscale data and, specifically, genome data. ScaleTrotter allows viewers to smoothly transition from the nucleus of a cell to the atomistic composition of the DNA, while bridging several orders of magnitude in scale. The challenges in creating an interactive visualization of genome data are fundamentally different in several ways from those in other domains, such as astronomy, that also require a multi-scale representation. First, genome data has intertwined scale levels---the DNA is an extremely long, connected molecule that manifests itself at all scale levels. Second, elements of the DNA do not disappear as one zooms out---instead, the scale levels at which they are observed group these elements differently. Third, we have detailed information and thus geometry for the entire dataset and for all scale levels, posing a challenge for interactive visual exploration. Finally, the conceptual scale levels for genome data are close in scale space, requiring us to find ways to visually embed a smaller scale into a coarser one. We address these challenges by creating a new multi-scale visualization concept. We use a scale-dependent camera model that controls the visual embedding of the scales into their respective parents, the rendering of a subset of the scale hierarchy, and the location, size, and scope of the view. In traversing the scales, ScaleTrotter roams between 2D and 3D visual representations that are depicted in integrated visuals. We discuss, specifically, how this form of multi-scale visualization follows from the specific characteristics of the genome data and describe its implementation. Finally, we discuss the implications of our work for the general illustrative depiction of multi-scale data.
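
    To make the scale-dependent camera idea concrete, the following sketch maps a continuous scale parameter to a view distance and a pair of adjacent hierarchy levels to render. The scale values, names, and blending scheme are hypothetical illustrations, not ScaleTrotter's actual camera model.

```python
# Hypothetical sketch of a scale-dependent camera: a continuous parameter s
# selects which adjacent levels of the scale hierarchy are drawn and blends
# the finer level into the coarser one. Values are illustrative only.
import math

SCALE_LEVELS = [1e-5, 1e-7, 1e-8, 1e-9, 1e-10]  # metres: nucleus ... atoms

def camera_for_scale(s: float):
    """s in [0, len(SCALE_LEVELS)-1]; fractional values interpolate between
    adjacent levels so the finer level is visually embedded in the coarser."""
    i = min(int(s), len(SCALE_LEVELS) - 2)
    t = s - i
    # geometric interpolation of view distance across orders of magnitude
    dist = math.exp((1 - t) * math.log(SCALE_LEVELS[i])
                    + t * math.log(SCALE_LEVELS[i + 1]))
    return {
        "view_distance": dist,
        "levels_rendered": (i, i + 1),  # subset of the hierarchy to draw
        "embed_blend": t,               # opacity of the embedded finer level
    }
```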

    Feature-assisted interactive geometry reconstruction in 3D point clouds using incremental region growing

    Reconstructing geometric shapes from point clouds is a common task that is often accomplished by experts manually modeling geometries in CAD-capable software. State-of-the-art workflows based on fully automatic geometry extraction are limited by point cloud density and memory constraints, and require pre- and post-processing by the user. In this work, we present a framework for interactive, user-driven, feature-assisted geometry reconstruction from arbitrarily sized point clouds. Based on seeded region-growing point cloud segmentation, the user interactively extracts planar pieces of geometry and utilizes contextual suggestions to point out plane surfaces, normal and tangential directions, and edges and corners. We implement a set of feature-assisted tools for high-precision modeling tasks in architecture and urban surveying scenarios, enabling instant-feedback interactive point cloud manipulation on large-scale data collected from real-world building interiors and facades. We evaluate our results through systematic measurement of the reconstruction accuracy and through interviews with domain experts who deploy our framework in a commercial setting and give both structured and subjective feedback.
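
    The segmentation core of such a workflow can be illustrated with a plain seeded region-growing loop: starting from a user-picked seed, neighbours are accepted while they lie on the seed's plane and share its normal direction. The thresholds and k-d-tree neighbourhood below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of seeded region growing for plane extraction; tolerances
# and the k-nearest-neighbour frontier are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def grow_plane(points, normals, seed, dist_tol=0.02, angle_tol_deg=10.0, k=16):
    tree = cKDTree(points)
    p0, n0 = points[seed], normals[seed]
    cos_tol = np.cos(np.radians(angle_tol_deg))
    region, frontier, visited = {seed}, [seed], {seed}
    while frontier:
        idx = frontier.pop()
        for nb in tree.query(points[idx], k=k)[1]:
            if nb in visited:
                continue
            visited.add(nb)
            # accept neighbours that lie on the seed plane and share its normal
            on_plane = abs(np.dot(points[nb] - p0, n0)) < dist_tol
            aligned = abs(np.dot(normals[nb], n0)) > cos_tol
            if on_plane and aligned:
                region.add(nb)
                frontier.append(nb)
    return np.fromiter(region, dtype=int)  # indices of the extracted plane
```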

    Hybrid visibility compositing and masking for illustrative rendering

    In this paper, we introduce a novel framework for the compositing of interactively rendered 3D layers, tailored to the needs of scientific illustration. Currently, traditional scientific illustrations are produced in a series of composition stages, combining different pictorial elements using 2D digital layering. Our approach extends the layer metaphor into 3D without giving up the advantages of 2D methods. The new compositing approach allows for effects such as selective transparency, occlusion overrides, and soft depth buffering. Furthermore, we show how common manipulation techniques such as masking can be integrated into this concept. These tools behave just as they do in 2D, but their influence extends beyond a single viewpoint. Since the presented approach makes no assumptions about the underlying rendering algorithms, layers can be generated from polygonal geometry, volumetric data, point-based representations, or other sources. Our implementation exploits current graphics hardware and permits real-time interaction and rendering.
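
    The soft depth buffering effect can be sketched as a per-pixel blend that replaces the hard z-test between two rendered layers: where fragment depths nearly coincide, the layers mix smoothly instead of popping. This is a minimal CPU sketch assuming premultiplied-alpha RGBA and per-pixel depth images, not the GPU implementation described above.

```python
# Illustrative sketch of soft depth buffering between two rendered layers.
# Inputs: premultiplied-alpha RGBA images (H, W, 4) and depths (H, W).
import numpy as np

def soft_depth_composite(rgba_a, z_a, rgba_b, z_b, softness=0.01):
    """Composite layer A with layer B using a smooth depth transition."""
    # t = 0 where A is clearly in front, 1 where B is, smooth in between
    t = np.clip((z_a - z_b) / softness * 0.5 + 0.5, 0.0, 1.0)[..., None]
    front = (1 - t) * rgba_a + t * rgba_b   # soft-selected front layer
    back = t * rgba_a + (1 - t) * rgba_b    # soft-selected back layer
    alpha_f = front[..., 3:4]
    return front + back * (1 - alpha_f)     # "over" (premultiplied alpha)
```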

    Residency Octree: A Hybrid Approach for Scalable Web-Based Multi-Volume Rendering

    We present a hybrid multi-volume rendering approach based on a novel Residency Octree that combines the advantages of out-of-core volume rendering using page tables with those of standard octrees. Octree approaches work by performing hierarchical tree traversal. However, in octree volume rendering, tree traversal and the selection of data resolution are intrinsically coupled. This makes fine-grained empty-space skipping costly. Page tables, on the other hand, allow access to any cached brick from any resolution. However, they do not offer a clear and efficient strategy for substituting missing high-resolution data with lower-resolution data. We enable flexible mixed-resolution out-of-core multi-volume rendering by decoupling the cache residency of multi-resolution data from a resolution-independent spatial subdivision determined by the tree. Instead of one-to-one node-to-brick correspondences, each residency octree node is mapped to a set of bricks from different resolution levels. This makes it possible to efficiently and adaptively choose and mix resolutions, adapt sampling rates, and compensate for cache misses. At the same time, residency octrees support fine-grained empty-space skipping, independent of the data subdivision used for caching. Finally, to facilitate collaboration and outreach, and to eliminate local data storage, our implementation is a web-based, pure client-side renderer using WebGPU and WebAssembly. Our method is faster than prior approaches and efficient for many data channels, with a flexible and adaptive choice of data resolution.
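
    The decoupling at the heart of the approach can be sketched as follows: each node records which resolution levels currently have a brick resident in the cache, and a lookup falls back to the finest cached level that is not finer than the one requested. The data structure and the level convention (0 = finest) are illustrative assumptions; the actual renderer runs on the GPU.

```python
# Hypothetical sketch of the residency idea: cache residency per resolution
# level is stored separately from the node's spatial subdivision.
from dataclasses import dataclass, field

@dataclass
class ResidencyNode:
    resident_levels: set = field(default_factory=set)   # cached brick LODs
    children: list = field(default_factory=list)        # spatial subdivision
    empty: bool = False  # enables empty-space skipping regardless of LOD

def best_brick(node: ResidencyNode, wanted_level: int):
    """Return the finest cached level that is not finer than the requested
    one (level 0 = finest), falling back to coarser data on a cache miss
    instead of dropping the sample entirely."""
    candidates = [l for l in node.resident_levels if l >= wanted_level]
    return min(candidates) if candidates else None  # None -> full cache miss
```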

    MuSIC: Multi-Sequential Interactive Co-Registration for Cancer Imaging Data based on Segmentation Masks

    In gynecologic cancer imaging, multiple magnetic resonance imaging (MRI) sequences are acquired per patient to reveal different tissue characteristics. However, after image acquisition, the anatomical structures can be misaligned across the various sequences due to changing patient location in the scanner and organ movements. The co-registration process aims to align the sequences to allow for multi-sequential tumor imaging analysis. However, automatic co-registration often leads to unsatisfying results. To address this problem, we propose the web-based application MuSIC (Multi-Sequential Interactive Co-registration). The approach allows medical experts to co-register multiple sequences simultaneously based on a pre-defined segmentation mask generated for one of the sequences. Our contributions lie in our proposed workflow. First, a shape matching algorithm based on dual annealing searches for the tumor position in each sequence. The user can then interactively adapt the proposed segmentation positions if needed. During this procedure, we include a multi-modal magic lens visualization for visual quality assessment. Then, we register the volumes based on the segmentation mask positions, allowing for both rigid and deformable registration. Finally, we conducted a usability analysis with seven medical and machine learning experts to verify the utility of our approach. Our participants highly appreciate the multi-sequential setup and see themselves using MuSIC in the future. Best Paper Honorable Mention at VCBM 2022.
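
    The shape-matching step can be sketched with SciPy's dual_annealing as the global optimizer, searching for the translation that best aligns the given segmentation mask with a target sequence. The similarity score below (mean target intensity under the shifted mask) is an illustrative placeholder for whatever measure the actual system optimizes.

```python
# Sketch of dual-annealing shape matching: find the per-axis translation of
# a segmentation mask that best fits a target MRI volume. The cost function
# is an illustrative placeholder, not MuSIC's actual similarity measure.
import numpy as np
from scipy.ndimage import shift
from scipy.optimize import dual_annealing

def find_mask_position(mask: np.ndarray, target: np.ndarray, max_shift=20):
    def cost(offset):
        moved = shift(mask.astype(float), offset, order=0)  # translate mask
        m = moved > 0.5
        # negative mean target intensity under the mask: lower is better
        return -target[m].mean() if m.any() else 0.0

    bounds = [(-max_shift, max_shift)] * mask.ndim
    result = dual_annealing(cost, bounds, maxiter=200, seed=0)
    return result.x  # per-axis translation of the mask in voxels
```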

    Visual Analysis in Geology

    Geologists usually work with rocks whose ages range from a few years to billions of years. One of their goals is to reconstruct the geological environments in which the rocks formed and the succession of events that have affected them since their formation, in order to understand the geological evolution of the Earth, to identify regions containing mineral deposits of economic interest, fuel resources, and so on. To achieve these goals, they collect information and samples of rocks and minerals in the field. The latter, in particular, are analyzed in the laboratory with instruments to obtain geochemical data on minerals, for example those of the spinel group. Given the large amount of data generated, scientists are forced to analyze large volumes of information to reach conclusions based on objective data. The geologists' analysis workflow involves the tedious use of several tools and relatively complex, error-prone manual methods to compare different plots and tables. To improve it, the members of this project developed a visual analysis framework for geological data. Very positive feedback from domain experts on this framework, together with its great potential for improvement, motivates this line of work.