
    Interactive Formfinding for Optimised Fabric-Cast Concrete


    Visualization and Correction of Automated Segmentation, Tracking and Lineaging from 5-D Stem Cell Image Sequences

    Results: We present an application that enables the quantitative analysis of multichannel 5-D (x, y, z, t, channel) and large montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data and then allows the user to view and edit the results of the automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach: an inference-based approach utilizing user-provided edits to automatically correct related mistakes executes interactively on the system CPU, while the GPU handles 3-D visualization tasks. Conclusions: By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. There is a pressing need for visualization and analysis tools for 5-D live cell image data. We combine accurate unsupervised processes with an intuitive visualization of the results. Our validation interface allows each data set to be corrected to 100% accuracy, ensuring that downstream data analysis is accurate and verifiable. Our tool is the first to combine all of these aspects, leveraging the synergies obtained by using validation information from stereo visualization to improve the low-level image processing tasks. Comment: BioVis 2014 conference
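The abstract does not specify how frame-to-frame tracks are built before lineaging. As a rough illustration only, a greedy nearest-neighbour linker is one common baseline for the tracking step; the function name, distance threshold, and data layout below are all hypothetical, not taken from the paper:

```python
import numpy as np

def link_frames(prev_centroids, next_centroids, max_dist=5.0):
    """Greedy nearest-neighbour linking of cell centroids between two
    consecutive frames -- a simple baseline for building lineage tracks.
    Returns a list of (prev_index, next_index) matches."""
    matches = []
    used = set()
    for i, p in enumerate(prev_centroids):
        dists = [np.linalg.norm(np.asarray(p) - np.asarray(q))
                 for q in next_centroids]
        for j in np.argsort(dists):
            if int(j) not in used and dists[j] <= max_dist:
                matches.append((i, int(j)))
                used.add(int(j))
                break
    return matches
```

Real pipelines replace this with globally optimal assignment and division handling; the sketch only shows where user edits would plug in to re-link mistaken matches.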

    Three-dimensional medical imaging: Algorithms and computer systems

    This paper presents an introduction to the field of three-dimensional medical imaging. It presents medical imaging terms and concepts, summarizes the basic operations performed in three-dimensional medical imaging, and describes sample algorithms for accomplishing these operations. The paper contains a synopsis of the architectures and algorithms used in eight machines to render three-dimensional medical images, with particular emphasis on their distinctive contributions. It compares the performance of the machines along several dimensions, including image resolution, elapsed time to form an image, imaging algorithms used in the machine, and the degree of parallelism used in the architecture. The paper concludes with general trends for future developments in this field and references on three-dimensional medical imaging.
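One of the basic rendering operations such surveys cover is the maximum intensity projection (MIP), which collapses a 3-D scalar volume to a 2-D image by keeping the brightest voxel along each ray. A minimal sketch (the toy volume is illustrative only):

```python
import numpy as np

def mip(volume, axis=2):
    """Maximum intensity projection: collapse a 3-D scalar volume to a
    2-D image by taking the brightest voxel along each viewing ray."""
    return np.max(volume, axis=axis)

# Toy 2x2x2 volume: the projection keeps the brighter voxel per ray.
vol = np.zeros((2, 2, 2))
vol[0, 0, 1] = 7.0
image = mip(vol)
```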

    Visualising Volumetric Fractals

    Fractal images have for many years been a rich source of exploration by those in computer science who also have an interest in graphics. They often served as a way of testing the performance of new computing hardware and to explore the capabilities of emerging display technologies. While there have been forays by some into 3D geometric fractals, the 3D equivalents of the Mandelbrot set have been largely ignored. This is largely due to the lack of suitable tools for rendering these sets except perhaps as isosurfaces, a rather unsatisfactory and limited representation. The following will illustrate the application of GPU-based raycasting, a now relatively standard approach to volume rendering, to the representation of volumetric fractals. Leveraging existing software that has been designed for general volume visualisation allows the interested 3D fractal explorer to focus on the mathematical generation of the volume data rather than reinventing the entire volume rendering pipeline.
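Generating the volume data itself can be as simple as sampling an escape-time iteration on a grid. As a hedged sketch (the paper's exact formulation is not given here), the power-8 "Mandelbulb" iteration in spherical coordinates is the usual 3D analogue of the Mandelbrot set:

```python
import numpy as np

def mandelbulb_escape(c, power=8, max_iter=8, bailout=2.0):
    """Escape-time iteration z -> z^power + c in spherical coordinates;
    the returned iteration count serves as the scalar volume value."""
    x = y = z = 0.0
    for n in range(max_iter):
        r = (x * x + y * y + z * z) ** 0.5
        if r > bailout:
            return n
        theta = np.arccos(np.clip(z / r, -1.0, 1.0)) if r > 0 else 0.0
        phi = np.arctan2(y, x)
        rp = r ** power
        x = rp * np.sin(theta * power) * np.cos(phi * power) + c[0]
        y = rp * np.sin(theta * power) * np.sin(phi * power) + c[1]
        z = rp * np.cos(theta * power) + c[2]
    return max_iter

# Sample on a small grid; a GPU raycaster would then render this field.
n = 8
axis = np.linspace(-1.2, 1.2, n)
volume = np.array([[[mandelbulb_escape((cx, cy, cz))
                     for cz in axis] for cy in axis] for cx in axis])
```

In practice the grid resolution would be far higher and the sampling would run on the GPU alongside the raycaster.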

    Virtual prototyping with surface reconstruction and freeform geometric modeling using level-set method

    More and more products with complex geometries are being designed and manufactured using computer-aided design (CAD) and rapid prototyping (RP) technologies. Freeform surfaces are a geometrical feature widely used in modern products such as car bodies, airfoils and turbine blades, as well as in aesthetic artifacts. How to efficiently design and generate digital prototypes with freeform surfaces is an important issue in CAD. This paper presents the development of a Virtual Sculpting system and addresses the issues of surface reconstruction from dexel data structures and freeform geometric modeling from a distance field structure using the level-set method. Our virtual sculpting method is based on the metaphor of carving a solid block into a 3D freeform object using a 3D haptic input device integrated with computer visualization. This dissertation presents the results of the study and consists primarily of four papers --Abstract, page iv
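The carving metaphor maps naturally onto signed distance fields: the carved shape is the CSG subtraction of the tool from the block, which on a grid is a pointwise max. A minimal sketch under that standard convention (negative inside, positive outside); the shapes and grid here are illustrative, not the dissertation's actual data structures:

```python
import numpy as np

# Signed distance fields sampled on a grid: negative inside the shape.
n = 32
coords = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(coords, coords, coords, indexing="ij")

d_block = np.maximum.reduce([np.abs(X), np.abs(Y), np.abs(Z)]) - 0.8  # cube
d_tool = np.sqrt((X - 0.8) ** 2 + Y ** 2 + Z ** 2) - 0.5              # sphere

# Carving = CSG subtraction: inside the block AND outside the tool.
d_carved = np.maximum(d_block, -d_tool)
```

A level-set sculpting system updates such a field incrementally as the haptic tool moves, then extracts the zero isosurface for display.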

    Modeling and rendering for development of a virtual bone surgery system

    A virtual bone surgery system is developed to provide the potential of a realistic, safe, and controllable environment for surgical education. It can be used for training in orthopedic surgery, as well as for planning and rehearsal of bone surgery procedures...Using the developed system, the user can perform virtual bone surgery by simultaneously seeing bone material removal through a graphic display device, feeling the force via a haptic device, and hearing the sound of tool-bone interaction --Abstract, page iii

    Doctor of Philosophy

    Visualization and exploration of volumetric datasets has been an active area of research for over two decades. During this period, the volumetric datasets used by domain users have evolved from univariate to multivariate. These volumes are typically explored and classified via transfer function design and visualized using direct volume rendering. To improve classification results and to enable the exploration of multivariate volume datasets, multivariate transfer functions have emerged. In this dissertation, we describe our research on multivariate transfer function design. To improve the classification of univariate volumes, various one-dimensional (1D) and two-dimensional (2D) transfer function spaces have been proposed; however, these methods work on only some datasets. We propose a novel transfer function method that provides better classifications by combining different transfer function spaces. Methods have been proposed for exploring multivariate simulations; however, these approaches are not suitable for complex real-world datasets and may be unintuitive for domain users. To this end, we propose a method based on user-selected samples in the spatial domain to make complex multivariate volume data visualization more accessible for domain users. However, this method still requires users to fine-tune transfer functions in parameter-space transfer function widgets, which may not be familiar to them. We therefore propose GuideME, a novel slice-guided semiautomatic multivariate volume exploration approach. GuideME provides the user an easy-to-use, slice-based interface that suggests feature boundaries and allows the user to select features via click and drag; an optimal transfer function is then generated automatically by optimizing a response function. Throughout the exploration process, the user does not need to interact with the parameter views at all.
Finally, real-world multivariate volume datasets are also usually large, often exceeding GPU memory and even the main memory of standard workstations. We propose a ray-guided, out-of-core, interactive volume rendering and efficient query method to support large and complex multivariate volumes on standard workstations.
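The core loop behind both transfer-function classification and ray-guided rendering is front-to-back compositing along each ray: a transfer function maps scalars to color and opacity, and early ray termination stops sampling once the ray is nearly opaque (which is what lets ray-guided schemes skip data no visible ray reaches). A minimal single-ray sketch, not the dissertation's actual implementation:

```python
import numpy as np

def composite_ray(samples, tf):
    """Front-to-back alpha compositing of scalar samples along one ray.
    `tf` maps a scalar to (r, g, b, alpha). Stops early once the ray
    is nearly opaque (early ray termination)."""
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        r, g, b, a = tf(s)
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:          # early ray termination
            break
    return color, alpha

# Simple 1-D transfer function: red channel and opacity ramp with value.
tf = lambda s: (s, 0.0, 0.0, min(1.0, s))
color, alpha = composite_ray([0.2, 0.5, 1.0], tf)
```

A multivariate transfer function would replace the 1-D `tf` with a lookup over several channels; the compositing step is unchanged.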

    Combining physical constraints with geometric constraint-based modeling for virtual assembly

    The research presented in this dissertation aims to create a virtual assembly environment capable of simulating the constant and subtle interactions (hand-part, part-part) that occur during manual assembly, and providing appropriate feedback to the user in real time. A virtual assembly system called SHARP (System for Haptic Assembly and Realistic Prototyping) is created, which utilizes simulated physical constraints for part placement during assembly. The first approach taken in this research utilized Voxmap Point Shell (VPS) software for implementing collision detection and physics-based modeling in SHARP. A volumetric approach, where complex CAD models were represented by numerous small cubic voxel elements, was used to obtain fast physics update rates (500-1000 Hz). A novel dual-handed haptic interface was developed and integrated into the system, allowing the user to simultaneously manipulate parts with both hands. However, the coarse model approximations used for collision detection and physics-based modeling only allowed assembly when the minimum clearance was at least ∼8-10%. To provide a solution to the low-clearance assembly problem, the second effort focused on importing accurate parametric CAD (B-Rep) models into SHARP. These accurate B-Rep representations are used for collision detection as well as for simulating physical contacts more accurately. A new hybrid approach is presented which combines simulated physical constraints with geometric constraints that can be defined at runtime. Different case studies are used to identify the combination of methods (collision detection, physical constraints, geometric constraints) best able to simulate intricate interactions and environment behavior during manual assembly. An innovative automatic constraint recognition algorithm is created and integrated into SHARP. The feature-based approach taken in the algorithm's design facilitates faster identification of the potential geometric constraints that need to be defined. This approach results in optimized system performance while providing a more natural user experience for assembly.
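The speed of voxel-based collision detection comes from reducing contact queries to integer grid overlap tests. A toy sketch of the idea (occupied-voxel sets and the function name are illustrative, not VPS's actual API):

```python
def voxel_collision(voxels_a, voxels_b, offset):
    """Coarse collision test between two voxelised parts, each given as
    a set of occupied integer voxel coordinates. Part B is translated
    by an integer offset; any shared voxel counts as a contact. This is
    approximate (voxel resolution) but very fast, which is how voxel
    methods reach kHz-rate physics updates."""
    dx, dy, dz = offset
    moved = {(x + dx, y + dy, z + dz) for (x, y, z) in voxels_b}
    return not voxels_a.isdisjoint(moved)

# One occupied voxel each, two voxels apart along x: no contact until
# part B is translated onto part A.
part_a = {(0, 0, 0)}
part_b = {(2, 0, 0)}
```

The ∼8-10% clearance limitation in the abstract is exactly the price of this discretisation, which is why the second phase moved to exact B-Rep contact queries.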

    Occlusion and Slice-Based Volume Rendering Augmentation for PET-CT

    Dual-modality positron emission tomography and computed tomography (PET-CT) depicts pathophysiological function with PET in an anatomical context provided by CT. Three-dimensional volume rendering approaches enable visualization of a two-dimensional slice of interest (SOI) from PET combined with direct volume rendering (DVR) from CT. However, because DVR depicts the whole volume, it may occlude a region of interest, such as a tumor in the SOI. Volume clipping can eliminate this occlusion by cutting away parts of the volume, but it requires intensive user involvement in deciding on the appropriate depth to clip. Transfer functions that are currently available can make the regions of interest visible, but this often requires complex parameter tuning and coupled pre-processing of the data to define the regions. Hence, we propose a new visualization algorithm where a SOI from PET is augmented by volumetric contextual information from a DVR of the counterpart CT so that the obtrusiveness from the CT in the SOI is minimized. Our approach automatically calculates an augmentation depth parameter by considering the occlusion information derived from the voxels of the CT in front of the PET SOI. The depth parameter is then used to generate an opacity weight function that controls the amount of contextual information visible from the DVR. We outline the improvements with our visualization approach compared to other slice-based and our previous approaches. We present the preliminary clinical evaluation of our visualization in a series of PET-CT studies from patients with non-small cell lung cancer
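The augmentation-depth idea above can be sketched very simply: accumulate CT opacity along each ray in front of the PET slice of interest until it crosses a threshold, then fade CT contributions beyond that depth. The threshold, the linear fade, and both function names are assumptions for illustration, not the paper's exact formulation:

```python
def augmentation_depth(ct_opacities, threshold=0.5):
    """Walk the CT samples in front of the PET slice of interest and
    return the sample index at which accumulated (front-to-back)
    opacity first reaches `threshold` -- a proxy for how much CT
    context can be shown before it occludes the slice. Returns
    len(ct_opacities) if the threshold is never reached."""
    accumulated = 0.0
    for i, a in enumerate(ct_opacities):
        accumulated += (1.0 - accumulated) * a
        if accumulated >= threshold:
            return i
    return len(ct_opacities)

def opacity_weight(depth_index, cutoff):
    """Linear weight fading CT samples beyond the computed depth."""
    return max(0.0, 1.0 - depth_index / cutoff) if cutoff > 0 else 0.0
```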

    Three-dimensional anatomical atlas of the human body

    A thesis submitted in partial fulfillment of the requirements for the degree of Doctor in Information Management, specialization in Geographic Information Systems. Anatomical atlases allow mapping of the anatomical structures of the human body. Early versions of these systems consisted of analog representations with informative text and labelled images of the human body. With the advent of computer systems, digital versions emerged and the third dimension was introduced. Consequently, these systems increased in efficiency, allowing more realistic visualizations with improved interactivity. Developing anatomical atlases in geographic information systems (GIS) environments allows the creation of platforms with a high degree of interactivity and with tools to explore and analyze the human body. In this thesis, a prototype for representing the human body is developed. The system includes a 3D GIS topological model, a graphical user interface, and functions to explore and analyze the interior and the surface of the anatomical structures of the human body. The GIS approach relies essentially on the topological characteristics of the model and on the kinds of functions available, which include measurement, identification, selection and analysis. With the incorporation of these functions, the final system is able to replicate the kind of information provided by conventional anatomical atlases while also providing a higher level of functionality, since some of the atlases' limitations are precisely features offered by GIS: interactive capabilities, multilayer management, measurement tools, and an editing mode that allows expanding the information contained in the system, as well as spatial analysis.
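Of the GIS-style functions listed above, measurement is the easiest to illustrate: measuring along a sequence of 3-D points on a structure reduces to summing segment lengths. A minimal sketch (the function name and sample points are hypothetical):

```python
import math

def polyline_length(points):
    """Length of a 3-D polyline: the kind of measurement tool a
    GIS-based anatomical atlas exposes for measuring along a
    structure, given as a sequence of (x, y, z) points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Measuring along three points of a (hypothetical) bone outline.
length = polyline_length([(0, 0, 0), (3, 4, 0), (3, 4, 12)])
```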