17 research outputs found

    High performance computer simulated bronchoscopy with interactive navigation.

    Get PDF
    by Ping-Fu Fung. Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 98-102). Abstract also in Chinese.
    Abstract --- p.iv
    Acknowledgements --- p.vi
    Chapter 1 --- Introduction --- p.1
    Chapter 1.1 --- Medical Visualization System --- p.4
    Chapter 1.1.1 --- Data Acquisition --- p.4
    Chapter 1.1.2 --- Computer-aided Medical Visualization --- p.5
    Chapter 1.1.3 --- Existing Systems --- p.6
    Chapter 1.2 --- Research Goal --- p.8
    Chapter 1.2.1 --- System Architecture --- p.9
    Chapter 1.3 --- Organization of this Thesis --- p.10
    Chapter 2 --- Volume Visualization --- p.11
    Chapter 2.1 --- Sampling Grid and Volume Representation --- p.11
    Chapter 2.2 --- Prior Work in Volume Rendering --- p.13
    Chapter 2.2.1 --- Surface vs. Direct --- p.14
    Chapter 2.2.2 --- Image-order vs. Object-order --- p.18
    Chapter 2.2.3 --- Orthogonal vs. Perspective --- p.22
    Chapter 2.2.4 --- Hardware Acceleration vs. Software Acceleration --- p.23
    Chapter 2.3 --- Chapter Summary --- p.29
    Chapter 3 --- IsoRegion Leaping Technique for Perspective Volume Rendering --- p.30
    Chapter 3.1 --- Compositing Projection in Direct Volume Rendering --- p.31
    Chapter 3.2 --- IsoRegion Leaping Acceleration --- p.34
    Chapter 3.2.1 --- IsoRegion Definition --- p.35
    Chapter 3.2.2 --- IsoRegion Construction --- p.37
    Chapter 3.2.3 --- IsoRegion Step Table --- p.38
    Chapter 3.2.4 --- Ray Traversal Scheme --- p.41
    Chapter 3.3 --- Experiment Result --- p.43
    Chapter 3.4 --- Improvement --- p.47
    Chapter 3.5 --- Chapter Summary --- p.48
    Chapter 4 --- Parallel Volume Rendering by Distributed Processing --- p.50
    Chapter 4.1 --- Multi-platform Loosely-coupled Parallel Environment Shell --- p.51
    Chapter 4.2 --- Distributed Rendering Pipeline (DRP) --- p.55
    Chapter 4.2.1 --- Network Architecture of a Loosely-Coupled System --- p.55
    Chapter 4.2.2 --- Data and Task Partitioning --- p.58
    Chapter 4.2.3 --- Communication Pattern and Analysis --- p.59
    Chapter 4.3 --- Load Balancing --- p.69
    Chapter 4.4 --- Heterogeneous Rendering --- p.72
    Chapter 4.5 --- Chapter Summary --- p.73
    Chapter 5 --- User Interface --- p.74
    Chapter 5.1 --- System Design --- p.75
    Chapter 5.2 --- 3D Pen Input Device --- p.76
    Chapter 5.3 --- Visualization Environment Integration --- p.77
    Chapter 5.4 --- User Interaction: Interactive Navigation --- p.78
    Chapter 5.4.1 --- Camera Model --- p.79
    Chapter 5.4.2 --- Zooming --- p.81
    Chapter 5.4.3 --- Image View --- p.82
    Chapter 5.4.4 --- User Control --- p.83
    Chapter 5.5 --- Chapter Summary --- p.87
    Chapter 6 --- Conclusion --- p.88
    Chapter 6.1 --- Final Summary --- p.88
    Chapter 6.2 --- Deficiency and Improvement --- p.89
    Chapter 6.3 --- Future Research Aspect --- p.91
    Appendix --- p.93
    Chapter A --- Common Error in Pre-multiplying Color and Opacity --- p.94
    Chapter B --- Binary Factorization of the Sample Composition Equation --- p.9
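
    The sample composition equation that Chapter 3.1 and Appendices A-B concern is, in its standard front-to-back form (a reconstruction of the textbook recurrence, not necessarily the thesis's exact notation):

        \hat{C}_i = \hat{C}_{i-1} + (1 - A_{i-1})\,\alpha_i c_i
        A_i = A_{i-1} + (1 - A_{i-1})\,\alpha_i

    where $c_i$ and $\alpha_i$ are the color and opacity of the $i$-th sample along a ray, $A_i$ is the accumulated opacity, and $\hat{C}_i$ accumulates the opacity-weighted (pre-multiplied) color $\alpha_i c_i$. The "common error" of Appendix A presumably concerns mixing pre-multiplied colors $\alpha_i c_i$ with straight colors $c_i$ in this recurrence.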

    Virtualising visualisation: A distributed service based approach to visualisation on the Grid

    Get PDF
    Context: Current visualisation systems are not designed to work with the large quantities of data produced by scientists today; they rely on the ability of a single resource to perform all of the processing and visualisation of the data, which limits the problem sizes they can investigate.
    Objectives: The objectives of this research are to address the issues scientists encounter with current visualisation systems and the deficiencies highlighted in those systems. The research then addresses the question: "How do you design the ideal service-oriented architecture for visualisation that meets the needs of scientists?"
    Method: A new design for a visualisation system based upon a Service Oriented Architecture is proposed to address the issues identified, and the architecture is implemented using Java and web service technology. The implementation of the architecture also realised several case study scenarios as demonstrators.
    Evaluation: Evaluation was performed using case study scenarios of scientific problems, and performance data was collected through experimentation. The scenarios were assessed against the requirements for the architecture, and the performance data against a base case simulating a single-resource implementation.
    Conclusion: The virtualised visualisation architecture shows promise for applications where visualisation can be performed in a highly parallel manner and where the problem can easily be sub-divided into chunks for distributed processing.
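
    The conclusion's "sub-divide into chunks" pattern can be sketched as follows; the function names and the use of local worker processes in place of remote web services are illustrative assumptions, not the thesis's API:

        # Hypothetical sketch: partition a dataset into chunks, process each
        # chunk on a separate worker, then recombine the partial results.
        from multiprocessing import Pool

        import numpy as np

        def render_chunk(chunk):
            # Stand-in for a remote visualisation service invocation.
            return chunk ** 2

        def visualise(data, n_chunks=4):
            chunks = np.array_split(data, n_chunks)      # data partitioning
            with Pool(n_chunks) as pool:                 # one worker per chunk
                parts = pool.map(render_chunk, chunks)   # distributed processing
            return np.concatenate(parts)                 # recombine the results

        if __name__ == "__main__":
            print(visualise(np.arange(16.0)))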

    Doctor of Philosophy

    Get PDF
    Ray tracing offers an efficient rendering algorithm for scientific visualization: it works within common visualization tools, scales with increasingly large geometry counts, and allows accurate, physically based visualization and analysis, enabling enhanced rendering and new visualization techniques. Interactivity is of great importance for data exploration and analysis in order to gain insight into large-scale data. Increasingly large data sizes are pushing the limits of the brute-force rasterization algorithms present in the most widely used visualization software. Interactive ray tracing presents an alternative rendering solution that scales well on multicore shared-memory machines and multinode distributed systems, and that handles growing geometry counts through logarithmic acceleration-structure traversals. Ray tracing within existing tools also provides enhanced rendering options over current implementations, giving users additional insight from better depth cues while also enabling publication-quality rendering and new models of visualization, such as replicating photographic visualization techniques.
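
    The "logarithmic acceleration-structure traversals" mentioned above refer to hierarchies such as bounding volume hierarchies (BVHs). A minimal 1D analogue, purely illustrative and not any particular ray tracer's implementation, is sketched below:

        # Toy 1D "BVH": each node stores bounds over its subtree, so a query
        # can prune whole subtrees whose bounds it misses, visiting O(log n)
        # nodes in a balanced tree. Not a production traversal kernel.
        from dataclasses import dataclass
        from typing import List, Optional, Tuple

        @dataclass
        class Node:
            lo: float                          # subtree bounds
            hi: float
            leaf: Optional[int] = None         # primitive id at a leaf
            left: Optional["Node"] = None
            right: Optional["Node"] = None

        def build(prims: List[Tuple[float, float, int]]) -> Node:
            if len(prims) == 1:
                lo, hi, pid = prims[0]
                return Node(lo, hi, leaf=pid)
            prims = sorted(prims)
            mid = len(prims) // 2
            l, r = build(prims[:mid]), build(prims[mid:])
            return Node(min(l.lo, r.lo), max(l.hi, r.hi), left=l, right=r)

        def hits(node: Optional[Node], x: float, out: List[int]) -> None:
            if node is None or not (node.lo <= x <= node.hi):
                return                         # prune the whole subtree
            if node.leaf is not None:
                out.append(node.leaf)
                return
            hits(node.left, x, out)
            hits(node.right, x, out)

        tree = build([(0, 1, 0), (2, 3, 1), (4, 5, 2), (6, 7, 3)])
        found: List[int] = []
        hits(tree, 4.5, found)                 # found == [2]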

    NERSC 'Visualization Greenbook': Future visualization needs of the DOE computational science community hosted at NERSC

    Full text link

    Characterization of multiphase flows integrating X-ray imaging and virtual reality

    Get PDF
    Multiphase flows are used in a wide variety of industries, from energy production to pharmaceutical manufacturing. However, because of the complexity of the flows and the difficulty of measuring them, it is challenging to characterize the phenomena inside a multiphase flow. To help overcome this challenge, researchers have used numerous types of noninvasive measurement techniques to record the phenomena that occur inside the flow. One technique that has shown much success is X-ray imaging. While capable of high spatial resolution, X-ray imaging generally has poor temporal resolution. This research improves the characterization of multiphase flows in three ways. First, an X-ray image intensifier is modified to use a high-speed camera to push the temporal limits of what is possible with current tube-source X-ray imaging technology. Using this system, sample flows were imaged at 1000 frames per second without a reduction in spatial resolution. Next, the sensitivity of X-ray computed tomography (CT) measurements to changes in acquisition parameters is analyzed. While in theory CT measurements should be stable over a range of acquisition parameters, previous research has indicated otherwise. The analysis of this sensitivity shows that, while raw CT values are strongly affected by changes to acquisition parameters, if proper calibration techniques are used, acquisition parameters do not significantly influence the results for multiphase flow imaging. Finally, two algorithms are analyzed for their suitability to reconstruct an approximate tomographic slice from only two X-ray projections. These algorithms increase the spatial error in the measurement, as compared to traditional CT; however, they allow very high temporal resolutions for 3D imaging. The only limit on the speed of this measurement technique is the image intensifier and camera setup, which was shown to be capable of imaging at a rate of at least 1000 frames per second. While advances in measurement techniques for multiphase flows are one part of improving multiphase flow characterization, the challenge extends beyond measurement techniques. For improved measurement techniques to be useful, the data must be accessible to scientists in a way that maximizes comprehension of the phenomena. To this end, this work also presents a system for using the Microsoft Kinect sensor to provide natural, non-contact interaction with multiphase flow data. Furthermore, this system is constructed so that it is trivial to add natural, non-contact interaction to immersive visualization applications. Therefore, multiple visualization applications can be built that are optimized for specific types of data, yet all leverage the same natural interaction. Finally, the research concludes by proposing a system that integrates the improved X-ray measurements with the Kinect interaction system and a cave automatic virtual environment (CAVE) to present scientists with multiphase flow measurements in an intuitive and inherently three-dimensional manner.
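
    As an illustration of reconstructing a slice from only two projections, the toy scheme below builds a multiplicative estimate from orthogonal row and column sums; this is an assumed stand-in, not one of the two algorithms actually analyzed in the work:

        # Toy sketch: estimate a tomographic slice from two orthogonal X-ray
        # projections (row and column sums) via a simple multiplicative model.
        import numpy as np

        def two_view_estimate(row_proj, col_proj):
            total = row_proj.sum()
            assert np.isclose(total, col_proj.sum()), "projections must agree"
            # The outer product yields a slice whose row and column sums
            # reproduce both measured projections exactly.
            return np.outer(row_proj, col_proj) / total

        phantom = np.zeros((64, 64))
        phantom[20:40, 25:45] = 1.0             # a synthetic "gas bubble"
        est = two_view_estimate(phantom.sum(axis=1), phantom.sum(axis=0))
        print(np.allclose(est.sum(axis=1), phantom.sum(axis=1)))  # True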

    Doctor of Philosophy

    Get PDF
    Dataflow pipeline models are widely used in visualization systems. Despite recent advancements in parallel architecture, most systems still support only a single CPU or a small collection of CPUs, such as an SMP workstation. Even for systems that are specifically tuned towards parallel visualization, their execution models only provide support for data-parallelism while ignoring task-parallelism and pipeline-parallelism. With the recent popularization of machines equipped with multicore CPUs and multiple GPUs, these visualization systems are undoubtedly falling further behind in reaching maximum efficiency. On the other hand, there exist several libraries that can schedule program executions on multiple CPUs and/or multiple GPUs. However, due to the differences between executing a task graph and executing a pipeline, along with their APIs being considerably low-level, it remains a challenge to integrate these run-time libraries into current visualization systems. Thus, there is a need for a redesigned dataflow architecture to fully support and exploit the power of highly parallel machines in large-scale visualization. The new design must be able to schedule executions on heterogeneous platforms while at the same time supporting arbitrarily large datasets through the use of streaming data structures. The primary goal of this dissertation work is to develop a parallel dataflow architecture for streaming large-scale visualizations. The framework includes support for platforms ranging from multicore processors to clusters consisting of thousands of CPUs and GPUs. We achieve this in our system by introducing the notion of Virtual Processing Elements and Task-Oriented Modules, along with a highly customizable scheduler that controls the assignment of tasks to elements dynamically. This creates an intuitive way to maintain multiple CPU/GPU kernels yet still provide coherency and synchronization across module executions. We have implemented these techniques in HyperFlow, which consists of an API with all the basic dataflow constructs described in the dissertation and a distributed run-time library that can be used to deploy those pipelines on multicore, multi-GPU and cluster-based platforms.
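
    A minimal sketch of the scheduling idea described above, with threads standing in for Virtual Processing Elements; the names and structure are assumptions for illustration, not HyperFlow's actual API:

        # Hypothetical sketch: modules become tasks with dependencies, and a
        # scheduler dispatches each wave of ready tasks to a pool of
        # "processing elements" (threads here).
        from concurrent.futures import ThreadPoolExecutor

        def schedule(tasks, deps, n_vpes=4):
            """tasks: {name: callable}; deps: {name: set of prerequisites}."""
            done, results = set(), {}
            with ThreadPoolExecutor(max_workers=n_vpes) as pool:
                while len(done) < len(tasks):
                    ready = [t for t in tasks
                             if t not in done and deps[t] <= done]
                    futures = {t: pool.submit(tasks[t]) for t in ready}
                    for t, f in futures.items():     # collect the wave
                        results[t] = f.result()
                        done.add(t)
            return results

        results = schedule(
            tasks={"read": lambda: "data", "filter": lambda: "filtered",
                   "render": lambda: "image"},
            deps={"read": set(), "filter": {"read"}, "render": {"filter"}},
        )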

    Doctor of Philosophy

    Get PDF
    Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of this data makes human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. However, the extremely anisotropic resolution of the data makes segmentation and tracking across slices difficult. Furthermore, the thickness of the slices can make the membranes of the neurons hard to identify, and structures can change significantly from one section to the next, which makes tracking difficult. This thesis presents a complete method for segmenting many neurons at once in two-dimensional (2D) electron microscopy images and reconstructing and visualizing them in three dimensions (3D). First, we present an advanced method for identifying neuron membranes in 2D, necessary for whole-neuron segmentation, using a machine learning approach. The method uses a series of artificial neural networks (ANNs) in a framework combined with a feature vector composed of image and context intensities sampled over a stencil neighborhood. Several ANNs are applied in series, allowing each ANN to use the classification context provided by the previous network to improve detection accuracy. To improve the membrane detection, we use information from a nonlinear alignment of sequential learned membrane images in a final ANN that improves membrane detection in each section. The final output, the detected membranes, is used to obtain 2D segmentations of all the neurons in an image. We also present a method that constructs 3D neuron representations by formulating the problem of finding paths through sets of sections as an optimal path computation, which applies a cost function to the identification of a cell from one section to the next and solves this optimization problem using Dijkstra's algorithm. This basic formulation accounts for variability or inconsistencies between sections and prioritizes cells based on the evidence of their connectivity. Finally, we present a tool that combines these techniques with a visual user interface that enables users to quickly segment whole neurons in large volumes.
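
    The optimal path computation can be sketched as follows; the cost function (centroid distance) and the per-section candidate expansion are illustrative assumptions, not the thesis's formulation:

        # Hypothetical sketch: candidate cell regions in each section are graph
        # nodes; candidates in adjacent sections are linked with a cost, and a
        # Dijkstra-style best-first search over this layered graph returns the
        # cheapest section-to-section linkage of one neuron.
        import heapq

        def dijkstra_link(sections, cost):
            """sections: list of candidate lists; cost(a, b) -> float."""
            # state: (accumulated cost, section index, candidate, path so far)
            heap = [(0.0, 0, c, (c,)) for c in sections[0]]
            heapq.heapify(heap)
            while heap:
                acc, i, cand, path = heapq.heappop(heap)
                if i == len(sections) - 1:
                    return path, acc           # cheapest complete linkage
                for nxt in sections[i + 1]:
                    heapq.heappush(
                        heap, (acc + cost(cand, nxt), i + 1, nxt, path + (nxt,)))
            return None, float("inf")

        # Toy usage: candidates are 1D centroids; cost is their distance.
        path, c = dijkstra_link([[1.0], [0.8, 5.0], [1.2, 4.9]],
                                lambda a, b: abs(a - b))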

    Doctor of Philosophy

    Get PDF
    Interactive editing and manipulation of digital media is a fundamental component of digital content creation. One medium in particular, digital imagery, has recently seen a surge in the popularity of large or even massive image formats. Unfortunately, current systems and techniques are rarely concerned with scalability or usability for these large images. Moreover, processing massive (or even large) imagery is assumed to be an off-line, automatic process, although many problems associated with these datasets require human intervention for high-quality results. This dissertation details how to design interactive image techniques that scale. In particular, massive imagery is typically constructed as a seamless mosaic of many smaller images. The focus of this work is the creation of new technologies to enable user interaction in the formation of these large mosaics. While an interactive system for all stages of the mosaic creation pipeline is a long-term research goal, this dissertation concentrates on the last phase of the pipeline: the composition of registered images into a seamless composite. The work detailed in this dissertation provides the technologies to fully realize interactive editing in mosaic composition on image collections ranging from the very small to the massive in scale.
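
    The compositing phase can be illustrated with a toy blend of two registered images; real mosaic systems typically solve the seam in the gradient domain, so the linear feather below is only an assumed stand-in for the idea of a smooth transition:

        # Toy sketch: blend two horizontally overlapping, registered images
        # into one mosaic row with a linear feather across the overlap.
        import numpy as np

        def feather_blend(left, right, overlap):
            h, w = left.shape
            out = np.zeros((h, 2 * w - overlap))
            out[:, :w - overlap] = left[:, :w - overlap]     # left-only region
            out[:, w:] = right[:, overlap:]                  # right-only region
            alpha = np.linspace(1.0, 0.0, overlap)           # ramp over seam
            out[:, w - overlap:w] = (alpha * left[:, w - overlap:]
                                     + (1 - alpha) * right[:, :overlap])
            return out

        mosaic = feather_blend(np.ones((4, 8)), np.zeros((4, 8)), overlap=4)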

    Autonomic visualisation.

    Get PDF
    This thesis introduces the concept of autonomic visualisation, where principles of autonomic systems are brought to the field of visualisation infrastructure. Problems in visualisation have a specific set of requirements which are not always met by existing systems. The first half of this thesis explores a specific problem for large-scale visualisation: that of data management. Visualisation algorithms have somewhat different requirements to other external memory problems, because they often require access to all, or a large subset, of the data in a way that is highly dependent on the view. This thesis proposes a knowledge-based approach to pre-fetching in this context, and presents evidence that such an approach yields good performance. The knowledge-based approach is incorporated into a five-layer model, which provides a systematic way of categorising and designing out-of-core, or external memory, systems. This model is demonstrated with two example implementations, one in the local and one in the remote context. The second half explores autonomic visualisation in the more general case. A simulation tool created for the purpose of designing autonomic visualisation infrastructure is presented. This tool, SimEAC, provides a way of facilitating the development of techniques for managing large-scale visualisation systems. The abstract design of the simulation system is presented, along with details of the implementation. The architecture of the simulator is explored, and the system is then evaluated in a number of case studies indicating some of the ways in which it can be used. The simulator provides a framework for experimentation and rapid prototyping of large-scale autonomic systems.
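
    View-dependent, knowledge-based pre-fetching in the sense used above can be sketched as below; the linear camera-motion model and the block layout are assumptions standing in for the thesis's knowledge base:

        # Hypothetical sketch: predict which out-of-core blocks upcoming views
        # will need and warm a fixed-size LRU cache before they are requested.
        from collections import OrderedDict

        class BlockCache:
            def __init__(self, capacity, load):
                self.cache, self.capacity, self.load = OrderedDict(), capacity, load

            def get(self, block_id):
                if block_id not in self.cache:
                    self.cache[block_id] = self.load(block_id)  # miss: fetch
                    if len(self.cache) > self.capacity:
                        self.cache.popitem(last=False)          # evict LRU
                self.cache.move_to_end(block_id)
                return self.cache[block_id]

        def prefetch(cache, view, velocity, horizon=3):
            # "Knowledge": assume the view keeps moving the same way, and warm
            # the cache with the blocks those future views will touch.
            for step in range(1, horizon + 1):
                cache.get(round(view + step * velocity))

        cache = BlockCache(capacity=8, load=lambda b: f"block {b} loaded")
        prefetch(cache, view=10, velocity=1.5)   # warms blocks 12, 13, 14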