
    Three Dimensional Acoustical Imaging Based on Isosurface Technique for Bulk Material

    The paper introduces methods based on the time-resolved technique and the isosurface display technique to obtain two- and three-dimensional (3D) acoustical imaging for scanning acoustic microscopy. The time-resolved technique provides the means to realize two-dimensional (2D) acoustical imaging - A- (O-), B- and C-scans - and a discrete combinatorial 3D image, while the isosurface display technique realizes a 3D image with a continuous distribution in all directions. The paper proposes a transitional square-column model, consisting of echo-signal data extracted from the volume database, to construct the imaging cube and depict an isosurface using the isovalue of internal boundaries in the cube, for the evaluation of internal defects in a bulk specimen of boron nitride. 3D acoustical imaging has the advantage of showing the position, size, appearance, distribution, and tendency of internal structures (voids, inclusions and defects) with complex shapes in non-transparent bulk material. The results show that 3D acoustical visualization presents a richer, more comprehensive and more intuitive picture than 2D imaging for the investigation of micro-sized structures.
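
    As an illustration of the isosurface display step (not the paper's exact transitional square-column model), the sketch below builds a small synthetic volume of echo amplitudes and extracts an isosurface at a chosen isovalue; the volume contents, its size and the use of the skimage marching-cubes routine are assumptions made for this example.

        # Minimal sketch: extracting an isosurface from a volume of echo amplitudes.
        # The volume here is synthetic; a real acoustic microscope would fill it from
        # time-resolved A-scan signals gated over a grid of (x, y) positions.
        import numpy as np
        from skimage.measure import marching_cubes

        # Synthetic 64^3 volume: a Gaussian blob of echo amplitude standing in
        # for a reflective internal structure (void/inclusion) in the bulk.
        z, y, x = np.mgrid[0:64, 0:64, 0:64]
        volume = np.exp(-((x - 32)**2 + (y - 32)**2 + (z - 32)**2) / (2 * 10.0**2))

        isovalue = 0.5  # placed on the boundary between the structure and the bulk
        verts, faces, normals, values = marching_cubes(volume, level=isovalue)
        print(f"isosurface: {len(verts)} vertices, {len(faces)} triangles")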

    Realtime 3d Mapping, Optimization, and Rendering Based on a Depth Sensor

    This thesis provides a method for using a portable scanner to create an optimized 3D map for real-time rendering. It uses a cloud-computing, software-as-a-service architecture in which a portable scanner acquires depth maps, which allows large areas to be mapped, and sends them to a server where a 3D map is created in real time. The thesis discusses the acquisition of the point cloud using an open-source program for 3D mapping with a depth sensor, then covers the triangulation into a mesh using the marching cubes algorithm and the optimization of the mesh to allow real-time rendering. Finally, the mesh is imported and rendered in the Unreal Engine 3 for an interactive and intuitive display.
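
    A hedged sketch of such a depth-fusion and mesh-optimization pipeline is shown below. It assumes the Open3D library and an iterable of (color image, depth image, camera extrinsic) frames; neither is the thesis' actual tool chain, which relies on an open-source mapping program and Unreal Engine 3.

        # Sketch only: fuse depth maps into a TSDF volume, extract a triangle mesh
        # with marching cubes, then decimate it so it can be rendered in real time.
        import open3d as o3d

        def build_mesh(frames, target_triangles=50_000):
            # Scalable TSDF volume: each depth map is fused into a signed-distance field.
            volume = o3d.pipelines.integration.ScalableTSDFVolume(
                voxel_length=0.01,              # 1 cm voxels (assumed)
                sdf_trunc=0.04,                 # truncation distance of the SDF
                color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)
            intrinsic = o3d.camera.PinholeCameraIntrinsic(
                o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

            for color, depth, extrinsic in frames:   # extrinsic: 4x4 world-to-camera
                rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
                    color, depth, convert_rgb_to_intensity=False)
                volume.integrate(rgbd, intrinsic, extrinsic)

            # Marching cubes over the fused volume, then quadric decimation so the
            # mesh is light enough for interactive rendering in a game engine.
            mesh = volume.extract_triangle_mesh()
            mesh.compute_vertex_normals()
            return mesh.simplify_quadric_decimation(
                target_number_of_triangles=target_triangles)

        # mesh = build_mesh(get_depth_frames())   # get_depth_frames() is hypothetical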

    Accurate geometry reconstruction of vascular structures using implicit splines

    3-D visualization of blood vessels from standard medical datasets (e.g. CT or MRI) plays an important role in many clinical situations, including the diagnosis of vessel stenosis, virtual angioscopy, vascular surgery planning and computer-aided vascular surgery. However, unlike other human organs, the vascular system is a very complex network of vessels, which makes its 3-D visualization a very challenging task. Conventional techniques of medical volume data visualization are in general not well suited to the above-mentioned tasks. This problem can be solved by reconstructing vascular geometry. Although various methods have been proposed for reconstructing vascular structures, most of these approaches are model-based and usually too idealized to correctly represent the actual variation exhibited by the cross-sections of a vascular structure. In addition, the underlying shape is usually expressed as polygonal meshes or in parametric forms, which is very inconvenient for representing the ramification of branches. As a result, the reconstructed geometries are not suitable for computer-aided diagnosis and computer-guided minimally invasive vascular surgery. In this research, we develop a set of techniques associated with the geometry reconstruction of vasculatures, including segmentation, modelling, reconstruction, exploration and rendering of vascular structures. The reconstructed geometry not only helps to greatly enhance the visual quality of 3-D vascular structures, but also provides an actual geometric representation of vasculatures, which offers various benefits. The key findings of this research are as follows:
    1. A localized hybrid level-set segmentation method has been developed to extract vascular structures from 3-D medical datasets.
    2. A skeleton-based implicit modelling technique has been proposed and applied to the reconstruction of vasculatures, achieving an accurate geometric reconstruction of vascular structures as implicit surfaces in an analytical form.
    3. An accelerating technique using the modern GPU (Graphics Processing Unit) has been devised and applied to rendering the implicitly represented vasculatures.
    4. The implicitly modelled vasculature has been investigated for the application of virtual angioscopy.
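
    The sketch below illustrates the general idea behind skeleton-based implicit modelling, not the thesis' exact implicit-spline formulation: each centreline sample contributes a radially decaying field, the vessel surface is taken as an isosurface of the summed field, and branching is handled naturally because the fields of all branches simply add. The grid resolution, Gaussian primitive and isovalue are assumptions made for this example.

        import numpy as np
        from skimage.measure import marching_cubes

        def vessel_field(centreline, radii, grid):
            """centreline: (N,3) points, radii: (N,) local radii, grid: (X,Y,Z,3)."""
            field = np.zeros(grid.shape[:3])
            for p, r in zip(centreline, radii):
                d2 = np.sum((grid - p) ** 2, axis=-1)
                field += np.exp(-d2 / (r ** 2))      # Gaussian skeleton primitive
            return field

        # Small example: a straight trunk that ramifies into a second branch.
        xs = np.linspace(0, 31, 32)
        grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
        trunk  = np.stack([np.linspace(4, 16, 20)] + [np.full(20, 16.0)] * 2, axis=1)
        branch = np.stack([np.linspace(16, 28, 20),
                           np.linspace(16, 26, 20),
                           np.full(20, 16.0)], axis=1)
        centreline = np.vstack([trunk, branch])
        field = vessel_field(centreline, np.full(len(centreline), 2.5), grid)
        verts, faces, *_ = marching_cubes(field, level=1.0)   # vessel surface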

    Parallel Mesh Processing

    Current research in computer graphics strives to meet the growing demands of users and produces ever more realistic-looking images. Accordingly, the scenes and the methods used to render them become ever more complex. Such a development is inevitably tied to an increase in the required computing power, since the models that make up a scene may consist of billions of polygons and must be displayed in real time. Realistic image synthesis rests on three pillars: models, materials, and lighting. Today there are several methods for the efficient and realistic approximation of global illumination, and algorithms for creating realistic materials exist as well. There are also methods for rendering models in real time, but these mostly work only for scenes of moderate complexity and fail for very complex scenes. The models form the foundation of a scene; their optimization has a direct impact on the efficiency of the material and lighting methods, so that only an optimized model representation makes real-time display possible. Many of the models used in computer graphics are represented as triangle meshes. The data volume they contain is enormous, in order to capture the richness of detail of the respective objects and to meet the growing demand for realism. Rendering complex models consisting of millions of triangles is a great challenge even for modern graphics cards. It is therefore necessary, especially for real-time simulations, to develop efficient algorithms. Such algorithms should, on the one hand, support visibility culling, level of detail (LOD), out-of-core memory management, and compression; on the other hand, this optimization should run very efficiently so as not to impede the rendering itself. This requires the development of parallel methods that are able to process the enormous flood of data efficiently. The core contribution of this work is a set of novel algorithms and data structures that were designed specifically for efficient parallel data processing and are able to render, as well as model, very complex models and scenes in real time. These algorithms work in two phases: first, in an offline phase, the data structure is built and optimized for parallel processing; in the second phase, the optimized data structure is then used for real-time rendering. A further contribution of this work is an algorithm that procedurally generates a very realistic-looking planet and renders it in real time.
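
    The thesis' own data structures are not reproduced here; as a small illustration of the data-parallel style of mesh processing it targets, the numpy sketch below computes per-vertex normals of a triangle mesh entirely with bulk array operations, the same access pattern a GPU kernel would use. The function name and array layout are assumptions made for this example.

        import numpy as np

        def vertex_normals(vertices, faces):
            """vertices: (V,3) float array, faces: (F,3) int array of vertex indices."""
            v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
            face_n = np.cross(v1 - v0, v2 - v0)          # per-face normals, in bulk
            normals = np.zeros_like(vertices)
            for i in range(3):                           # scatter-add each face normal
                np.add.at(normals, faces[:, i], face_n)  # to its three vertices
            lengths = np.linalg.norm(normals, axis=1, keepdims=True)
            return normals / np.maximum(lengths, 1e-12)

        # Example: a unit square split into two triangles; all normals point along +z.
        quad = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
        print(vertex_normals(quad, np.array([[0, 1, 2], [0, 2, 3]])))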

    Interactive drug-design: using advanced computing to evaluate the induced fit effect

    This thesis describes the efforts made to provide protein flexibility in a molecular modelling software application which, prior to this work, operated with rigid proteins and semi-flexible ligands. Protein flexibility during molecular modelling simulations is a non-trivial task requiring a great number of floating-point operations, and it could not be accomplished without the help of supercomputing hardware such as GPGPUs (or possibly the Xeon Phi). The thesis is structured as follows. It provides a background section, where the reader can find the necessary context and references in order to understand this report. Next is a state-of-the-art section, which describes what had been done in the fields of molecular dynamics and flexible haptic protein-ligand docking prior to this work. An implementation section follows, which lists failed efforts that provided the necessary feedback for designing efficient algorithms to accomplish this task. Chapter 6 describes in detail an irregular grid-decomposition approach for providing fast non-bonded interaction computations on GPGPUs. This technique is also associated with algorithms that provide fast bonded-interaction computations and exclusion handling for 1-4 bonded atoms during the non-bonded force computation. Performance benchmarks as well as accuracy tables for energy and force computations are provided to demonstrate the efficiency of the methodologies explained in this chapter. Chapter 7 provides an overview of an evolutionary strategy used to overcome the problems associated with the limited capabilities of local search strategies such as steepest descent, which get trapped in the first local minimum they find. Our proposed method is able to explore the potential energy landscape in such a way that it can pick competitive uphill solutions to escape local minima in the hope of finding deeper valleys. This methodology also serves to provide a good number of conformational updates, so that it is able to restore the areas of interaction between the protein and the ligand while searching for globally optimal solutions.
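
    As a rough illustration of why a spatial grid decomposition makes non-bonded interactions tractable, the sketch below bins atoms into cells the size of the cutoff, so each atom only tests partners from its own and the 26 neighbouring cells. It uses a regular grid and plain Python/numpy for brevity, whereas the thesis develops an irregular grid on GPGPUs; the Lennard-Jones parameters are placeholder values.

        import itertools
        from collections import defaultdict
        import numpy as np

        def lj_energy(positions, cutoff=10.0, sigma=3.4, eps=0.238):
            # Bin each atom into a cubic cell whose edge length equals the cutoff.
            cells = defaultdict(list)
            for i, p in enumerate(positions):
                cells[tuple((p // cutoff).astype(int))].append(i)

            energy = 0.0
            for cell, atoms in cells.items():
                # Candidate partners live in this cell and its 26 neighbours.
                neigh = [j for d in itertools.product((-1, 0, 1), repeat=3)
                         for j in cells.get(tuple(np.add(cell, d)), [])]
                for i in atoms:
                    for j in neigh:
                        if j <= i:
                            continue            # count each pair only once
                        r = np.linalg.norm(positions[i] - positions[j])
                        if r < cutoff:
                            sr6 = (sigma / r) ** 6
                            energy += 4.0 * eps * (sr6 * sr6 - sr6)
            return energy

        positions = np.random.default_rng(1).uniform(0.0, 30.0, size=(200, 3))
        print(lj_energy(positions))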

    Accelerating marching cubes with graphics hardware

    Isosurface extraction and rendering is crucial for interactive visualization. Previous GPU acceleration techniques have been restricted to tetrahedral meshes. We generalize this work to arbitrary meshes by caching local topology on the video card to reduce both CPU load and bandwidth consumption, demonstrating our results with the Marching Cubes cases. We also present improvements to span-space techniques that pre-classify the ranges over which individual cases are used in a given cube. Our results indicate that speedups in excess of tenfold are feasible, compared with speedups of less than twofold demonstrated in previous papers.
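
    A small CPU-side sketch of the span-space pre-classification idea, assuming a dense scalar volume held in a numpy array (the paper's GPU caching of local topology is not reproduced here): each cube's value range is computed once, and for any queried isovalue only the cubes whose range straddles it need to be handed to the marching-cubes stage.

        import numpy as np

        def cube_ranges(volume):
            """Per-cube min/max over the 8 corner samples of every cell."""
            corners = [volume[i:i + volume.shape[0] - 1,
                              j:j + volume.shape[1] - 1,
                              k:k + volume.shape[2] - 1]
                       for i in (0, 1) for j in (0, 1) for k in (0, 1)]
            stack = np.stack(corners)
            return stack.min(axis=0), stack.max(axis=0)

        def active_cubes(cmin, cmax, isovalue):
            """Indices of cubes the isosurface actually passes through."""
            return np.argwhere((cmin <= isovalue) & (isovalue <= cmax))

        vol = np.random.rand(32, 32, 32)
        cmin, cmax = cube_ranges(vol)
        print(len(active_cubes(cmin, cmax, 0.5)), "of", cmin.size, "cubes are active")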

    Isoflächenrekonstruktion aus Serienschnitten (Isosurface reconstruction from serial sections)


    Flexible high performance agent based modelling on graphics card hardware

    Agent Based Modelling is a technique for the computational simulation of complex interacting systems, through the specification of the behaviour of a number of autonomous individuals acting simultaneously. This is a bottom-up approach, in contrast with the top-down one of modelling the behaviour of the whole system through dynamic mathematical equations. The focus on individuals is considerably more computationally demanding, but provides a natural and flexible environment for studying systems that demonstrate emergent behaviour. Despite the obvious parallelism, traditional frameworks for Agent Based Modelling fail to exploit it and are often based on highly serialised simulation of mobile discrete agents. Such an approach has serious implications, placing stringent limitations on both the scale of models and the speed at which they may be simulated. Serial simulation frameworks are also unable to exploit multiple-processor architectures, which have become essential in improving overall processing speed. This thesis demonstrates that it is possible to use the parallelism of graphics card hardware as a mechanism for high-performance Agent Based Modelling. Such an approach contrasts with alternative high-performance architectures, such as distributed grids and specialist computing clusters, and is considerably more cost-effective. The use of consumer hardware makes the techniques described available to a wide range of users, and the use of automatically generated simulation code abstracts the process of mapping algorithms to the specialist hardware. This approach avoids the steep learning curve associated with the graphics card hardware's data-parallel architecture, which has previously limited the uptake of this emerging technology. The performance and flexibility of this approach are considered through the use of benchmarking and case studies. The resulting speedup and the locality of agent data within the graphics processor also allow real-time visualisation of computationally demanding, high-population models.
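
    The thesis' framework itself is not shown here; the numpy sketch below only illustrates the structure-of-arrays, data-parallel agent update that such an approach relies on: every agent's state lives in flat arrays and a single vectorised step updates the whole population at once, which is the pattern that maps onto graphics hardware. The agent behaviour (a bouncing random walk) and population size are arbitrary choices for the example.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 1_000_000                                  # population size
        pos = rng.uniform(0.0, 100.0, size=(N, 2))     # agent positions
        vel = rng.normal(0.0, 1.0, size=(N, 2))        # agent velocities

        def step(pos, vel, dt=0.1):
            pos = pos + vel * dt                       # move every agent at once
            # Reflect agents off the boundary of the [0, 100]^2 domain.
            out = (pos < 0.0) | (pos > 100.0)
            vel = np.where(out, -vel, vel)
            pos = np.clip(pos, 0.0, 100.0)
            return pos, vel

        for _ in range(10):                            # ten simulation steps
            pos, vel = step(pos, vel)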

    GPU data structures for graphics and vision

    Graphics hardware has in recent years become increasingly programmable, and its programming APIs use the stream processor model to expose massive parallelism to the programmer. Unfortunately, the inherent restrictions of the stream processor model, which the GPU relies on to maintain high performance, often pose a problem when porting CPU algorithms for both video and volume processing to graphics hardware. Serial data dependencies which accelerate CPU processing are counterproductive for the data-parallel GPU. This thesis demonstrates new ways of tackling well-known problems of large-scale video/volume analysis. In some instances, we enable processing on the restricted hardware model by re-introducing algorithms from early computer graphics research. On other occasions, we use newly discovered hierarchical data structures to circumvent the random-access-read/fixed-write restriction that had previously kept sophisticated analysis algorithms from running solely on graphics hardware. For 3D processing, we apply known game-graphics concepts such as mip-maps, projective texturing, and dependent texture lookups to show how video/volume processing can benefit algorithmically from being implemented in a graphics API. The novel GPU data structures provide drastically increased processing speed and lift processing-heavy operations to real-time performance levels, paving the way for new and interactive vision/graphics applications.
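
    As one concrete example of the game-graphics concepts mentioned above, the sketch below builds a 3D mip-map pyramid on the CPU with numpy by repeatedly averaging 2x2x2 blocks; on the GPU the same hierarchical reduction is what makes coarse-to-fine analysis of large volumes cheap. The power-of-two volume size and the averaging filter are assumptions made for this example.

        import numpy as np

        def mip_pyramid(volume):
            """volume: float array whose side lengths are powers of two."""
            levels = [volume]
            while min(levels[-1].shape) > 1:
                v = levels[-1]
                # Average each 2x2x2 block to produce the next, coarser level.
                v = v.reshape(v.shape[0] // 2, 2,
                              v.shape[1] // 2, 2,
                              v.shape[2] // 2, 2).mean(axis=(1, 3, 5))
                levels.append(v)
            return levels

        levels = mip_pyramid(np.random.rand(64, 64, 64).astype(np.float32))
        print([lvl.shape for lvl in levels])   # (64,64,64), (32,32,32), ..., (1,1,1)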