
    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    In the last 30 years, medical imaging has become one of the most used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from head to feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are all volume rendering methods, each with associated advantages and disadvantages. Raycasting is widely regarded as producing the highest quality renderings of these methods. Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has since improved to allow a set of scans capable of capturing anatomical movements like a beating heart. The capturing of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time. While fMRIs can be used to capture any anatomical data over time, one of the more common uses is to capture brain activity. The fMRI scanning process is typically broken up into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time. Academic research has advanced volume rendering, and specifically fMRI volume rendering. Unfortunately, academic research typically produces a one-off solution for a single medical case or dataset, causing any advances to be problem-specific rather than a general capability.
Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding number of different computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones or tablets. This research will investigate the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices will harness the power of the increasing number of mobile computational devices being used by medical professionals. Developing support for immersive virtual reality can enhance medical professionals' interpretation of 3D physiology with the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field. Developing the same 4D volume rendering capabilities across dissimilar platforms presents many challenges. Each platform relies on its own coding languages, libraries, and hardware support. There are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries will generally be more efficient at application run-time, but they require a different coding implementation for each platform. The decision was made to use platform-native languages and libraries in this research, whenever practical, in an attempt to achieve the best possible frame rates. 4D volume raycasting provides unique challenges independent of the platform: specifically, fMRI data loading, volume animation, and multiple-volume rendering. Additionally, real-time raycasting had never been successfully performed on a mobile device.
Previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges are addressed as the contributions of this research. The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and to store functional imagery as a single high-resolution anatomical scan plus a set of low-resolution scans. Allowing input of the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform. Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes. Visualizing two volumes at the same time is known as multi-volume visualization, so the ability to correctly align and scale the volumes relative to each other was necessary, as was a compositing method to combine data from both volumes into a single cohesive representation. Three prototype applications were built to test the feasibility of 4D volume raycasting: one each for desktop, mobile, and virtual reality. Although the backend implementations differed between the three platforms, the raycasting functionality and features were identical; the same fMRI dataset therefore resulted in the same 3D visualization independent of the platform.
Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to “scrub” through the different time steps of the data. The prototype applications' data load times and frame rates were tested to determine whether they achieved the real-time interaction goal. Real-time interaction was defined as achieving 10 frames per second (fps) or better, based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-node graphics computer cluster built from NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7” iPad Pro running iOS 9.3.4; the iPad had a 64-bit Apple A9X dual-core processor with 2 GB of built-in memory. Two fMRI brain activity datasets with different voxel resolutions were used as test datasets. Datasets were tested using the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently. This is a marked improvement for 3D mobile volume raycasting, which was previously only able to achieve under one frame per second [2]. Both VR and mobile platforms were able to raycast the 4D-only data at real-time frame rates, but did not consistently meet 10 fps when rendering both the 3D structural and 4D functional data simultaneously.
However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances will allow consistent real-time raycasting of 4D fMRI data in the near future.

    Lichttransportsimulation auf Spezialhardware

    It cannot be denied that developments in computer hardware and in computer algorithms strongly influence each other, with new instructions added to help with video processing, encryption, and many other areas. At the same time, the current cap on single-threaded performance and the wide availability of multi-threaded processors have increased the focus on parallel algorithms. Both influences are extremely prominent in computer graphics, where the gaming and movie industries always strive for the best possible performance on current, as well as future, hardware. In this thesis we examine hardware-algorithm synergies in the context of ray tracing and Monte-Carlo algorithms. First, we focus on the very basic element of all such algorithms, the casting of rays through a scene, and propose a dedicated hardware unit to accelerate this common operation. Then, we examine existing and novel implementations of many Monte-Carlo rendering algorithms on massively parallel hardware, as full hardware utilization is essential for peak performance. Lastly, we present an algorithm for tackling complex interreflections of glossy materials, which is designed to utilize both powerful processing units present in almost all current computers: the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU). These three pieces combined show that it is always important to look at hardware-algorithm mapping on all levels of abstraction: instruction, processor, and machine.

    vorgelegt von

    Prof. Dr. N. Navab. To my family. Acknowledgements: I am deeply grateful that I had the opportunity to write this thesis while working at the Chair for Pattern Recognition within project B6 of the Sonderforschungsbereich 603 (funded by the Deutsche Forschungsgemeinschaft). Many people contributed to this work, and I want to express my gratitude to all of them.

    Stereoscopic Sketchpad: 3D Digital Ink

    --Context-- This project looked at the development of a stereoscopic 3D environment in which a user is able to draw freely in all three dimensions. The main focus was on the storage and manipulation of the ‘digital ink’ with which the user draws. For a drawing and sketching package to be effective, it must not only have an easy-to-use interface, it must also handle all input data quickly and efficiently so that the user is able to focus fully on their drawing. --Background-- When it comes to sketching in three dimensions, the majority of applications currently available rely on vector-based drawing methods. This is primarily because the applications are designed to take a user's two-dimensional input and transform it into a three-dimensional model. Having the sketch represented as vectors makes it simpler for the program to act upon its geometry and thus convert it to a model. There are a number of methods to achieve this aim, including Gesture Based Modelling, Reconstruction, and Blobby Inflation. Other vector-based applications focus on the creation of curves, allowing the user to draw within or on existing 3D models; they also allow the user to create wireframe-type models. These stroke-based applications bring the user closer to traditional sketching than the more structured modelling methods detailed above. While at present the field is inundated with vector-based applications mainly focused on sketch-based modelling, there are significantly fewer voxel-based applications. The majority of these focus on the deformation and sculpting of voxmaps, almost the opposite of drawing and sketching, and on the creation of three-dimensional voxmaps from standard two-dimensional pixmaps. How to sketch freely within a scene represented by a voxmap has rarely been explored. This comes as a surprise when so many of the standard 2D drawing programs in use today are pixel based.
--Method-- As part of this project a simple three-dimensional drawing program was designed and implemented using C and C++. This tool, known as Sketch3D, was created using a Model View Controller (MVC) architecture. Due to the modular nature of Sketch3D's system architecture it is possible to plug a range of different data structures into the program to represent the ink in a variety of ways. A series of data structures were implemented and tested for efficiency. These structures were a simple list, a 3D array, and an octree. They were tested for: the time it takes to insert or remove points from the structure; how easy it is to manipulate points once they are stored; and how the number of points stored affects the draw and rendering times. One of the key issues raised by this project was devising a means by which a user is able to draw in three dimensions while using only two-dimensional input devices. The method settled upon and implemented involves using the mouse or a digital pen to sketch as one would in a standard 2D drawing package, while linking the up and down keyboard keys to the current depth. This allows the user to move in and out of the scene as they draw. A couple of user interface tools were also developed to assist the user: a 3D cursor, and a toggle which, when on, highlights all of the points intersecting the depth plane on which the cursor currently resides. These tools allow the user to see exactly where they are drawing in relation to previously drawn lines. --Results-- The tests conducted on the data structures clearly revealed that the octree was the most effective data structure. While not the most efficient in every area, it manages to avoid the major pitfalls of the other structures. The list was extremely quick to render and draw to the screen but suffered severely when it came to finding and manipulating points already stored.
In contrast, the three-dimensional array was able to erase or manipulate points effectively, while its draw time rendered the structure effectively useless, taking huge amounts of time to draw each frame. The focus of this research was on how a 3D sketching package would go about storing and accessing the digital ink. This is just a basis for further research in this area, and many issues touched upon in this paper will require a more in-depth analysis. The primary area of this future research would be the creation of an effective user interface and the introduction of regular sketching package features such as the saving and loading of images.
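The octree's advantage reported above comes from subdividing space only where ink points actually accumulate, so insertion and point lookup both stay near-logarithmic while rendering only visits occupied regions. A compact sketch of bucket-based point insertion in that style follows; it is illustrative only, not the Sketch3D source.

```cpp
#include <array>
#include <cstddef>
#include <memory>
#include <vector>

using Point = std::array<float, 3>;

// A leaf holds up to kBucket points; a full leaf splits into 8 octants.
class Octree {
public:
    Octree(Point lo, Point hi) : lo_(lo), hi_(hi), root_(new Node) {}

    void insert(const Point& p) { insert(root_.get(), lo_, hi_, p, 0); }
    bool contains(const Point& p) const { return find(root_.get(), lo_, hi_, p); }

private:
    static constexpr std::size_t kBucket = 8;  // leaf capacity before splitting
    static constexpr int kMaxDepth = 16;       // guards against coincident points

    struct Node {
        std::unique_ptr<Node> kids[8];
        std::vector<Point> pts;                // used only while this is a leaf
        bool leaf = true;
    };

    // Pick the octant containing p and shrink (lo, hi) to that octant's bounds.
    static int octant(const Point& p, Point& lo, Point& hi) {
        int idx = 0;
        for (int a = 0; a < 3; ++a) {
            float mid = 0.5f * (lo[a] + hi[a]);
            if (p[a] >= mid) { idx |= 1 << a; lo[a] = mid; } else { hi[a] = mid; }
        }
        return idx;
    }

    void insert(Node* n, Point lo, Point hi, const Point& p, int depth) {
        if (n->leaf) {
            if (n->pts.size() < kBucket || depth >= kMaxDepth) {
                n->pts.push_back(p);
                return;
            }
            n->leaf = false;                   // split: push existing points down
            std::vector<Point> old = std::move(n->pts);
            n->pts.clear();
            for (const Point& q : old) insert(n, lo, hi, q, depth);
        }
        Point clo = lo, chi = hi;
        int idx = octant(p, clo, chi);
        if (!n->kids[idx]) n->kids[idx].reset(new Node);
        insert(n->kids[idx].get(), clo, chi, p, depth + 1);
    }

    bool find(const Node* n, Point lo, Point hi, const Point& p) const {
        if (n->leaf) {
            for (const Point& q : n->pts)
                if (q == p) return true;
            return false;
        }
        int idx = octant(p, lo, hi);
        return n->kids[idx] && find(n->kids[idx].get(), lo, hi, p);
    }

    Point lo_, hi_;
    std::unique_ptr<Node> root_;
};
```

A flat list makes `contains` a linear scan and a dense 3D array makes redraw traverse every cell; the octree trades a small routing cost per level for avoiding both pitfalls, matching the results reported above.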

    FASTER: Facilitating Analysis and Synthesis Technologies for Effective Reconfiguration

    The FASTER (Facilitating Analysis and Synthesis Technologies for Effective Reconfiguration) EU FP7 project aims to ease the design and implementation of dynamically changing hardware systems. Our motivation stems from the promise reconfigurable systems hold for achieving high performance and extending product functionality and lifetime via the addition of new features that operate at hardware speed. However, designing a changing hardware system is both challenging and time-consuming. FASTER facilitates the use of reconfigurable technology by providing a complete methodology enabling designers to easily specify, analyze, implement and verify applications on platforms with general-purpose processors and acceleration modules implemented in the latest reconfigurable technology. Our tool-chain supports both coarse- and fine-grain FPGA reconfiguration, while during execution a flexible run-time system manages the reconfigurable resources. We target three applications from different domains, explore the way each application benefits from reconfiguration, and then assess them and the FASTER tools in terms of performance, area consumption and accuracy of analysis.

    Towards Geo Decision Support Systems for Renewable Energy Outreach

    The Earth is affected by numerous phenomena such as natural disasters, over-urbanization, pollution, etc. All of these activities place an enormous strain on the planet's natural resources, leading to their scarcity. A particularly relevant issue is the intensive use of fossil energy and its negative impact on our environment. It is therefore essential to search for new, clean energy resources to satisfy our needs and reduce dependence on fossil energy resources. Transforming a power-generation infrastructure based on fossil resources into one based on renewable resources such as wind, solar, and hydroelectric power will help preserve the environment, since it contributes little or nothing to global warming through emissions, and will reduce dependence on fossil energy sources. Renewable energy is a natural source of energy with important benefits: a reliable energy production system, stable energy prices, specialized jobs, and economic and environmental gains. Solar energy is one of the best renewable energies. The sun is the natural and fundamental source of human existence on Earth and affects all chemical, physical, and biological processes. One hour of the sun's energy reaching the Earth is enough to power the entire planet for a year. The sun's energy, or solar radiation, and its geographic distribution determine potential investments in solar energy and the strategies for developing them. It is therefore essential to be able to answer questions related to the "what, who, when, and where". For example: What job profile best fits a managerial position in renewable energy? Where is the best place to invest in solar farms and/or wind parks? On what date is the highest productivity recorded? Why is this site not suitable for hydraulic projects? Why is there a dip in solar radiation in the year 2000 compared to 2012? Etc. In general, decision making is the process of selecting the best viable option from a set of possible courses of action. Decision Support Systems (DSS) constitute a cognitive ecosystem that facilitates the interaction between humans and data in order to enable, in a deep, meaningful, and useful way, the creation of solutions that are effective in time and cost. Data warehousing, Extract, Transform and Load (ETL) processes, and Business Intelligence (BI) are key technological aspects linked to decision making. Moreover, decision making in the context of solar energy depends on Geographic Information Systems. Although the sun's energy is available all over the world, solar energy is clearly more abundant near the tropics. For example, an investment in photovoltaic power plants at locations near the tropics and the equator will require less time to pay for itself. Solar intensity varies depending on geographic location and climatic conditions. For this reason, it is important to select a suitable location that optimizes the investment, taking into account factors such as solar radiation intensity, climate, suitable land, and economics. There are models, such as Global Atlas and SimuSOLAR, that provide suitability information about solar radiation and locations. However, these models are restricted to experts, cover limited geographic areas, are not suited to use cases other than those initially foreseen, and lack detailed, intuitive reports for the general public. Developing extensive cartography of sun and shadow zones is a very complex task involving various engineering concepts and challenges: integrating different data models of heterogeneous quality and quantity, budget constraints, etc. The goal of the research carried out here has been to establish a software architecture for developing Decision Support Systems in the field of renewable energy in general, and solar energy in particular. The key characteristic of this software architecture approach is its ability to deliver Decision Support Systems that offer "low cost" services in this context. Let us draw an analogy. Imagine you are looking to buy or rent a house in Spain. You want to analyze the characteristics of the building (e.g., dimensions, garden, more than one building on the plot) and its surroundings (e.g., connections, services). To perform this task you can use the free data provided by the Oficina Virtual del Catastro de España (Spain's Virtual Cadastre Office) together with free imagery from an orthophotography provider (e.g., PNOA, Google, or Bing) and free contextual data from other local, regional, and/or national bodies (e.g., the Zaragoza City Council, the Government of Aragón, the Cartociudad project). If someone integrates all of these data sources into one system (e.g., the map service client of the Spatial Data Infrastructure of Spain, IDEE), they have a "low cost" Decision Support System for buying or renting a house. This research aims to develop a software architecture approach that could provide a "low cost" Decision Support System for consumers who need to make decisions related to renewable energy, in particular solar energy systems, such as selecting the best option for installing a solar system or deciding on an investment in a community solar farm. An important part of this research process has consisted of analyzing the suitability of data warehousing and Extract, Transform and Load technologies for storing and processing large amounts of historical energy data, and of Business Intelligence for structuring and reporting it. It has also been necessary to focus the work on open business models (a web service infrastructure, 3D data models, techniques for representing sun and shadow data, and data sources) for the economical development of the product. In addition, this work identifies use cases where Decision Support Systems should be the instrument for solving market and scientific problems. This thesis therefore aims to emphasize and adopt the aforementioned technologies in order to propose a complete Decision Support System for a better potential use of renewable energy, which we call REDSS (Renewable Energy Decision Support System). The research has been carried out with the goal of answering the following research questions. Questions related to data: - How can the most appropriate data creation process be chosen to build geographic models at a reasonable economic cost? Questions related to technology: - What current technological limitations do computational tools have for calculating solar intensity and shadow? - How can concepts such as data warehousing and Business Intelligence be adapted to the field of renewable energy? - How should data related to solar intensity and shadow be structured and organized? - What are the significant differences between the proposed method and other existing global services? Questions related to use cases: - What are the use cases of REDSS? - What are the benefits of REDSS for experts and for the general public? To give concrete form to the contribution and the proposed approach, a prototype called Energy2People has been developed based on Business Intelligence principles; it not only provides advanced location data but also serves as a basis on which to develop future commercial products. In its current form, this tool helps discover and represent key data relationships in the renewable energy sector and allows the general public to discover relationships among the data in cases where they were not evident. Essentially, the proposed approach leads to an increase in data management and visualization performance. The main contributions of this thesis can be summarized as follows: - First, this thesis reviews several open- and closed-source sun-shadow models to identify the extent of the need for decision models and their effective support. It also provides detailed information on free sources of solar radiation data. - Second, a conceptual framework for the development of low-cost geographic models is proposed. As an example of the application of this approach, a low-cost 3D virtual city model has been developed using cadastral data publicly available via web services. - Third, this work proposes applying REDSS to the problem of decision making in the field of solar energy. This model also has other distinguishing features, such as its co-creation and mix-and-match approaches. - Fourth, this thesis identifies several real application scenarios and several types of actors that should benefit from the application of this strategy. - Finally, this thesis presents the Energy2People prototype, developed to explore solar radiation location data and temporal events, which serves as a practical example of the approach put forward in this thesis. To make the potential of the proposed approach clearer, this prototype is compared with other international renewable energy atlases.

    Interactive real-time three-dimensional visualisation of virtual textiles

    Virtual textile databases provide a cost-efficient alternative to the use of existing hardcover sample catalogues. By taking advantage of the high performance features offered by the latest generation of programmable graphics accelerator boards, it is possible to combine photometric stereo methods with 3D visualisation methods to implement a virtual textile database. In this thesis, we investigate and combine rotation invariant texture retrieval with interactive visualisation techniques. We use a 3D surface representation that is a generic data representation allowing us to combine real-time interactive 3D visualisation methods with present day texture retrieval methods. We begin by investigating the most suitable data format for the 3D surface representation and identify relief mapping combined with Bézier surfaces as the most suitable 3D surface representation for our needs, and go on to describe how these representations can be combined for real-time rendering. We then investigate ten different methods of implementing rotation invariant texture retrieval using feature vectors. The results show that first order statistics in the form of histogram data are very effective for discriminating colour albedo information, while rotation invariant gradient maps are effective for distinguishing between different types of micro-geometry using either first or second order statistics. Engineering and Physical Sciences Research Council (EPSRC).
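First-order statistics of the kind found effective here reduce each texture image to a normalized intensity histogram, which is inherently rotation invariant because it depends only on which values occur, not where they occur; retrieval then ranks textures by a histogram distance. A brief sketch under those assumptions follows (the thesis's actual feature pipeline is richer than this, and these function names are illustrative).

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Normalized intensity histogram: a first-order, rotation-invariant feature,
// since rotating an image only permutes pixel positions, not pixel values.
std::vector<float> histogramFeature(const std::vector<std::uint8_t>& pixels,
                                    int bins = 16) {
    std::vector<float> h(bins, 0.0f);
    if (pixels.empty()) return h;
    for (std::uint8_t v : pixels) h[v * bins / 256] += 1.0f;   // bin the value
    for (float& b : h) b /= static_cast<float>(pixels.size()); // normalize
    return h;
}

// Chi-squared distance between two histogram feature vectors (0 = identical);
// retrieval returns database entries with the smallest distance to the query.
float chiSquared(const std::vector<float>& a, const std::vector<float>& b) {
    float d = 0.0f;
    for (std::size_t i = 0; i < a.size(); ++i) {
        float s = a[i] + b[i];
        if (s > 0.0f) d += (a[i] - b[i]) * (a[i] - b[i]) / s;
    }
    return d;
}
```

Because the feature ignores spatial layout, two views of the same swatch at different orientations map to (nearly) the same vector, which is exactly the invariance the retrieval task needs.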

    Real-time Global Illumination by Simulating Photon Mapping
