13 research outputs found

    Application-Driven Compression for Visualizing Large-Scale Time-Varying Data

    Full text link

    Frame-to-frame coherent image-aligned sheet-buffered splatting

    Get PDF
    Splatting is a classical volume rendering technique that has recently gained popularity for the visualization of point-based surface models. Up to now, there have been few publications on its adaptation to time-varying data. In this paper, we propose a novel frame-to-frame coherent view-aligned sheet-buffered splatting of time-varying data that reduces the memory load and the rendering computations as much as possible by exploiting the similarity in the data and in the images at successive instants of time. The results presented in the paper are encouraging and show that the proposed technique may be useful for exploring data through time.
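
    The reuse idea in the abstract above can be illustrated with a small sketch: partition the volume into view-aligned slabs, cache each slab's sheet buffer, and recompute a buffer only when that slab has changed noticeably since the previous time step. The Python/NumPy sketch below is a toy illustration under those assumptions; the names splat_sheet and render_coherent, the emission/absorption model, and the change tolerance are invented for illustration and are not the paper's actual splatting kernel or reuse criterion.

    import numpy as np

    def splat_sheet(slab):
        """Toy 'splat' of one view-aligned slab: accumulate density and opacity.
        A real splatter would rasterize per-voxel reconstruction-kernel footprints."""
        density = slab.sum(axis=2)                    # project the slab along the view axis
        alpha = 1.0 - np.exp(-0.1 * density)          # simple emission/absorption
        return density, alpha

    def render_coherent(volume_t, cache, sheet_depth=8, tol=1e-3):
        """Composite view-aligned sheets front to back, reusing cached sheet
        buffers whenever a slab barely changed since the previous time step."""
        h, w, d = volume_t.shape
        image = np.zeros((h, w))
        transmittance = np.ones((h, w))               # accumulated transparency
        for z0 in range(0, d, sheet_depth):
            slab = volume_t[:, :, z0:z0 + sheet_depth]
            prev = cache.get(z0)
            if prev is not None and np.abs(prev["slab"] - slab).max() < tol:
                density, alpha = prev["buffer"]       # frame-to-frame reuse
            else:
                density, alpha = splat_sheet(slab)
                cache[z0] = {"slab": slab.copy(), "buffer": (density, alpha)}
            image += transmittance * alpha * density  # front-to-back compositing
            transmittance *= (1.0 - alpha)
        return image

    # Usage: render a short time series with a shared sheet-buffer cache.
    cache = {}
    series = [np.random.rand(64, 64, 64) for _ in range(3)]
    frames = [render_coherent(v, cache) for v in series]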

    Time-varying volume visualization

    Get PDF
    Volume rendering is a very active research field in Computer Graphics because of its wide range of applications in various sciences, from medicine to flow mechanics. In this report, we survey the state of the art in time-varying volume rendering. We introduce several basic concepts and then establish several criteria to classify the studied works: IVR versus DVR, 4D versus 3D+time, compression techniques, involved architectures, use of parallelism, and image-space versus object-space coherence. We also address related problems such as transfer functions and the computation of 2D cross-sections of time-varying volume data. All the reviewed papers are classified into several tables based on these criteria and, finally, several conclusions are presented.

    Concept-driven visualization for terascale data analytics

    Get PDF
    Over the past couple of decades, the amount of scientific data has exploded. The science community has since been facing the common problem of being drowned in data, yet starved of information. Identification and extraction of meaningful features from large data sets has become one of the central problems of scientific research, for both simulation and sensory data sets. The problems at hand are multifold and need to be addressed concurrently to provide scientists with the necessary tools, methods, and systems.

    Firstly, the underlying data structures and management need to be optimized for the kind of data most commonly used in scientific research, i.e. terascale, time-varying, multi-dimensional, multi-variate, and potentially non-uniform grids. This implies avoidance of data duplication, utilization of a transparent query structure, and use of sophisticated underlying data structures and algorithms.

    Secondly, in the case of scientific data sets, simplistic queries are not a sufficient method to describe subsets or features. For time-varying data sets, many features can generally be described as local events, i.e. spatially and temporally limited regions with characteristic properties in value space. While scientists most often know quite well what they are looking for in a data set, at times they cannot formally or definitively describe their concept to computer science experts, especially when it is based on partially substantiated knowledge. Scientists need to be able to query and extract such features or events directly, without having to rewrite their hypothesis in an inadequately simple query language.

    Thirdly, tools to analyze the quality and sensitivity of these event queries themselves are required. Understanding local data sensitivity is a necessity for enabling scientists to refine query parameters as needed to produce more meaningful findings. Query sensitivity analysis can also be used to establish trends for event-driven queries, i.e. how the query sensitivity differs between locations and over a series of data sets.

    In this dissertation, we present an approach to apply these interdependent measures to aid scientists in better understanding their data sets. An integrated system containing all of the above tools and system parts is presented.
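
    As a rough illustration of the event queries and sensitivity analysis described above, the following Python sketch defines an "event" as a connected region whose values exceed a threshold and measures how the number of detected events reacts to a small perturbation of that threshold. The names extract_events and query_sensitivity, the minimum region size, and the perturbation delta are illustrative assumptions, not the dissertation's query language or sensitivity measure.

    import numpy as np
    from scipy import ndimage

    def extract_events(field, threshold, min_size=20):
        """Events: spatially connected regions above a value threshold,
        keeping only regions with at least min_size voxels."""
        mask = field > threshold
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        return [int(s) for s in sizes if s >= min_size]

    def query_sensitivity(field, threshold, delta=0.01):
        """Crude local sensitivity: change in the number of detected events
        when the query threshold is perturbed by +/- delta."""
        lo = len(extract_events(field, threshold - delta))
        hi = len(extract_events(field, threshold + delta))
        return abs(hi - lo)

    # Usage over a time series of smoothed random 3D fields (stand-ins for real data).
    series = [ndimage.gaussian_filter(np.random.rand(32, 32, 32), sigma=2) for _ in range(4)]
    for t, field in enumerate(series):
        events = extract_events(field, threshold=0.52)
        print(t, len(events), query_sensitivity(field, threshold=0.52))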

    Gestion mémoire dans une infrastructure répartie (Memory management in a distributed infrastructure)

    Get PDF
    Nowadays, more and more organizations deploy hardware infrastructures such as clusters or grids. They are used to host common Internet services such as email, social networks, or e-commerce, or to run scientific applications such as nuclear simulations and weather predictions. The processing and storage capacity required to handle the workload of these applications can only be provided by such hardware infrastructures. These infrastructures run operating systems that can potentially cooperate in order to manage the available resources as well as possible. These systems then manage the allocation of resources to applications according to the applications' needs. They aim to guarantee quality of service while managing resources optimally in order to limit costs, notably energy. The scientific community has taken an interest in the resource management problem. Many approaches have been proposed and solutions have been implemented. A survey of these approaches shows that most of them focus on node management, with the objective of distributing computations appropriately so as to make optimal use of the processor load. Global memory management in such infrastructures has not been sufficiently studied. Indeed, memory is often considered a resource with a theoretically unlimited capacity thanks to swap mechanisms, but swapping has significant consequences on application performance and on the operating cost of the infrastructure. In this thesis, we study the design and implementation of a global memory management service in a hardware infrastructure. This memory management service must avoid wasting memory and must not penalize the performance of the hosted applications. We propose a remote swap service that allows a machine, rather than swapping to its local disk, to swap to the remote memory of another machine that has memory available. Remote pages can be migrated dynamically in order to balance the load between machines. This makes it possible to pool memory and save resources. A prototype has been implemented and evaluated.
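
    A minimal in-process sketch of the remote-swap idea, assuming a simple placement policy: instead of writing an evicted page to its local disk, a node places the page in the spare memory of another node, and pages can later migrate to rebalance memory load. The class and method names (MemoryNode, RemoteSwapService, swap_out, swap_in, rebalance) are hypothetical and are not the thesis prototype's API.

    class MemoryNode:
        """A machine contributing spare RAM to the pool."""
        def __init__(self, name, capacity_pages):
            self.name = name
            self.capacity = capacity_pages
            self.pages = {}                      # (owner, page_id) -> bytes

        def free_slots(self):
            return self.capacity - len(self.pages)

    class RemoteSwapService:
        """Places evicted pages on remote nodes and migrates them to balance load."""
        def __init__(self, nodes):
            self.nodes = nodes
            self.location = {}                   # (owner, page_id) -> MemoryNode

        def swap_out(self, owner, page_id, data):
            """Store an evicted page on the node with the most free memory."""
            target = max(self.nodes, key=MemoryNode.free_slots)
            if target.free_slots() == 0:
                raise MemoryError("no remote memory available")
            target.pages[(owner, page_id)] = data
            self.location[(owner, page_id)] = target

        def swap_in(self, owner, page_id):
            """Fetch a previously swapped-out page back to its owner."""
            node = self.location.pop((owner, page_id))
            return node.pages.pop((owner, page_id))

        def rebalance(self):
            """Migrate pages from the fullest node to the emptiest one."""
            src = min(self.nodes, key=MemoryNode.free_slots)
            dst = max(self.nodes, key=MemoryNode.free_slots)
            while src is not dst and dst.free_slots() > src.free_slots() + 1 and src.pages:
                key, data = src.pages.popitem()
                dst.pages[key] = data
                self.location[key] = dst

    # Usage: node A swaps two 4 KiB pages out to whichever node has spare memory.
    service = RemoteSwapService([MemoryNode("B", 4), MemoryNode("C", 4)])
    service.swap_out("A", 0, b"\x00" * 4096)
    service.swap_out("A", 1, b"\x01" * 4096)
    service.rebalance()
    assert service.swap_in("A", 0) == b"\x00" * 4096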

    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    Get PDF
    In the last 30 years, medical imaging has become one of the most used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from the head to the feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are volume rendering methods, each with associated advantages and disadvantages. Raycasting is widely regarded as the highest-quality renderer of these methods.

    Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has improved to allow a set of scans capable of capturing anatomical movements like a beating heart. The capturing of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time. While fMRI scans can capture any anatomical data over time, one of their more common uses is to capture brain activity. The fMRI scanning process is typically broken up into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time.

    Academic research has advanced volume rendering and specifically fMRI volume rendering. Unfortunately, academic research is typically a one-off solution to a singular medical case or set of data, causing any advances to be problem-specific as opposed to a general capability. Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding number of different computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones or tablets.

    This research will investigate the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices will harness the power of the increasing number of mobile computational devices being used by medical professionals. Developing support for immersive virtual reality can enhance medical professionals’ interpretation of 3D physiology with the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field.

    Developing the same 4D volume rendering capabilities across dissimilar platforms presents many challenges. Each platform relies on its own coding languages, libraries, and hardware support. There are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries will generally be more efficient during application run-time, but they require different coding implementations for each platform. The decision was made to use platform-native languages and libraries in this research, whenever practical, in an attempt to achieve the best possible frame rates. 4D volume raycasting provides unique challenges independent of the platform, specifically fMRI data loading, volume animation, and multi-volume rendering. Additionally, real-time raycasting has never been successfully performed on a mobile device. Previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges will be addressed as the contributions of this research.

    The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and to store functional imagery with a single high-resolution anatomical scan and a set of low-resolution scans. Allowing input of the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform.

    Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes. Visualizing two volumes at the same time is known as multi-volume visualization. Therefore, the ability to correctly align and scale the volumes relative to each other was necessary, as was a compositing method to combine data from both volumes into a single cohesive representation.

    Three prototype applications were built for the different platforms to test the feasibility of 4D volume raycasting, one each for desktop, mobile, and virtual reality. Although the backend implementations had to differ between the three platforms, the raycasting functionality and features were identical, so the same fMRI dataset resulted in the same 3D visualization independent of the platform itself. Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to “scrub” through the different time steps of the data.

    The prototype applications’ data load times and frame rates were tested to determine whether they achieved the real-time interaction goal. Real-time interaction was defined as achieving 10 frames per second (fps) or better, based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-graphics-node computer cluster composed of NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7” iPad Pro running iOS 9.3.4. The iPad had a 64-bit Apple A9X dual-core processor with 2 GB of built-in memory. Two different fMRI brain activity datasets with different voxel resolutions were used as test datasets.

    Datasets were tested using the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently, a marked improvement for 3D mobile volume raycasting, which was previously only able to achieve under one frame per second [2]. Both the VR and mobile platforms were able to raycast the 4D-only data at real-time frame rates, but did not consistently meet 10 fps when rendering both the 3D structural and 4D functional data simultaneously. However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances will allow consistent real-time raycasting of 4D fMRI data in the near future.
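
    To make the multi-volume compositing step more concrete, here is a hedged CPU sketch in Python/NumPy: axis-aligned rays are marched through the anatomical volume while blending in an already resampled functional frame for the selected time step, composited front to back with a simple emission/absorption model. The function name composite_rays, the fixed blend weights, and the opacity mapping are assumptions made for illustration; the dissertation's renderers are GPU raycasters with proper alignment, scaling, windowing, and coloring.

    import numpy as np

    def composite_rays(anatomy, functional, t):
        """Toy CPU stand-in for a 4D raycaster: march axis-aligned rays through
        the anatomical volume, blending in the functional frame at time step t."""
        act = functional[t]                               # functional frame for this time step
        color = np.zeros(anatomy.shape[:2])
        transmittance = np.ones(anatomy.shape[:2])
        for z in range(anatomy.shape[2]):                 # front-to-back ray marching
            a_sample = anatomy[:, :, z]                   # structural sample
            f_sample = act[:, :, z]                       # functional sample (same grid, pre-resampled)
            emission = 0.7 * a_sample + 0.3 * f_sample    # simple two-volume blend
            alpha = np.clip(0.05 + 0.5 * f_sample, 0.0, 1.0)
            color += transmittance * alpha * emission
            transmittance *= (1.0 - alpha)
        return color

    # Usage: a 4-step "fMRI" series already resampled onto a 64^3 anatomical grid.
    anatomy = np.random.rand(64, 64, 64)
    functional = np.random.rand(4, 64, 64, 64)
    frames = [composite_rays(anatomy, functional, t) for t in range(4)]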

    Doctor of Philosophy

    Get PDF
    A broad range of applications capture dynamic data at an unprecedented scale. Independent of the application area, finding intuitive ways to understand the dynamic aspects of these increasingly large data sets remains an interesting and, to some extent, unsolved research problem. Generically, dynamic data sets can be described by some, often hierarchical, notion of feature of interest that exists at each moment in time, and those features evolve across time. Consequently, exploring the evolution of these features is considered a natural way of studying these data sets. Usually, this process entails the ability to: 1) define and extract features from each time step in the data set; 2) find their correspondences over time; and 3) analyze their evolution across time. However, due to the large data sizes, visualizing the evolution of features in a comprehensible manner and performing interactive changes are challenging. Furthermore, feature evolution details are often unmanageably large and complex, making it difficult to identify the temporal trends in the underlying data. Additionally, many existing approaches develop these components in a specialized and standalone manner, thus failing to address the general task of understanding feature evolution across time.

    This dissertation demonstrates that interactive exploration of feature evolution can be achieved in a non-domain-specific manner so that it can be applied across a wide variety of application domains. In particular, a novel generic visualization and analysis environment is introduced that couples a multiresolution unified spatiotemporal representation of features with progressive layout and visualization strategies for studying feature evolution across time. This flexible framework enables on-the-fly changes to feature definitions, their correspondences, and other arbitrary attributes while providing an interactive view of the resulting feature evolution details. Furthermore, to reduce the visual complexity within the feature evolution details, several subselection-based and localized, per-feature, parameter-value-based strategies are also provided. The utility and generality of this framework are demonstrated on several large-scale dynamic data sets.
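
    The three steps listed above (define and extract features, find correspondences, analyze evolution) can be sketched minimally in Python, assuming threshold-based connected components as the feature definition and greatest voxel overlap as the correspondence rule; these choices, and the names extract_features and correspondences, are simplifying assumptions rather than the dissertation's actual framework.

    import numpy as np
    from scipy import ndimage

    def extract_features(field, threshold):
        """Features at one time step: connected regions above a threshold."""
        labels, n = ndimage.label(field > threshold)
        return labels, n

    def correspondences(labels_a, n_a, labels_b, n_b):
        """Match each feature at step t to the feature at step t+1 with the
        largest voxel overlap; None means the feature has no successor."""
        edges = []
        for i in range(1, n_a + 1):
            overlap = np.bincount(labels_b[labels_a == i], minlength=n_b + 1)
            j = int(overlap[1:].argmax()) + 1 if overlap[1:].any() else None
            edges.append((i, j))
        return edges

    # Usage: build a tiny feature-evolution graph over a smoothed random time series.
    series = [ndimage.gaussian_filter(np.random.rand(32, 32, 32), sigma=2) for _ in range(3)]
    tracked = [extract_features(f, threshold=0.52) for f in series]
    graph = [correspondences(*tracked[t], *tracked[t + 1]) for t in range(len(tracked) - 1)]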

    Supporting Quantitative Visual Analysis in Medicine and Biology in the Presence of Data Uncertainty

    Full text link