1,169 research outputs found

    Doctor of Philosophy

    Visualization and exploration of volumetric datasets has been an active area of research for over two decades. During this period, the volumetric datasets used by domain experts have evolved from univariate to multivariate. Volume datasets are typically explored and classified via transfer function design and visualized using direct volume rendering. To improve classification results and to enable the exploration of multivariate volume datasets, multivariate transfer functions have emerged. In this dissertation, we describe our research on multivariate transfer function design. To improve the classification of univariate volumes, various one-dimensional (1D) and two-dimensional (2D) transfer function spaces have been proposed; however, each of these methods works well on only some datasets. We propose a novel transfer function method that provides better classifications by combining different transfer function spaces. Methods have also been proposed for exploring multivariate simulations; however, these approaches are not suitable for complex real-world datasets and may be unintuitive for domain users. To this end, we propose a method based on user-selected samples in the spatial domain that makes complex multivariate volume data visualization more accessible to domain users. This method still requires users to fine-tune transfer functions in parameter-space transfer function widgets, which may not be familiar to them. We therefore propose GuideME, a novel slice-guided semiautomatic multivariate volume exploration approach. GuideME provides the user with an easy-to-use, slice-based interface that suggests feature boundaries and allows the user to select features via click and drag; an optimal transfer function is then generated automatically by optimizing a response function. Throughout the exploration process, the user never needs to interact with the parameter views. Finally, real-world multivariate volume datasets are usually large, often exceeding the GPU memory and even the main memory of standard workstations. We propose a ray-guided, out-of-core, interactive volume rendering and efficient query method to support large and complex multivariate volumes on standard workstations.
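    As a rough illustration of the kind of classification the dissertation combines, the Python sketch below maps voxels through a 1D value-based transfer function and a 2D value/gradient-magnitude transfer function and blends the two results. It is a minimal sketch only: the lookup tables, the weighted-blend rule, and the function names are illustrative assumptions, not the dissertation's actual method.

```python
import numpy as np

def classify_1d(values, lut_1d):
    """Map normalized scalar values in [0, 1] to RGBA via a 1D lookup table."""
    idx = np.clip((values * (lut_1d.shape[0] - 1)).astype(int), 0, lut_1d.shape[0] - 1)
    return lut_1d[idx]                                    # shape (..., 4)

def classify_2d(values, grad_mag, lut_2d):
    """Map (value, gradient magnitude) pairs to RGBA via a 2D lookup table."""
    i = np.clip((values * (lut_2d.shape[0] - 1)).astype(int), 0, lut_2d.shape[0] - 1)
    j = np.clip((grad_mag * (lut_2d.shape[1] - 1)).astype(int), 0, lut_2d.shape[1] - 1)
    return lut_2d[i, j]

def combine(rgba_a, rgba_b, w=0.5):
    """Blend two classifications; a weighted average is just one possible rule."""
    return w * rgba_a + (1.0 - w) * rgba_b

# Toy usage with random lookup tables and a handful of normalized voxels.
rng = np.random.default_rng(0)
lut_1d, lut_2d = rng.random((256, 4)), rng.random((64, 64, 4))
vals, grads = rng.random(5), rng.random(5)
rgba = combine(classify_1d(vals, lut_1d), classify_2d(vals, grads, lut_2d))
print(rgba.shape)                                         # (5, 4)
```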

    A Machine Learning Approach to Sentinel-3 Feature Extraction In The Context Of Harmful Algal Blooms

    Harmful Algal Blooms (HABs) are typically described as blooms of phytoplankton species that can cause harm not only to the environment but also to humans. Some of the species that form these blooms release biotoxins, which accumulate in shellfish [1]. When humans consume contaminated shellfish, it can cause adverse health problems [2]–[4]. Due to the associated risk of contamination, shellfisheries are forced to close, sometimes for months, leading to significant economic losses. Although microscopy enables toxic species identification, and bioassays enable biotoxin identification and quantification, these methods are impractical for continuous monitoring since they require recurrent in situ sampling followed by laboratory analysis. Chlorophyll a is a pigment common to almost all marine phytoplankton groups. It has a spectral signature that makes it detectable by remote satellites that capture water-leaving radiance [5]. Remote sensing can therefore be very useful, since it allows synoptic measurements of large sea areas [6]. Several machine learning algorithms have been investigated to detect or forecast algal biomass or HAB presence [7]–[10]. However, the application of remotely sensed images to detect and forecast biotoxin concentration appears relatively unexplored. Given this problem, two datasets of Sentinel-3 imagery patches were created along the western coastal region of Portugal, differing in size and in the preprocessing applied. We assessed the application of Machine Learning (ML) models to extract informative features from the datasets. The models were evaluated quantitatively and qualitatively. The qualitative analysis showed that the features extracted by the models are consistent with features extracted for downstream tasks in the literature, suggesting that they retain helpful information. However, further work is required to determine whether these features are helpful for the task of biotoxin concentration forecasting.
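    The abstract does not specify which ML models were used, so the sketch below only illustrates the general shape of the task: turning multi-band satellite image patches into compact feature vectors with an off-the-shelf extractor (PCA here, as a simple stand-in). The patch dimensions and synthetic data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-in: flatten multi-band Sentinel-3-like patches and extract
# a low-dimensional feature vector per patch with PCA. The thesis evaluates its
# own ML models; PCA is only a simple baseline for illustration.
rng = np.random.default_rng(42)
patches = rng.random((500, 16, 16, 5))            # (n_patches, height, width, bands), synthetic
flat = patches.reshape(len(patches), -1)          # one flat vector per patch

extractor = PCA(n_components=32)
features = extractor.fit_transform(flat)          # (500, 32) feature matrix

print(features.shape, extractor.explained_variance_ratio_.sum())
```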

    Scalability of Incompressible Flow Computations on Multi-GPU Clusters Using Dual-Level and Tri-Level Parallelism

    High performance computing using graphics processing units (GPUs) is gaining popularity in the scientific computing field, with many large compute clusters being augmented with multiple GPUs in each node. We investigate hybrid tri-level (MPI-OpenMP-CUDA) parallel implementations to explore the efficiency and scalability of incompressible flow computations on GPU clusters with up to 128 GPUs. This work details some of the unique issues faced when merging fine-grain parallelism on the GPU using CUDA with coarse-grain parallelism using OpenMP for intra-node and MPI for inter-node communication. Comparisons between the tri-level MPI-OpenMP-CUDA and dual-level MPI-CUDA implementations are shown using large computational fluid dynamics (CFD) simulations. Our results demonstrate that the tri-level parallel implementation does not provide a significant performance advantage over the dual-level implementation; however, further research is needed to confirm this conclusion for clusters with a high GPU-per-node density or for software that can exploit OpenMP's fine-grain parallelism more effectively.
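    For readers unfamiliar with the dual-level decomposition being compared against, the sketch below shows the idea in Python, assuming mpi4py for inter-node halo exchange and CuPy for the on-GPU stencil update; the solver, sizes, and libraries are illustrative assumptions and not the paper's MPI-CUDA implementation. It requires an MPI launcher, mpi4py, and a CUDA-capable GPU with CuPy (run with e.g. `mpirun -np 4 python sketch.py`).

```python
from mpi4py import MPI
import numpy as np
import cupy as cp

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx = 256
ny_local = max(256 // size, 4)           # each rank owns a horizontal slab of the domain
u = cp.zeros((ny_local + 2, nx))         # +2 halo rows for neighbour data
if rank == 0:
    u[0, :] = 1.0                        # simple fixed boundary on the top rank

for _ in range(100):
    # coarse-grain level: MPI halo exchange with neighbouring ranks
    if rank > 0:
        send = cp.asnumpy(u[1]); recv = np.empty_like(send)
        comm.Sendrecv(send, dest=rank - 1, recvbuf=recv, source=rank - 1)
        u[0] = cp.asarray(recv)
    if rank < size - 1:
        send = cp.asnumpy(u[-2]); recv = np.empty_like(send)
        comm.Sendrecv(send, dest=rank + 1, recvbuf=recv, source=rank + 1)
        u[-1] = cp.asarray(recv)
    # fine-grain level: the Jacobi-style stencil update runs as GPU kernels via CuPy
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])

print(rank, float(u[1:-1].mean()))
```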

    Doctor of Philosophy

    Correlation is a powerful relationship measure used in many fields to estimate trends and make forecasts. When the data are complex, large, and high dimensional, correlation identification is challenging. Several visualization methods have been proposed to address these problems, but they all have limitations in accuracy, speed, or scalability. In this dissertation, we propose a methodology that provides new visual designs that show details when possible and aggregate when necessary, along with robust interactive mechanisms that together enable quick identification and investigation of meaningful relationships in large and high-dimensional data. We propose four techniques using this methodology. Depending on data size and dimensionality, the most appropriate visualization technique can be selected to optimize analysis performance. First, to improve correlation identification between two dimensions, we propose a new correlation task-specific visualization method called the correlation coordinate plot (CCP). CCP transforms data into a coordinate system well suited to estimating the direction and strength of correlations among dimensions. Next, we propose three visualization designs to optimize correlation identification in large and multidimensional data. The first is the snowflake visualization (Snowflake), a focus+context layout for exploring all pairwise correlations. The second is a new interactive design for representing and exploring data relationships in parallel coordinate plots (PCPs) for large data, called data scalable parallel coordinate plots (DSPCP). Finally, we propose a novel technique for storing and accessing multiway dependencies through visualization (MultiDepViz). We evaluate these approaches through various use cases, compare them to prior work, and conduct user studies to demonstrate how they help users explore correlation in large data efficiently. Our results confirm that the CCP/Snowflake, DSPCP, and MultiDepViz methods outperform existing visualization techniques such as scatterplots (SCPs), PCPs, the SCP matrix, Corrgram, Angular Histogram, and UntangleMap in both accuracy and timing. Finally, these approaches are applied in real-world settings such as a debugging tool, large-scale code performance data, and large-scale climate data.
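    The abstract does not give CCP's exact coordinate transform, so the sketch below only computes the quantity such plots encode: pairwise Pearson correlations across dimensions, ranked by strength. The synthetic data and variable names are illustrative.

```python
import numpy as np

# Illustrative only: compute the pairwise Pearson correlations that correlation
# visualizations encode and rank the strongest dimension pairs.
rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 6))                 # synthetic: 1000 samples, 6 dimensions
data[:, 1] = 0.8 * data[:, 0] + 0.2 * data[:, 1]  # inject one strongly correlated pair

corr = np.corrcoef(data, rowvar=False)            # 6 x 6 correlation matrix
pairs = [(i, j, corr[i, j])
         for i in range(corr.shape[0])
         for j in range(i + 1, corr.shape[1])]
for i, j, r in sorted(pairs, key=lambda p: -abs(p[2]))[:3]:
    print(f"dims ({i},{j}): r = {r:+.2f}")
```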

    An Application Perspective on High-Performance Computing and Communications

    We review possible and probable industrial applications of HPCC, focusing on the software and hardware issues. Thirty-three separate categories are illustrated by detailed descriptions of five areas: computational chemistry; Monte Carlo methods from physics to economics; manufacturing and computational fluid dynamics; command and control, or crisis management; and multimedia services to client computers and set-top boxes. The hardware varies from tightly coupled parallel supercomputers to heterogeneous distributed systems. The software models span HPF and data parallelism to distributed information systems and object/data-flow parallelism on the Web. We find that in each case it is reasonably clear that HPCC works in principle, and we postulate that this knowledge can be used in a new generation of software infrastructure based on the WebWindows approach, discussed in an accompanying paper.

    Interactive Visualization on High-Resolution Tiled Display Walls with Network Accessible Compute- and Display-Resources

    Papers 2-7 and appendices B and C of this thesis are not available in Munin:
    2. Hagen, T-M.S., Johnsen, E.S., Stødle, D., Bjorndalen, J.M. and Anshus, O.: 'Liberating the Desktop', First International Conference on Advances in Computer-Human Interaction (2008), pp. 89-94. Available at http://dx.doi.org/10.1109/ACHI.2008.20
    3. Tor-Magne Stien Hagen, Oleg Jakobsen, Phuong Hoai Ha, and Otto J. Anshus: 'Comparing the Performance of Multiple Single-Cores versus a Single Multi-Core' (manuscript)
    4. Tor-Magne Stien Hagen, Phuong Hoai Ha, and Otto J. Anshus: 'Experimental Fault-Tolerant Synchronization for Reliable Computation on Graphics Processors' (manuscript)
    5. Tor-Magne Stien Hagen, Daniel Stødle and Otto J. Anshus: 'On-Demand High-Performance Visualization of Spatial Data on High-Resolution Tiled Display Walls', Proceedings of the International Conference on Imaging Theory and Applications and International Conference on Information Visualization Theory and Applications (2010), pp. 112-119. Available at http://dx.doi.org/10.5220/0002849601120119
    6. Bård Fjukstad, Tor-Magne Stien Hagen, Daniel Stødle, Phuong Hoai Ha, John Markus Bjørndalen and Otto Anshus: 'Interactive Weather Simulation and Visualization on a Display Wall with Many-Core Compute Nodes', Para 2010 – State of the Art in Scientific and Parallel Computing. Available at http://vefir.hi.is/para10/extab/para10-paper-60
    7. Tor-Magne Stien Hagen, Daniel Stødle, John Markus Bjørndalen, and Otto Anshus: 'A Step towards Making Local and Remote Desktop Applications Interoperable with High-Resolution Tiled Display Walls', Lecture Notes in Computer Science (2011), Volume 6723/2011, pp. 194-207. Available at http://dx.doi.org/10.1007/978-3-642-21387-8_15

    The vast volume of scientific data produced today requires tools that enable scientists to explore large amounts of data and extract meaningful information. One such tool is interactive visualization. The amount of data that can be simultaneously visualized on a computer display is proportional to the display's resolution. While computer systems in general have seen a remarkable increase in performance over the last decades, display resolution has not evolved at the same rate. Increased resolution can be provided by tiling several displays in a grid. A system comprising multiple displays tiled in such a grid is referred to as a display wall. Display walls provide orders of magnitude more resolution than typical desktop displays, and can provide insight into problems not possible to visualize on desktop displays. However, their distributed and parallel architecture creates several challenges for designing systems that can support interactive visualization. One challenge is compatibility with existing software designed for personal desktop computers. Another set of challenges involves identifying characteristics of visualization systems that can: (i) maintain synchronous state and display output when executed over multiple display nodes; (ii) scale to multiple display nodes without being limited by shared interconnect bottlenecks; (iii) utilize additional computational resources such as desktop computers, clusters, and supercomputers for workload distribution; and (iv) use data from local and remote compute and data resources with interactive performance. This dissertation presents Network Accessible Compute (NAC) resources and Network Accessible Display (NAD) resources for interactive visualization of data on displays ranging from laptops to high-resolution tiled display walls.
    A NAD is a display with functionality that enables its use over a network connection. A NAC is a computational resource that can produce content for network accessible displays. A system consisting of NACs and NADs is either push-based (NACs provide NADs with content) or pull-based (NADs request content from NACs). To attack the compatibility challenge, a push-based system was developed. The system enables several simultaneous users to mirror multiple regions from the desktops of their computers (NACs) onto nearby NADs (among others, a 22-megapixel display wall) without requiring separate DVI/VGA cables, permanent installation of third-party software, or opening firewall ports. The system has lower performance than a DVI/VGA cable approach, but increases flexibility, such as the possibility of sharing network accessible displays from multiple computers. At a resolution of 800 by 600 pixels, the system can mirror dynamic content between a NAC and a NAD at 38.6 frames per second (FPS). At 1600x1200 pixels, the refresh rate is 12.85 FPS. The bottleneck of the system is frame buffer capturing and the encoding/decoding of pixels. These two functional parts are executed in sequence, limiting the usage of additional CPU cores. By pipelining these parts and executing them on separate CPU cores, higher frame rates can be expected, by up to a factor of two in the best case. To attack all of the presented challenges, a pull-based system, WallScope, was developed. WallScope enables interactive visualization of local and remote data sets on high-resolution tiled display walls. The WallScope architecture comprises a compute-side and a display-side. The compute-side comprises a set of static and dynamic NACs. Static NACs are considered permanent to the system once added; this type of NAC typically has strict underlying security and access policies. Examples of such NACs are clusters, grids, and supercomputers. Dynamic NACs are compute resources that can register on-the-fly to become compute nodes in the system; examples are laptops and desktop computers. The display-side comprises a set of NADs and a data set containing data customized for the particular application domain of the NADs. NADs are based on a sort-first rendering approach where a visualization client is executed on each display node. The state of these visualization clients is provided by a separate state server, enabling central control of load and refresh rate. Based on the state received from the state server, the visualization clients request content from the data set. The data set is live in that it translates these requests into compute messages and forwards them to available NACs. The results of the computations are returned to the NADs for the final rendering. The live data set is close to the NADs, both in terms of bandwidth and latency, to enable interactive visualization. WallScope can visualize the Earth, gigapixel images, and other data available through the live data set. When visualizing the Earth on a 28-node display wall by combining the Blue Marble data set with the Landsat data set using a set of static NACs, the bottleneck of WallScope is the computation involved in combining the data sets. However, the time used to combine data sets on the NACs decreases by a factor of 23 when going from 1 to 26 compute nodes. The display-side can decode 414.2 megapixels of images per second (19 frames per second) when visualizing the Earth.
    The decoding process is multi-threaded, and higher frame rates are expected using multi-core CPUs. WallScope can rasterize a 350-page PDF document into 550 megapixels of image tiles and display these tiles on a 28-node display wall in 74.66 seconds (PNG) or 20.66 seconds (JPG) using a single quad-core desktop computer as a dynamic NAC. This time is reduced to 4.20 seconds (PNG) and 2.40 seconds (JPG) using 28 quad-core NACs. This shows that the application output from personal desktop computers can be decoupled from the resolution of the local desktop and display for use on high-resolution tiled display walls. It also shows that performance can be increased by adding computational resources, giving a resulting speedup of 17.77 (PNG) and 8.59 (JPG) using 28 compute nodes. Three principles are formulated based on the concepts and systems researched and developed: (i) establishing the end-to-end principle through customization states that the setup and interaction between a display-side and a compute-side in a visualization context can be performed by customizing one or both sides; (ii) Personal Computer (PC) – Personal Compute Resource (PCR) duality states that a user's computer is both a PC and a PCR, implying that desktop applications can be utilized locally using attached interaction devices and display(s), or remotely by other visualization systems for domain-specific production of data based on a user's personal desktop install; and (iii) domain-specific best-effort synchronization states that for distributed visualization systems running on tiled display walls, state handling can be performed using a best-effort synchronization approach, where visualization clients eventually get the correct state after a given period of time. Compared to state-of-the-art systems presented in the literature, the contributions of this dissertation enable utilization of a broader range of compute resources from a display wall, while at the same time providing better control over where to provide functionality and where to distribute workload between compute nodes and display nodes in a visualization context.
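    The pull-based flow described above (visualization clients requesting content, a live data set translating requests into compute messages for NACs) can be summarized in a small sketch. This is a conceptual illustration only; the class names, the tile-request format, and the render_tile stand-in are hypothetical and do not reflect WallScope's actual implementation.

```python
import concurrent.futures as cf

def render_tile(request):
    """Stand-in for the work a NAC performs for one tile request."""
    x, y, zoom = request
    return (x, y, zoom, f"pixels for tile ({x},{y}) at zoom {zoom}")

class LiveDataSet:
    """Translates NAD tile requests into compute jobs and forwards them to NACs."""
    def __init__(self, nac_pool):
        self.nac_pool = nac_pool               # e.g. a pool of worker 'NACs'

    def request(self, tile):
        return self.nac_pool.submit(render_tile, tile)

class DisplayNode:
    """A NAD: pulls the tiles it is responsible for and 'renders' the results."""
    def __init__(self, tiles, dataset):
        self.tiles, self.dataset = tiles, dataset

    def refresh(self):
        futures = [self.dataset.request(t) for t in self.tiles]
        return [f.result() for f in futures]   # final rendering would happen here

with cf.ThreadPoolExecutor(max_workers=4) as nacs:
    dataset = LiveDataSet(nacs)
    nad = DisplayNode(tiles=[(0, 0, 3), (1, 0, 3)], dataset=dataset)
    for tile in nad.refresh():
        print(tile)
```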

    Abstract visualization of large-scale time-varying data

    The explosion of large-scale time-varying datasets has created critical challenges for the scientists who must study and digest them. One core problem for visualization is to develop effective approaches for studying various data features and the temporal relationships among large-scale time-varying datasets. In this dissertation, we first present two abstract visualization approaches for visualizing and analyzing time-varying datasets. The first approach visualizes time-varying datasets with succinct lines that represent the temporal relationships of the datasets. A time line visualizes time steps as points and the temporal sequence as a line; time lines are generated by sampling the distributions of virtual words across time to study temporal features. The key idea of the time line is to encode various data properties with virtual words: we apply virtual words to characterize feature points and use their distribution statistics to measure temporal relationships. The second approach is ensemble visualization, which provides a highly abstract platform for visualizing an ensemble of datasets. Both approaches can be used for exploration, analysis, and demonstration purposes. The second component of this dissertation is an animated visualization approach for studying dramatic temporal changes. Animation has been widely used to show trends, dynamic features, and transitions in scientific simulations, whereas animated visualization itself is relatively new. We present an automatic animation generation approach that simulates the composition and transition of storytelling techniques and synthesizes animations to describe various event features. We also extend the concept of animated visualization to non-traditional time-varying datasets, namely network protocols, for visualizing key information in abstract sequences. We have evaluated the effectiveness of our animated visualization with a formal user study and demonstrated the advantages of animated visualization for studying time-varying datasets.
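    One possible reading of the time-line construction above, offered only as a sketch: quantize per-time-step feature values into a small "virtual word" vocabulary, build a word-distribution histogram per time step, and compare the histograms of consecutive steps. The quantization scheme, synthetic data, and distance measure here are assumptions, not the dissertation's definitions.

```python
import numpy as np

# Quantize feature values into a small "virtual word" vocabulary per time step,
# then measure how much the word distribution changes between consecutive steps.
rng = np.random.default_rng(7)
n_steps, n_points, n_words = 20, 5000, 16
data = rng.normal(loc=np.linspace(0, 2, n_steps)[:, None], size=(n_steps, n_points))

edges = np.linspace(data.min(), data.max(), n_words + 1)        # word boundaries
histograms = np.stack([np.histogram(step, bins=edges)[0] / n_points for step in data])

# L1 distance between word distributions of consecutive time steps
step_distance = np.abs(np.diff(histograms, axis=0)).sum(axis=1)
print(np.round(step_distance, 3))
```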

    Multidimensional computation and visualisation for marine controlled source electromagnetic methods

    The controlled source electromagnetic method is improving the search for oil and gas in marine settings and is becoming an integral component of many exploration toolkits. While the level of detail and benefit obtained from recorded electromagnetic data sets is limited by the tools available, interpretation is fundamentally restricted by non-unique and equivalent solutions. I create the tools necessary to rapidly compute and visualise multi-dimensional electromagnetic fields generated for a variety of controlled source electromagnetic surveys. This thesis is divided into two parts: the creation of an electromagnetic software framework, and electromagnetic research applications. The creation of a new electromagnetic software framework is covered in Part I. Steps to create and test a modern electromagnetic data structure, three-dimensional visualisation, and an interactive graphical user interface from the ground up are presented. Bringing together several computer science disciplines, ranging from parallel computing, networking and computer-human interaction to three-dimensional visualisation, a package specifically tailored to marine controlled source electromagnetic computation is formed. The electromagnetic framework comprises approximately 100,000 lines of new Java code and several third-party libraries, which provide low-level graphical, network and execution cross-platform functionality. The software provides a generic framework to integrate most computational engines and algorithms into a coherent global electromagnetic package, enabling interactive forward modelling, inversion and visualisation of electromagnetic data. Part II comprises several research applications utilising the developed electromagnetic software framework; cloud computing and streamline visualisation are covered. These topics address several problems in modern controlled source electromagnetic methods. Large 3D electromagnetic modelling and inversion may require days or even weeks on a single-threaded personal computer. A massively parallelised electromagnetic forward modelling and inversion method was created to dramatically improve computational time. The developed ’macro’ parallelisation method reduced computational time by several orders of magnitude with relatively little additional effort and without modification of the internal electromagnetic algorithm. The air wave is a significant component of marine controlled source electromagnetic surveys; however, there is controversy and confusion over its definition. The air wave has been described as a reflected, refracted, direct or diffusing wave, which has led to confusion over its physical reality.
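    The 'macro' parallelisation idea, treating the internal forward-modelling code as a black box and farming out independent source/frequency combinations to workers, can be sketched as follows. The thesis framework is written in Java; the Python sketch below, including the toy forward_model response and the half-space skin-depth assumption, is purely illustrative.

```python
from concurrent.futures import ProcessPoolExecutor
import itertools, math

def forward_model(task):
    """Black-box toy response for one source offset and frequency (not the thesis engine)."""
    offset, freq = task
    skin_depth = 503.0 * math.sqrt(1.0 / freq)          # assumes a 1 ohm-m seawater-like half-space
    return offset, freq, math.exp(-offset / skin_depth) / offset

offsets = [500.0, 1000.0, 2000.0, 4000.0]                # source-receiver offsets in metres
frequencies = [0.25, 0.75, 1.25]                         # transmitter frequencies in Hz
tasks = list(itertools.product(offsets, frequencies))

if __name__ == "__main__":
    # one independent task per worker; the internal algorithm is left unmodified
    with ProcessPoolExecutor() as pool:
        for offset, freq, amp in pool.map(forward_model, tasks):
            print(f"offset {offset:6.0f} m, {freq:.2f} Hz -> amplitude {amp:.3e}")
```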