
    Explorable images for visualizing volume data


    On in-situ visualization for strongly coupled partitioned fluid-structure interaction

    We present an integrated in-situ visualization approach for partitioned multi-physics simulation of fluid-structure interaction. The simulation itself is treated as a black box: only the information at the fluid-structure interface is considered, and it is communicated between the fluid and solid solvers by a separate coupling tool. The visualization of the interface data is performed in conjunction with the fluid solver. Furthermore, we present new visualization techniques for analyzing the interrelation of the two solvers, with emphasis on the errors introduced by the discretization in space and time and by the reconstruction. Our visualization approach also enables the investigation of these errors with respect to their mutual influence on the two simulation codes and their space-time discretization. For efficient interactive visualization, we employ the concept of explorable spatiotemporal images, which also enables finite-time temporal navigation in an in-situ context. We demonstrate our overall approach and its utility by means of a fluid-structure simulation using OpenFOAM that is coupled by the preCICE software layer.
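
    Below is a minimal sketch of the kind of strongly coupled partitioned loop the abstract describes, with an in-situ visualization hook attached on the fluid side. FluidSolver, SolidSolver, and visualize_interface are hypothetical stand-ins for the black-box solvers and the coupling layer; this is not the preCICE API.

```python
# Sketch of a strongly coupled partitioned FSI loop with an in-situ hook.
# Only interface data crosses the solver boundary, as in the abstract.
import numpy as np

class FluidSolver:
    """Black-box fluid solver; exposes only interface tractions."""
    def step(self, interface_displacements: np.ndarray) -> np.ndarray:
        # ... advance fluid state; return tractions at the coupling interface
        return np.zeros_like(interface_displacements)

class SolidSolver:
    """Black-box solid solver; exposes only interface displacements."""
    def step(self, interface_tractions: np.ndarray) -> np.ndarray:
        # ... advance solid state; return displacements at the interface
        return np.zeros_like(interface_tractions)

def visualize_interface(t: float, disp: np.ndarray, trac: np.ndarray) -> None:
    # In-situ hook: render or store the interface state (e.g. as an
    # explorable spatiotemporal image) instead of dumping the full volume.
    pass

fluid, solid = FluidSolver(), SolidSolver()
n_points, dt, t_end = 128, 1e-3, 1e-2
displacements = np.zeros((n_points, 3))

t = 0.0
while t < t_end:
    # Implicit (strongly coupled) scheme: sub-iterate until the interface
    # data is converged within the time step. Real couplers accelerate
    # this fixed-point iteration, e.g. with under-relaxation.
    for _ in range(50):
        tractions = fluid.step(displacements)
        new_displacements = solid.step(tractions)
        converged = np.linalg.norm(new_displacements - displacements) < 1e-8
        displacements = new_displacements
        if converged:
            break
    visualize_interface(t, displacements, tractions)  # in situ, fluid side
    t += dt
```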

    Volumetric Medical Images Visualization on Mobile Devices

    Volumetric medical image visualization is an important tool in the diagnosis and treatment of diseases. Historically, one of the most difficult tasks for medical specialists has been the accurate location of broken bones and of tissue damaged during chemotherapy treatment, among other applications such as techniques used in neurological studies; these situations underline the need for visualization in medicine. New technologies, the improvement and development of new hardware and software, and the updating of older graphics applications have resulted in specialized systems for medical visualization. However, the use of these techniques on mobile devices has been limited by their low performance. In our work, we propose a client-server scheme in which the model is compressed on the server side and reconstructed on a final thin-client device. The technique restricts the natural density values to achieve good bone visualization in medical models, transforming the rest of the data to zero. Our proposal applies a three-dimensional Haar wavelet function locally inside unit blocks of 16x16x16 voxels, similar to the "Wavelet-Based 3D Compression Scheme for Interactive Visualization of Very Large Volume Data" approach. We also implement a quantization algorithm that handles error coefficients according to their frequency distributions. Finally, we evaluate the volume visualization on current mobile devices and present the specifications for the implementation of our technique on the Nokia N900 mobile phone.
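
    The block-wise compression idea can be sketched as follows: one level of a separable 3D Haar transform on a 16x16x16 block, preceded by the bone-range restriction and followed by a simple dead-zone quantization of the coefficients. The density thresholds and quantization step below are illustrative assumptions, not values from the paper.

```python
# Sketch: block-wise 3D Haar transform plus coarse quantization.
import numpy as np

def haar_1d(a: np.ndarray, axis: int) -> np.ndarray:
    """One Haar level along one axis: averages first, details second."""
    a = np.moveaxis(a, axis, 0)
    avg = (a[0::2] + a[1::2]) / 2.0
    det = (a[0::2] - a[1::2]) / 2.0
    return np.moveaxis(np.concatenate([avg, det], axis=0), 0, axis)

def haar_3d(block: np.ndarray) -> np.ndarray:
    """One level of the separable 3D Haar transform."""
    for axis in range(3):
        block = haar_1d(block, axis)
    return block

# Restrict densities to a bone-like range, zeroing the rest (as in the
# abstract); the HU-style thresholds here are illustrative assumptions.
rng = np.random.default_rng(0)
volume_block = rng.integers(-1000, 3000, size=(16, 16, 16)).astype(np.float64)
bone = np.where((volume_block > 300) & (volume_block < 3000), volume_block, 0.0)

coeffs = haar_3d(bone)
step = 8.0  # quantization step; coarser steps give higher compression
quantized = np.round(coeffs / step).astype(np.int32)  # small details become 0
print("nonzero coefficients:", np.count_nonzero(quantized), "of", quantized.size)
```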

    The world’s wealth in pizza: Improving the comprehension of large numbers through information visualization

    Extreme numerical magnitudes are part of our daily lives, from science to economics to politics. For large monetary measures in particular, however, there are no comprehensive models that help visualization practitioners promote their understanding. Previous work on this topic has provided a framework for the visual depiction of complex measures but did not assess its effectiveness in communicating the real magnitude of the presented measures. In this thesis I bring together findings from Information Visualization and numerical cognition to extend the existing framework and assess the effects of different strategies, with a focus on monetary measures. For this, I created three visualization prototypes and conducted a series of user tests focused on insight creation. The user tests highlighted advantages and disadvantages of the different strategies and yielded various findings for their implementation in Information Visualization.
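
    As a worked example of the rescaling strategy such prototypes rely on, the following converts a large monetary figure into an everyday unit. All figures here (wealth estimate, pizza price and size) are assumptions made for illustration, not data from the thesis.

```python
# Back-of-the-envelope rescaling of an extreme magnitude into pizza units.
WORLD_WEALTH_USD = 400e12              # assumed ~400 trillion USD
PIZZA_PRICE_USD = 10.0                 # assumed price per pizza
PIZZA_AREA_M2 = 3.14159 * 0.165 ** 2   # assumed 33 cm diameter pizza

n_pizzas = WORLD_WEALTH_USD / PIZZA_PRICE_USD
total_area_km2 = n_pizzas * PIZZA_AREA_M2 / 1e6

print(f"{n_pizzas:.2e} pizzas")                 # ~4e13 pizzas
print(f"covering ~{total_area_km2:,.0f} km^2")  # a continental-scale area
```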

    Reducing Occlusion in Cinema Databases through Feature-Centric Visualizations

    In modern supercomputer architectures, I/O capabilities do not keep up with computational speed. Image-based techniques are one very promising approach to a scalable output format for visual analysis, in which a reduced output corresponding to the visible state of the simulation is rendered in situ and stored to disk. These techniques can support interactive exploration of the data through image compositing and other methods, but automatic methods of highlighting data and reducing clutter can make them more effective. In this paper, we suggest a method of assisted exploration that combines feature-centric analysis with image-space techniques, and we show how reducing the data to features of interest decreases occlusion in the output for a set of example applications.
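
    The core idea can be illustrated with a toy image-space reduction: project only the voxels belonging to a feature of interest, so the surrounding data no longer occludes it. The synthetic scalar field and threshold below are assumptions; actual Cinema databases store pre-rendered images per viewpoint.

```python
# Sketch: feature-centric projection versus full-data projection.
import numpy as np

rng = np.random.default_rng(0)
field = rng.random((64, 64, 64))        # synthetic scalar field
feature_mask = field > 0.95             # "feature of interest" by threshold

# Full-data maximum-intensity projection: everything contributes,
# so features of interest are buried in clutter.
full_mip = field.max(axis=2)

# Feature-centric projection: only feature voxels contribute.
feature_only = np.where(feature_mask, field, 0.0)
feature_mip = feature_only.max(axis=2)

occupied = np.count_nonzero(feature_mip) / feature_mip.size
print(f"feature pixels cover {occupied:.0%} of the image plane")
```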

    Explainable Semantic Medical Image Segmentation with Style

    Semantic medical image segmentation using deep learning has recently achieved high accuracy, making it appealing for clinical problems such as radiation therapy. However, the lack of high-quality semantically labelled data remains a challenge, leading to model brittleness under small shifts in the input data. Most works require extra data for semi-supervised learning and lack interpretability of the boundaries of the training data distribution during training, which is essential for model deployment in clinical practice. We propose a fully supervised generative framework that can achieve generalisable segmentation with only limited labelled data by simultaneously constructing an explorable manifold during training. The proposed approach pairs medical image style generation with a segmentation-task-driven discriminator in end-to-end adversarial training. The discriminator is generalised to small domain shifts as far as the training data permits, and the generator automatically diversifies the training samples using a manifold of input features learnt during segmentation. All the while, the discriminator guides the manifold learning by supervising the semantic content and fine-grained features separately during image diversification. After training, visualisation of the learnt manifold from the generator is available to interpret the model's limits. Experiments on a fully semantic, publicly available pelvis dataset demonstrated that our method is more generalisable to shifts than other state-of-the-art methods while being more explainable through the explorable manifold.
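
    For orientation, here is a generic adversarial-segmentation training step in PyTorch, illustrating the kind of generator/discriminator supervision the abstract outlines. The toy networks, loss weighting, and random batch are illustrative assumptions; the paper's style-based manifold construction is not reproduced here.

```python
# Sketch: one adversarial training step for a segmentation network.
import torch
import torch.nn as nn

seg = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 3, padding=1))           # toy segmenter
disc = nn.Sequential(nn.Conv2d(2, 8, 3, stride=2, padding=1), nn.ReLU(),
                     nn.Flatten(), nn.Linear(8 * 32 * 32, 1))  # judges (image, mask)

opt_s = torch.optim.Adam(seg.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

image = torch.rand(4, 1, 64, 64)                 # stand-in batch
mask = (torch.rand(4, 1, 64, 64) > 0.5).float()  # stand-in labels

# Discriminator step: real (image, true mask) vs fake (image, predicted mask).
pred = torch.sigmoid(seg(image)).detach()
d_real = disc(torch.cat([image, mask], dim=1))
d_fake = disc(torch.cat([image, pred], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Segmenter step: supervised loss plus an adversarial term that pushes
# predictions toward the distribution of real masks.
logits = seg(image)
d_fake = disc(torch.cat([image, torch.sigmoid(logits)], dim=1))
loss_s = bce(logits, mask) + 0.1 * bce(d_fake, torch.ones_like(d_fake))
opt_s.zero_grad(); loss_s.backward(); opt_s.step()
```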

    DRLViz: Understanding Decisions and Memory in Deep Reinforcement Learning

    We present DRLViz, a visual analytics interface for interpreting the internal memory of an agent (e.g. a robot) trained using deep reinforcement learning. This memory is composed of large temporal vectors updated as the agent moves through an environment, and it is not trivial to understand due to the number of dimensions, dependencies on past vectors, spatial/temporal correlations, and co-correlations between dimensions. It is often referred to as a black box, as only the inputs (images) and outputs (actions) are intelligible to humans. DRLViz assists experts in interpreting decisions using memory reduction interactions and in investigating the role of parts of the memory when errors have been made (e.g. a wrong direction). We report on DRLViz applied in the context of a video game simulator (ViZDoom) for a navigation scenario with item-gathering tasks. We also report on experts' evaluation of DRLViz, its applicability to other scenarios and navigation problems beyond simulation games, and its contribution to the interpretability and explainability of black-box models in the field of visual analytics.
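
    A minimal sketch of the data such a tool operates on: roll out a recurrent policy, record the hidden-state vector at every step, and project the resulting time-by-dimensions matrix to 2D for a timeline view. The toy LSTM policy and random observations are assumptions; DRLViz itself targets ViZDoom agents.

```python
# Sketch: capture an agent's memory trajectory and project it to 2D.
import numpy as np
import torch
import torch.nn as nn

obs_dim, hidden_dim, n_actions, n_steps = 32, 64, 4, 200
lstm = nn.LSTM(obs_dim, hidden_dim)        # toy recurrent policy core
head = nn.Linear(hidden_dim, n_actions)    # toy action head

h = torch.zeros(1, 1, hidden_dim)
c = torch.zeros(1, 1, hidden_dim)
memory = []                                # the "large temporal vectors"

with torch.no_grad():
    for _ in range(n_steps):
        obs = torch.randn(1, 1, obs_dim)   # stand-in for a game frame
        out, (h, c) = lstm(obs, (h, c))
        action = head(out.squeeze(0)).argmax().item()  # act in the env here
        memory.append(h.squeeze().numpy().copy())

# PCA via SVD: project the memory trajectory to 2D for inspection.
M = np.array(memory)
M -= M.mean(axis=0)
_, _, vt = np.linalg.svd(M, full_matrices=False)
trajectory_2d = M @ vt[:2].T               # (n_steps, 2), ready to plot
print(trajectory_2d.shape)
```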

    Crowdsourced Quantification and Visualization of Urban Mobility Space Inequality

    Most cities are car-centric, allocating a privileged amount of urban space to cars at the expense of sustainable mobility modes such as cycling. At the same time, privately owned vehicles are vastly underused and occupy spacious parking areas, wasting valuable opportunities for accommodating more people in a livable urban environment. Since a data-driven quantification and visualization of such urban mobility space inequality has been lacking, here we explore how crowdsourced data can help advance its understanding. In particular, we describe how the open-source online platform What the Street!? uses massive user-generated data from OpenStreetMap for the interactive exploration of city-wide mobility spaces. Using polygon packing and graph algorithms, the platform rearranges all parking and mobility spaces for cars, rail, and bicycles in a city so that they are directly comparable, making mobility space inequality accessible to a broad public. This crowdsourced method confirms a prevalent imbalance between modal share and space allocation in 23 cities worldwide, typically discriminating against bicycles. Analyzing the guesses of the platform's visitors about mobility space distributions, we find that this discrimination is consistently underestimated in public opinion. Finally, we discuss a visualized scenario in which extensive parking areas are regained through fleets of shared, autonomous vehicles. We outline how such accessible visualization platforms can help urban planners and policy makers reclaim road and parking space to push forward sustainable transport solutions.
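
    The imbalance the platform exposes can be captured with a simple ratio of each mode's share of street space to its share of trips, as sketched below. The city numbers are made-up assumptions, not What the Street!? data.

```python
# Sketch: modal share versus space allocation for one hypothetical city.
modal_share = {"car": 0.40, "bike": 0.15, "rail": 0.45}  # share of trips
space_share = {"car": 0.75, "bike": 0.05, "rail": 0.20}  # share of mobility space

for mode in modal_share:
    ratio = space_share[mode] / modal_share[mode]
    # ratio > 1: the mode gets more space than its trip share would suggest
    print(f"{mode:>4}: space/trips ratio = {ratio:.2f}")
```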