
    Scale Stain: Multi-Resolution Feature Enhancement in Pathology Visualization

    Digital whole-slide images of pathological tissue samples have recently become feasible for use within routine diagnostic practice. These gigapixel-sized images enable pathologists to perform reviews using computer workstations instead of microscopes. Existing workstations visualize scanned images by providing a zoomable image space that reproduces the capabilities of the microscope. This paper presents a novel visualization approach that enables filtering of the scale-space according to color preference. The visualization method reveals diagnostically important patterns that are otherwise not visible. The paper demonstrates how this approach has been implemented in a fully functional prototype that lets the user navigate the visualization parameter space in real time. The prototype was evaluated for two common clinical tasks with eight pathologists in a within-subjects study. The data reveal that task efficiency increased by 15% using the prototype, with maintained accuracy. By analyzing behavioral strategies, it was possible to conclude that the efficiency gain was caused by a reduction of the panning needed to perform systematic search of the images. The prototype was well received by the pathologists, who did not detect any risks that would hinder its use in clinical routine.
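    The scale-space color filtering idea can be sketched in a few lines: build a multi-resolution pyramid, weight each coarse level by how well its pixels match a preferred stain color, and blend matching coarse structure into the full-resolution view. This is a loose illustration under assumed details (a mean-downsampling pyramid and RGB-distance weighting), not the paper's actual Scale Stain algorithm.

```python
import numpy as np

def mean_pyramid(image, levels):
    """Build a simple multi-resolution pyramid by repeated 2x2 mean downsampling."""
    pyramid = [image]
    for _ in range(levels - 1):
        h, w, c = pyramid[-1].shape
        down = pyramid[-1][:h - h % 2, :w - w % 2]
        pyramid.append(down.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3)))
    return pyramid

def color_weight(level, target_rgb, sigma=0.25):
    """Per-pixel weight: how closely each pixel matches the preferred color."""
    dist2 = ((level - target_rgb) ** 2).sum(axis=-1)
    return np.exp(-dist2 / (2 * sigma ** 2))

def scale_filtered_blend(image, target_rgb, levels=3):
    """Blend coarse-scale structure into the view wherever the preferred
    color dominates, so matching patterns stand out across scales."""
    pyr = mean_pyramid(image, levels)
    out = image.copy()
    for lvl in pyr[1:]:
        # nearest-neighbor upsample the coarse level back to full resolution
        up = np.kron(lvl, np.ones((image.shape[0] // lvl.shape[0],
                                   image.shape[1] // lvl.shape[1], 1)))
        w = color_weight(up, target_rgb)[..., None]
        out = w * up + (1 - w) * out
    return out
```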

    Foveated Rendering Techniques in Modern Computer Graphics

    Foveated rendering coupled with eye-tracking has the potential to dramatically accelerate interactive 3D graphics with minimal loss of perceptual detail. I have developed a new foveated rendering technique: Kernel Foveated Rendering (KFR), which parameterizes foveated rendering by embedding polynomial kernel functions in log-polar mapping. This GPU-driven technique uses parameterized foveation that mimics the distribution of photoreceptors in the human retina. I present a two-pass kernel foveated rendering pipeline that maps well onto modern GPUs. In the first pass, I compute the kernel log-polar transformation and render to a reduced-resolution buffer. In the second pass, I carry out the inverse log-polar transformation with anti-aliasing to map the reduced-resolution rendering to the full-resolution screen. I carry out user studies to empirically identify the KFR parameters and observe a 2.8X-3.2X speedup in rendering on 4K displays. The eye-tracking-guided kernel foveated rendering can resolve the mutually conflicting goals of interactive rendering and perceptual realism.
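    The first-pass mapping can be illustrated with a small sketch of the screen-to-buffer coordinate transform. The kernel here is an assumed single-exponent form K(x) = x**alpha (the actual KFR kernel is a more general polynomial fitted through user studies), so this is an illustration of the idea rather than the paper's implementation.

```python
import numpy as np

def screen_to_buffer(px, py, fovea, buf_size, screen_radius, alpha=2.0):
    """Map a screen pixel to kernel log-polar buffer coordinates.

    alpha is an assumed kernel exponent: applying the inverse kernel
    x**(1/alpha) stretches small radii, so the foveal region occupies a
    larger share of the reduced-resolution buffer.
    """
    dx, dy = px - fovea[0], py - fovea[1]
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)          # angle in (-pi, pi]
    # log-polar radius normalized to [0, 1], then warped by the kernel
    u = np.log1p(r) / np.log1p(screen_radius)
    u = u ** (1.0 / alpha)              # more buffer resolution near the fovea
    bu = u * (buf_size[0] - 1)
    bv = (theta + np.pi) / (2 * np.pi) * (buf_size[1] - 1)
    return bu, bv
```

    The second pass would invert this mapping per screen pixel to sample the reduced-resolution buffer.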

    An Information-Theoretic Approach to the Cost-benefit Analysis of Visualization in Virtual Environments

    Visualization and virtual environments (VEs) have been two interconnected parallel strands in visual computing for decades. Some VEs have been purposely developed for visualization applications, while many visualization applications are exemplary showcases in general-purpose VEs. Because of the development and operation costs of VEs, the majority of visualization applications in practice have yet to benefit from the capacity of VEs. In this paper, we examine this status quo from an information-theoretic perspective. Our objectives are to conduct cost-benefit analysis on typical VE systems (including augmented and mixed reality, theatre-based systems, and large powerwalls), to explain why some visualization applications benefit more from VEs than others, and to sketch out pathways for the future development of visualization applications in VEs. We support our theoretical propositions and analysis using theories and discoveries in the literature of cognitive sciences and the practical evidence reported in the literature of visualization and VEs.

    Exploratory Visualization of Astronomical Data on Ultra-high-resolution Wall Displays

    Ultra-high-resolution wall displays feature a very high pixel density over a large physical surface, which makes them well suited to the collaborative, exploratory visualization of large datasets. We introduce FITS-OW, an application designed for such wall displays, that enables astronomers to navigate in large collections of FITS images, query astronomical databases, and display detailed, complementary data and documents about multiple sources simultaneously. We describe how astronomers interact with their data using both the wall's touch-sensitive surface and handheld devices. We also report on the technical challenges we addressed in terms of distributed graphics rendering and data sharing over the computer clusters that drive wall displays.

    Innovative Diagnostic Tools for Ophthalmology in Low-Income Countries

    Globally, there are almost 300 million people who are blind or visually impaired, and over 90% live in developing countries. The gross disparity in access to ophthalmologists limits the ability to accurately diagnose potentially blinding conditions like cataract, glaucoma, trachoma, and uncorrected refractive error, and limits timely initiation of medical and surgical treatment. Since 85% of blindness is preventable, bridging this chasm in care is even more critical in preventing needless blindness. Many low-income countries must rely on community health workers, physician assistants, and cataract surgeons for primary eye care. Ophthalmology in low-income countries (LIC) is further challenged by complexities brought on by tropical climates, frail electric grids, poor road and water infrastructure, limited diagnostic capability, and limited treatment options. Vision 2020 set the goal of eliminating preventable blindness by 2020 despite formidable obstacles. Innovative technologies are emerging to test visual acuity, correct refractive error quickly and inexpensively, capture retinal images with portable tools, train cataract surgeons using simulators, capitalize on mHealth, and access ophthalmic information remotely. These advancements are allowing nonspecialized ophthalmic practitioners to provide low-cost, high-impact eye care in resource-limited regions around the world.

    Perception-driven approaches to real-time remote immersive visualization

    In remote immersive visualization systems, real-time 3D perception through RGB-D cameras, combined with modern Virtual Reality (VR) interfaces, enhances the user's sense of presence in a remote scene through rendered 3D reconstruction. This is particularly valuable when there is a need to visualize, explore, and perform tasks in inaccessible environments that are too hazardous or too distant. However, a remote visualization system requires that the entire pipeline, from 3D data acquisition to VR rendering, satisfy demands on speed, throughput, and visual realism. Especially when using point clouds, there is a fundamental quality difference between the acquired data of the physical world and the displayed data, because network latency and throughput limitations negatively impact the sense of presence and provoke cybersickness. This thesis presents state-of-the-art research that addresses these problems by taking the human visual system as inspiration, from sensor data acquisition to VR rendering. The human visual system does not have uniform vision across the field of view: visual acuity is sharpest at the center of the field of view and falls off towards the periphery. Peripheral vision provides lower resolution to guide eye movements, so that central vision visits all the crucial parts of a scene. As a first contribution, the thesis develops remote visualization strategies that exploit this acuity fall-off to facilitate the processing, transmission, buffering, and VR rendering of 3D reconstructed scenes while simultaneously reducing throughput requirements and latency. As a second contribution, the thesis looks into attentional mechanisms to select and draw user engagement to specific information in a dynamic spatio-temporal environment. It proposes a strategy to analyze the remote scene with respect to its 3D structure and layout, and the spatial, functional, and semantic relationships between objects in the scene. The strategy focuses on analyzing the scene with models of human visual perception, devoting a greater proportion of computational resources to objects of interest and creating a more realistic visualization. As a supplementary contribution, a new volumetric point-cloud density-based Peak Signal-to-Noise Ratio (PSNR) metric is proposed to evaluate the introduced techniques. An in-depth evaluation of the presented systems, a comparative examination of the proposed point-cloud metric, user studies, and experiments demonstrate that the methods introduced in this thesis are visually superior while significantly reducing latency and throughput.
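    A density-based point-cloud PSNR can be sketched in simplified form: voxelize both clouds on a shared grid, compare the resulting density volumes, and report PSNR over the density error. The grid resolution and normalization choices here are assumptions; the thesis's actual metric may differ in detail.

```python
import numpy as np

def voxel_density(points, bounds_min, bounds_max, res):
    """Count points per voxel on a res^3 grid, normalized by cloud size."""
    idx = ((points - bounds_min) / (bounds_max - bounds_min) * res).astype(int)
    idx = np.clip(idx, 0, res - 1)
    grid = np.zeros((res, res, res))
    np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return grid / max(len(points), 1)

def density_psnr(reference, degraded, res=16):
    """PSNR between the voxel-density grids of two point clouds,
    computed over a bounding box shared by both clouds."""
    lo = np.minimum(reference.min(axis=0), degraded.min(axis=0))
    hi = np.maximum(reference.max(axis=0), degraded.max(axis=0))
    ref_d = voxel_density(reference, lo, hi, res)
    deg_d = voxel_density(degraded, lo, hi, res)
    mse = ((ref_d - deg_d) ** 2).mean()
    if mse == 0:
        return float("inf")  # identical density distributions
    peak = ref_d.max()
    return 10.0 * np.log10(peak ** 2 / mse)
```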

    An Augmentative Gaze Directing Framework for Multi-Spectral Imagery

    Modern digital imaging techniques have made the task of imaging more prolific than ever, and the volume of images and data available through multi-spectral imaging methods for exploitation exceeds that which can be processed by human beings alone. The researchers proposed and developed a novel eye-movement-contingent framework and display system through adaptation of the demonstrated technique of subtle gaze direction, presenting modulations within the displayed image. The system sought to augment visual search task performance on aerial imagery by incorporating multi-spectral image processing algorithms to determine potential regions of interest within an image. The exploratory work studied the feasibility of visual gaze direction with the specific intent of extending this application to geospatial image analysis without the need for overt cueing to areas of potential interest, thereby maintaining the benefits of an undirected and unbiased search by an observer.
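    The gaze-contingent cue at the core of subtle gaze direction can be sketched as follows: modulate a region of interest only while it sits in the viewer's periphery, and suppress the cue as the gaze approaches so it is never consciously perceived. The thresholds and modulation amplitude below are assumed values for illustration, not those of the framework described above.

```python
import math

def subtle_gaze_modulation(gaze, roi, t, ecc_deg_per_px=0.03,
                           periphery_deg=10.0, rate_hz=10.0):
    """Return a luminance offset for a region of interest (ROI).

    gaze, roi: (x, y) pixel positions; t: time in seconds.
    ecc_deg_per_px converts pixel distance to visual angle (assumed
    display geometry). The cue is a low-amplitude flicker shown only
    while the ROI is peripheral.
    """
    dist_px = math.hypot(gaze[0] - roi[0], gaze[1] - roi[1])
    eccentricity = dist_px * ecc_deg_per_px
    if eccentricity < periphery_deg:
        return 0.0  # gaze is near the ROI: suppress the cue immediately
    # sinusoidal luminance modulation, detectable only by peripheral vision
    return 0.1 * math.sin(2 * math.pi * rate_hz * t)
```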

    Development of a Powerwall-based solution for the manual flagging of radio astronomy data from eMerlin

    This project was created with the intention of establishing an optimisation method for the manual flagging of interferometric data from the eMerlin radio astronomy array, using a Powerwall as a visualisation tool. The complexity of this process, due to the number of variables and parameters involved, demands a deep understanding of the data treatment. Once the data are acquired by the antennas, the signals are correlated. This process generates undesired signals, mostly originating from radio frequency interference. In addition, when calibration is performed, some values can mislead the expected outcome. Although flagging is supported by algorithms, the method is not one hundred percent accurate, which is why visual inspection is still required. Using a Powerwall as a visualisation system allows different and new dynamics in terms of the interaction of the analyst with the information required to perform the flagging.
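    The kind of automatic flagging that still needs manual review can be illustrated with a minimal iterative sigma-clipping pass over visibility amplitudes. This is a generic sketch, not the flagger used in the project; real RFI flaggers (and the calibration-aware checks mentioned above) are considerably more sophisticated.

```python
import numpy as np

def sigma_clip_flags(amplitudes, n_sigma=5.0, iterations=3):
    """Flag visibility amplitudes that deviate strongly from the
    statistics of the unflagged data, refining over a few iterations."""
    flags = np.zeros(amplitudes.shape, dtype=bool)
    for _ in range(iterations):
        good = amplitudes[~flags]
        mu, sigma = good.mean(), good.std()
        # flag anything further than n_sigma from the current mean
        flags |= np.abs(amplitudes - mu) > n_sigma * sigma
    return flags
```

    Points such a pass misses (or wrongly flags) are exactly what the analyst inspects visually on the Powerwall.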

    Digitization, innovation, and participation: digital conviviality of the Google Cultural Institute

    2018 Summer. Includes bibliographical references. To view the abstract, please see the full text of the document.