37 research outputs found

    Doctor of Philosophy

    Get PDF
Visualization and exploration of volumetric datasets has been an active area of research for over two decades. During this period, the volumetric datasets used by domain experts have evolved from univariate to multivariate. Volume datasets are typically explored and classified via transfer function design and visualized using direct volume rendering. Multivariate transfer functions have emerged to improve classification results and to enable the exploration of multivariate volume datasets. In this dissertation, we describe our research on multivariate transfer function design. To improve the classification of univariate volumes, various one-dimensional (1D) and two-dimensional (2D) transfer function spaces have been proposed; however, each of these methods works well on only some datasets. We propose a novel transfer function method that provides better classifications by combining different transfer function spaces. Methods have also been proposed for exploring multivariate simulations, but these approaches are not suitable for complex real-world datasets and may be unintuitive for domain users. To this end, we propose a method based on user-selected samples in the spatial domain that makes complex multivariate volume data visualization more accessible to domain users. This method still requires users to fine-tune transfer functions in parameter-space transfer function widgets, which may be unfamiliar to them. We therefore propose GuideME, a novel slice-guided semiautomatic multivariate volume exploration approach. GuideME provides the user with an easy-to-use, slice-based interface that suggests feature boundaries and allows the user to select features via click and drag; an optimal transfer function is then generated automatically by optimizing a response function. Throughout the exploration process, the user never needs to interact with the parameter views. Finally, real-world multivariate volume datasets are often too large to fit in GPU memory, or even the main memory of standard workstations. We propose a ray-guided, out-of-core, interactive volume rendering and efficient query method to support large and complex multivariate volumes on standard workstations.
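
To make the difference between the transfer function spaces mentioned above concrete, the following Python sketch (purely illustrative; the array shapes and function names are assumptions, not the dissertation's implementation) shows how a 1D transfer function classifies voxels by scalar value alone, and how a 2D value/gradient-magnitude transfer function adds a second axis that helps separate material boundaries from homogeneous regions:

```python
import numpy as np

def classify_1d(volume, tf_rgba):
    """Map each scalar voxel value through a 1D transfer function.

    volume:  (X, Y, Z) array of values normalized to [0, 1]
    tf_rgba: (N, 4) lookup table of RGBA entries
    """
    n = len(tf_rgba)
    idx = np.clip((volume * (n - 1)).astype(int), 0, n - 1)
    return tf_rgba[idx]                       # (X, Y, Z, 4) classified volume

def classify_2d(volume, tf_rgba_2d):
    """Classify on a 2D (value, gradient-magnitude) domain, the classic
    way of separating boundaries from homogeneous regions."""
    gx, gy, gz = np.gradient(volume)
    gmag = np.sqrt(gx**2 + gy**2 + gz**2)
    gmag = gmag / (gmag.max() + 1e-8)         # normalize to [0, 1]
    n = tf_rgba_2d.shape[0]
    vi = np.clip((volume * (n - 1)).astype(int), 0, n - 1)
    gi = np.clip((gmag * (n - 1)).astype(int), 0, n - 1)
    return tf_rgba_2d[vi, gi]                 # (X, Y, Z, 4)
```

A combined method in the spirit of the dissertation would evaluate several such spaces per dataset and merge or select among their classifications.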

    Interactive feature detection in volumetric data

    Full text link
This dissertation presents three volumetric feature detection approaches that focus on an efficient interplay between user and system. The first technique exploits the LH transfer function space to let the user classify boundaries by marking them directly in the volume-rendered image, without requiring any interaction in the data domain. Second, we propose a shape-based feature detection approach that blurs the border between fast but limited classification and powerful but laborious segmentation techniques: starting from a coarse pre-segmentation, the volume is decomposed into a set of smaller regions whose shapes are then determined with purpose-built classifiers. Third, we present a guided probabilistic volume segmentation workflow that focuses on minimizing uncertainty in the resulting segmentation. In an iterative process, the system continuously assesses the uncertainty of an intermediate random-walker-based segmentation to detect regions of high ambiguity, and directs the user's attention to these regions to support the correction of potential segmentation errors.
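
The uncertainty-guided step can be sketched in a few lines of Python. The following is an assumption-laden illustration (not the dissertation's system) that uses scikit-image's random walker to obtain per-voxel label probabilities and flags ambiguous regions by their entropy:

```python
import numpy as np
from skimage.segmentation import random_walker  # scikit-image

def ambiguity_map(volume, seeds, beta=130):
    """Entropy of the random-walker label probabilities per voxel.

    seeds: integer array of the same shape as `volume`;
           0 = unlabeled, 1..K = user-marked labels.
    High entropy marks regions the user should inspect and correct.
    """
    # prob has shape (K, X, Y, Z): probability of each label per voxel
    prob = random_walker(volume, seeds, beta=beta, mode='cg',
                         return_full_prob=True)
    p = np.clip(prob, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=0)    # Shannon entropy per voxel
    return entropy / np.log(prob.shape[0])    # normalized to [0, 1]
```

Thresholding this map yields candidate regions to present to the user for correction in the next iteration.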

    Interactive visualization of large image collections

    Get PDF

    Pervasive Personal Information Spaces

    Get PDF
Each user’s electronic information-interaction uniquely matches their information behaviour, activities and work context. In a ubiquitous computing environment, this information-interaction, and the underlying personal information, is distributed across multiple personal devices. This thesis investigates the idea of Pervasive Personal Information Spaces for improving ubiquitous personal information-interaction. Pervasive Personal Information Spaces integrate information distributed across multiple personal devices to support anytime, anywhere access to an individual’s information. This information is then visualised through context-based, flexible views that are personalised through user activities, diverse annotations and spontaneous information associations. The Spaces model embodies the characteristics of Pervasive Personal Information Spaces, which emphasise integration of the user’s information space, automation and communication, and flexible views. The model forms the basis for InfoMesh, an example implementation developed for desktops, laptops and PDAs. The design of the system was supported by a tool developed during the research, called activity snaps, that captures realistic user activity information to aid the design and evaluation of interactive systems. User evaluation of InfoMesh elicited a positive response from participants for the ideas underlying Pervasive Personal Information Spaces, especially for carrying out work naturally and for visualising, interpreting and retrieving information according to personalised contexts, associations and annotations. The user studies supported the research hypothesis, revealing that context-based flexible views may indeed provide better contextual, ubiquitous access and visualisation of information than current-day systems.

    Exploring Sparse, Unstructured Video Collections of Places

    Get PDF
The abundance of mobile devices and digital cameras with video capture makes it easy to obtain large collections of video clips that contain the same location, environment, or event. However, such an unstructured collection is difficult to comprehend and explore. We propose a system that analyses collections of unstructured but related video data to create a Videoscape: a data structure that enables interactive exploration of video collections by visually navigating, spatially and/or temporally, between different clips. We automatically identify transition opportunities, or portals, and from these construct the Videoscape: a graph whose edges are video clips and whose nodes are portals between clips. Once structured, the videos can be explored interactively by walking the graph or via a geographic map. Given this system, we gauge preference for different video transition styles in a user study and derive heuristics that automatically choose an appropriate transition style. We evaluate the system in three further user studies, which allow us to conclude that Videoscapes provide significant benefits over related methods. Our system enables previously unseen ways of interactive spatio-temporal exploration of casually captured videos, and we demonstrate this on several video collections.
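
The Videoscape structure described above (portals as nodes, clips as edges) can be sketched as a small graph data structure. The following Python is illustrative only; the class and field names are invented, not the paper's implementation:

```python
from collections import deque
from dataclasses import dataclass

@dataclass(frozen=True)
class Portal:
    """A transition opportunity: a place where two clips show the same
    scene, so playback can jump between them."""
    portal_id: int
    location: tuple      # e.g. (latitude, longitude), if geo-registered

@dataclass
class ClipEdge:
    """An edge of the Videoscape: a video segment connecting two portals."""
    clip_path: str
    start_frame: int
    end_frame: int

class Videoscape:
    """Graph whose nodes are portals and whose edges are video clips."""
    def __init__(self):
        self.adj = {}    # Portal -> list of (Portal, ClipEdge)

    def add_clip(self, a: Portal, b: Portal, edge: ClipEdge):
        self.adj.setdefault(a, []).append((b, edge))
        self.adj.setdefault(b, []).append((a, edge))  # traversable both ways

    def walk(self, start: Portal):
        """Yield portals reachable from `start`, breadth-first: the
        places a viewer can navigate to from their current position."""
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            yield node
            for nxt, _ in self.adj.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
```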

    Digital Image Access & Retrieval

    Get PDF
The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections, in three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    Improving Collection Understanding for Web Archives with Storytelling: Shining Light Into Dark and Stormy Archives

    Get PDF
Collections are the tools that people use to make sense of an ever-increasing number of archived web pages. As collections themselves grow, we need tools to make sense of them. Tools that work on the general web, like search engines, are not a good fit for these collections because search engines do not currently represent multiple document versions well. Web archive collections are vast, some containing hundreds of thousands of documents. Thousands of collections exist, many of which cover the same topic, and few include standardized metadata. Too many documents from too many collections with insufficient metadata make collection understanding an expensive proposition. This dissertation establishes a five-process model to assist with web archive collection understanding. The model produces a social media story: a visualization with which most web users are familiar. Each social media story contains surrogates, which are summaries of individual documents; presented together, the surrogates summarize the topic of the story and, after applying our storytelling model, the topic of a web archive collection. We develop and test a framework to select the best exemplars that represent a collection, and establish that algorithms produced from these primitives select exemplars that are otherwise undiscoverable using conventional search engine methods. We generate story metadata to improve the information scent of a story so that users can understand it better. After an analysis showing that existing platforms perform poorly for web archives, and a user study establishing the best surrogate type, we generate document metadata for the exemplars with machine learning. We then visualize the story and document metadata together and distribute them to satisfy the information needs of multiple personas who benefit from our model. Our tools serve as a reference implementation of our Dark and Stormy Archives storytelling model: Hypercane selects exemplars and generates story metadata, MementoEmbed generates document metadata, and Raintale visualizes and distributes the story based on that metadata. By providing understanding immediately, our stories save users the time and effort of reading thousands of documents and, most importantly, help them understand web archive collections.
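
As a hedged illustration of the exemplar-selection idea, and emphatically not Hypercane's actual algorithm, one simple primitive clusters document embeddings and picks the member nearest each cluster centre, so that the chosen exemplars span the collection's topics:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_exemplars(doc_vectors, k=10):
    """Pick k exemplar documents from a collection.

    doc_vectors: (n_docs, dim) array of document feature vectors
                 (e.g. embeddings; the representation is assumed given).
    Returns the indices of the chosen exemplars.
    """
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(doc_vectors)
    exemplars = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(doc_vectors[members] - km.cluster_centers_[c],
                               axis=1)
        exemplars.append(members[np.argmin(dists)])   # medoid-like pick
    return sorted(exemplars)
```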

    Cognitive Foundations for Visual Analytics

    Get PDF
In this report, we provide an overview of the scientific and technical literature on information visualization and visual analytics (VA). Topics discussed include an update and overview of the extensive literature search conducted for this study, the nature and purpose of the field, major research thrusts, and scientific foundations. We review methodologies for evaluating and measuring the impact of VA technologies, as well as taxonomies that have been proposed for various purposes to support the VA community. A cognitive science perspective underlies each of these discussions.

    Visual analytics methods for retinal layers in optical coherence tomography data

    Get PDF
Optical coherence tomography is an important imaging technology for the early detection of ocular diseases, yet identifying substructural defects in the 3D retinal images is challenging. We therefore present novel visual analytics methods for the exploration of small and localized retinal alterations. Our methods reduce the data complexity and ensure the visibility of relevant information. The results of two cross-sectional studies show that our methods improve the detection of retinal defects, contributing to a deeper understanding of the retinal condition at an early stage of disease.
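
One common way to reduce data complexity in OCT analysis, shown here purely as an illustrative Python sketch rather than the authors' method, is to collapse a segmented 3D retinal layer into a 2D thickness-deviation map against normative data (the axial resolution value and all names here are assumptions):

```python
import numpy as np

def thickness_deviation_map(top_surface, bottom_surface,
                            normative_mean_um, normative_std_um,
                            axial_res_um=3.9):
    """Collapse one segmented retinal layer of a 3D OCT volume into a
    2D z-score map of thickness deviations.

    top_surface, bottom_surface: (X, Y) boundary depths in voxels,
        produced by a prior layer segmentation (assumed given).
    normative_mean_um, normative_std_um: (X, Y) healthy-reference maps.
    axial_res_um: axial voxel size; 3.9 um is a typical device value.
    """
    thickness_um = (bottom_surface - top_surface) * axial_res_um
    z = (thickness_um - normative_mean_um) / np.maximum(normative_std_um, 1e-6)
    return z     # large |z| values flag small, localized alterations
```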

    Explicit design of transfer functions for volume-rendered images by combining histograms, thumbnails, and sketch-based interaction

    No full text
Visual quality of volume rendering for medical imagery depends strongly on the underlying transfer function. Conventional Windows–Icons–Menus–Pointer interfaces typically require the user to browse a lengthy catalog of predefined transfer functions or to painstakingly refine the transfer function by clicking and dragging several independent handles. To make the standard design process less difficult and tedious, this paper proposes novel interactions on a sketch-based interface that supports the design of 1D transfer functions via touch gestures, directly controlling voxel opacity and easily assigning colors. Users can select different types of transfer function shapes, including a ramp function, freehand curve drawing, and slider bars similar to those of a mixing table. An assorted array of thumbnails provides an overview of the data while the transfer function is edited. User performance is evaluated by comparing the time and effort needed to complete a number of tests with the sketch-based and conventional interfaces. Users were able to explore and understand volume data more rapidly with the sketch-based interface, as the number of design iterations needed to obtain a desirable transfer function was reduced. In addition, informal evaluation sessions carried out with professionals (two senior radiologists, a general surgeon and two scientific illustrators) provided valuable feedback on how suitable the sketch-based interface is for illustration, patient communication and medical education.
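
To illustrate the ramp-shaped transfer functions mentioned above, here is a small Python sketch (an assumed illustration, not the paper's code) that turns a single drag gesture into a 1D opacity lookup table:

```python
import numpy as np

def ramp_opacity_tf(p0, p1, n=256):
    """Turn one drag gesture into a 1D opacity lookup table.

    p0, p1: (value, opacity) pairs in [0, 1] x [0, 1], e.g. the
    touch-down and touch-up positions of the user's stroke.
    Opacity is 0 below the ramp, rises linearly across it, and holds
    the end opacity above it.
    """
    (v0, a0), (v1, a1) = sorted([p0, p1])       # order by data value
    v0 = min(max(v0, 1e-6), 1 - 2e-6)           # keep interpolation knots
    v1 = min(max(v1, v0 + 1e-6), 1 - 1e-6)      # strictly increasing
    values = np.linspace(0.0, 1.0, n)
    return np.interp(values, [0.0, v0, v1, 1.0], [0.0, a0, a1, a1])
```

For example, ramp_opacity_tf((0.3, 0.0), (0.6, 0.8)) makes data values below 0.3 fully transparent and renders values above 0.6 at 80% opacity.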