
    NeuralSI: Neural Design of Semantic Interaction for Interactive Deep Learning

    An increasing number of studies have utilized interactive deep learning as the analytic model of visual analytics systems for complex sensemaking tasks. In these systems, traditional interactive dimensionality reduction (DR) models are commonly utilized to build a bi-directional bridge between high-dimensional deep learning representations and low-dimensional visualizations. While these systems better capture analysts' intents in the context of human-in-the-loop interactive deep learning, traditional DR cannot support several desired properties for visual analytics, including out-of-sample extensions, stability, and real-time inference. To address this issue, we propose the neural design framework of semantic interaction for interactive deep learning. In our framework, we replace the traditional DR with a neural projection network and append it to the deep learning model as the task-specific output layer. Therefore, the analytic model (deep learning) and visualization method (interactive DR) form one integrated end-to-end trainable deep neural network. In order to understand the performance of the neural design in comparison to the state-of-the-art, we systematically performed two complementary studies, a human-centered qualitative case study and an algorithm-centered simulation-based quantitative experiment. The results of these studies indicate that the neural design can give semantic interaction systems substantial advantages while maintaining inference ability comparable to the state-of-the-art model. Comment: 19 pages, 9 figures.
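
    The neural design described above can be sketched roughly as follows: a pretrained encoder with a small projection head appended as the task-specific output layer, so that interaction-derived losses backpropagate through the whole network. This is a minimal hypothetical outline, not the authors' implementation; the encoder, layer sizes, and mean-squared-error loss are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a 2D projection head appended to a
# pretrained encoder, so the analytic model and the projection form one
# end-to-end trainable network in place of a traditional DR step.
import torch
import torch.nn as nn

class NeuralProjection(nn.Module):
    def __init__(self, encoder: nn.Module, feature_dim: int = 768):
        super().__init__()
        self.encoder = encoder                  # e.g. a pretrained language model
        self.head = nn.Sequential(              # task-specific output layer
            nn.Linear(feature_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 2),                  # low-dimensional (2D) layout
        )

    def forward(self, x):
        return self.head(self.encoder(x))

def interaction_step(model, optimizer, batch, target_xy):
    # Semantic interactions supply target 2D positions for a few moved points;
    # the loss is backpropagated through both the head and the encoder.
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(batch), target_xy)
    loss.backward()
    optimizer.step()
    return loss.item()
```

    Because the projection is itself a feed-forward layer, unseen items can be placed with a single forward pass, which is the out-of-sample, real-time behaviour the abstract notes traditional DR lacks.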

    DeepSI: Interactive Deep Learning for Semantic Interaction

    In this paper, we design novel interactive deep learning methods to improve semantic interactions in visual analytics applications. The ability of semantic interaction to infer analysts' precise intents during sensemaking is dependent on the quality of the underlying data representation. We propose the DeepSI_finetune framework that integrates deep learning into the human-in-the-loop interactive sensemaking pipeline, with two important properties. First, deep learning extracts meaningful representations from raw data, which improves semantic interaction inference. Second, semantic interactions are exploited to fine-tune the deep learning representations, which then further improves semantic interaction inference. This feedback loop between human interaction and deep learning enables efficient learning of user- and task-specific representations. To evaluate the advantage of embedding the deep learning within the semantic interaction loop, we compare DeepSI_finetune against a state-of-the-art but more basic use of deep learning as only a feature extractor pre-processed outside of the interactive loop. Results of two complementary studies, a human-centered qualitative case study and an algorithm-centered simulation-based quantitative experiment, show that DeepSI_finetune more accurately captures users' complex mental models with fewer interactions.
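
    The contrast evaluated in this paper can be illustrated with a small hypothetical helper (not the authors' code): the baseline uses the encoder as a fixed feature extractor outside the interactive loop, whereas DeepSI_finetune leaves it trainable so each semantic interaction also refines the underlying representation.

```python
# Rough sketch (hypothetical helper) of the two conditions compared:
# a fixed feature extractor versus an encoder fine-tuned inside the loop.
import torch

def trainable_parameters(encoder: torch.nn.Module, finetune: bool):
    # Baseline: freeze the encoder so interactions only steer the projection.
    # DeepSI_finetune: leave it trainable so interaction-derived gradients
    # adapt the representation to the analyst's task.
    for p in encoder.parameters():
        p.requires_grad = finetune
    return [p for p in encoder.parameters() if p.requires_grad]

# e.g. optimizer = torch.optim.Adam(trainable_parameters(encoder, finetune=True), lr=2e-5)
```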

    Visualising the structure of document search results: A comparison of graph theoretic approaches

    This is the post-print of the article (Copyright © 2010 Sage Publications). Previous work has shown that distance-similarity visualisation or ‘spatialisation’ can provide a potentially useful context in which to browse the results of a query search, enabling the user to adopt a simple local foraging or ‘cluster growing’ strategy to navigate through the retrieved document set. However, faithfully mapping feature-space models to visual space can be problematic owing to their inherent high dimensionality and non-linearity. Conventional linear approaches to dimension reduction tend to fail at this kind of task, sacrificing local structure in order to preserve a globally optimal mapping. In this paper the clustering performance of a recently proposed algorithm called isometric feature mapping (Isomap), which deals with non-linearity by transforming dissimilarities into geodesic distances, is compared to that of non-metric multidimensional scaling (MDS). Various graph pruning methods for geodesic distance estimation are also compared. Results show that Isomap is significantly better at preserving local structural detail than MDS, suggesting it is better suited to cluster growing and other semantic navigation tasks. Moreover, it is shown that applying a minimum-cost graph pruning criterion can provide a parameter-free alternative to the traditional K-neighbour method, resulting in spatial clustering that is equivalent to or better than that achieved using an optimal-K criterion.
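
    The two methods compared in the paper are both available in scikit-learn, so the basic contrast can be reproduced roughly as below; the data, neighbourhood size, and cosine dissimilarity are stand-in assumptions rather than the paper's setup.

```python
# Illustrative comparison (not the paper's exact pipeline): Isomap converts
# dissimilarities into geodesic distances over a k-neighbour graph, while
# non-metric MDS works directly on the rank order of the dissimilarities.
import numpy as np
from sklearn.manifold import Isomap, MDS
from sklearn.metrics import pairwise_distances

X = np.random.rand(200, 50)            # stand-in for a document-feature matrix

iso_xy = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

D = pairwise_distances(X, metric="cosine")
mds_xy = MDS(n_components=2, metric=False, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
```

    Note that scikit-learn's Isomap builds its neighbourhood graph with a K-neighbour (or radius) criterion, so the minimum-cost pruning criterion examined in the paper would have to be implemented separately.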

    Exploratory visual text analytics in the scientific literature domain


    Producing and editing diagrams using co-speech gesture: Spatializing non-spatial relations in explanations of kinship in Laos

    This article presents a description of two sequences of talk by urban speakers of Lao (a southwestern Tai language spoken in Laos) in which co-speech gesture plays a central role in explanations of kinship relations and terminology. The speakers spontaneously use hand gestures and gaze to spatially diagram relationships that have no inherent spatial structure. The descriptive sections of the article are prefaced by a discussion of the semiotic complexity of illustrative gestures and gesture diagrams. Gestured signals feature iconic, indexical, and symbolic components, usually in combination, and use motion and three-dimensional space to convey meaning. Such diagrams show temporal persistence and structural integrity despite having been projected in midair by evanescent signals (i.e., hand movements and directed gaze). Speakers sometimes need or want to revise these spatial representations without destroying their structural integrity. The need to "edit" gesture diagrams involves such techniques as hold-and-drag, hold-and-work-with-free-hand, reassignment-of-old-chunk-to-new-chunk, and move-body-into-new-space.

    The state of the art in integrating machine learning into visual analytics

    Visual analytics systems combine machine learning or other analytic techniques with interactive data visualization to promote sensemaking and analytical reasoning. It is through such techniques that people can make sense of large, complex data. While progress has been made, the tactful combination of machine learning and data visualization is still under-explored. This state-of-the-art report presents a summary of the progress that has been made by highlighting and synthesizing select research advances. Further, it presents opportunities and challenges to enhance the synergy between machine learning and visual analytics for impactful future research directions.

    Spatial Formats under the Global Condition

    Contributions to this volume summarize and discuss the theoretical foundations of the Collaborative Research Centre at Leipzig University, which address the relationship between processes of (re-)spatialization on the one hand and the establishment and characteristics of spatial formats on the other hand.

    Doctor of Philosophy

    With the ever-increasing amount of available computing resources and sensing devices, a wide variety of high-dimensional datasets are being produced in numerous fields. The complexity and increasing popularity of these data have led to new challenges and opportunities in visualization. Since most display devices are limited to communication through two-dimensional (2D) images, many visualization methods rely on 2D projections to express high-dimensional information. Such a reduction of dimension leads to an explosion in the number of 2D representations required to visualize high-dimensional spaces, each giving a glimpse of the high-dimensional information. As a result, one of the most important challenges in visualizing high-dimensional datasets is the automatic filtering and summarization of the large exploration space consisting of all 2D projections. In this dissertation, a new type of algorithm is introduced to reduce the exploration space by identifying a small set of projections that capture the intrinsic structure of the high-dimensional data. In addition, a general framework for summarizing the structure of quality measures in the space of all linear 2D projections is presented. However, identifying representative or informative projections is only part of the challenge. Due to the high-dimensional nature of these datasets, insights and conclusions based solely on 2D representations are limited and prone to error. How to interpret the inaccuracies and resolve the ambiguity in the 2D projections is the other half of the puzzle. This dissertation introduces projection distortion error measures and interactive manipulation schemes that allow the understanding of high-dimensional structures via data manipulation in 2D projections.
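
    The projection-selection idea in this abstract can be illustrated with a small sketch, using a placeholder quality measure rather than the measures developed in the dissertation: sample candidate linear 2D projections and keep the best-scoring views.

```python
# Minimal sketch (placeholder quality measure): sample random orthonormal
# linear 2D projections and rank them, illustrating the idea of filtering
# the space of all 2D linear projections down to a few informative views.
import numpy as np

def random_basis(dim, rng):
    # Orthonormal 2D basis from the QR decomposition of a random Gaussian matrix.
    q, _ = np.linalg.qr(rng.normal(size=(dim, 2)))
    return q

def score(Y):
    # Placeholder: total variance of the projected points; a real system would
    # substitute a structure- or distortion-aware projection quality measure.
    return Y.var(axis=0).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                    # stand-in high-dimensional data
bases = [random_basis(X.shape[1], rng) for _ in range(100)]
best = max(bases, key=lambda B: score(X @ B))
view_2d = X @ best                                # coordinates of the top-scoring view
```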