
    On intelligible multimodal visual analysis

    Analyzing data is becoming an important skill in an increasingly digital world. Yet many users face knowledge barriers that prevent them from independently conducting their data analysis. To tear down some of these barriers, multimodal interaction for visual analysis has been proposed. Multimodal interaction through speech and touch enables not only experts but also novice users to interact effortlessly with this kind of technology. However, current approaches do not take differences between users into account; whether visual analysis is intelligible ultimately depends on the user. To close this research gap, this dissertation explores how multimodal visual analysis can be personalized, taking a holistic view. First, an intelligible task space of visual analysis tasks is defined by considering personalization potentials. This task space provides an initial basis for understanding how effective personalization in visual analysis can be approached. Second, empirical analyses of speech commands in visual analysis and of visualizations used in scientific publications reveal further patterns and structures. These behavior-based findings help to better understand expectations towards multimodal visual analysis. Third, a technical prototype is designed based on the previous findings. It enriches the visual analysis with a persistent dialogue and makes the underlying computations transparent; the conducted user studies not only show advantages but also underline the relevance of considering the user’s characteristics. Finally, both communication channels, visualizations and dialogue, are personalized. Leveraging linguistic theory and reinforcement learning, the results highlight a positive effect of adjusting to the user: especially when the user’s knowledge is exceeded, personalization helps to improve the user experience. Overall, this dissertation not only confirms the importance of considering the user’s characteristics in multimodal visual analysis, but also provides insights on how an intelligible analysis can be achieved. By understanding the use of input modalities, a system can focus on the user’s needs; by understanding preferences for the output modalities, the system can better adapt to the user. Combining both directions improves user experience and contributes towards an intelligible multimodal visual analysis.
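
    The abstract mentions reinforcement learning as the mechanism for adapting the output to the user but does not spell out the formulation. The sketch below shows one plausible reading as an epsilon-greedy bandit over presentation variants; all names (PresentationBandit, ACTIONS, the knowledge-level keys, the reward signal) are illustrative assumptions, not the dissertation's actual method.

```python
import random
from collections import defaultdict

# Hypothetical presentation variants the system could choose between.
ACTIONS = ["plain_chart", "chart_with_dialogue", "chart_with_explanations"]

class PresentationBandit:
    """Epsilon-greedy bandit that adapts the output modality to a user group.

    Illustrative sketch only; the dissertation's actual reinforcement-learning
    formulation is not described in the abstract.
    """

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.value = defaultdict(float)   # mean reward per (knowledge_level, action)
        self.count = defaultdict(int)

    def choose(self, knowledge_level):
        # Explore occasionally, otherwise pick the best-known presentation.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.value[(knowledge_level, a)])

    def update(self, knowledge_level, action, reward):
        key = (knowledge_level, action)
        self.count[key] += 1
        # Incremental mean of observed rewards (e.g., user-experience ratings).
        self.value[key] += (reward - self.value[key]) / self.count[key]

bandit = PresentationBandit()
action = bandit.choose("novice")
bandit.update("novice", action, reward=0.8)  # hypothetical positive feedback
```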

    An energy-based model to optimize cluster visualization

    Graphs are mathematical structures that provide a natural means for representing complex data. Graphs capture structure and thus help to model a wide range of complex real-life data in various domains. Moreover, graphs are especially suitable for information visualization: the intuitive visual abstraction they provide (dots and lines) is intimately associated with them. Visualization paves the way to interactive exploratory data analysis and to important goals such as identifying groups and subgroups in the data and understanding how these groups interact with each other. In this paper, we present a graph drawing approach that helps to better appreciate the cluster structure in data and the interactions that may exist between clusters. We assume that the clusters are already extracted and focus on the visualization aspects. We propose an energy-based model for graph drawing that produces an aesthetic drawing in which each cluster occupies a separate zone of the visualization layout. This method emphasizes the inter-group interactions while still showing the inter-node interactions. The drawing areas assigned to the clusters can be user-specified (prefixed areas) or automatically crafted (free areas). The approach also enables handling geographically-based clustering. In the case of free areas, we illustrate the use of our drawing method through an example. In the case of prefixed areas, we first use an example from citation networks and then use another example to compare the results of our method to those of the divide-and-conquer approach. In the latter case, we show that while both methods successfully point out the cluster structure, our method better visualizes the global structure.
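
    The abstract does not give the energy model itself, but the general idea of confining each cluster to its own drawing zone can be sketched as a force-directed layout with an extra term pulling nodes toward their cluster's assigned area. The function below is a minimal toy version under that assumption; the force constants, the zone-center formulation, and the gradient-descent loop are illustrative, not the paper's actual model.

```python
import numpy as np

def cluster_layout(adj, clusters, zone_centers, iters=500, lr=0.01,
                   k_spring=1.0, k_repulse=0.5, k_zone=2.0):
    """Toy energy-based layout: spring attraction along edges, all-pairs
    repulsion, and a term that pulls every node toward the center of its
    cluster's drawing zone (so clusters end up in separate areas).

    adj: (n, n) 0/1 adjacency matrix
    clusters: length-n sequence of cluster ids
    zone_centers: dict mapping cluster id -> (x, y) zone center
    """
    n = adj.shape[0]
    pos = np.random.rand(n, 2)
    for _ in range(iters):
        grad = np.zeros_like(pos)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                d = pos[i] - pos[j]
                dist = np.linalg.norm(d) + 1e-9
                if adj[i, j]:
                    grad[i] += k_spring * d           # edges pull endpoints together
                grad[i] -= k_repulse * d / dist**3    # all node pairs repel
            # Pull node i toward the center of its cluster's zone.
            grad[i] += k_zone * (pos[i] - np.asarray(zone_centers[clusters[i]]))
        pos -= lr * grad                              # gradient descent on the energy
    return pos
```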

    A visual analytics approach for understanding biclustering results from microarray data

    Background: Microarray analysis is an important area of bioinformatics. In the last few years, biclustering has become one of the most popular methods for classifying data from microarrays. Although biclustering can be used in any kind of classification problem, it is nowadays mostly used for microarray data classification. A large number of biclustering algorithms have been developed over the years, yet little effort has been devoted to the representation of the results. Results: We present an interactive framework that helps to infer differences or similarities between biclustering results, to unravel trends, and to highlight robust groupings of genes and conditions. These linked representations of biclusters can complement biological analysis and reduce the time specialists spend on interpreting the results. Within the framework, besides other standard representations, a visualization technique is presented that is based on a force-directed graph in which biclusters are represented as flexible overlapping groups of genes and conditions. This microarray analysis framework (BicOverlapper) is available at http://vis.usal.es/bicoverlapper. Conclusion: The main visualization technique, tested with different biclustering results on a real dataset, allows researchers to extract interesting features of the biclustering results, especially the highlighting of overlapping zones that usually represent robust groups of genes and/or conditions. The visual analytics methodology will permit biology experts to study biclustering results without inspecting an overwhelming number of biclusters individually.
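
    To make the core idea concrete, the sketch below shows one way biclusters could be laid out as overlapping groups with a generic force-directed layout: members of the same bicluster are linked so they stay close, and genes or conditions shared by several biclusters settle between the groups. The input format and use of networkx are assumptions for illustration; this is not the BicOverlapper implementation.

```python
import itertools
import networkx as nx

def bicluster_overlap_layout(biclusters):
    """Sketch of drawing biclusters as overlapping groups.

    `biclusters` is assumed to be a list of (genes, conditions) pairs. Nodes
    that co-occur in a bicluster attract each other in a force-directed
    layout, so overlapping biclusters share regions of the drawing.
    """
    g = nx.Graph()
    for genes, conditions in biclusters:
        members = list(genes) + list(conditions)
        # Connect all members of a bicluster so the layout keeps them close.
        for a, b in itertools.combinations(members, 2):
            g.add_edge(a, b)
    # Force-directed positions; nodes in several biclusters sit between groups.
    return nx.spring_layout(g, seed=42)

positions = bicluster_overlap_layout([
    ({"geneA", "geneB"}, {"cond1", "cond2"}),
    ({"geneB", "geneC"}, {"cond2", "cond3"}),  # overlaps with the first bicluster
])
```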

    ConceptScope: Organizing and Visualizing Knowledge in Documents based on Domain Ontology

    Current text visualization techniques typically provide overviews of document content and structure using intrinsic properties such as term frequencies, co-occurrences, and sentence structures. Such visualizations lack the conceptual overviews incorporating domain-relevant knowledge that are needed when examining documents such as research articles or technical reports. To address this shortcoming, we present ConceptScope, a technique that utilizes a domain ontology to represent the conceptual relationships in a document in the form of a Bubble Treemap visualization. Multiple coordinated views of document structure and concept hierarchy, together with text overviews, further aid document analysis. ConceptScope facilitates the exploration of single documents and the comparison of multiple documents. We demonstrate ConceptScope by visualizing research articles and transcripts of technical presentations in computer science. In a comparative study with DocuBurst, a popular document visualization tool, ConceptScope was found to be more informative in exploring and comparing domain-specific documents, but less so for documents that spanned multiple disciplines. Comment: 19 pages, 5 figures
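
    The abstract describes mapping a document onto a domain ontology before visualizing it; a minimal sketch of that first step could look like the code below, which matches document tokens to concept term lists and rolls the counts up the concept hierarchy, yielding the per-concept aggregates a Bubble Treemap view could be built on. The toy ontology, the term lists, and the matching rule are all illustrative assumptions, not ConceptScope's actual pipeline.

```python
from collections import Counter

# Toy ontology: concept -> parent concept (None for the root). ConceptScope
# uses a full domain ontology; this mapping is purely illustrative.
PARENT = {
    "machine learning": "computer science",
    "visualization": "computer science",
    "computer science": None,
}
CONCEPT_TERMS = {
    "machine learning": {"classifier", "training", "neural"},
    "visualization": {"treemap", "chart", "layout"},
}

def concept_frequencies(tokens):
    """Count how often each ontology concept is evoked by a document's tokens,
    propagating counts up the concept hierarchy."""
    counts = Counter()
    for token in tokens:
        for concept, terms in CONCEPT_TERMS.items():
            if token in terms:
                node = concept
                while node is not None:       # roll counts up to all ancestors
                    counts[node] += 1
                    node = PARENT[node]
    return counts

print(concept_frequencies("the treemap layout of the classifier".split()))
```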

    Visual Similarity Perception of Directed Acyclic Graphs: A Study on Influencing Factors

    While visual comparison of directed acyclic graphs (DAGs) is commonly encountered in various disciplines (e.g., finance, biology), knowledge about humans' perception of graph similarity is currently quite limited. By graph similarity perception we mean how humans perceive commonalities and differences in graphs and thereby arrive at a similarity judgment. As a step toward filling this gap, the study reported in this paper strives to identify factors that influence the similarity perception of DAGs. In particular, we conducted a card-sorting study employing a qualitative and quantitative analysis approach to identify 1) groups of DAGs that are perceived as similar by the participants and 2) the reasons behind their choice of groups. Our results suggest that similarity is mainly influenced by the number of levels, the number of nodes per level, and the overall shape of the graph. Comment: Graph Drawing 2017 - arXiv version; Keywords: Graphs, Perception, Similarity, Comparison, Visualization
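
    The factors the study identifies (number of levels, nodes per level, overall shape) are easy to compute from a DAG, which is how one might operationalize them in a similarity heuristic. The sketch below does exactly that; the feature names and the crude "widening/narrowing" shape cue are illustrative choices, not the study's measures.

```python
import networkx as nx

def dag_shape_features(dag):
    """Compute simple structural properties of a DAG: number of levels,
    nodes per level, and a rough overall-shape descriptor."""
    # A node's level is its longest-path depth from any source.
    levels = {}
    for node in nx.topological_sort(dag):
        preds = list(dag.predecessors(node))
        levels[node] = 1 + max((levels[p] for p in preds), default=-1)
    per_level = {}
    for node, lvl in levels.items():
        per_level[lvl] = per_level.get(lvl, 0) + 1
    widths = [per_level[l] for l in sorted(per_level)]
    return {
        "num_levels": len(widths),
        "nodes_per_level": widths,
        # Crude shape cue: does the graph widen or narrow toward the bottom?
        "shape": "widening" if widths[-1] >= widths[0] else "narrowing",
    }

g = nx.DiGraph([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")])
print(dag_shape_features(g))
```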

    Taxonomy Visualization in Support of the Semi-Automatic Validation and Optimization of Organizational Schemas

    Never before in history has mankind had access to, and produced, so much data, information, knowledge, and expertise as today. To organize, access, and manage these highly valuable assets effectively, we use taxonomies, classification hierarchies, ontologies, and controlled vocabularies, among others. We create directory structures for our files. We use organizational hierarchies to structure our work environment. However, the design and continuous update of these organizational schemas, which may have thousands of class nodes organizing millions of entities, is challenging for any human being. The Taxonomy Visualization and Validation (TV) tool introduced in this paper supports the semi-automatic validation and optimization of organizational schemas such as file directories, classification hierarchies, taxonomies, or any other structure imposed on a data set as a means of organization, structuring, and naming. By showing the “goodness of fit” between a schema and the potentially millions of entities it organizes, the TV eases the identification and reclassification of misclassified information entities, the identification of classes that grew over-proportionally, the evaluation of the size and homogeneity of existing classes, the examination of the “well-formedness” of an organizational schema, and more. As an example, the TV is applied to display the United States Patent and Trademark Office patent classification, which organizes more than three million patents into about 160,000 distinct patent classes. The paper concludes with a discussion and an outlook on future work.
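
    The abstract speaks of evaluating the size and homogeneity of existing classes without giving the measures; one simple way to quantify both is shown below, using class size and the Shannon entropy of the topic labels filed under each class. The input format, the label source, and the entropy measure are illustrative stand-ins, not the TV tool's actual "goodness of fit" computation.

```python
from collections import Counter
import math

def class_statistics(assignments):
    """Per-class size and label homogeneity for an organizational schema.

    `assignments` maps a class id to the list of topic labels of the entities
    filed under it. Entropy of 0 means a perfectly homogeneous class; larger
    values flag mixed classes that may need splitting or reclassification.
    """
    stats = {}
    for cls, labels in assignments.items():
        counts = Counter(labels)
        total = sum(counts.values())
        entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
        stats[cls] = {"size": total, "entropy": round(entropy, 3)}
    return stats

print(class_statistics({
    "class A": ["washing", "washing", "drying"],      # mixed class
    "class B": ["graphics", "graphics", "graphics"],  # homogeneous class
}))
```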