17 research outputs found

    Automatic interpretation of map visualizations with color-encoded scalar values from bitmap images

    Map visualizations are used in many areas to display geographic data (for example, climatological or oceanographic data, business analysis results, among others). These visualizations can be found in news articles, scientific papers, and on the Web; however, many of them are only available as bitmap images, which makes it difficult for a computer to interpret the visualized data for indexing and reuse. In this work we propose a sequence of steps to recover the visual encoding from bitmap images of geographic maps that use color to encode data values. We evaluated our results using maps extracted from scientific documents, achieving high accuracy at each proposed step. We also present iGeoMap, our web system that uses the extracted visual encoding to enable user interaction with bitmap images of map visualizations.
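
    As a rough illustration of the core decoding step (a minimal sketch, not the paper's actual pipeline), the Python snippet below recovers scalar values by matching each pixel's color to the nearest entry of a colormap sampled from the legend. The function name and the toy three-color legend are hypothetical.

        import numpy as np

        def decode_scalar_map(pixels, legend_colors, legend_values):
            # Map each RGB pixel to the scalar value of its nearest legend color.
            # pixels: (H, W, 3) uint8; legend_colors: (K, 3) uint8 sampled along
            # the legend; legend_values: (K,) scalar values at those positions.
            flat = pixels.reshape(-1, 3).astype(np.float32)           # (H*W, 3)
            palette = legend_colors.astype(np.float32)                # (K, 3)
            # Squared Euclidean distance from every pixel to every legend color.
            d2 = ((flat[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
            nearest = d2.argmin(axis=1)                               # (H*W,)
            return legend_values[nearest].reshape(pixels.shape[:2])   # (H, W)

        # Toy legend: blue -> 0.0, green -> 0.5, red -> 1.0.
        legend_colors = np.array([[0, 0, 255], [0, 255, 0], [255, 0, 0]], np.uint8)
        legend_values = np.array([0.0, 0.5, 1.0])
        img = np.array([[[10, 5, 250], [250, 10, 0]]], np.uint8)      # 1x2 "map"
        print(decode_scalar_map(img, legend_colors, legend_values))   # [[0. 1.]]

    A real pipeline would first need to locate the map region and the legend in the bitmap and handle anti-aliased or out-of-gamut colors; nearest-neighbor lookup is only the final step.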

    Towards automated infographic design: Deep learning-based auto-extraction of extensible timeline

    Designers need to consider not only perceptual effectiveness but also visual styles when creating an infographic. This process can be difficult and time-consuming for professional designers, not to mention non-expert users, leading to demand for automated infographic design. As a first step, we focus on timeline infographics, which have been widely used for centuries. We contribute an end-to-end approach that automatically extracts an extensible timeline template from a bitmap image. Our approach adopts a deconstruction and reconstruction paradigm. At the deconstruction stage, we propose a multi-task deep neural network that simultaneously parses two kinds of information from a bitmap timeline: 1) the global information, i.e., the representation, scale, layout, and orientation of the timeline, and 2) the local information, i.e., the location, category, and pixels of each visual element on the timeline. At the reconstruction stage, we propose a pipeline with three techniques, i.e., Non-Maximum Merging, Redundancy Recover, and DL GrabCut, to extract an extensible template from the infographic by utilizing the deconstruction results. To evaluate the effectiveness of our approach, we synthesize a timeline dataset (4296 images) and collect a real-world timeline dataset (393 images) from the Internet. We first report quantitative evaluation results of our approach over the two datasets. Then, we present examples of automatically extracted templates, and timelines automatically generated from these templates, to qualitatively demonstrate the performance. The results confirm that our approach can effectively extract extensible templates from real-world timeline infographics.
    Comment: 10 pages, Automated Infographic Design, Deep Learning-based Approach, Timeline Infographics, Multi-task Model
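
    The deconstruction stage can be pictured as one shared backbone with a global head and a local head. Below is a toy PyTorch stand-in with hypothetical layer sizes and label sets (orientation as the only global property, five element classes); the paper's actual multi-task architecture differs.

        import torch
        import torch.nn as nn

        class TimelineParser(nn.Module):
            # Shared CNN backbone feeding (1) a global head, e.g. timeline
            # orientation, and (2) a per-pixel element-category head standing
            # in for the local branch. All sizes are illustrative.
            def __init__(self, n_orientations=2, n_element_classes=5):
                super().__init__()
                self.backbone = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
                self.global_head = nn.Sequential(
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(32, n_orientations))
                self.local_head = nn.Conv2d(32, n_element_classes, 1)

            def forward(self, x):
                feats = self.backbone(x)
                return self.global_head(feats), self.local_head(feats)

        glob, loc = TimelineParser()(torch.randn(1, 3, 64, 256))
        print(glob.shape, loc.shape)   # (1, 2) and (1, 5, 64, 256)

    Training such a model sums a classification loss on the global output and a dense loss on the local output, which is what makes it multi-task.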

    Searching the Visual Style and Structure of D3 Visualizations

    We present a search engine for D3 visualizations that allows queries based on their visual style and underlying structure. To build the engine we crawl a collection of 7860 D3 visualizations from the Web and deconstruct each one to recover its data, its data-encoding marks, and the encodings describing how the data is mapped to visual attributes of the marks. We also extract axes and other non-data-encoding attributes of marks (e.g., typeface, background color). Our search engine indexes this style and structure information as well as metadata about the webpage containing the chart. We show how visualization developers can search the collection to find visualizations that exhibit specific design characteristics and thereby explore the space of possible designs. We also demonstrate how researchers can use the search engine to identify commonly used visual design patterns, and we perform such a demographic design analysis across our collection of D3 charts. A user study reveals that visualization developers found our style- and structure-based search engine to be significantly more useful and satisfying for finding different designs of D3 charts than a baseline search engine that only allows keyword search over the webpage containing a chart.
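
    A minimal sketch of how deconstructed style and structure facts might be indexed for faceted search: each chart contributes facet=value terms to an inverted index, and a query intersects the posting sets. The field names and schema here are invented for illustration and are not the engine's actual index format.

        from collections import defaultdict

        charts = [
            {"id": "c1", "mark": "rect", "typeface": "Helvetica",
             "encodings": {"x": "ordinal", "y": "quantitative"}},
            {"id": "c2", "mark": "circle", "typeface": "Georgia",
             "encodings": {"x": "quantitative", "y": "quantitative"}},
        ]

        index = defaultdict(set)          # facet=value -> set of chart ids
        for c in charts:
            index[f"mark={c['mark']}"].add(c["id"])
            index[f"typeface={c['typeface']}"].add(c["id"])
            for channel, dtype in c["encodings"].items():
                index[f"enc:{channel}={dtype}"].add(c["id"])

        def search(*facets):
            # Return ids of charts matching every facet=value term.
            sets = [index.get(f, set()) for f in facets]
            return set.intersection(*sets) if sets else set()

        print(search("mark=circle", "enc:x=quantitative"))   # {'c2'}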

    KB4VA: A Knowledge Base of Visualization Designs for Visual Analytics

    Visual analytics (VA) systems have been widely used to facilitate decision-making and analytical reasoning in various application domains. VA involves visual design, interaction design, and data mining, making it a systematic and complex paradigm. In this work, we focus on the design of effective visualizations for complex data and analytical tasks, which is a critical step in designing a VA system. This step is challenging because it requires extensive knowledge about domain problems and visualization to design effective encodings. Existing visualization designs published in top venues are valuable resources to inspire designs for problems with similar data structures and tasks. However, those designs are hard to understand, parse, and retrieve due to the lack of specifications. To address this problem, we build KB4VA, a knowledge base of visualization designs in VA systems with comprehensive labels about their analytical tasks and visual encodings. Our labeling scheme is inspired by a workshop study with 12 VA researchers to learn user requirements in understanding and retrieving professional visualization designs in VA systems. The scheme extends Vega-Lite specifications for describing advanced and composite visualization designs in a declarative manner, thus facilitating human understanding and automatic indexing. To demonstrate the usefulness of our knowledge base, we present a user study about design inspirations for VA tasks. In summary, our work opens new perspectives for enhancing the accessibility and reusability of professional visualization designs.
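
    To make the idea concrete, the sketch below pairs plain Vega-Lite-style specifications (mark plus encoding channels) with analytical-task labels and retrieves specs by task. The task vocabulary and entry layout are hypothetical; KB4VA's real labeling scheme is far richer.

        # Each knowledge-base entry: task labels + a declarative spec.
        entries = [
            {"tasks": ["comparison"],
             "spec": {"mark": "bar",
                      "encoding": {"x": {"field": "category", "type": "nominal"},
                                   "y": {"field": "value", "type": "quantitative"}}}},
            {"tasks": ["correlation", "distribution"],
             "spec": {"mark": "point",
                      "encoding": {"x": {"field": "a", "type": "quantitative"},
                                   "y": {"field": "b", "type": "quantitative"}}}},
        ]

        def by_task(task):
            # Retrieve every design labeled with the given analytical task.
            return [e["spec"] for e in entries if task in e["tasks"]]

        print(by_task("correlation")[0]["mark"])   # point

    The value of a declarative encoding is exactly this: once a design is expressed as structured fields rather than pixels, both humans and indexers can query it.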

    A Survey on ML4VIS: Applying Machine Learning Advances to Data Visualization

    Inspired by the great success of machine learning (ML), researchers have applied ML techniques to visualization to achieve better design, development, and evaluation of visualizations. This branch of studies, known as ML4VIS, has gained increasing research attention in recent years. To successfully adapt ML techniques for visualization, a structured understanding of the integration of ML4VIS is needed. In this paper, we systematically survey 88 ML4VIS studies, aiming to answer two motivating questions: "what visualization processes can be assisted by ML?" and "how can ML techniques be used to solve visualization problems?" This survey reveals seven main processes where the employment of ML techniques can benefit visualization: Data Processing4VIS, Data-VIS Mapping, Insight Communication, Style Imitation, VIS Interaction, VIS Reading, and User Profiling. The seven processes are related to existing visualization theoretical models in an ML4VIS pipeline, aiming to illuminate the role of ML-assisted visualization in general. Meanwhile, the seven processes are mapped onto the main learning tasks in ML to align the capabilities of ML with the needs of visualization. Current practices and future opportunities of ML4VIS are discussed in the context of the ML4VIS pipeline and the ML-VIS mapping. While more studies are still needed in the area of ML4VIS, we hope this paper can provide a stepping-stone for future exploration. A web-based interactive browser of this survey is available at https://ml4vis.github.io
    Comment: 19 pages, 12 figures, 4 tables

    InvVis: Large-Scale Data Embedding for Invertible Visualization

    We present InvVis, a new approach for invertible visualization, i.e., reconstructing or further modifying a visualization from an image. InvVis allows embedding a significant amount of data, such as chart data, chart information, and source code, into visualization images. The encoded image is perceptually indistinguishable from the original one. We propose a new method to efficiently express chart data in the form of images, enabling large-capacity data embedding. We also outline a model based on an invertible neural network to achieve high-quality data concealing and revealing. We explore and implement a variety of application scenarios of InvVis. Additionally, we conduct a series of evaluation experiments to assess our method from multiple perspectives, including data embedding quality, data restoration accuracy, and data encoding capacity. The results of our experiments demonstrate the great potential of InvVis in invertible visualization.
    Comment: IEEE VIS 2023
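
    The invertible-network encoder itself is beyond a short sketch, but the underlying idea, hiding recoverable data in an image that looks unchanged, can be illustrated with classic least-significant-bit steganography. This is a deliberately crude stand-in, not InvVis's method, and offers none of its capacity or robustness.

        import numpy as np

        def embed_lsb(image, payload):
            # Hide payload bytes in the least-significant bits of the carrier;
            # changing the lowest bit alters each channel value by at most 1.
            bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
            flat = image.flatten()                   # flatten() returns a copy
            assert bits.size <= flat.size, "payload too large for this image"
            flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
            return flat.reshape(image.shape), bits.size

        def extract_lsb(image, n_bits):
            # Read back the hidden bits and repack them into bytes.
            return np.packbits(image.flatten()[:n_bits] & 1).tobytes()

        img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
        secret = b'{"chart": "bar", "data": [1, 2, 3]}'
        stego, n = embed_lsb(img, secret)
        assert extract_lsb(stego, n) == secret       # data fully recovered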

    SeeChart: Enabling Accessible Visualizations Through Interactive Natural Language Interface For People with Visual Impairments

    Web-based data visualizations have become very popular for exploring data and communicating insights. Newspapers, journals, and reports regularly publish visualizations to tell compelling stories with data. Unfortunately, most visualizations are inaccessible to readers with visual impairments. For many charts on the web, there are no accompanying alternative (alt) texts, and even when such texts exist they do not adequately describe important insights from charts. To address the problem, we first interviewed 15 blind users to understand their challenges and requirements for reading data visualizations. Based on the insights from these interviews, we developed SeeChart, an interactive tool that automatically deconstructs charts from web pages and then converts them into accessible visualizations for blind people, enabling them to hear a chart summary as well as to navigate through data points using the keyboard. Our evaluation with 14 blind participants suggests the efficacy of SeeChart in understanding key insights from charts and fulfilling users' information needs while reducing their required time and cognitive burden.
    Comment: 28 pages, 13 figures
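
    A toy sketch of the interaction model: from deconstructed chart data, produce a spoken-style overview plus one utterance per data point that a screen-reader front end could step through with arrow keys. The summary wording and function are invented for illustration, not SeeChart's actual output.

        def summarize(title, points):
            # Overview first, then one utterance per data point; a keyboard
            # handler would advance this generator on each arrow-key press.
            lo = min(points, key=lambda p: p[1])
            hi = max(points, key=lambda p: p[1])
            yield (f"{title}: {len(points)} data points; "
                   f"highest {hi[0]} at {hi[1]}, lowest {lo[0]} at {lo[1]}.")
            for label, value in points:
                yield f"{label}: {value}"

        data = [("Q1", 12), ("Q2", 30), ("Q3", 8)]
        for utterance in summarize("Sales by quarter", data):
            print(utterance)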