9 research outputs found

    From Data to Knowledge Graphs: A Multi-Layered Method to Model User's Visual Analytics Workflow for Analytical Purposes

    Full text link
The importance of knowledge generation drives much of Visual Analytics (VA). User tracking and behavior graphs have shown the value of understanding users' knowledge generation while performing VA workflows. Work on theoretical models, ontologies, and provenance analysis has described in detail how to structure and understand the connection between knowledge generation and VA workflows. Yet, two concepts are typically intermixed: the temporal aspect, which indicates sequences of events, and the atemporal aspect, which indicates the workflow's state space. Works that do separate these concepts do not discuss how to analyze the user's recorded knowledge-gathering process against the VA workflow itself. This paper presents the Visual Analytics Knowledge Graph (VAKG), a conceptual framework that generalizes existing knowledge models and ontologies by focusing on how humans relate to computer processes over time and how this relationship maps onto the workflow's state space. Our proposal structures this relationship as a 4-way temporal knowledge graph, with specific emphasis on modeling the human and computer aspects of VA as separate but interconnected graphs for, among other uses, analytical purposes. We compare VAKG with relevant literature to show that VAKG can serve VA applications as both a provenance model and a state-space graph, enabling analytics of domain-specific processes, usage patterns, and users' knowledge-gain performance. We also interviewed two domain experts to check, in the wild, whether real practice aligns with our contributions. Comment: 9 pgs, submitted to VIS 202

    Explainable Patterns: Going from Findings to Insights to Support Data Analytics Democratization

    Full text link
In the past decades, massive efforts involving companies, non-profit organizations, governments, and others have gone into supporting data democratization, promoting initiatives that educate people to confront information with data. Although this represents one of the most important advances of our free world, access to data without concrete facts to check, or without an expert to help interpret the existing patterns, hampers its intrinsic value and lessens its democratization. The benefits of giving full access to data will therefore only be impactful if we go a step further and support Data Analytics Democratization: assisting users in transforming findings into insights without the need for domain experts, promoting unconstrained access to data interpretation and verification. In this paper, we present Explainable Patterns (ExPatt), a new framework to support lay users in exploring and creating data storytelling, automatically generating plausible explanations for observed or selected findings using an external (textual) source of information, avoiding or reducing the need for domain experts. ExPatt's applicability is confirmed via different use cases involving world demographic indicators and Wikipedia as an external source of explanations, showing how it can be used in practice toward data analytics democratization. Comment: 8 figures, 10 pages, submitted to VIS 202

    christinoleo/KDs: 0.0.2

    No full text
    No description provided

    GPU Acceleration of robotic systems services focused in real-time processing of 3D point clouds

    No full text
The master's project, henceforth abbreviated as GPUServices, fits in the context of research and development of three-dimensional sensor-data processing methods applied to mobile robotics. In this project, such methods are called services and include 3D point-cloud preprocessing algorithms for data segmentation, the separation and identification of planar areas (ground, roads), and the detection of elements of interest (curbs, obstacles). Due to the large amount of data to be processed in a short time, these services use GPU parallel processing to perform partial or complete processing of the data. The application area in focus aims to provide services for an ADAS system, i.e., autonomous and intelligent vehicles, pushing them toward real-time processing because of the autonomous-driving context. The services are divided into stages according to the project methodology, always striving for acceleration through inherent parallelism. The pre-project consists of organizing a development environment capable of coordinating all the technologies used, exploiting parallelism, and integrating with the system already used by the autonomous car. The first service intelligently extracts data from the sensor used in the project (a multi-beam Velodyne laser sensor), which is necessary because of frequent reading errors and the raw receiving format; it delivers the data in a matrix structure. The second service, in cooperation with the first, corrects the spatial instability caused by the sensor's mounting base not being perfectly parallel to the ground and by the vehicle's suspension damping. The third service separates the environment into semantic zones, such as the ground plane and the regions below and above it. The fourth service, similar to the third, performs a pre-segmentation of street curbs. The fifth service segments the objects in the environment, separating them into blobs. The sixth service uses all the previous ones to detect and segment street curbs. The sensor delivers its data as a 3D point cloud with great potential for parallelism based on the locality of the information. The main difficulty, however, is the high data rate received from the sensor (around 700,000 points/sec), which motivates this project: to use the sensor's full potential efficiently through GPU parallel programming, thereby providing data-processing services that make the implementation of ADAS systems easier and/or faster
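The third service's semantic-zone split can be illustrated with a minimal, CPU-side NumPy sketch. This is not the project's GPU code: the median-based ground estimate, the tolerance value, and the function name are simplifying assumptions for illustration only.

```python
import numpy as np

def split_zones(points: np.ndarray, tol: float = 0.15):
    """points: (N, 3) array of x, y, z. Returns ground/below/above masks."""
    z = points[:, 2]
    ground_z = np.median(z)               # naive ground-height estimate
    ground = np.abs(z - ground_z) <= tol  # points near the ground plane
    below = z < ground_z - tol            # e.g. potholes, sensor noise
    above = z > ground_z + tol            # e.g. obstacles, curbs
    return ground, below, above

# Synthetic flat-ish ground with a few raised "obstacle" points.
rng = np.random.default_rng(0)
cloud = rng.normal([0, 0, 0], [5, 5, 0.05], size=(1000, 3))
cloud[:50, 2] += 1.5
ground, below, above = split_zones(cloud)
print(int(above.sum()))  # roughly the 50 obstacle points
```

The vectorized mask computation is the kind of per-point, data-local operation that maps naturally onto GPU threads, which is the parallelism the project exploits.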

    From Data to Knowledge Graphs: A Multi-Layered Method to Model User's Visual Analytics Workflow for Analytical Purposes

    No full text
The primary goal of Visual Analytics (VA) is knowledge generation. In this process, VA knowledge models and ontologies have been shown to help in understanding how users obtain new insights when executing a VA workflow. Yet, the gap between theoretical models and the practice of knowledge-generation analysis is wide, and theory has mainly been used as a baseline for practical works. Two concepts are also typically ambiguous and intermixed when analyzing VA workflows: the temporal aspect, which indicates sequences of events, and the atemporal aspect, which indicates the workflow's state space, i.e., the set of all states of the VA tool and its user occupied during a VA workflow. Guidelines on how to analyze the user's recorded knowledge-gathering process against the VA workflow itself are also lacking. We bridge this gap by presenting the Visual Analytics Knowledge Graph (VAKG), a conceptual framework that connects VA workflow modeling theory and application. Through a novel set-theoretic formalization of knowledge modeling, VAKG structures a VA workflow as temporal sequences of human and machine changes over time and relates them to the workflow's state space. This structure is then used as a schema for storing VA workflow data and can be used to analyze user behavior and knowledge generation. VAKG is designed following the needs and limitations identified in the relevant literature, allowing for modeling, structuring, storing, and providing analysis guidelines for user behavior and knowledge generation, and enabling the comparison of users and VA tools
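The separation the abstract describes, temporal event sequences for human and machine linked to an atemporal state space, can be sketched with a toy structure. All class, method, and state names below are illustrative assumptions, not VAKG's actual schema.

```python
class MiniVAKG:
    """Toy sketch: two temporal layers plus a shared atemporal state-space."""

    def __init__(self):
        self.human_seq = []    # temporal human layer: (event, state_id)
        self.machine_seq = []  # temporal machine layer: (event, state_id)
        self.states = {}       # atemporal layer: state_id -> visit count

    def record(self, actor, event, state_id):
        seq = self.human_seq if actor == "human" else self.machine_seq
        seq.append((event, state_id))
        self.states[state_id] = self.states.get(state_id, 0) + 1

    def revisits(self):
        # States occupied more than once reveal loops in the workflow.
        return {s: n for s, n in self.states.items() if n > 1}

g = MiniVAKG()
g.record("human", "select scatterplot", "view:scatter")
g.record("machine", "render scatter", "view:scatter")
g.record("human", "filter year > 2000", "view:scatter+filter")
g.record("human", "undo filter", "view:scatter")
print(g.revisits())  # {'view:scatter': 3}
```

Because the state space is keyed independently of event order, the same state can be reached by different users via different event sequences, which is what makes cross-user comparison possible.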

    Knowledge-Decks: Automatically Generating Presentation Slide Decks of Visual Analytics Knowledge Discovery Applications

    No full text
Visual Analytics (VA) tools provide ways for users to harness insights and knowledge from datasets. Recalling and retelling user experiences with VA tools has attracted significant interest. Nevertheless, each user session is unique: even when different users have the same intention when using a VA tool, they may follow different paths and uncover different insights. Current methods of manually processing such data to recall and retell users' knowledge-discovery paths can also be time-consuming, especially when users' findings must be presented to third parties. This paper presents a novel system that collects user intentions, behavior, and insights during knowledge-discovery sessions, automatically structures the data, and extracts narrations of knowledge discovery as PowerPoint slide decks. The system is powered by a Knowledge Graph designed through a formal and reproducible modeling process. To evaluate our system, we attached it to two existing VA tools in which users were asked to perform pre-defined tasks. Several slide decks and other analysis metrics were extracted from the generated Knowledge Graph. Experts scrutinized and confirmed the usefulness of our automated process for using the slide decks to disclose knowledge-discovery paths to others and to verify whether the VA tools themselves were effective
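The extraction step, turning a recorded session into a slide-deck outline, can be sketched minimally. The event fields and the rule "each insight closes a slide" are assumptions for illustration; the actual system derives the narration from a Knowledge Graph and emits PowerPoint files.

```python
# A toy recorded session: intentions and actions lead up to insights.
session = [
    {"kind": "intention", "text": "Compare sales across regions"},
    {"kind": "action",    "text": "Filtered to 2020-2023"},
    {"kind": "insight",   "text": "EU sales doubled after 2021"},
    {"kind": "action",    "text": "Drilled into EU by country"},
    {"kind": "insight",   "text": "Growth driven by two countries"},
]

def to_slides(events):
    """Group events into (title, bullets) slides, one per insight."""
    slides, bullets = [], []
    for e in events:
        bullets.append(e["text"])
        if e["kind"] == "insight":  # each insight closes a slide
            slides.append((bullets[-1], bullets[:-1]))
            bullets = []
    return slides

slides = to_slides(session)
print(len(slides))  # 2
```

Here the insight becomes the slide title and the steps leading to it become the bullets, so the deck retells the discovery path in order.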

    ARMatrix: An Interactive Item-to-Rule Matrix for Association Rules Visual Analytics

    No full text
Among the data mining techniques for exploratory analysis, association rule mining is a popular strategy given its ability to find rules between items that express regularities in a database. With large datasets, many rules can be generated, and visualization has proven instrumental in such scenarios. Despite their relative success, existing visual representations are limited, offering little analytical capability and low interactive support. This paper presents ARMatrix, a visual analytics framework for the analysis of association rules based on an interactive item-to-rule matrix metaphor, which aims to help users navigate sets of rules and gain insights into co-occurrence patterns. The usability of the proposed framework is illustrated through two user scenarios and then confirmed by the feedback received in a user test with 20 participants
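The item-to-rule matrix metaphor itself is simple to sketch: rows are items, columns are rules, and each cell records whether the item appears in the rule's antecedent or consequent. The rule representation and the A/C/. encoding below are assumptions for illustration, not ARMatrix's format.

```python
# Each rule is (antecedent_items, consequent_items).
rules = [
    ({"bread", "butter"}, {"milk"}),  # bread, butter -> milk
    ({"milk"}, {"cereal"}),           # milk -> cereal
    ({"bread"}, {"jam"}),             # bread -> jam
]

items = sorted(set().union(*(a | c for a, c in rules)))

# 'A' = item in antecedent, 'C' = item in consequent, '.' = absent.
matrix = [
    ["A" if item in ante else "C" if item in cons else "."
     for ante, cons in rules]
    for item in items
]

for item, row in zip(items, matrix):
    print(f"{item:>7} " + " ".join(row))
```

Scanning a row shows every rule an item participates in, and scanning a column reconstructs a rule, which is what lets the matrix expose co-occurrence patterns across many rules at once.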

    DimenFix: A novel meta-dimensionality reduction method for feature preservation

    Get PDF
Dimensionality reduction has become an important research topic as demand for interpreting high-dimensional datasets has grown rapidly in recent years. Many dimensionality reduction methods perform well at preserving the overall relationships among data points when mapping them to a lower-dimensional space. However, these existing methods fail to incorporate differences in importance among features. To address this problem, we propose a novel meta-method, DimenFix, which can operate on top of any base dimensionality reduction method that involves a gradient-descent-like process. By allowing users to define the importance of different features, which is then taken into account during dimensionality reduction, DimenFix creates new possibilities for visualizing and understanding a given dataset. Meanwhile, DimenFix neither increases the time cost nor reduces the quality of dimensionality reduction relative to the base method used
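The meta-method idea, interleaving a feature-preserving constraint with the base method's gradient steps, can be shown with a toy example. The stress-gradient update below is a deliberately trivial stand-in for the base method, and pinning one output axis to a single chosen feature is only one way to express feature importance; neither is the paper's implementation.

```python
import numpy as np

def toy_dimenfix(X, important, iters=50, lr=0.05, seed=0):
    """Project X to 2D, re-imposing `important` on the y-axis each step."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Y = rng.normal(size=(n, 2))
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # target distances
    for _ in range(iters):
        d = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1) + 1e-9
        grad = ((d - D) / d)[..., None] * (Y[:, None] - Y[None, :])
        Y -= lr * grad.sum(axis=1) / n  # gradient-descent-like base step
        Y[:, 1] = important             # DimenFix-style fixing step
    return Y

X = np.random.default_rng(1).normal(size=(30, 5))
Y = toy_dimenfix(X, important=X[:, 0])
print(np.allclose(Y[:, 1], X[:, 0]))  # True: the fixed axis is preserved
```

Because the fixing step is a cheap per-iteration projection, it adds essentially no cost on top of the base method's gradient loop, which matches the abstract's claim about time cost.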

    visml/neo_force_scheme: Eurovis Submission

    No full text
NeoForceScheme, an extension of the original ForceScheme with performance improvements using Numba and CUD