
    Data-Driven Approach to Human-Engaged Computing

    This paper presents an overview of the research landscape of data-driven human-engaged computing in the Human-Computer Interaction Initiative at the Hong Kong University of Science and Technology.

    G-Mapper: Learning a Cover in the Mapper Construction

    The Mapper algorithm is a visualization technique in topological data analysis (TDA) that outputs a graph reflecting the structure of a given dataset. The Mapper algorithm requires tuning several parameters in order to generate a "nice" Mapper graph. This paper focuses on selecting the cover parameter. We present an algorithm that optimizes the cover of a Mapper graph by repeatedly splitting a cover according to a statistical test for normality. Our algorithm is based on G-means clustering, which searches for the optimal number of clusters in k-means by iteratively conducting the Anderson-Darling test. Our splitting procedure employs a Gaussian mixture model in order to carefully choose the cover based on the distribution of the given data. Experiments on synthetic and real-world datasets demonstrate that our algorithm generates covers such that the Mapper graphs retain the essence of the datasets.
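The split criterion the abstract describes can be sketched in a few lines. This is an illustrative reimplementation from the description alone, not the authors' code: a cover element is split when its (here one-dimensional) points fail an Anderson-Darling normality test, using the standard ~5% critical value for the case of estimated mean and variance.

```python
# Sketch of a G-means-style split test: split a cluster/cover element
# when normality is rejected by the Anderson-Darling test.
import math

def _normal_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def anderson_darling(sample):
    """A^2 statistic against a normal fitted to the sample's mean/std."""
    n = len(sample)
    mean = sum(sample) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    ys = sorted((x - mean) / std for x in sample)
    s = sum((2 * i + 1) * (math.log(_normal_cdf(ys[i]))
                           + math.log(1.0 - _normal_cdf(ys[n - 1 - i])))
            for i in range(n))
    a2 = -n - s / n
    # Small-sample correction for estimated mean and variance.
    return a2 * (1.0 + 4.0 / n - 25.0 / (n * n))

def should_split(sample, critical=0.787):
    """Split the cover element when normality is rejected (alpha ~ 5%)."""
    return anderson_darling(sample) > critical
```

In the full algorithm the rejected element would then be re-covered, per the abstract, using a Gaussian mixture fitted to its points; that step is omitted here.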

    Design patterns for data-driven news articles

    Technological advancements have resulted in great shifts in the production and consumption of news articles, which leads to the need to develop new educational and practical frameworks. This paper presents a classification of data-driven news articles and introduces patterns to describe their visual and textual components. Through the analysis of 162 data-driven news articles collected from news media, we identified five types of articles based on the level of data involvement and narrative complexity: Quick Update, Briefing, Chart Description, Investigation, and In-depth Investigation. We then developed 72 design patterns to support the understanding and construction of data-driven news articles. To evaluate this approach, we conducted workshops with 23 students from journalism, design, and sociology who were newly introduced to the subject. Findings suggest that our approach can be used as an out-of-the-box framework for formulating plans and considering details in the workflow of data-driven news creation.

    Designing a 3D Gestural Interface to Support User Interaction with Time-Oriented Data as Immersive 3D Radar Charts

    The design of intuitive three-dimensional user interfaces is vital for interaction in virtual reality, allowing users to effectively close the loop between themselves and the virtual environment. The utilization of 3D gestural input allows for useful hand interaction with virtual content by directly grasping visible objects, or through invisible gestural commands that are associated with corresponding features in the immersive 3D space. The design of such interfaces remains complex and challenging. In this article, we present a design approach for a three-dimensional user interface using 3D gestural input with the aim of facilitating user interaction within the context of Immersive Analytics. Based on a scenario of exploring time-oriented data in immersive virtual reality using 3D Radar Charts, we implemented a rich set of features that is closely aligned with relevant 3D interaction techniques, data analysis tasks, and aspects of hand posture comfort. We conducted an empirical evaluation (n=12), featuring a series of representative tasks to evaluate the developed user interface design prototype. The results, based on questionnaires, observations, and interviews, indicate good usability and an engaging user experience. We are able to reflect on the implemented hand-based grasping and gestural command techniques, identifying aspects for improvement in regard to hand detection and precision, as well as emphasizing the prototype's ability to infer user intent for better prevention of unintentional gestures. (Comment: 30 pages, 6 figures, 2 tables)

    Boundary Labeling for Rectangular Diagrams

    Given a set of n points (sites) inside a rectangle R and n points (label locations or ports) on its boundary, a boundary labeling problem seeks ways of connecting every site to a distinct port while achieving different labeling aesthetics. We examine the scenario when the connecting lines (leaders) are drawn as axis-aligned polylines with few bends, every leader lies strictly inside R, no two leaders cross, and the sum of the lengths of all the leaders is minimized. In a k-sided boundary labeling problem, where 1 <= k <= 4, the label locations are located on k consecutive sides of R. In this paper we develop an O(n^3 log n)-time algorithm for 2-sided boundary labeling, where the leaders are restricted to have one bend. This improves the previously best known O(n^8 log n)-time algorithm of Kindermann et al. (Algorithmica, 76(1):225-258, 2016). We show that the problem is polynomial-time solvable in more general settings, such as when the ports are located on more than two sides of R, in the presence of obstacles, and even when the objective is to minimize the total number of bends. Our results improve the previous algorithms on boundary labeling with obstacles, as well as provide the first polynomial-time algorithms for minimizing the total leader length and number of bends for 3- and 4-sided boundary labeling. These results settle a number of open questions on boundary labeling problems (Wolff, Handbook of Graph Drawing, Chapter 23, Table 23.1, 2014).
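For intuition, the length objective in this abstract is easy to state: a one-bend axis-aligned leader from a site to a port has Manhattan length, and the task is to choose the site-to-port bijection of minimum total length. The exhaustive baseline below is illustrative only; it deliberately ignores the crossing-freeness constraint that the paper's O(n^3 log n) algorithm enforces, and all names are ours.

```python
# Brute-force baseline for the total-leader-length objective.
from itertools import permutations

def leader_length(site, port):
    # A one-bend axis-aligned leader spans |dx| horizontally and |dy|
    # vertically, so its length is the Manhattan distance.
    return abs(site[0] - port[0]) + abs(site[1] - port[1])

def min_total_length(sites, ports):
    """Best site-to-port bijection by total leader length.

    Exponential-time and crossing-oblivious; the paper's contribution is a
    polynomial-time algorithm that also guarantees non-crossing leaders.
    """
    return min(
        sum(leader_length(s, p) for s, p in zip(sites, perm))
        for perm in permutations(ports)
    )
```

Enforcing planarity on top of this objective is exactly what makes the problem nontrivial and what the paper's dynamic program handles.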

    Bridging Objective and Subjective Evaluations in Data Visualization: A Crossover Experiment

    One of the problems affecting evaluation in the design and adoption of HCI technology is that neither objective nor subjective measures are sufficient when taken alone. This paper proposes a crossover approach, making sense of objective and subjective evaluation methods by hypothesizing them as constitutive of each other's explanation. Objective image features borrowed from image processing may explain, or be explained by, validated qualitative items for infographics value-in-use and qualitative labelling from users' interaction. These methods are all applied to the evaluation of a small set of Data Visualizations (Data Viz from now on). Image features are computed first, in order to provide researchers with a selection of Data Viz that vary in their features; the subjective part of the evaluation is carried out by the 98 participants of an experiment, who interacted with pairs of Data Viz by executing a task, then using the validated items of the Infographics-Value (IGV) short scale, and adding free qualitative comments. Crossing over these dimensions shows that: high feature congestion in a Data Viz can hinder its perceived intuitiveness and clarity; poorly distributed saliency may impact intuitiveness and clarity too; high colorfulness may influence perceived beauty; and both saliency and colorfulness may impact perceived usefulness, informativeness, and beauty. Furthermore, colorfulness can improve or worsen the perceived overall quality of design and quality of interaction when combined with feature congestion, and saliency may improve or worsen perceived beauty when interacting with colorfulness. These results show how objective and subjective evaluations may be exploited as each other's explanations for improving the evaluation process during both design and user experience with Data Viz. Based on this experiment, we argue for the importance of crossing over quantitative and qualitative Data Viz evaluation, and support the motivations for combining approaches rather than applying one approach alone. This contribution intends to lead towards a holistic Data Viz quality assessment method, able to provide a virtuous cycle reinforcing both quantitative and qualitative approaches during all phases of a Data Viz evaluation life cycle.
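To make the objective side of such a crossover concrete: a widely used image-level "colorfulness" feature is the Hasler-Susstrunk metric, computed from the opponent channels rg = R - G and yb = (R + G)/2 - B. The sketch below is a generic implementation of that metric, not necessarily the exact feature set used in this study.

```python
# Hasler-Susstrunk colorfulness over an iterable of (R, G, B) pixel tuples.
import math

def colorfulness(pixels):
    """M = sqrt(std_rg^2 + std_yb^2) + 0.3 * sqrt(mean_rg^2 + mean_yb^2)."""
    rg = [r - g for r, g, b in pixels]
    yb = [0.5 * (r + g) - b for r, g, b in pixels]

    def mean(xs):
        return sum(xs) / len(xs)

    def std(xs):
        m = mean(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

    return math.hypot(std(rg), std(yb)) + 0.3 * math.hypot(mean(rg), mean(yb))
```

A pure gray image scores 0, while saturated imagery scores high, which matches the intuition behind correlating this feature with perceived beauty.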

    Visual Event Cueing in Linked Spatiotemporal Data

    The media disperses a large amount of information daily pertaining to political events, social movements, and societal conflicts. Media coverage of these topics, no matter the format of publication, is framed in a particular way. Framing is used not just for guiding audiences to desired beliefs, but also to fuel societal change or to legitimize/delegitimize social movements. For this reason, tools that can help to clarify when changes in social discourse occur and identify their causes are of great use. This thesis presents a visual analytics framework that allows for the exploration and visualization of changes that occur in the social climate with respect to space and time. Focusing on the links between data from the Armed Conflict Location and Event Data Project (ACLED) and a streaming RSS news data set, users can be cued into interesting events, enabling them to form and explore hypotheses. The visual analytics framework also focuses on improving intervention detection, allowing users to hypothesize about correlations between events and happiness levels, and supports collaborative analysis.
    Masters Thesis, Computer Science, 201

    Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics

    The most successful Machine Learning (ML) systems remain complex black boxes to end-users, and even experts are often unable to understand the rationale behind their decisions. The lack of transparency of such systems can have severe consequences, or lead to poor use of limited, valuable resources, in medical diagnosis, financial decision-making, and other high-stakes domains. The issue of ML explanation has therefore seen a surge of interest, from the research community through to application domains. While numerous explanation methods have been explored, there is a need for evaluations that quantify the quality of explanation methods, determine whether and to what extent the offered explainability achieves the defined objective, and compare available explanation methods in order to suggest the best explanation for a specific task. This survey paper presents a comprehensive overview of methods proposed in the current literature for the evaluation of ML explanations. We identify properties of explainability from a review of definitions of explainability; these properties are used as the objectives that evaluation metrics should capture. The survey found that quantitative metrics for both model-based and example-based explanations are primarily used to evaluate the parsimony/simplicity of interpretability, while quantitative metrics for attribution-based explanations are primarily used to evaluate the soundness of the fidelity of explainability. The survey also showed that subjective measures, such as trust and confidence, have been embraced as the focal point for the human-centered evaluation of explainable systems. The paper concludes that the evaluation of ML explanations is a multidisciplinary research topic, and that it is not possible to define a single implementation of evaluation metrics that can be applied to all explanation methods.
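One common shape of the fidelity metrics surveyed for attribution-based explanations is a "deletion" curve: features are removed in order of attributed importance, and a faithful attribution should make the model's score collapse quickly. The sketch below illustrates that idea generically; the function names, the zero baseline, and the toy linear model in the usage note are our own illustrative choices, not a specific metric implementation from the survey.

```python
# Deletion-style fidelity check for an attribution-based explanation.

def deletion_curve(predict, x, attributions, baseline=0.0):
    """Model scores after zeroing features from most- to least-attributed."""
    order = sorted(range(len(x)), key=lambda i: -abs(attributions[i]))
    scores = [predict(x)]
    masked = list(x)
    for i in order:
        masked[i] = baseline  # "delete" the feature
        scores.append(predict(masked))
    return scores

def deletion_auc(scores):
    """Trapezoidal area under the deletion curve; lower means the score
    dropped faster, i.e. the attribution was more faithful."""
    return sum((a + b) / 2 for a, b in zip(scores, scores[1:])) / (len(scores) - 1)
```

For a linear model whose weights are its own exact attribution, deleting features in weight order collapses the score fastest, so its deletion AUC is lower than that of a shuffled attribution.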