2,705 research outputs found

    Visual exploration of semantic-web-based knowledge structures

    Humans have a curious nature and seek a better understanding of the world. Data, information, and knowledge became assets of our modern society through the information technology revolution in the form of the internet. However, with the growing size of accumulated data, new challenges emerge, such as searching and navigating these large collections of data, information, and knowledge. Current developments in academic and industrial contexts address these challenges using Semantic Web technologies. The Semantic Web is an extension of the Web that provides machine-readable representations of knowledge for various domains. These machine-readable representations allow intelligent machine agents to understand the meaning of data and information and enable the inference of new knowledge. Generally, the Semantic Web is designed for information exchange and processing, not for presenting such semantically enriched data to humans. Visualizations support the exploration, navigation, and understanding of data by exploiting humans’ ability to comprehend complex data through visual representations. In the context of Semantic-Web-based knowledge structures, various visualization methods and tools are available, and new ones are developed every year. However, suitable visualizations depend heavily on individual use cases and targeted user groups. In this thesis, we investigate visual exploration techniques for Semantic-Web-based knowledge structures by addressing the following challenges: i) how to engage various user groups in modeling such semantic representations; ii) how to facilitate understanding using customizable visual representations; and iii) how to ease the creation of visualizations for various data sources and different use cases. The achieved results indicate that visual modeling techniques facilitate the engagement of various user groups in ontology modeling. Customizable visualizations enable users to adjust visualizations to their current needs and provide different views on the data. Additionally, customizable visualization pipelines enable rapid visualization generation for various use cases, data sources, and user groups.
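
    As a minimal illustration of what a machine-readable knowledge representation enables (not taken from the thesis), the following Python sketch uses the rdflib library to state two facts and materialise one inferred triple; the example namespace and classes are invented.

        # Minimal sketch, not from the thesis: a machine-readable representation
        # in RDF plus one naive RDFS-style inference step. Requires rdflib.
        from rdflib import Graph, Namespace, RDF, RDFS

        EX = Namespace("http://example.org/")  # illustrative namespace

        g = Graph()
        g.add((EX.Dog, RDFS.subClassOf, EX.Animal))  # schema: every Dog is an Animal
        g.add((EX.rex, RDF.type, EX.Dog))            # fact: rex is a Dog

        # Materialise the types implied by subClassOf (one-step inference).
        inferred = [
            (individual, RDF.type, parent)
            for cls, _, parent in g.triples((None, RDFS.subClassOf, None))
            for individual, _, _ in g.triples((None, RDF.type, cls))
        ]
        for triple in inferred:
            g.add(triple)

        print((EX.rex, RDF.type, EX.Animal) in g)  # True: new knowledge was inferred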

    DesignSense: A Visual Analytics Interface for Navigating Generated Design Spaces

    Generative Design (GD) produces many design alternatives and promises novel and performant solutions to architectural design problems. The success of GD rests on the ability to navigate the generated alternatives in a way that is unhindered by their number and that reflects design judgment, with its quantitative and qualitative dimensions. I address this challenge by critically analyzing the literature on design space navigation (DSN) tools through a set of iteratively developed lenses. The lenses are informed by domain experts' feedback and by behavioural studies on design navigation under choice-overload conditions. The lessons from this analysis shaped DesignSense, a DSN tool that relies on visual analytics techniques for selecting, inspecting, clustering, and grouping alternatives. Furthermore, I present case studies of navigating realistic GD datasets from architecture and game design. Finally, I conduct a formative focus group evaluation with design professionals that shows the tool's potential and highlights future directions.

    Developing an interactive overview for non-visual exploration of tabular numerical information

    This thesis investigates the problem of obtaining overview information from complex tabular numerical data sets non-visually. Blind and visually impaired people need to access and analyse numerical data, both in education and in professional occupations. Obtaining an overview is a necessary first step in data analysis, for which current non-visual data accessibility methods offer little support. This thesis describes a new interactive parametric sonification technique called High-Density Sonification (HDS), which facilitates the process of extracting overview information from the data easily and efficiently by rendering multiple data points as single auditory events. Beyond obtaining an overview of the data, experimental studies showed that the capabilities of human auditory perception and cognition to extract meaning from HDS representations can be used to reliably estimate relative arithmetic mean values within large tabular data sets. Following a user-centred design methodology, HDS was implemented as the primary form of overview information display in a multimodal interface called TableVis. This interface supports the active process of interactive, non-visual data exploration, making use of proprioception to maintain contextual information during exploration (non-visual focus+context), vibrotactile data annotations (EMA-Tactons) that can be used as external memory aids to prevent high mental workload levels, and speech synthesis to access detailed information on demand. A series of empirical studies quantified the performance attained in exploring tabular data sets for overview information using TableVis. This was done by comparing HDS with the main current non-visual accessibility technique (speech synthesis) and by quantifying the effect of data set size on user performance; the results showed that HDS outperformed speech and that this performance was not heavily dependent on the size of the data set. In addition, levels of subjective workload during exploration tasks using TableVis were investigated, resulting in the proposal of EMA-Tactons, vibrotactile annotations that the user can add to the data to prevent working memory saturation in the most demanding data exploration scenarios. An experimental evaluation found that EMA-Tactons significantly reduced mental workload in data exploration tasks. Thus, the work described in this thesis provides a basis for the interactive non-visual exploration of numerical data tables across a broad range of sizes, by offering techniques to extract overview information quickly, to perform perceptual estimations of data descriptors (relative arithmetic mean), and to manage demands on mental workload through vibrotactile data annotations, while seamlessly linking with explorations at different levels of detail and preserving spatial data representation metaphors to support collaboration with sighted users.
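
    A minimal sketch of the aggregation idea behind HDS, assuming each row of a numeric table is rendered as one tone whose pitch encodes the row mean; the linear frequency mapping and function names are illustrative, not taken from TableVis.

        # Illustrative sketch of HDS's core idea: collapse each row of a numeric
        # table into a single auditory event whose pitch reflects the row mean.
        # The linear frequency mapping is an assumption, not TableVis's design.

        def row_events(table, f_min=220.0, f_max=880.0):
            """Map each row's mean onto a tone frequency in [f_min, f_max] Hz."""
            means = [sum(row) / len(row) for row in table]
            lo, hi = min(means), max(means)
            span = (hi - lo) or 1.0  # avoid division by zero for constant tables
            return [f_min + (m - lo) / span * (f_max - f_min) for m in means]

        table = [
            [12, 15, 11, 14],  # mean 13.0 -> lowest pitch
            [40, 42, 38, 44],  # mean 41.0 -> highest pitch
            [25, 27, 24, 26],  # mean 25.5 -> mid pitch
        ]
        for i, freq in enumerate(row_events(table)):
            print(f"row {i}: one tone at {freq:.1f} Hz")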

    The State-of-the-Art of Set Visualization

    Sets comprise a generic data model that has been used in a variety of data analysis problems. Such problems involve analysing and visualizing set relations between multiple sets defined over the same collection of elements. However, visualizing sets is a non-trivial problem due to the large number of possible relations between them. We provide a systematic overview of state-of-the-art techniques for visualizing different kinds of set relations. We classify these techniques into six main categories according to the visual representations they use and the tasks they support. We compare the categories to provide guidance for choosing an appropriate technique for a given problem. Finally, we identify challenges in this area that need further research and propose possible directions to address these challenges. Further resources on set visualization are available at http://www.setviz.net.
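
    As a small illustration of the set relations such techniques encode (with invented example data), the following sketch computes pairwise overlaps and the exclusive region of each set.

        # Invented example: the overlap and exclusive-region sizes that most
        # set-visualization techniques (Euler-, matrix-, or graph-based) encode.
        from itertools import combinations

        sets = {
            "A": {1, 2, 3, 4},
            "B": {3, 4, 5},
            "C": {4, 5, 6, 7},
        }

        # Pairwise overlaps, the basis of matrix- and graph-based views.
        for x, y in combinations(sets, 2):
            print(f"overlap({x}, {y}) = {len(sets[x] & sets[y])}")

        # Elements exclusive to each set, i.e. the non-overlapping Euler regions.
        for name, members in sets.items():
            others = set().union(*(v for k, v in sets.items() if k != name))
            print(f"only in {name}: {sorted(members - others)}")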

    Interactive Visual Displays for Results Management in Complex Medical Workflows

    Clinicians manage medical orders to ensure that results are returned promptly to the correct physician and followed up on time. Delays in results management occur frequently, physically harm patients, and often lead to malpractice litigation. Better tracking of medical orders, showing progress and indicating delays, could result in improved care, better safety, and reduced clinician effort. This dissertation presents novel displays of rich tables with an interaction technique called ARCs (Actions for Rapid Completion). Rich tables are generated by MStart (Multi-Step Task Analyzing, Reporting, and Tracking) from a workflow model that defines order processes. Rich tables help clinicians perceive each order's status, prioritize the critical ones, and act on results in a timely fashion. A second contribution is the design of an interactive visualization called MSProVis (Multi-Step Process Visualization), which is composed of several PCDs (Process Completion Diagrams) that show the number and duration of in-time, late, and not-completed orders. With MSProVis, managers perform retrospective analyses to make decisions by studying an overview of the order process, the durations of order steps, and the performance of individuals. I visited seven hospitals and clinics to define sample results management workflows. Iterative design reviews with clinicians, designers, and researchers led to refinements of the rich tables, ARCs, and design guidelines. A controlled experiment with 18 participants under time pressure and distractions tested two features of rich tables: showing pending orders and prioritizing by lateness. These features produced a statistically significant reduction, from nine minutes to one, in the time needed to correctly identify late orders compared to traditional chronologically ordered lists. Another study demonstrated that ARCs speed up performance by 25% compared to state-of-the-art systems. A usability study with two clinicians and five novices showed that participants were able to understand MSProVis and efficiently perform representative tasks. Two subjective preference surveys suggested new design choices for the PCDs. This dissertation provides designers of results management systems with clear guidance about showing pending results and prioritizing by lateness, along with tested strategies for performing retrospective analyses. It also offers detailed design guidelines for results management, tables, and integrated actions on tables that speed up performance for common tasks.
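
    A hypothetical sketch of the kind of status computation a rich table could surface: orders are flagged as late against expected step durations from a workflow model and prioritized by how overdue they are. All field names, steps, and durations below are invented.

        # Hypothetical sketch: flag and prioritize late orders against expected
        # step durations from a workflow model. Names and durations are invented.
        from datetime import datetime, timedelta

        EXPECTED = {"ordered": timedelta(hours=1), "resulted": timedelta(hours=24)}

        orders = [
            {"id": "O-1", "step": "ordered",  "entered_step": datetime(2024, 1, 1, 8, 0)},
            {"id": "O-2", "step": "resulted", "entered_step": datetime(2024, 1, 1, 6, 0)},
        ]

        def overdue_by(order, now):
            """How far past the expected duration the order's current step is."""
            return (now - order["entered_step"]) - EXPECTED[order["step"]]

        now = datetime(2024, 1, 1, 10, 0)
        # Prioritize by lateness: most overdue orders first.
        for o in sorted(orders, key=lambda o: overdue_by(o, now), reverse=True):
            status = "late" if overdue_by(o, now) > timedelta(0) else "on time"
            print(o["id"], o["step"], status)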

    A Systematic and Minimalist Approach to Lower Barriers in Visual Data Exploration

    With the increasing availability and impact of data in our lives, we need to make quicker, more accurate, and more intricate data-driven decisions. Through visual data representations, we can see and interact with data and identify relevant features, trends, and outliers. In addition, the outcomes of data analysis reflect our cognitive processes, which are strongly influenced by the design of tools. To support visual and interactive data exploration, this thesis presents a systematic and minimalist approach. First, I present the Cognitive Exploration Framework, which identifies six distinct cognitive stages and provides a high-level structure for the design, guidelines, and evaluation of analysis tools. Next, to reduce the decision-making complexities in creating effective interactive data visualizations, I present a minimal yet expressive model for tabular data based on aggregated data summaries and linked selections. I demonstrate its application to common categorical, numerical, temporal, spatial, and set data types. Based on this model, I developed Keshif as an out-of-the-box, web-based tool to bootstrap the data exploration process, and applied it to 160+ datasets across many domains, aiming to serve journalists, researchers, policy makers, businesses, and those tracking personal data. Using tools with novel designs and capabilities requires learning and help-seeking for both novices and experts. To provide self-service help for visual data interfaces, I present HelpIn, a data-driven, contextual, in-situ help system that contrasts with separate, static videos and manuals. Lastly, I present an evaluation of design and graphical perception for dense visualizations of sorted numeric data, contrasting non-hierarchical treemaps with two multi-column chart designs: wrapped bars and piled bars. The results indicate that multi-column charts are perceptually more accurate than treemaps and that the unconventional piled bars may require more training to read effectively. This thesis contributes to our understanding of how to create effective data interfaces by systematically focusing on human-facing challenges through minimalist solutions. Future work to extend the power of data analysis to a broader public should continue to evaluate and improve design approaches to address the many remaining cognitive, social, educational, and technical challenges.
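
    A minimal sketch of the aggregate-and-link idea behind such a tabular data model: categorical summaries over a record collection, recomputed for a linked selection. The record fields and the selection predicate are invented, not taken from Keshif.

        # Minimal sketch of aggregated summaries with a linked selection over a
        # tabular dataset; fields and the selection predicate are invented.
        from collections import Counter

        records = [
            {"country": "FR", "year": 2019, "topic": "health"},
            {"country": "FR", "year": 2020, "topic": "energy"},
            {"country": "DE", "year": 2020, "topic": "health"},
            {"country": "US", "year": 2021, "topic": "health"},
        ]

        def summarize(rows, field):
            """Aggregated summary: record count per category of one field."""
            return Counter(r[field] for r in rows)

        # Full-dataset summaries (the overview each linked chart would show).
        print(summarize(records, "country"), summarize(records, "topic"))

        # A linked selection: filtering on one summary updates all the others.
        selection = [r for r in records if r["topic"] == "health"]
        print(summarize(selection, "country"), summarize(selection, "year"))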

    Translation Alignment Applied to Historical Languages: methods, evaluation, applications, and visualization

    Translation alignment is an essential task in Digital Humanities and Natural Language Processing: it links words or phrases in the source text with their equivalents in the translation. In addition to its importance for teaching and learning historical languages, translation alignment builds bridges between ancient and modern languages through which various linguistic annotations can be transferred. This thesis focuses on word-level translation alignment applied to historical languages in general and to Ancient Greek and Latin in particular. As the title indicates, the thesis addresses four interdisciplinary aspects of translation alignment. The starting point was developing Ugarit, an interactive annotation tool for manual alignment, with the aim of gathering training data for an automatic alignment model. This effort resulted in more than 190k accurate translation pairs that I later used for supervised training. Ugarit has been used by many researchers and scholars, as well as in classrooms at several institutions for teaching and learning ancient languages, which resulted in a large, diverse, crowd-sourced aligned parallel corpus. This corpus allowed us to conduct experiments and qualitative analyses to detect recurring patterns in annotators’ alignment practice and in the generated translation pairs. Further, I employed recent advances in NLP and language modeling to develop an automatic alignment model for historical low-resource languages, experimenting with various training objectives and proposing a training strategy for historical languages that combines supervised and unsupervised training on mono- and multilingual texts. I then integrated this alignment model into other development workflows to project cross-lingual annotations and induce bilingual dictionaries from parallel corpora. Evaluation is essential to assess the quality of any model. To ensure best practice, I reviewed the current evaluation procedure, identified its limitations, and proposed two new evaluation metrics. Moreover, I introduced a visual analytics framework to explore and inspect alignment gold-standard datasets and to support quantitative and qualitative evaluation of translation alignment models. In addition, I designed and implemented visual analytics tools and reading environments for parallel texts and proposed various visualization approaches to support different alignment-related tasks, employing the latest advances in information visualization and best practices. Overall, this thesis presents a comprehensive study that includes manual and automatic alignment techniques, evaluation methods, and visual analytics tools that aim to advance the field of translation alignment for historical languages.
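
    For context (not the two metrics proposed in the thesis), the conventional evaluation of word-level alignment scores a predicted set of (source index, target index) links against a gold standard, as in the sketch below.

        # Conventional word-alignment evaluation against a gold standard:
        # alignments are sets of (source_index, target_index) links. These are
        # the standard precision/recall/F1 measures, not the thesis's metrics.

        def alignment_scores(predicted, gold):
            """Precision, recall, and F1 of predicted alignment links vs. gold."""
            true_positives = len(predicted & gold)
            precision = true_positives / len(predicted) if predicted else 0.0
            recall = true_positives / len(gold) if gold else 0.0
            f1 = (2 * precision * recall / (precision + recall)
                  if precision + recall else 0.0)
            return precision, recall, f1

        gold = {(0, 0), (1, 2), (2, 1), (3, 3)}   # e.g. Greek -> English links
        predicted = {(0, 0), (1, 2), (2, 2)}
        print(alignment_scores(predicted, gold))  # approx. (0.667, 0.5, 0.571)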

    Layered evaluation of interactive adaptive systems : framework and formative methods

    Peer-reviewed postprint.