425 research outputs found

    Visualization and Evolution of Software Architectures

    Software systems are an integral component of our everyday life as we find them in tools and embedded in equipment all around us. In order to ensure smooth, predictable, and accurate operation of these systems, it is crucial to produce and maintain systems that are highly reliable. A well-designed and well-maintained architecture goes a long way in achieving this goal. However, due to the intangible and often complex nature of software architecture, this task can be quite complicated. The field of software architecture visualization aims to ease this task by providing tools and techniques to examine the hierarchy, relationship, evolution, and quality of architecture components. In this paper, we present a discourse on the state of the art of software architecture visualization techniques. Further, we highlight the importance of developing solutions tailored to meet the needs and requirements of the stakeholders involved in the analysis process

    Visualization of large citation networks with space-efficient multi-layer optimization

    This paper describes a technique for visualizing large citation networks (or bibliographic networks) using a space-efficient multi-layer optimization visualization technique. Our technique first uses a fast clustering algorithm to discover community structure in the bibliographic network. The clustering process partitions the entire network into relevant abstract subgroups so that the visualization can provide a clearer, less dense display of the global view of the complete citation graph. We next use a new space-efficient visualization algorithm to achieve an optimized graph layout within the limited display space, so that our technique can theoretically handle a very large bibliographic network with several thousand elements. Our technique also employs rich graphics to convey attribute properties in the visualization, including publication years and citation counts. Finally, the system provides an interaction technique that cooperates with the layout to allow users to navigate through the citation network. Animation is also implemented to preserve the users' mental maps during interaction.
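    The abstract gives no implementation details, but the cluster-then-layout pipeline it describes can be illustrated with a short sketch. The snippet below is an illustrative assumption, not the authors' algorithm: it uses networkx's greedy modularity communities as the "fast clustering" step and offsets per-community spring layouts on a simple grid as a stand-in for the space-efficient layout optimization.

    # Illustrative cluster-then-layout sketch for a citation network (not the
    # paper's algorithm): partition the graph into communities first, then lay
    # out each community separately so the global view stays less dense.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def clustered_layout(citations):
        """citations: iterable of (citing_id, cited_id) pairs."""
        G = nx.Graph()  # an undirected view is sufficient for community detection
        G.add_edges_from(citations)

        # Fast modularity-based clustering exposes the community structure.
        communities = list(greedy_modularity_communities(G))

        # Lay out each community independently, then offset the sub-layouts on a
        # grid; a stand-in for the space-efficient optimization in the abstract.
        pos, cols = {}, max(1, int(len(communities) ** 0.5))
        for i, nodes in enumerate(communities):
            sub_pos = nx.spring_layout(G.subgraph(nodes), seed=42)
            dx, dy = (i % cols) * 3.0, (i // cols) * 3.0
            for n, (x, y) in sub_pos.items():
                pos[n] = (x + dx, y + dy)
        return communities, pos

    # Example with a toy citation list:
    # communities, pos = clustered_layout([("p1", "p2"), ("p2", "p3"), ("p4", "p5")])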

    Information Visualization (iV): Notes about the 9th IV ’05 International Conference, London, England

    This review covers the International Conference on Information Visualization, held annually in London, England. Themes selected from the Conference Proceedings focus on theoretical concepts, semantic approaches to visualization, and digital art, and involve 2D, 3D, interactive, and virtual reality tools and applications. The focal point of the iV 05 Conference was progress in information and knowledge visualization, visual data mining, multimodal interfaces, multimedia, web graphics, graph theory applications, augmented and virtual reality, semantic web visualization, HCI, and digital art, among many other areas such as information visualization in geology, medicine, industry and education.

    Visualization of the Static aspects of Software: a survey

    Software is usually complex and always intangible. In practice, the development and maintenance processes are time-consuming activities, mainly because software complexity is difficult to manage. Graphical visualization of software has the potential to yield a better and faster understanding of its design and functionality, saving time and providing valuable information to improve its quality. However, visualizing software is not an easy task because of the huge amount of information contained in the software. Furthermore, the information content increases significantly once the time dimension is taken into account to visualize the evolution of the software. Human perception of information and cognitive factors must thus be taken into account to improve the understandability of the visualization. In this paper, we survey visualization techniques, both 2D- and 3D-based, that represent the static aspects of the software and its evolution. We categorize these techniques according to the issues they focus on, in order to help compare them and identify the most relevant techniques and tools for a given problem.

    Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction

    This thesis is about visualizing a kind of data that is trivial for computers to process but difficult for humans to imagine, because nature does not equip us with intuition for this type of information: high-dimensional data. Such data often result from representing observations of objects under various aspects or with different properties. In many applications, a typical, laborious task is to find related objects or to group those that are similar to each other. One classic solution for this task is to imagine the data as vectors in a Euclidean space with object variables as dimensions. Using Euclidean distance as a measure of similarity, objects with similar properties and values accumulate into groups, so-called clusters, which are exposed by cluster analysis on the high-dimensional point cloud. Because similar vectors can be thought of as objects that are alike in terms of their attributes, the point cloud's structure and individual cluster properties, like their size or compactness, summarize data categories and their relative importance. The contribution of this thesis is a novel analysis approach for visual exploration of high-dimensional point clouds without suffering from structural occlusion. The work is based on two key concepts. The first idea is to discard those geometric properties that cannot be preserved and thus lead to the typical artifacts, and to use topological concepts instead to shift the focus from a point-centered view of the data to a more structure-centered perspective. The advantage is that topology-driven clustering information can be extracted in the data's original domain and preserved without loss in low dimensions. The second idea is to split the analysis into a topology-based global overview and a subsequent geometric local refinement. The occlusion-free overview enables the analyst to identify features and to link them to other visualizations that permit analysis of properties not captured by the topological abstraction, e.g. cluster shape or value distributions in particular dimensions or subspaces. The advantage of separating structure from data-point analysis is that restricting local analysis to data subsets significantly reduces artifacts and the visual complexity of standard techniques. That is, the additional topological layer enables the analyst to identify structure that was hidden before and to focus on particular features by suppressing irrelevant points during local feature analysis. This thesis addresses the topology-based visual analysis of high-dimensional point clouds for both the time-invariant and the time-varying case. Time-invariant means that the points do not change in number or position; that is, the analyst explores the clustering of a fixed and constant set of points. The extension to the time-varying case implies the analysis of a varying clustering, where clusters newly appear, merge or split, or vanish. Especially for high-dimensional data, both tracking, which means relating features over time, and visualizing the changing structure are difficult problems to solve.
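    As a point of reference for the baseline described above, here is a minimal sketch of conventional Euclidean cluster analysis on a high-dimensional point cloud; the thesis' topological abstraction itself is not reproduced. The synthetic data, the cluster count, and the choice of k-means are assumptions made purely for illustration.

    # Baseline sketch: observations as vectors in a Euclidean space, grouped by
    # cluster analysis with Euclidean distance (k-means). Illustration only.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Toy high-dimensional point cloud: three groups of 50-dimensional vectors.
    cloud = np.vstack([
        rng.normal(loc=0.0, scale=1.0, size=(100, 50)),
        rng.normal(loc=5.0, scale=1.0, size=(100, 50)),
        rng.normal(loc=-5.0, scale=1.0, size=(100, 50)),
    ])

    # Euclidean-distance-based clustering; each label identifies one cluster.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(cloud)

    # Per-cluster summaries (size, compactness) of the kind a structure-centered
    # overview would expose instead of the raw, occlusion-prone point plot.
    for k in range(3):
        members = cloud[labels == k]
        spread = np.linalg.norm(members - members.mean(axis=0), axis=1).mean()
        print(f"cluster {k}: size={len(members)}, mean distance to centroid={spread:.2f}")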

    A framework for unifying presentation space


    Close and Distant Reading Visualizations for the Comparative Analysis of Digital Humanities Data

    Traditionally, humanities scholars carrying out research on a specific literary work or on multiple literary works are interested in the analysis of related texts or text passages. But the digital age has opened possibilities for scholars to enhance their traditional workflows. Enabled by digitization projects, humanities scholars can nowadays reach a large number of digitized texts through web portals such as Google Books or Internet Archive. Digital editions exist also for ancient texts; notable examples are PHI Latin Texts and the Perseus Digital Library. This shift from reading a single book “on paper” to the possibility of browsing many digital texts is one of the origins and principal pillars of the digital humanities domain, which helps develop solutions to handle vast amounts of cultural heritage data – text being the main data type. In contrast to the traditional methods, the digital humanities allow scholars to pose new research questions on cultural heritage datasets. Some of these questions can be answered with existing algorithms and tools provided by the computer science domain, but for other humanities questions scholars need to formulate new methods in collaboration with computer scientists. Developed in the late 1980s, the digital humanities primarily focused on designing standards to represent cultural heritage data, such as the Text Encoding Initiative (TEI) for texts, and on aggregating, digitizing and delivering data. In recent years, visualization techniques have gained more and more importance when it comes to analyzing data. For example, Saito introduced her 2010 digital humanities conference paper with: “In recent years, people have tended to be overwhelmed by a vast amount of information in various contexts. Therefore, arguments about ‘Information Visualization’ as a method to make information easy to comprehend are more than understandable.” A major impulse for this trend was given by Franco Moretti. In 2005, he published the book “Graphs, Maps, Trees”, in which he proposes so-called distant reading approaches for textual data that steer the traditional way of approaching literature in a completely new direction. Instead of reading texts in the traditional way – so-called close reading – he invites scholars to count, to graph and to map them; in other words, to visualize them. This dissertation presents novel close and distant reading visualization techniques for hitherto unsolved problems. Appropriate visualization techniques have been applied to support basic tasks, e.g., visualizing geospatial metadata to analyze the geographical distribution of cultural heritage data items or using tag clouds to illustrate textual statistics of a historical corpus. In contrast, this dissertation focuses on developing information visualization and visual analytics methods that support investigating research questions that require the comparative analysis of various digital humanities datasets. We first take a look at the state of the art of existing close and distant reading visualizations that have been developed to support humanities scholars working with literary texts. We thereby provide a taxonomy of visualization methods applied to show various aspects of the underlying digital humanities data. We point out open challenges, and we present our visualizations designed to support humanities scholars in comparatively analyzing historical datasets.
In short, we present (1) GeoTemCo for the comparative visualization of geospatial-temporal data, (2) the two tag cloud designs TagPies and TagSpheres, which comparatively visualize faceted textual summaries, (3) TextReuseGrid and TextReuseBrowser for exploring re-used text passages among the texts of a corpus, (4) TRAViz for the visualization of textual variation between multiple text editions, and (5) the visual analytics system MusikerProfiling for detecting musicians similar to a given musician of interest. Finally, we summarize our own collaboration experiences and those of other visualization researchers to emphasize the ingredients required for a successful project in the digital humanities, and we take a look at future challenges in that research field.
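    For illustration only, the following sketch shows the kind of data preparation a comparative, faceted tag cloud such as TagPies or TagSpheres builds on: term frequencies computed per facet, so the same tag can be sized according to its use in each group. The tokenization, the stopword list, and the function name are assumptions, not the systems' actual implementation.

    # Compute per-facet term frequencies as input for a comparative tag cloud.
    from collections import Counter

    def faceted_term_counts(corpus_by_facet, stopwords=frozenset({"the", "and", "of", "was"})):
        """corpus_by_facet: dict mapping a facet name to a list of documents (strings)."""
        counts = {}
        for facet, documents in corpus_by_facet.items():
            tokens = (
                word.strip(".,;:").lower()
                for doc in documents
                for word in doc.split()
            )
            counts[facet] = Counter(t for t in tokens if t and t not in stopwords)
        return counts

    # Example: compare the vocabulary of two hypothetical sub-corpora.
    counts = faceted_term_counts({
        "author_a": ["the ship sailed the sea", "the sea was calm"],
        "author_b": ["the mountain road and the valley", "the road was long"],
    })
    for facet, counter in counts.items():
        print(facet, counter.most_common(3))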

    Software Analytics for Improving Program Comprehension

    Title from PDF of title page, viewed June 28, 2021. Dissertation advisor: Yugyung Lee. Vita. Includes bibliographical references (pages 122-143). Thesis (Ph.D.)--School of Computing and Engineering, University of Missouri--Kansas City, 2021.
    Program comprehension is an essential part of software development and maintenance. Traditional methods of program comprehension, such as reviewing the codebase and documentation, remain challenging for understanding the software's overall structure and implementation. In recent years, software static analysis studies have emerged to facilitate program comprehension, such as call graphs, which represent the system's structure and its implementation as a directed graph. Furthermore, some studies have focused on semantic enrichment of software system problems using systematic learning analytics, including machine learning and NLP. While call graphs can enhance the program comprehension process, they still face three main challenges: (1) complex call graphs can become very difficult to understand, making them much harder for a developer to visualize and interpret and thus increasing the overhead in program comprehension; (2) they are often limited to a single level of granularity, such as function calls; and (3) they lack interpretive semantics about the graphs. In this dissertation, we propose a novel framework, called CodEx, to facilitate and accelerate program comprehension. CodEx enables top-down and bottom-up analysis of the system's call graph and its execution paths for an enhanced program comprehension experience. Specifically, the proposed framework builds on the following techniques: multi-level graph abstraction using a coarsening technique, hierarchical clustering to represent the call graph as subgraphs (i.e., multiple levels of granularity), and interactive visual exploration of the graphs at different levels of abstraction. Moreover, we also worked on building semantics of software systems using NLP and machine learning, including topic modeling, to interpret the meaning of the abstraction levels of the call graph.
    Contents: Introduction -- Multi-Level Call Graph for Program Comprehension -- Static Trace Clustering: Single-Level Approach -- Static Trace Clustering: Multi-Level Approach -- Topic Modeling for Cluster Analysis -- Visual Exploration of Software Clustered Traces -- Conclusion and Future Work -- Appendix
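    The abstract names graph coarsening and hierarchical clustering as the basis of the multi-level abstraction. The sketch below illustrates a single coarsening step on a call graph, but it is not the CodEx implementation: grouping functions by their module prefix stands in for the clustering step, and all names are hypothetical.

    # One coarsening step: contract a function-level call graph into a
    # module-level graph, counting inter-module calls as edge weights.
    import networkx as nx

    def coarsen_by_module(call_graph):
        """call_graph: nx.DiGraph whose nodes are names like 'pkg.module.func'."""
        # Simple grouping criterion standing in for hierarchical clustering:
        # every function belongs to the cluster named by its module prefix.
        cluster_of = {fn: fn.rsplit(".", 1)[0] for fn in call_graph.nodes}

        coarse = nx.DiGraph()
        for caller, callee in call_graph.edges():
            a, b = cluster_of[caller], cluster_of[callee]
            if a != b:
                weight = coarse.get_edge_data(a, b, default={"weight": 0})["weight"]
                coarse.add_edge(a, b, weight=weight + 1)
        return coarse

    # Example with a toy call graph:
    # g = nx.DiGraph([("app.main.run", "app.parse.read"), ("app.parse.read", "app.parse.lex")])
    # module_graph = coarsen_by_module(g)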

    Cognitive Foundations for Visual Analytics

    In this report, we provide an overview of scientific/technical literature on information visualization and VA. Topics discussed include an update and overview of the extensive literature search conducted for this study, the nature and purpose of the field, major research thrusts, and scientific foundations. We review methodologies for evaluating and measuring the impact of VA technologies as well as taxonomies that have been proposed for various purposes to support the VA community. A cognitive science perspective underlies each of these discussions

    An Uncertainty Visual Analytics Framework for Functional Magnetic Resonance Imaging

    Improving understanding of the human brain is one of the leading pursuits of modern scientific research. Functional magnetic resonance imaging (fMRI) is a foundational technique for advanced analysis and exploration of the human brain. The modality scans the brain in a series of temporal frames which provide an indication of brain activity either at rest or during a task. The images can be used to study the workings of the brain, leading to the development of an understanding of healthy brain function, as well as characterising diseases such as schizophrenia and bipolar disorder. Extracting meaning from fMRI relies on an analysis pipeline which can be broadly categorised into three phases: (i) data acquisition and image processing; (ii) image analysis; and (iii) visualisation and human interpretation. The modality and analysis pipeline, however, are hampered by a range of uncertainties which can greatly impact the study of brain function. Each phase contains a set of required and optional steps with inherent limitations and complex parameter selection. These aspects lead to the uncertainty that impacts the outcome of studies. Moreover, the uncertainties that arise early in the pipeline are compounded by decisions and limitations further along in the process. While a large amount of research has been undertaken to examine the limitations and variable parameter selection, statistical approaches designed to address the uncertainty have not managed to mitigate the issues. Visual analytics, meanwhile, is a research domain which seeks to combine advanced visual interfaces with specialised interaction and automated statistical processing designed to exploit human expertise and understanding. Uncertainty visual analytics (UVA) tools, which aim to minimise and mitigate uncertainties, have been proposed for a variety of data, including astronomical, financial, weather and crime data. Importantly, UVA approaches have also seen success in medical imaging and analysis. However, there are many challenges surrounding the application of UVA to each research domain. Principally, these involve understanding what the uncertainties are and what their possible effects may be, so that they can be connected to visualisation and interaction approaches. With fMRI, the breadth of uncertainty arising at multiple stages along the pipeline, and its compound effects, makes it challenging to propose UVAs which meaningfully integrate into the pipeline. In this thesis, we seek to address this challenge by proposing a unified UVA framework for fMRI. To do so, we first examine the state-of-the-art landscape of fMRI uncertainties, including the compound effects, and explore how they are currently addressed. This forms the basis of a field we term fMRI-UVA. We then present our overall framework, which is designed to meet the requirements of fMRI visual analysis, while also providing an indication and understanding of the effects of uncertainties on the data. Our framework consists of components designed for the spatial, temporal and processed imaging data. Alongside the framework, we propose two visual extensions which can be used as standalone UVA applications or be integrated into the framework. Finally, we describe a conceptual algorithmic approach which incorporates more data into an existing measure used in the fMRI analysis pipeline.