
    Live Visualization of Database Behavior for Large Software Landscapes: The RACCOON Approach

    Databases are essential components within large software landscapes, since they are employed in almost every information system. Given the growing complexity of software systems and the steadily increasing amount of data that is collected, processed, and stored in databases, it is difficult to obtain a live overview of these software landscapes. This often leads to insufficient knowledge of the actual internal structure and behavior of the employed databases. Furthermore, databases are often involved in performance issues within information systems. A solution to these problems is to employ live visualizations of databases and of the related communication from applications within the software landscape. These visualizations allow operators to understand their databases in detail and to analyze database queries performed by applications. Based on established visualization concepts like entity-relationship diagrams and the 3D city metaphor, operators can be supported in the task of database comprehension. Established monitoring techniques, such as dynamic and static analysis, can be used to capture the necessary information from applications and databases. In this paper, we present our live visualization approach for databases and the associated communication in large software landscapes. Our visualization offers two different views: a landscape-level and a database-level perspective. The landscape-level perspective provides an overview of monitored applications and related databases. The database-level perspective reveals the schemas within a database, shows the contained tables and relationships, and allows for the inspection of executed queries based on monitoring information collected at runtime.
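
    As a rough illustration of the kind of query-level monitoring such an approach relies on (a hypothetical sketch in Python, not RACCOON's actual instrumentation; all names are invented), a thin wrapper around a database cursor can report every executed statement to a collector that feeds the visualization:

        # Hypothetical sketch: query-level monitoring for live visualization.
        # Wrapper and collector names are illustrative, not RACCOON's API.
        import sqlite3
        import time
        from collections import Counter

        class QueryCollector:
            """Aggregates executed statements for a live overview."""
            def __init__(self):
                self.counts = Counter()

            def record(self, sql, elapsed):
                # Bucket by statement kind (SELECT, INSERT, ...); a real
                # backend would also keep timings and the target tables.
                self.counts[sql.split()[0].upper()] += 1

        class MonitoredCursor:
            """Wraps a DB-API cursor and reports every execute() call."""
            def __init__(self, cursor, collector):
                self._cursor = cursor
                self._collector = collector

            def execute(self, sql, params=()):
                start = time.perf_counter()
                result = self._cursor.execute(sql, params)
                self._collector.record(sql, time.perf_counter() - start)
                return result

        conn = sqlite3.connect(":memory:")
        collector = QueryCollector()
        cur = MonitoredCursor(conn.cursor(), collector)
        cur.execute("CREATE TABLE t (id INTEGER)")
        cur.execute("INSERT INTO t VALUES (1)")
        print(collector.counts)  # e.g. Counter({'CREATE': 1, 'INSERT': 1})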

    The Development of an Undergraduate Data Curriculum: A Model for Maximizing Curricular Partnerships and Opportunities

    The article provides the motivations and foundations for creating an interdisciplinary program between a Library and Information Science department and a Human-Centered Computing department. The program focuses on data studies and data science concepts, issues, and skill sets. In the paper, we analyze trends in Library and Information Science curricula, the emergence of data-related Library and Information Science curricula, and interdisciplinary data-related curricula. We then describe the development of the undergraduate data curriculum: we provide the institutional context; discuss collaboration and resource optimization; provide justifications and workforce alignment; and detail the minor, major, and graduate opportunities. Finally, we argue that the proposed program holds the potential to model interdisciplinary, holistic data-centered curriculum development by complementing Library and Information Science traditions (e.g., information organization, access, and ethics) with scholarly work in data science, specifically data visualization and analytics. There is a significant opportunity for Library and Information Science to add value to data science and analytics curricula, and vice versa.

    Connecting the dots: information visualization and text analysis of the Searchlight Project newsletters

    This report is the product of the Pardee Center’s work on the Searchlight: Visualization and Analysis of Trend Data project sponsored by the Rockefeller Foundation. As part of a larger effort to analyze and disseminate on-the-ground information about important societal trends, as reported in a large number of regional newsletters developed in Asia, Africa, and the Americas specifically for the Foundation, the Pardee Center developed sophisticated methods to systematically review, categorize, analyze, visualize, and draw conclusions from the information in the newsletters.

    Do you see what I mean?

    Visualizers, like logicians, have long been concerned with meaning. Generalizing from MacEachren's overview of cartography, visualizers have to think about how people extract meaning from pictures (psychophysics), what people understand from a picture (cognition), how pictures are imbued with meaning (semiotics), and how, in some cases, that meaning arises within a social and/or cultural context. If we think of the communication acts carried out in the visualization process, further levels of meaning are suggested. Visualization begins when someone has data that they wish to explore and interpret; the data are encoded as input to a visualization system, which may in turn interact with other systems to produce a representation. This is communicated back to the user(s), who must assess it against their goals and knowledge, possibly leading to further cycles of activity. Each phase of this process involves communication between two parties, and for it to succeed those parties must share a common language with an agreed meaning. We offer the following three steps, in increasing order of formality: terminology (jargon), taxonomy (vocabulary), and ontology. Our argument in this article is that it is time to begin synthesizing the fragments and views into a level 3 model, an ontology of visualization. We also address why this should happen, what is already in place, how such an ontology might be constructed, and why now.
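
    To make the three levels concrete, a "level 3" fragment can be written down as machine-readable statements. Below is a minimal sketch using Python's rdflib; every class and property name is invented for illustration and is not a proposed standard:

        # Minimal sketch of an ontology fragment for visualization concepts.
        # All terms under the "vis" namespace are hypothetical examples.
        from rdflib import Graph, Namespace, Literal, RDF, RDFS

        VIS = Namespace("http://example.org/vis#")
        g = Graph()
        g.bind("vis", VIS)

        # Level 2 (taxonomy): a class hierarchy of techniques.
        g.add((VIS.Technique, RDF.type, RDFS.Class))
        g.add((VIS.ScatterPlot, RDFS.subClassOf, VIS.Technique))
        g.add((VIS.TreeMap, RDFS.subClassOf, VIS.Technique))

        # Level 3 (ontology): relations with agreed meaning, e.g. which
        # kind of data a technique is suited to.
        g.add((VIS.suitedTo, RDF.type, RDF.Property))
        g.add((VIS.ScatterPlot, VIS.suitedTo, VIS.BivariateData))
        g.add((VIS.ScatterPlot, RDFS.comment,
               Literal("Encodes two quantitative variables as position.")))

        print(g.serialize(format="turtle"))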

    Multi Visualization and Dynamic Query for Effective Exploration of Semantic Data

    Semantic formalisms represent content in a uniform way according to ontologies. This enables manipulation and reasoning via automated means (e.g. Semantic Web services), but it also limits the user’s ability to explore semantic data whose organization originates from knowledge-representation motivations. We show how, for user consumption, a visualization of semantic data along some easily graspable dimensions (e.g. space and time) provides effective sense-making of the data. In this paper, we look holistically at the interaction between users and semantic data, and propose multiple visualization strategies and dynamic filters to support the exploration of semantic-rich data. We discuss a user evaluation and how interaction challenges could be overcome to create an effective user-centred framework for the visualization and manipulation of semantic data. The approach has been implemented and evaluated on a real company archive.
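
    One way to picture the combination of a graspable dimension with a dynamic filter (a sketch over invented data, not the system described in the paper) is a SPARQL query whose bounds are re-run as the user adjusts, say, a time slider:

        # Sketch: a dynamic time filter over semantic data with rdflib.
        # The vocabulary and the sample data are invented for illustration.
        from rdflib import Graph, Namespace, Literal, RDF
        from rdflib.namespace import XSD

        EX = Namespace("http://example.org/archive#")
        g = Graph()
        g.add((EX.doc1, RDF.type, EX.Document))
        g.add((EX.doc1, EX.year, Literal(1998, datatype=XSD.integer)))
        g.add((EX.doc2, RDF.type, EX.Document))
        g.add((EX.doc2, EX.year, Literal(2005, datatype=XSD.integer)))

        def documents_between(start, end):
            # A time-slider UI would simply re-run this query with new bounds.
            query = """
                PREFIX ex: <http://example.org/archive#>
                SELECT ?doc ?year WHERE {
                    ?doc a ex:Document ; ex:year ?year .
                    FILTER (?year >= %d && ?year <= %d)
                }
            """ % (start, end)
            return [(str(row.doc), int(row.year)) for row in g.query(query)]

        print(documents_between(2000, 2010))  # only doc2 falls in the window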

    A Concurrency-Agnostic Protocol for Multi-Paradigm Concurrent Debugging Tools

    Today's complex software systems combine multiple high-level concurrency models. Each model is used to solve a specific set of problems. Unfortunately, debuggers support only the low-level notions of threads and shared memory, forcing developers to reason about these notions instead of the high-level concurrency models they chose. This paper proposes a concurrency-agnostic debugger protocol that decouples the debugger from the concurrency models employed by the target application. As a result, the underlying language runtime can define custom breakpoints, stepping operations, and execution events for each concurrency model it supports, and a debugger can expose them without having to be specifically adapted. We evaluated the generality of the protocol by applying it to SOMns, a Newspeak implementation that supports a diversity of concurrency models, including communicating sequential processes, communicating event loops, threads and locks, fork/join parallelism, and software transactional memory. We implemented 21 breakpoints and 20 stepping operations for these concurrency models; for none of them did the debugger need to be changed. Furthermore, we visualize all concurrent interactions independently of a specific concurrency model. To show that tooling for a specific concurrency model is also possible, we visualize actor turns and message sends separately.
    Comment: International Symposium on Dynamic Languages
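
    To give a flavour of what such decoupling can mean at the wire level (hypothetical message shapes, not the actual SOMns protocol), the runtime can advertise its model-specific breakpoints and stepping operations as plain data, which a generic debugger front end renders without knowing any concurrency model:

        # Hypothetical sketch of a concurrency-agnostic debugger handshake.
        # The message shapes are invented; the point is that breakpoints and
        # stepping operations arrive as data, so the debugger front end needs
        # no built-in knowledge of actors, CSP, STM, or any other model.
        import json

        capabilities = {
            "type": "capabilities",
            "breakpoints": [
                {"id": "msg-receive",  "label": "Before actor receives message"},
                {"id": "channel-send", "label": "Before CSP channel send"},
                {"id": "tx-commit",    "label": "Before transaction commit"},
            ],
            "stepping": [
                {"id": "step-to-next-turn", "label": "Step to next actor turn"},
                {"id": "step-into-fork",    "label": "Step into forked task"},
            ],
        }

        def debugger_receive(raw):
            msg = json.loads(raw)
            if msg["type"] == "capabilities":
                # The UI lists whatever it receives, unchanged across models.
                for bp in msg["breakpoints"]:
                    print("offer breakpoint:", bp["label"])
                for op in msg["stepping"]:
                    print("offer stepping op:", op["label"])

        debugger_receive(json.dumps(capabilities))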

    Lost in translation: data integration tools meet the Semantic Web (experiences from the Ondex project)

    More information is now being published in machine-processable form on the web and, as de facto distributed knowledge bases materialize, partly encouraged by the vision of the Semantic Web, the focus is shifting from the publication of this information to its consumption. Platforms for data integration, visualization, and analysis that are based on a graph representation of information appear to be natural first candidates to consume web-based information that is readily expressible as graphs. The question is whether adapting these platforms to information available on the Semantic Web requires some adaptation of their data structures and semantics. Ondex is a network-based data integration, analysis, and visualization platform which has been developed in a Life Sciences context. A number of features, including semantic annotation via ontologies and an attention to provenance and evidence, make it an ideal candidate to consume Semantic Web information, as well as a prototype for the application of network analysis tools in this context. By analyzing the Ondex data structure and its usage, we have found a set of discrepancies and errors arising from the semantic mismatch between a procedural approach to network analysis and the implications of a web-based representation of information. We report in the paper on the simple methodology that we have adopted to conduct this analysis, and on the issues we have found, which may be relevant for a range of similar platforms.
    Comment: Presented at DEIT, Data Engineering and Internet Technology, 2011, IEEE: CFP1113L-CD
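
    One concrete face of that mismatch is the step of loading RDF triples into a property-graph structure. The sketch below (using rdflib and networkx; a simplification, not Ondex's importer) shows how a naive mapping silently flattens blank-node identity and literal typing:

        # Sketch: naively mapping RDF into a property graph, illustrating
        # the kind of semantic mismatch discussed; not Ondex's importer.
        import networkx as nx
        from rdflib import BNode, Graph, Literal

        rdf = Graph()
        rdf.parse(data="""
            @prefix ex: <http://example.org/> .
            ex:geneA ex:encodes [ ex:name "protein A" ] .
        """, format="turtle")

        pg = nx.MultiDiGraph()
        flattened = 0
        for s, p, o in rdf:
            # str() flattens RDF terms: blank nodes become local,
            # non-portable identifiers, and typed literals lose their
            # datatypes. A faithful importer needs a policy for both.
            if isinstance(s, BNode) or isinstance(o, (BNode, Literal)):
                flattened += 1
            pg.add_edge(str(s), str(o), label=str(p))

        print(pg.number_of_nodes(), "nodes;", flattened,
              "of", len(rdf), "triples lost RDF-specific semantics")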