21 research outputs found

    A Coloring Algorithm for Disambiguating Graph and Map Drawings

    Full text link
    Drawings of non-planar graphs always contain edge crossings. When many edges cross at small angles, it is often difficult to follow them, because the multiple visual paths resulting from the crossings slow down eye movements. In this paper we propose an algorithm that disambiguates the edges through automatic selection of distinctive colors. The proposed algorithm computes a near-optimal color assignment on a dual collision graph, using a novel branch-and-bound procedure applied to a space decomposition of the color gamut. We give examples demonstrating the effectiveness of this approach in clarifying drawings of real-world graphs and maps.
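The core idea can be illustrated with a small sketch. This is not the paper's branch-and-bound over a gamut decomposition; it is a hypothetical greedy baseline on an invented collision graph, where each crossing edge pair becomes an adjacency and colliding edges are pushed toward maximally distinct colors.

```python
from itertools import product

def color_distance(c1, c2):
    """Euclidean distance in RGB, a crude stand-in for a perceptual metric."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def assign_colors(collision_graph, palette):
    """Greedy baseline: give each edge the palette color whose minimum
    distance to its already-colored collision neighbors is largest."""
    assignment = {}
    for edge in collision_graph:
        neighbors = [assignment[n] for n in collision_graph[edge] if n in assignment]
        if not neighbors:
            assignment[edge] = palette[0]
        else:
            assignment[edge] = max(
                palette,
                key=lambda c: min(color_distance(c, nc) for nc in neighbors),
            )
    return assignment

# Toy collision graph: edge e1 crosses e2, and e2 crosses e3.
graph = {"e1": ["e2"], "e2": ["e1", "e3"], "e3": ["e2"]}
palette = list(product((0, 255), repeat=3))  # the 8 RGB cube corners
colors = assign_colors(graph, palette)
```

Colliding edges end up with distinct colors, while non-colliding edges (e1 and e3) may share one, which is exactly the economy a collision-graph formulation buys.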

    Peacock Bundles: Bundle Coloring for Graphs with Globality-Locality Trade-off

    Full text link
    Bundling of graph edges (node-to-node connections) is a common technique to enhance visibility of overall trends in the edge structure of a large graph layout, and a large variety of bundling algorithms have been proposed. However, with strong bundling, it becomes hard to identify the origins and destinations of individual edges. We propose a solution: we optimize edge coloring to differentiate bundled edges. We quantify the strength of bundling in a flexible pairwise fashion between edges, and among bundled edges we quantify how dissimilar their colors should be by the dissimilarity of their origins and destinations. We solve the resulting nonlinear optimization, which is also interpretable as a novel dimensionality-reduction task. In large graphs the necessary compromise is whether to differentiate colors sharply between locally occurring, strongly bundled edges ("local bundles") or also between the weakly bundled edges occurring globally over the graph ("global bundles"); we allow a user-set global-local trade-off. We call the technique "peacock bundles". Experiments show the coloring clearly enhances comprehensibility of graph layouts with edge bundling. Appears in the Proceedings of the 24th International Symposium on Graph Drawing and Network Visualization (GD 2016).
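The pairwise objective described above can be sketched in miniature. This is an illustrative reconstruction, not the authors' implementation: each edge's color is reduced to a single hue coordinate (echoing the dimensionality-reduction reading), and gradient descent pushes pairs toward a target color dissimilarity, weighted by bundling strength. `bundling[i][j]` and `target[i][j]` stand in for the quantities the paper derives from layout geometry and endpoint dissimilarity.

```python
def optimize_hues(n, bundling, target, steps=500, lr=0.01):
    """Minimize sum_ij bundling[i][j] * (|h_i - h_j| - target[i][j])**2
    over one hue coordinate per edge, by plain gradient descent."""
    hues = [i / n for i in range(n)]  # spread initial hues so pairs can separate
    for _ in range(steps):
        grad = [0.0] * n
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                err = abs(hues[i] - hues[j]) - target[i][j]
                sign = 1.0 if hues[i] >= hues[j] else -1.0
                grad[i] += 2.0 * bundling[i][j] * err * sign
        hues = [h - lr * g for h, g in zip(hues, grad)]
    return hues
```

With two strongly bundled edges and a target dissimilarity of 0.3, the hues converge until their separation matches the target, which is the mechanism that lets strongly bundled edges claim the sharpest color contrasts.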

    Visualization of Large Networks Using Recursive Community Detection

    Get PDF
    Networks show relationships between people or things. For instance, a person has a social network of friends, and websites are connected through a network of hyperlinks. Networks are most commonly represented as graphs, so graph drawing becomes significant for network visualization. An effective graph drawing can quickly reveal connections and patterns within a network that would be difficult to discern without visual aid. But graph drawing becomes a challenge for large networks. Ambiguous edge crossings are inevitable in large networks with numerous nodes and edges, and large graphs often become a complicated tangle of lines. These issues greatly reduce graph readability and make analyzing complex networks an arduous task. This project aims to address the large-network visualization problem by combining recursive community detection, node-size scaling, layout formation, labeling, edge coloring, and interactivity to make large graphs more readable. Experiments are performed on five known datasets to test the effectiveness of the proposed approach. A survey is conducted to evaluate the visualization results.
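The recursive step can be illustrated as follows. This is a hypothetical reconstruction: plain label propagation stands in for whichever community detector the project actually used, communities collapse into super-nodes, and the process recurses until the graph is small enough to draw.

```python
import random

def label_propagation(adj, rounds=20, seed=0):
    """Repeatedly give each node the most frequent label among its neighbors."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    nodes = list(adj)
    for _ in range(rounds):
        rng.shuffle(nodes)
        for v in nodes:
            if adj[v]:
                counts = {}
                for u in adj[v]:
                    counts[labels[u]] = counts.get(labels[u], 0) + 1
                labels[v] = max(counts, key=counts.get)
    return labels

def collapse(adj, labels):
    """Build the coarse graph whose nodes are community labels."""
    coarse = {labels[v]: set() for v in adj}
    for v, nbrs in adj.items():
        for u in nbrs:
            if labels[u] != labels[v]:
                coarse[labels[v]].add(labels[u])
    return coarse

def recursive_coarsen(adj, max_nodes=10):
    """Return the sequence of graphs from the finest to the coarsest level."""
    levels = [adj]
    while len(levels[-1]) > max_nodes:
        labels = label_propagation(levels[-1])
        coarse = collapse(levels[-1], labels)
        if len(coarse) == len(levels[-1]):
            break  # no communities merged; stop to avoid looping forever
        levels.append(coarse)
    return levels
```

Keeping every level, rather than only the coarsest, is what lets an interactive viewer drill down from super-nodes back to the original tangle.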

    Single View Modeling and View Synthesis

    Get PDF
    This thesis develops new algorithms to produce 3D content from a single camera. Today, amateurs can use hand-held camcorders to capture and display the 3D world in 2D, using mature technologies. However, there is a strong desire to record and re-explore the 3D world in 3D. To achieve this goal, current approaches usually make use of a camera array, which suffers from tedious setup and calibration processes as well as a lack of portability, limiting its application to lab experiments. In this thesis, I try to produce 3D content using a single camera, making the process as simple as shooting pictures. This requires a new front-end capturing device rather than a regular camcorder, as well as more sophisticated algorithms. First, in order to capture highly detailed object surfaces, I designed and developed a depth camera based on a novel technique called light fall-off stereo (LFS). The LFS depth camera outputs color+depth image sequences at 30 fps, which is necessary for capturing dynamic scenes. Based on the output color+depth images, I developed a new approach that builds 3D models of dynamic and deformable objects. While the camera can only capture part of a whole object at any instant, partial surfaces are assembled into a complete 3D model by a novel warping algorithm. Inspired by the success of single-view 3D modeling, I extended my exploration into 2D-to-3D video conversion that does not use a depth camera. I developed a semi-automatic system that converts monocular videos into stereoscopic videos via view synthesis. It combines motion analysis with user interaction, aiming to transfer as much depth-inference work as possible from the user to the computer. I developed two new methods that analyze optical flow to provide additional qualitative depth constraints. The automatically extracted depth information is presented in the user interface to assist with the user's labeling work.
In summary, this thesis develops new algorithms to produce 3D content from a single camera. Depending on the input data, my algorithms can build high-fidelity 3D models of dynamic and deformable objects when depth maps are provided; otherwise, they can convert video clips into stereoscopic video.
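A minimal sketch of the inverse-square principle behind light fall-off stereo, assuming an idealized point light, a Lambertian surface, and no ambient illumination (the real LFS camera must handle all of these). Two exposures are taken with the light at distance r and at r + baseline from the surface point, so the intensity ratio reveals r.

```python
import math

def lfs_depth(i_near, i_far, baseline):
    """Solve i_near / i_far = ((r + baseline) / r) ** 2 for the depth r,
    where i_near is the pixel intensity with the closer light position
    and i_far the intensity with the light moved back by `baseline`."""
    s = math.sqrt(i_near / i_far)
    return baseline / (s - 1.0)
```

For a point at r = 2 with the light moved back by 1, the intensities are proportional to 1/4 and 1/9, and the formula recovers r = 2. Per-pixel application of this ratio is what would let such a camera emit dense depth maps at video rate.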

    A Digital Library Approach to the Reconstruction of Ancient Sunken Ships

    Get PDF
    Throughout the ages, countless shipwrecks have left behind a rich historical and technological legacy. In this context, nautical archaeologists study the remains of these boats and ships and the cultures that created and used them. Ship reconstruction can be seen as an incomplete jigsaw-puzzle problem. I therefore hypothesize that a computational approach based on digital libraries can enhance the reconstruction of a composite object (a ship) from fragmented, incomplete, and damaged pieces (timbers and other ship remains). This dissertation describes a framework for integrating textual and visual information pertaining to wooden vessels from sources in multiple languages. Linking related pieces of information relies on query expansion and improved relevance. This is accomplished with an algorithm that derives relationships from terms in a specialized glossary, combining them with properties and concepts expressed in an ontology. The main archaeological sources used in this dissertation are data generated from a 17th-century Portuguese ship, the Pepper Wreck, complemented with information obtained from other documented and studied shipwrecks. Shipbuilding treatises spanning the late 16th to the 19th centuries provide textual sources along with various illustrations. Additional visual materials come from a repository of photographs and drawings documenting numerous underwater excavations and surveys. The ontology is based on a rich database of archaeological information compiled by Richard Steffy. The original database was analyzed and transformed into an ontological representation in RDF-OWL. Its creation followed an iterative methodology that included numerous revisions by nautical archaeologists. Although this ontology does not claim to be final, it provides a robust conceptualization. The proposed approach is evaluated by measuring the usefulness of the glossary and the ontology.
Evaluation results show improvements in cross-language query expansion based on blind relevance feedback, using the glossary as the query-expansion collection. Similarly, contextualization was improved by using the ontology to categorize query results. These results suggest that related external sources can be exploited to better contextualize information in a particular domain. Given the characteristics of the materials in nautical archaeology, the framework proposed in this dissertation can be adapted and extended to other domains.
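Glossary-driven query expansion of the kind described above can be sketched as follows. The glossary entries and the bag-of-words scoring are invented for the example; the dissertation itself combines a specialized nautical glossary with an RDF-OWL ontology and blind relevance feedback.

```python
# Hypothetical two-entry glossary: English headword -> related Portuguese terms.
GLOSSARY = {
    "keel": ["quilha", "sobrequilha"],
    "frame": ["caverna"],
}

def expand_query(terms, glossary):
    """Append glossary-related terms to the original query terms."""
    expanded = list(terms)
    for t in terms:
        expanded.extend(glossary.get(t, []))
    return expanded

def retrieve(query_terms, documents):
    """Rank documents by how many query terms appear in them."""
    scored = []
    for doc_id, text in documents.items():
        words = set(text.lower().split())
        score = sum(1 for t in query_terms if t.lower() in words)
        if score:
            scored.append((score, doc_id))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]
```

A query for "keel" now matches a Portuguese document containing "quilha" even though the English term never appears in it, which is the cross-language linking effect the evaluation measures.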

    Advancing Urban Flood Resilience With Smart Water Infrastructure

    Full text link
    Advances in wireless communications and low-power electronics are enabling a new generation of smart water systems that will employ real-time sensing and control to solve our most pressing water challenges. In a future characterized by these systems, networks of sensors will detect and communicate flood events at the neighborhood scale to improve disaster response. Meanwhile, wirelessly controlled valves and pumps will coordinate reservoir releases to halt combined sewer overflows and restore water quality in urban streams. While these technologies promise to transform the field of water resources engineering, considerable knowledge gaps remain with regard to how smart water systems should be designed and operated. This dissertation presents foundational work towards building the smart water systems of the future, with a particular focus on applications to urban flooding. First, I introduce a first-of-its-kind embedded platform for real-time sensing and control of stormwater systems that will enable emergency managers to detect and respond to urban flood events in real time. Next, I introduce new methods for hydrologic data assimilation that will enable real-time geolocation of floods and water-quality hazards. Finally, I present theoretical contributions to the problem of controller placement in hydraulic networks that will help guide the design of future decentralized flood-control systems. Taken together, these contributions pave the way for adaptive stormwater infrastructure that will mitigate the impacts of urban flooding through real-time response. (PhD dissertation, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies; http://deepblue.lib.umich.edu/bitstream/2027.42/163144/1/mdbartos_1.pd)

    Annotations of maps in collaborative work at a distance

    Get PDF
    This thesis inquires into how map annotations can be used to sustain remote collaboration. Maps condense the interplay of space and communication, resolving linguistic references by linking conversational content to the actual places to which it refers. This is a mechanism people are accustomed to: when we are face-to-face, we can point to things around us. At a distance, however, we need to recreate a context that helps disambiguate what we mean. A map can help recreate this context, but other technological solutions are required to allow deictic gestures over a shared map when collaborators are not co-located. This mechanism is here termed Explicit Referencing. Several systems that allow sharing map annotations are reviewed critically, and a taxonomy is proposed to compare their features. Two field experiments were conducted to investigate the production of collaborative map annotations with mobile devices, looking for the reasons why people might want to produce these notes and how they might do so. Both studies led to very disappointing results. The reasons for this failure are attributed to the lack of a critical mass of users (a social network), the lack of useful content, and limited social awareness. More importantly, the studies identified a strong effect of the way messages were organized in the tested application, which caused participants to refrain from engaging in content-driven explorations and synchronous discussions. This last qualitative observation was refined in a controlled experiment in which remote participants had to solve a problem collaboratively, using chat tools that differed in the way a user could relate an utterance to a shared map. Results indicated that team performance is improved by Explicit Referencing mechanisms.
However, when Explicit Referencing is implemented in a way that is detrimental to the linearity of the conversation, resulting in the visual dispersion or scattering of messages, its use has negative consequences for collaborative work at a distance. Additionally, an analysis of the participants' eye movements over the map helped to ascertain the interplay of deixis and gaze in collaboration. A primary relation was found between a pair's recurrence of eye movements and their task performance. Finally, this thesis presents an algorithm that detects misunderstandings in collaborative work at a distance. It analyzes the movements of collaborators' eyes over the shared map, their utterances containing references to this workspace, and the availability of "remote" deictic gestures. The algorithm associates the distance between the gaze of the emitter and the gaze of the receiver of a message with the probability that the recipient did not understand the message.
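The final step of that algorithm, mapping gaze distance to a misunderstanding probability, can be sketched as follows. The logistic form, the pixel threshold, and the steepness are assumptions for illustration; the thesis's detector also weighs utterance content and the availability of remote deictic gestures.

```python
import math

def misunderstanding_probability(emitter_gaze, receiver_gaze,
                                 threshold=100.0, steepness=0.05):
    """Logistic mapping from gaze distance to probability: distances
    well below `threshold` (in screen pixels) yield values near 0,
    distances well above it yield values near 1. Both parameters are
    illustrative, not taken from the thesis."""
    d = math.dist(emitter_gaze, receiver_gaze)  # Euclidean gaze distance
    return 1.0 / (1.0 + math.exp(-steepness * (d - threshold)))
```

When speaker and listener fixate nearly the same map location while a reference is uttered, the score stays low; widely separated fixations push it toward 1, flagging the message for possible repair.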

    "Seeing Like A Rover": Images In Interaction On The Mars Exploration Rover Mission

    Full text link
    This dissertation analyzes the use of images on the Mars Exploration Rover mission both to conduct scientific investigations of Mars and to plan robotic operations on its surface. Drawing upon three years of fieldwork with the Mars Rover team, including ethnography, participant observation, and interviews, the dissertation contributes to the literature in Science and Technology Studies by advancing the analytical framework of "drawing as": a practical corollary to Wittgenstein's and Hanson's concept of "seeing as" that allows the analyst to explore the work of producing scientific images, images that draw natural objects as analytical objects to enable future representations and interactions. Further, images of Mars betray the social organization of the mission team and its commitment to consensus operations. By observing how images of Mars are drawn as trustworthy documents, drawn as a hypothesis or as a record of collective agreement, drawn as a map for the Rover, and drawn as a public space, the dissertation demonstrates how interactions with and around Mars Rover images support this political orientation, making the Rover's body a body politic.