11 research outputs found

    Robust network computation

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 91-98). In this thesis, we present various models of distributed computation and algorithms for these models. The underlying theme is to come up with fast algorithms that can tolerate faults in the underlying network. We begin with the classical message-passing model of computation, surveying many known results. We give a new, universally optimal, edge-biconnectivity algorithm for the classical model. We also give a near-optimal sub-linear algorithm for identifying bridges, when all nodes are activated simultaneously. After discussing some ways in which the classical model is unrealistic, we survey known techniques for adapting the classical model to the real world. We describe a new balancing model of computation. The intent is that algorithms in this model should be automatically fault-tolerant. Existing algorithms that can be expressed in this model are discussed, including ones for clustering, maximum flow, and synchronization. We discuss the use of agents in our model, and give new agent-based algorithms for census and biconnectivity. Inspired by the balancing model, we look at two problems in more depth. First, we give matching upper and lower bounds on the time complexity of the census algorithm, and we show how the census algorithm can be used to name nodes uniquely in a faulty network. Second, we consider using discrete harmonic functions as a computational tool. These functions are a natural exemplar of the balancing model. We prove new results concerning the stability and convergence of discrete harmonic functions, and describe a method which we call Eulerization for speeding up convergence. by David Pritchard. M.Eng.
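
    For intuition about the discrete harmonic functions mentioned above: on a graph with fixed boundary values, a discrete harmonic function assigns every other node the average of its neighbors' values, and it can be approximated by repeated local averaging. The Python sketch below shows this generic balancing computation; it is not the thesis's algorithm (nor its Eulerization speedup), and the graph, tolerance, and function name are illustrative assumptions.

    # Illustrative sketch (not the thesis's algorithm): approximate a discrete
    # harmonic function by repeated neighbor averaging. Nodes in `boundary`
    # keep fixed values; every other node relaxes toward the mean of its
    # neighbors until the largest per-sweep change falls below `tol`.
    def discrete_harmonic(adj, boundary, tol=1e-9, max_sweeps=100_000):
        """adj: dict node -> list of neighbors; boundary: dict node -> fixed value."""
        values = {v: boundary.get(v, 0.0) for v in adj}
        for _ in range(max_sweeps):
            delta = 0.0
            for v, nbrs in adj.items():
                if v in boundary or not nbrs:
                    continue
                new = sum(values[u] for u in nbrs) / len(nbrs)
                delta = max(delta, abs(new - values[v]))
                values[v] = new
            if delta < tol:
                break
        return values

    # Example: path a-b-c-d with endpoints fixed at 0 and 1; the interior nodes
    # converge to 1/3 and 2/3, the discrete harmonic interpolation.
    if __name__ == "__main__":
        adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
        print(discrete_harmonic(adj, {"a": 0.0, "d": 1.0}))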

    An Object-Oriented Algorithmic Laboratory for Ordering Sparse Matrices

    We focus on two known NP-hard problems that have applications in sparse matrix computations: the envelope/wavefront reduction problem and the fill reduction problem. Envelope/wavefront-reducing orderings have a wide range of applications, including profile and frontal solvers, incomplete factorization preconditioning, graph reordering for cache performance, gene sequencing, and spatial databases. Fill-reducing orderings are generally limited to, but an inextricable part of, sparse matrix factorization. Our major contribution to this field is the design of new and improved heuristics for these NP-hard problems and their efficient implementation in a robust, cross-platform, object-oriented software package. In this body of research, we (1) examine current ordering algorithms, analyze their asymptotic complexity, and characterize their behavior in model problems, (2) introduce new and improved algorithms that address deficiencies found in previous heuristics, (3) implement an object-oriented library of these algorithms in a robust, modular fashion without significant loss of efficiency, and (4) extend our algorithms and software to address both generalized and constrained problems. We stress that the major contribution is the algorithms and the implementation together, the whole being greater than the sum of its parts. The initial motivation for implementing our algorithms in object-oriented software was to manage the inherent complexity. During our research, we realized that the object-oriented implementation enabled new possibilities for augmented algorithms that would not have been as natural to generalize from a procedural implementation. Some extensions are constructed from a family of related algorithmic components, thereby creating a poly-algorithm that can dynamically adapt its strategy to the properties of the specific problem instance. Other algorithms are tailored for special constraints by aggregating algorithmic components and having them collaboratively generate the global ordering. Our software laboratory, “Spindle,” implements state-of-the-art ordering algorithms for sparse matrices and graphs. We have used it to examine and augment the behavior of existing algorithms and to test new ones. Its 40,000+ lines of C++ code include a base library, test drivers, sample applications, and interfaces to C, C++, Matlab, and PETSc. Spindle is freely available and can be built on a variety of UNIX platforms as well as Windows NT.
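
    As a rough illustration of what an envelope/wavefront-reducing ordering does, the sketch below implements reverse Cuthill-McKee, a classic baseline heuristic for this problem. It is not one of Spindle's algorithms; the adjacency format and function name are assumptions made for the example.

    # Baseline illustration (not Spindle): reverse Cuthill-McKee, the classic
    # envelope/wavefront-reducing ordering. The input is the adjacency structure
    # of a symmetric sparse matrix; the returned permutation tends to cluster
    # nonzeros near the diagonal.
    from collections import deque

    def reverse_cuthill_mckee(adj):
        """adj: dict node -> iterable of neighbors. Returns a node ordering."""
        order, visited = [], set()
        # Handle every connected component, starting each from a minimum-degree node.
        for start in sorted(adj, key=lambda v: len(adj[v])):
            if start in visited:
                continue
            visited.add(start)
            queue = deque([start])
            while queue:
                v = queue.popleft()
                order.append(v)
                # Enqueue unvisited neighbors in order of increasing degree.
                for u in sorted((u for u in adj[v] if u not in visited),
                                key=lambda u: len(adj[u])):
                    visited.add(u)
                    queue.append(u)
        order.reverse()  # reversing the Cuthill-McKee order usually tightens the envelope further
        return order

    # Example: a small star (an "arrowhead" matrix pattern).
    if __name__ == "__main__":
        adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
        print(reverse_cuthill_mckee(adj))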

    Graph-based Analysis of Dynamic Systems

    The analysis of dynamic systems provides insights into their time-dependent characteristics. This enables us to monitor, evaluate, and improve systems from various areas. They are often represented as graphs that model the system's components and their relations. The analysis of the resulting dynamic graphs yields insights into the system's underlying structure, its characteristics, and the properties of single components. The interpretation of these results can help us understand how a system works and how parameters influence its performance. This knowledge supports the design of new systems and the improvement of existing ones. The main issue in this scenario is the performance of analyzing the dynamic graph to obtain relevant properties. While various approaches have been developed to analyze dynamic graphs, it is not always clear which one performs best for the analysis of a specific graph. The runtime also depends on many other factors, including the size and topology of the graph, the frequency of changes, and the data structures used to represent the graph in memory. While the benefits and drawbacks of many data structures are well known, their runtime is hard to predict when they are used to represent dynamic graphs. Hence, tools are required to benchmark and compare different algorithms for the computation of graph properties and different data structures for the representation of dynamic graphs in memory. Based on deeper insights into their performance, new algorithms can be developed and efficient data structures can be selected. In this thesis, we present four contributions to tackle these problems: a benchmarking framework for dynamic graph analysis, novel algorithms for the efficient analysis of dynamic graphs, an approach for the parallelization of dynamic graph analysis, and a novel paradigm to select and adapt graph data structures. In addition, we present three use cases from the areas of social, computer, and biological networks to illustrate the insights provided by their graph-based analysis. We present a new benchmarking framework for the analysis of dynamic graphs, the Dynamic Network Analyzer (DNA). It provides tools to benchmark and compare different algorithms for the analysis of dynamic graphs as well as the data structures used to represent them in memory. DNA supports the development of new algorithms and the automatic verification of their results. Its visualization component provides different ways to represent dynamic graphs and the results of their analysis. We introduce three new stream-based algorithms for the analysis of dynamic graphs. We evaluate their performance on synthetic as well as real-world dynamic graphs and compare their runtimes to those of snapshot-based algorithms. Our results show substantial performance gains for all three algorithms. The new stream-based algorithm StreaM_k, which counts the frequencies of k-vertex motifs, achieves speedups of up to 19,043x for synthetic and 2,882x for real-world datasets. We present a novel approach for the distributed processing of dynamic graphs, called parallel Dynamic Graph Analysis (pDNA). To analyze a dynamic graph, the work is distributed by a partitioner that creates subgraphs and assigns them to workers. The workers compute the properties of their respective subgraphs using standard algorithms, and a collator component merges their results into the properties of the original graph.
    We evaluate the performance of pDNA for the computation of five graph properties on two real-world dynamic graphs with up to 32 workers. Our approach achieves substantial speedups, especially for the analysis of complex graph measures. We introduce two novel approaches for the selection of efficient graph data structures. The compile-time approach estimates the workload of an analysis after an initial profiling phase and recommends efficient data structures based on benchmarking results. It achieves speedups of up to 5.4x over baseline data structure configurations for the analysis of real-world dynamic graphs. The run-time approach monitors the workload during analysis and exchanges the graph representation if it finds a configuration that promises to be more efficient for the current workload. Compared to baseline configurations, it achieves speedups of up to 7.3x for the analysis of a synthetic workload. Our contributions provide novel approaches for the efficient analysis of dynamic graphs and tools to further investigate the trade-offs between the different factors that influence their performance. Contents: 1 Introduction, 2 Notation and Terminology, 3 Related Work, 4 DNA - Dynamic Network Analyzer, 5 Algorithms, 6 Parallel Dynamic Network Analysis, 7 Selection of Efficient Graph Data Structures, 8 Use Cases, 9 Conclusion; Appendices: A DNA - Dynamic Network Analyzer, B Algorithms, C Selection of Efficient Graph Data Structures, D Parallel Dynamic Network Analysis, E Graph-based Intrusion Detection System, F Molecular Dynamics
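
    To make the stream-based idea concrete (maintain a property under graph updates instead of recomputing it on every snapshot), the sketch below keeps a running triangle count as edges are added and removed. It is a deliberately simplified stand-in, not the StreaM_k motif-counting algorithm or any DNA component; the class and method names are assumptions.

    # Minimal stream-based sketch (not StreaM_k): maintain a triangle count under
    # edge insertions and removals. Adding or removing edge (u, v) changes the
    # count by |N(u) & N(v)|, so each update costs roughly the smaller degree
    # instead of a full recomputation.
    class TriangleStream:
        def __init__(self):
            self.adj = {}          # node -> set of neighbors
            self.triangles = 0     # running triangle count

        def _common(self, u, v):
            return len(self.adj.get(u, set()) & self.adj.get(v, set()))

        def add_edge(self, u, v):
            self.triangles += self._common(u, v)
            self.adj.setdefault(u, set()).add(v)
            self.adj.setdefault(v, set()).add(u)

        def remove_edge(self, u, v):
            self.adj[u].discard(v)
            self.adj[v].discard(u)
            self.triangles -= self._common(u, v)

    # Example: the count is updated per change, never recomputed from scratch.
    if __name__ == "__main__":
        g = TriangleStream()
        for e in [(1, 2), (2, 3), (1, 3), (3, 4), (1, 4)]:
            g.add_edge(*e)
        print(g.triangles)  # 2 triangles: {1,2,3} and {1,3,4}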

    Information extraction with network centralities : finding rumor sources, measuring influence, and learning community structure

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 193-197). Network centrality is a function that takes a network graph as input and assigns a score to each node. In this thesis, we investigate the potential of network centralities for addressing inference questions arising in the context of large-scale networked data. These questions are particularly challenging because they require algorithms which are extremely fast and simple so as to be scalable, while at the same time they must perform well. It is this tension between scalability and performance that this thesis aims to resolve by using appropriate network centralities. Specifically, we solve three important network inference problems using network centrality: finding rumor sources, measuring influence, and learning community structure. We develop a new network centrality called rumor centrality to find rumor sources in networks. We give a linear time algorithm for calculating rumor centrality, demonstrating its practicality for large networks. Rumor centrality is proven to be an exact maximum likelihood rumor source estimator for random regular graphs (under an appropriate probabilistic rumor spreading model). For a wide class of networks and rumor spreading models, we prove that it is an accurate estimator. To establish the universality of rumor centrality as a source estimator, we utilize techniques from the classical theory of generalized Polya's urns and branching processes. Next we use rumor centrality to measure influence in Twitter. We develop an influence score based on rumor centrality which can be calculated in linear time. To justify the use of rumor centrality as the influence score, we use it to develop a new network growth model called topological network growth. We find that this model accurately reproduces two important features observed empirically in Twitter retweet networks: a power-law degree distribution and a superstar node with very high degree. Using these results, we argue that rumor centrality is correctly quantifying the influence of users on Twitter. These scores form the basis of a dynamic influence tracking engine called Trumor which allows one to measure the influence of users in Twitter or more generally in any networked data. Finally we investigate learning the community structure of a network. Using arguments based on social interactions, we determine that the network centrality known as degree centrality can be used to detect communities. We use this to develop the leader-follower algorithm (LFA) which can learn the overlapping community structure in networks. The LFA runtime is linear in the network size. It is also non-parametric, in the sense that it can learn both the number and size of communities naturally from the network structure without requiring any input parameters. We prove that it is very robust and learns accurate community structure for a broad class of networks. We find that the LFA does a better job of learning community structure on real social and biological networks than more common algorithms such as spectral clustering. by Tauhid R. Zaman. Ph.D.
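
    For a concrete sense of rumor centrality on a tree: it counts the spreading orders that could have started at a candidate source, which by a standard combinatorial identity equals n! divided by the product of all subtree sizes when the tree is rooted at that candidate. The sketch below evaluates this formula naively by re-rooting the tree at each node; it only illustrates the definition and is not the thesis's linear-time message-passing estimator.

    # Illustrative sketch: rumor centrality of node v in a tree via the standard
    # formula R(v) = n! / prod_u T_u^v, where T_u^v is the size of the subtree
    # rooted at u when the tree is rooted at v. This naive version re-roots the
    # tree for every candidate source instead of using message passing.
    from math import factorial

    def subtree_sizes(adj, root):
        sizes, order, parent = {}, [], {root: None}
        stack = [root]
        while stack:                      # iterative DFS; children appear after parents
            v = stack.pop()
            order.append(v)
            for u in adj[v]:
                if u != parent[v]:
                    parent[u] = v
                    stack.append(u)
        for v in reversed(order):         # accumulate sizes bottom-up
            sizes[v] = 1 + sum(sizes[u] for u in adj[v] if u != parent[v])
        return sizes

    def rumor_centrality(adj, v):
        prod = 1
        for t in subtree_sizes(adj, v).values():
            prod *= t
        return factorial(len(adj)) // prod

    # Example: on a star, the hub has the highest rumor centrality, matching the
    # intuition that it is the likeliest source of a spread reaching all leaves.
    if __name__ == "__main__":
        star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
        print({v: rumor_centrality(star, v) for v in star})  # {0: 6, 1: 2, 2: 2, 3: 2}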

    Bowdoin Orient v.113, no.1-25 (1983-1984)


    Proceedings of the Fifth International Mobile Satellite Conference 1997

    Satellite-based mobile communications systems provide voice and data communications to users over a vast geographic area. The users may communicate via mobile or hand-held terminals, which may also provide access to terrestrial communications services. While previous International Mobile Satellite Conferences have concentrated on technical advances and the increasing worldwide commercial activities, this conference focuses on the next generation of mobile satellite services. The approximately 80 papers included here cover sessions in the following areas: networking and protocols; code division multiple access technologies; demand, economics and technology issues; current and planned systems; propagation; terminal technology; modulation and coding advances; spacecraft technology; advanced systems; and applications and experiments.

    Bimodal Audiovisual Perception in Interactive Application Systems of Moderate Complexity

    This dissertation deals with aspects of quality perception in interactive audiovisual application systems of moderate complexity, as defined, for example, in the MPEG-4 standard. Because the available computing power in these systems is limited, it is decisive to know which factors influence perceived quality; only then can computing power be distributed effectively and efficiently for the simulation and display of audiovisual 3D scenes. Whereas quality factors for unimodal auditory and visual stimuli are well known and corresponding models of perception have been successfully devised from this knowledge, the same is not true for bimodal audiovisual perception. For the latter, it is only known that some interdependency between auditory and visual perception exists; the exact mechanisms of human audiovisual perception have not been described. It is assumed that interaction with an application or scene has a major influence on perceived overall quality. The goal of this work was to devise a system capable of performing subjective audiovisual assessments in the given context in a largely automated way, and to use it to collect first evidence on audiovisual interdependency and on the influence of interaction upon perception. The work therefore comprised three fields of activity: creating a test bench based on the available, but (regarding audio functionality) somewhat restricted, MPEG-4 player; establishing methods and framework requirements that ensure comparability and reproducibility of audiovisual assessments and results; and carrying out a series of coordinated experiments, including the analysis and interpretation of the collected data. An object-based, modular audio rendering engine was co-designed and co-implemented that performs simple room-acoustic simulations in real time based on the MPEG-4 scene description paradigm. Apart from the MPEG-4 player, the test bench consists of a haptic input device with which test subjects enter their quality ratings and a logging tool that records all relevant events during an assessment session. The collected data can be conveniently exported for further analysis with standard statistics tools. A thorough analysis of well-established test methods and recommendations for unimodal subjective assessments was performed to determine whether they transfer easily to the bimodal audiovisual case. It became evident that, given the limited knowledge about the underlying perceptual processes, a novel categorization of experiments according to their goals could help organize research in the field. Furthermore, a number of influencing factors were identified that govern bimodal perception in the given context. The perceptual experiments performed with the devised system verified its functionality and ease of use. In addition, first indications of the role of interaction in perceived overall quality were collected: interaction in the auditory modality reduces a subject's ability to rate audio quality correctly, whereas visually based (cross-modal) interaction does not necessarily produce this effect.
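
    As a hypothetical illustration of the logging component described above (timestamped quality ratings and scene events collected during a session, then exported for statistical analysis), the following sketch shows one possible shape of such a tool; the class, field names, and CSV layout are assumptions, not the dissertation's actual implementation.

    # Hypothetical sketch of the session-logging idea (not the actual MPEG-4
    # test-bench tool): timestamp every quality rating and scene event during an
    # assessment session and export them as CSV for later statistical analysis.
    import csv
    import time

    class SessionLogger:
        def __init__(self, subject_id):
            self.subject_id = subject_id
            self.events = []                      # (timestamp, kind, payload)

        def log(self, kind, payload):
            self.events.append((time.time(), kind, payload))

        def export(self, path):
            with open(path, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(["subject", "timestamp", "event", "value"])
                for ts, kind, payload in self.events:
                    writer.writerow([self.subject_id, f"{ts:.3f}", kind, payload])

    # Example session with ratings entered via a (hypothetical) input device.
    if __name__ == "__main__":
        log = SessionLogger("subject-01")
        log.log("scene_start", "audiovisual_scene_A")
        log.log("rating", 4)                      # quality rating entered by the subject
        log.log("interaction", "moved_listener")
        log.log("rating", 3)
        log.export("session_subject-01.csv")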