
    Reliability of Partial k-tree Networks

    133 pages
    Recent developments in graph theory have shown the importance of the class of partial k-trees. This large class of graphs admits several algorithm design methodologies that render efficient solutions for a large number of problems that are inherently difficult for general graphs. In this thesis we develop such algorithms to solve a variety of reliability problems on partial k-tree networks with node and edge failures. We also investigate the problem of designing uniformly optimal 2-trees with respect to the 2-terminal reliability measure. We model a communication network as a graph in which nodes represent communication sites and edges represent bidirectional communication lines. Each component (node or edge) has an associated probability of operation. Components of the network are either in an operational or a failed state, and their failures are statistically independent. Under this model, the reliability of a network G is defined as the probability that a given connectivity condition holds. The ℓ-terminal reliability of G, Relℓ(G), is the probability that any two of a given set of ℓ nodes of G can communicate. Robustness of a network to withstand failures can be expressed through network resilience, Res(G), which is the expected number of distinct pairs of nodes that can communicate. Computing these and other similarly defined measures is #P-hard for general networks. We use a dynamic programming paradigm to design linear-time algorithms that compute Relℓ(G), Res(G), and some other reliability and resilience measures of a partial k-tree network given with an embedding in a k-tree (for a fixed k). Reliability problems on directed networks are also inherently difficult. We present efficient algorithms for directed versions of typical reliability and resilience problems restricted to partial k-tree networks without node failures, and then reduce the reliability problems allowing both node and edge failures to those versions. Finally, we study 2-terminal reliability aspects of 2-trees. We characterize uniformly optimal 2-trees, 2-paths, and 2-caterpillars with respect to Rel2, and identify local graph operations that improve the 2-terminal reliability of 2-tree networks.
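
    To give a flavor of why such computations become tractable on restricted graph classes, the sketch below computes 2-terminal reliability by series-parallel reductions; series-parallel graphs are exactly the partial 2-trees. This is a minimal illustration assuming perfectly reliable nodes and independent edge failures, and the tuple-based network encoding is our own, not the thesis's data structure; the thesis's dynamic programming over a k-tree embedding is substantially more general.

        # 2-terminal reliability of a series-parallel network (a partial 2-tree)
        # with perfect nodes and independent edge failures. Networks are nested
        # tuples: ('edge', p), ('series', a, b), ('parallel', a, b).

        def rel2(net):
            """Probability that the two terminals of `net` can communicate."""
            kind = net[0]
            if kind == 'edge':
                return net[1]                    # a single link operates with probability p
            _, a, b = net
            ra, rb = rel2(a), rel2(b)
            if kind == 'series':
                return ra * rb                   # both halves must operate
            return 1 - (1 - ra) * (1 - rb)       # 'parallel': at least one branch operates

        # Two parallel paths of two links each, every link 90% reliable:
        net = ('parallel',
               ('series', ('edge', 0.9), ('edge', 0.9)),
               ('series', ('edge', 0.9), ('edge', 0.9)))
        print(rel2(net))  # 0.9639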

    Pseudo-random graphs

    Random graphs have proven to be one of the most important and fruitful concepts in modern Combinatorics and Theoretical Computer Science. Besides being a fascinating study subject in their own right, they serve as essential instruments in proving an enormous number of combinatorial statements, making their role quite hard to overestimate. Their tremendous success serves as a natural motivation for the following very general and deep informal questions: what are the essential properties of random graphs? How can one tell when a given graph behaves like a random graph? How can one deterministically create graphs that look random-like? This leads us to the concept of pseudo-random graphs, and the aim of this survey is to provide a systematic treatment of this concept. Comment: 50 pages.
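
    One standard quantitative answer to "when does a graph behave like a random one?" is spectral: in an (n, d, λ)-graph, a small second eigenvalue certifies that edges are distributed random-like. As a minimal numerical sketch (our own illustration; the survey treats this theory in depth), the Paley graph is a classic explicit pseudo-random graph:

        import numpy as np

        q = 13                                   # prime with q ≡ 1 (mod 4)
        residues = {(x * x) % q for x in range(1, q)}

        # Paley graph: vertices 0..q-1, i ~ j iff i - j is a quadratic residue mod q.
        A = np.array([[1.0 if i != j and (i - j) % q in residues else 0.0
                       for j in range(q)] for i in range(q)])

        eig = np.sort(np.linalg.eigvalsh(A))[::-1]
        d, lam = eig[0], max(abs(eig[1]), abs(eig[-1]))
        # d = (q-1)/2 = 6; lam = (1 + sqrt(q))/2 ≈ 2.30, i.e. lam << d,
        # the hallmark of a strongly pseudo-random ("random-like") graph.
        print(d, lam)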

    Traffic Network Control from Temporal Logic Specifications

    We propose a framework for generating a signal control policy for a traffic network of signalized intersections to accomplish control objectives expressible using linear temporal logic. By applying techniques from model checking and formal methods, we obtain a correct-by-construction controller that is guaranteed to satisfy complex specifications. To apply these tools, we identify and exploit structural properties particular to traffic networks that allow for efficient computation of a finite state abstraction. In particular, traffic networks exhibit a componentwise monotonicity property which allows reach set computations that scale linearly with the dimension of the continuous state space.
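
    As a toy illustration of the monotonicity idea (our own example, not the paper's traffic model): for a componentwise monotone update map, the image of a box of states is bounded by the images of its two corner points, so propagating a reach set costs only two map evaluations per step, linear in the state dimension. The link capacities `c` and demand bounds below are made-up parameters.

        import numpy as np

        # Toy two-link traffic model (a monotone system): each step, link i sends
        # min(c[i], x[i]) vehicles downstream; link 0 also receives demand u.
        c = np.array([10.0, 8.0])

        def step(x, u):
            out = np.minimum(c, x)               # saturated outflows
            return np.array([x[0] - out[0] + u,
                             x[1] + out[0] - out[1]])

        # For a monotone map, the image of the box [lo, hi] lies inside the box
        # [step(lo, u_min), step(hi, u_max)]: two evaluations per step, so reach
        # set propagation scales linearly with the state dimension.
        lo, hi = np.array([0.0, 0.0]), np.array([20.0, 15.0])
        for _ in range(5):
            lo, hi = step(lo, 2.0), step(hi, 4.0)
        print(lo, hi)   # componentwise bounds on all states reachable in 5 steps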

    High-reliability architectures for networks under stress

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (p. 157-165). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections.
    In this thesis, we develop a methodology for architecting high-reliability communication networks. Previous results in the network reliability field are mostly theoretical in nature, with little immediate applicability to the design of real networks. We bring together these contributions and develop new results and insights which are of value in designing networks that meet prescribed levels of reliability. Furthermore, most existing results assume that component failures are statistically independent in nature. We take initial steps in developing a methodology for the design of networks with statistically dependent link failures. We also study the architectures of networks under extreme stress.
    by Guy E. Weichenberg, S.M.

    Topological optimization of fault-tolerant networks meeting reliability constraints

    The relevant entities in a network are its nodes and the links between them. In general, the goal is to achieve reliable communication between different pairs of nodes. Examples of applications are telephone services, data communication, transportation systems, computer systems, electric networks and control systems. In most contexts, the predominant criterion for the design of a reliable and survivable system is minimum cost, so an attractive research topic is minimum-cost topological design meeting a reliability threshold. Even though cost has been the primary factor in network design, network reliability has recently grown in relevance. With the progress of Fiber-To-The-Home (FTTH) services for the backbone design of most current networks, the rapid development of network communication technologies, and the explosive increase of applications over the Internet infrastructure, network reliability is of supreme importance not only for traditional communication systems but also for defense, business and energy, and for emergent fields such as trusted computing, cloud computing, the Internet of Things (IoT) and Next Generation Networks (NGN), where fault tolerance is critical. We can distinguish two main problems in the analysis and design of network topologies. First, robustness is usually achieved through multi-path generation: we require a certain number of node-disjoint paths between distinguished nodes, called terminals. The second problem is to meet a minimum-reliability requirement in a hostile environment, where both nodes and links may fail. The two problems are strongly related: sometimes the minimum-cost topology already meets the reliability threshold, and otherwise it must be discarded, which makes the design challenging. This thesis deals with a topological optimization problem meeting reliability constraints. The Generalized Steiner Problem with Node-Connectivity Constraints and Hostile Reliability (GSP-NCHR) is introduced as an extension of the well-known Generalized Steiner Problem (GSP). Since the GSP-NCHR subsumes the GSP, it belongs to the class of NP-hard problems; a full chapter is dedicated to the hardness of the GSP-NCHR and to an analysis of particular sub-problems. Here, the GSP-NCHR is addressed approximately: our goal is to meet the topological requirements intrinsic to the GSP-NCHR and then test whether the resulting topology meets a minimum reliability constraint. Consequently, a hybrid heuristic is proposed that combines a greedy randomized construction phase with a Variable Neighborhood Search (VNS) in a second phase. VNS is a powerful method that combines local searches over different neighborhood structures, and it has been used to provide good solutions for several hard combinatorial optimization problems. Since reliability evaluation in the hostile model also belongs to the class of NP-hard problems, a pointwise reliability estimation was adopted, using the Recursive Variance Reduction (RVR) method, as exact reliability evaluation is prohibitive for large networks. The experimental analysis was carried out on a wide family of instances adapted from the Traveling Salesman Problem Library (TSPLIB), covering heterogeneous networks with different characteristics and topologies, including up to 400 nodes. The numerical results show acceptable CPU times and locally optimal solutions of good quality that meet the network reliability constraints.
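
    For intuition about the reliability-evaluation subproblem, the sketch below estimates hostile-model reliability (independent node and edge failures, perfect terminals) by crude Monte Carlo. This is only an illustration under our own assumptions: the thesis uses the Recursive Variance Reduction (RVR) estimator, which achieves far lower variance, and the graph and probabilities here are made up.

        import random

        def mc_reliability(nodes, edges, p_node, p_edge, terminals, samples=20000):
            """Crude Monte Carlo estimate of the probability that all terminals
            are connected when non-terminal nodes operate with prob p_node and
            edges with prob p_edge (terminals assumed perfect)."""
            hits = 0
            for _ in range(samples):
                up = {v for v in nodes if v in terminals or random.random() < p_node}
                alive = [(u, v) for (u, v) in edges
                         if u in up and v in up and random.random() < p_edge]
                parent = {v: v for v in up}      # union-find over surviving nodes
                def find(v):
                    while parent[v] != v:
                        parent[v] = parent[parent[v]]
                        v = parent[v]
                    return v
                for u, v in alive:
                    parent[find(u)] = find(v)
                hits += (len({find(t) for t in terminals}) == 1)
            return hits / samples

        nodes = range(6)
        edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]
        print(mc_reliability(nodes, edges, 0.95, 0.9, {0, 3}))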

    A survey of distributed data aggregation algorithms

    Distributed data aggregation is an important task, allowing the decentralized determination of meaningful global properties, which can then be used to direct the execution of other applications. The resulting values are derived by the distributed computation of functions like COUNT, SUM, and AVERAGE. Some application examples deal with the determination of the network size, total storage capacity, average load, majorities and many others. In the last decade, many different approaches have been proposed, with different trade-offs in terms of accuracy, reliability, message and time complexity. Due to the considerable amount and variety of aggregation algorithms, it can be difficult and time consuming to determine which techniques will be more appropriate to use in specific settings, justifying the existence of a survey to aid in this task. This work reviews the state of the art on distributed data aggregation algorithms, providing three main contributions. First, it formally defines the concept of aggregation, characterizing the different types of aggregation functions. Second, it succinctly describes the main aggregation techniques, organizing them in a taxonomy. Finally, it provides some guidelines toward the selection and use of the most relevant techniques, summarizing their principal characteristics.
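
    One well-known technique in this space is push-sum gossip averaging (our choice of illustration; the survey's taxonomy covers many alternatives). Each node keeps a (sum, weight) pair, halves it every round, and ships one half to a random peer; every local ratio sum/weight converges to the global AVERAGE. A minimal simulation, assuming for simplicity that any node can contact any other:

        import random

        values = [10.0, 20.0, 30.0, 40.0]        # local inputs; true average is 25
        n = len(values)
        s, w = values[:], [1.0] * n              # per-node (sum, weight) pairs

        for _ in range(60):
            ds, dw = [0.0] * n, [0.0] * n
            for i in range(n):
                j = random.randrange(n)          # random peer (complete graph)
                s[i] /= 2; w[i] /= 2             # keep one half...
                ds[j] += s[i]; dw[j] += w[i]     # ...send the other half to j
            for i in range(n):
                s[i] += ds[i]; w[i] += dw[i]

        # Mass conservation keeps sum(s) and sum(w) constant, so the ratios
        # converge to the true average at every node.
        print([round(si / wi, 3) for si, wi in zip(s, w)])  # all ≈ 25.0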

    Better predictions when models are wrong or underspecified

    Many statistical methods rely on models of reality in order to learn from data and to make predictions about future data. By necessity, these models usually do not match reality exactly, but are either wrong (none of the hypotheses in the model provides an accurate description of reality) or underspecified (the hypotheses in the model describe only part of the data). In this thesis, we discuss three scenarios involving models that are wrong or underspecified. In each case, we find that standard statistical methods may fail, sometimes dramatically, and present different methods that continue to perform well even if the models are wrong or underspecified. The first two of these scenarios involve regression problems and investigate AIC (Akaike's Information Criterion) and Bayesian statistics. The third scenario has the famous Monty Hall problem as a special case, and considers the question how we can update our belief about an unknown outcome given new evidence when the precise relation between outcome and evidence is unknown.
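
    Since the Monty Hall problem is the thesis's flagship special case, here is a quick simulation (our own sketch, with the host following one fixed protocol: always opening a door that is neither the contestant's pick nor the winning one). The thesis's point is precisely that when the host's protocol, i.e. the relation between outcome and evidence, is unknown, the familiar 2/3 answer is no longer automatic.

        import random

        # Monty Hall: switching wins exactly when the initial pick was wrong,
        # which happens with probability 2/3 under this host protocol.
        def play(switch, doors=3):
            car = random.randrange(doors)
            pick = random.randrange(doors)
            # host opens a door that is neither the pick nor the car
            opened = next(d for d in range(doors) if d != pick and d != car)
            if switch:
                pick = next(d for d in range(doors) if d != pick and d != opened)
            return pick == car

        trials = 100_000
        print(sum(play(True) for _ in range(trials)) / trials)   # ≈ 2/3
        print(sum(play(False) for _ in range(trials)) / trials)  # ≈ 1/3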