19 research outputs found

    On the acyclic disconnection and the girth

    The acyclic disconnection, $\overrightarrow{\omega}(D)$, of a digraph $D$ is the maximum number of connected components of the underlying graph of $D - A(D^*)$, where $D^*$ is an acyclic subdigraph of $D$. We prove that $\overrightarrow{\omega}(D) \geq g - 1$ for every strongly connected digraph with girth $g \geq 4$, and we show that $\overrightarrow{\omega}(D) = g - 1$ if and only if $D \cong \overrightarrow{C}_g$ for $g \geq 5$. We also characterize the digraphs that satisfy $\overrightarrow{\omega}(D) = g - 1$ for $g = 4$ in certain classes of digraphs. Finally, we define a family of bipartite tournaments based on projective planes and we prove that their acyclic disconnection is equal to 3. These bipartite tournaments are therefore counterexamples to the conjecture that $\overrightarrow{\omega}(T) = 3$ if and only if $T \cong \overrightarrow{C}_4$, posed for bipartite tournaments by Figueroa et al. (2012).
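
    To make the definition concrete, the following is a small brute-force sketch (an illustration of my own that assumes the networkx library; it is not code from the paper): it enumerates the acyclic arc subsets $D^*$, deletes their arcs, and counts the components of the underlying undirected graph, as in the definition above.

        from itertools import combinations
        import networkx as nx

        def acyclic_disconnection(D):
            """Maximum number of components of the underlying graph of D - A(D*), D* acyclic."""
            arcs = list(D.edges())
            best = 1
            for r in range(len(arcs) + 1):
                for subset in combinations(arcs, r):
                    candidate = nx.DiGraph(list(subset))            # candidate subdigraph D*
                    if not nx.is_directed_acyclic_graph(candidate):
                        continue                                    # D* must be acyclic
                    underlying = nx.Graph()                         # underlying graph of D - A(D*)
                    underlying.add_nodes_from(D.nodes())
                    underlying.add_edges_from(a for a in arcs if a not in subset)
                    best = max(best, nx.number_connected_components(underlying))
            return best

        # The directed 4-cycle has girth g = 4, and its acyclic disconnection is g - 1 = 3.
        C4 = nx.DiGraph([(0, 1), (1, 2), (2, 3), (3, 0)])
        print(acyclic_disconnection(C4))  # 3

    The enumeration is exponential in the number of arcs, so it is only useful for checking small examples such as the directed cycles discussed above.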

    Bounds on the k-restricted arc connectivity of some bipartite tournaments

    For $k \geq 2$, a strongly connected digraph $D$ is called $\lambda_k$-connected if it contains a set of arcs $W$ such that $D - W$ contains at least $k$ non-trivial strong components. The $k$-restricted arc connectivity of a digraph $D$, defined by Volkmann, is the minimum cardinality of such a set $W$. In this paper we bound the $k$-restricted arc connectivity for a family of bipartite tournaments $T$ called projective bipartite tournaments. We also introduce a family of “good” bipartite oriented digraphs. For a good bipartite tournament $T$ we prove that, if the minimum degree of $T$ is large enough, its $k$-restricted arc connectivity can be bounded in terms of the order $N$ of the tournament. As a consequence, we derive better bounds for circulant bipartite tournaments.
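
    In symbols (the notation $\lambda_k$ is an assumption here, not a quotation from the paper), the $k$-restricted arc connectivity just described is

        $\lambda_k(D) = \min\{\, |W| : W \subseteq A(D),\ D - W \text{ has at least } k \text{ non-trivial strong components} \,\}.$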

    High-reliability architectures for networks under stress

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (p. 157-165). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. In this thesis, we develop a methodology for architecting high-reliability communication networks. Previous results in the network reliability field are mostly theoretical in nature, with little immediate applicability to the design of real networks. We bring together these contributions and develop new results and insights which are of value in designing networks that meet prescribed levels of reliability. Furthermore, most existing results assume that component failures are statistically independent in nature. We take initial steps in developing a methodology for the design of networks with statistically dependent link failures. We also study the architectures of networks under extreme stress. by Guy E. Weichenberg. S.M.

    Basic Neutrosophic Algebraic Structures and their Application to Fuzzy and Neutrosophic Models

    When uncertainty of varying degrees is involved and the total membership degree exceeds or falls below one, the newer mathematical paradigm shift, Fuzzy Theory, proves appropriate. For the past two or more decades, Fuzzy Theory has been a potent tool for studying and analyzing the uncertainty involved in all problems. But many real-world problems also abound with the concept of indeterminacy. In this book, the new, powerful tool of neutrosophy that deals with indeterminacy is utilized. Innovative neutrosophic models are described. The theory of neutrosophic graphs is introduced and applied to fuzzy and neutrosophic models. This book is organized into four chapters. In Chapter One we introduce some of the basic neutrosophic algebraic structures essential for the development of the later chapters. Chapter Two recalls basic graph theory definitions and results which have interested us and for which we give the neutrosophic analogues. In this chapter we give the application of graphs in fuzzy models; an entire section is devoted to this purpose. Chapter Three introduces many new neutrosophic concepts in graphs and applies them to neutrosophic cognitive maps and neutrosophic relational maps. The last section of this chapter clearly illustrates how neutrosophic graphs are utilized in neutrosophic models. The final chapter gives some problems about neutrosophic graphs which will help one understand this new subject. Comment: 149 pages, 130 figures

    An elementary proposition on the dynamic routing problem in wireless networks of sensors

    The routing problem (finding an optimal route from one point in a computer network to another) is surrounded by impossibility results. These results are usually expressed as lower and upper bounds in terms of the set of nodes (or the set of links) of a network and represent the complexity of a solution to the routing problem (a routing function). The routing problem dealt with here, in particular, is a dynamic one (it accounts for network dynamics) and concerns wireless networks of sensors. Sensors form wireless links of limited capacity and time-variable quality to route messages amongst themselves. It is desired that sensors self-organize ad hoc in order to successfully carry out a routing task, e.g. provide daily soil erosion reports for a monitored watershed, or provide immediate indications of an imminent volcanic eruption, in spite of network dynamics.

    Link dynamics are the first barrier to finding an optimal route between a node x and a node y in a sensor network. The uncertainty of the outcome (the best next hop) of a routing function lies partially with the quality fluctuations of wireless links. Take, for example, a static network. It is known that, given the set of nodes and their link weights (or costs), a node can compute optimal routes by running, say, Dijkstra's algorithm. Link dynamics, however, suggest that costs are not static. Hence, sensors need a metric (a measurable quantity of uncertainty) to monitor for fluctuations, either improvements or degradations of quality or load; when a fluctuation is sufficiently large (say, by Delta), sensors ought to update their costs and seek another route.

    Therein lies the other fundamental barrier to finding an optimal route - complexity. A crude argument would suggest that sensors (and their links) have an upper bound on the number of messages they can transmit, receive and store due to resource constraints. Such messages can be application traffic, which is desirable, or control traffic, which should be kept minimal. The first type of traffic is demand, and a user should provision for it accordingly. The second type of traffic is overhead, and it is necessary if a routing system (or scheme) is to ensure its fidelity to the application requirements (policy). It is possible for a routing scheme to approximate optimal routes (by Delta) by reducing its message and/or memory complexity.

    The common denominator of the routing problem and the desire to minimize overhead while approximating optimal routes is Delta, the deviation (or stretch) of a computed route from an optimal one, as computed by a node that has instantaneous knowledge of the set of all nodes and their interaction costs (an oracle). This dissertation deals with both problems in unison. To do so, it needs to translate the policy space (the user objectives) into a metric space (routing objectives). It does so by means of a cost function that normalizes metrics into a number of hops. Then it proceeds to devise, design, and implement a scheme that computes minimum-hop-count routes with manageable complexity. The theory presented is founded on (well-ordered) sets with respect to an elementary proposition: that a route from a source x to a destination y can be computed either by y sending an advertisement to the set of all nodes, or by x sending a query to the set of all nodes; henceforth the proactive method (of y) and the reactive method (of x), respectively.
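
    As a rough illustration of the mechanism just described (a sketch of my own, not code from the dissertation; the cost model and the use of Delta as an absolute threshold are assumptions), a sensor could recompute its Dijkstra routes only when an observed link-cost fluctuation exceeds Delta:

        import heapq

        def dijkstra(graph, source):
            """graph: {node: {neighbour: cost}}; returns {node: (distance, previous hop)}."""
            dist = {source: (0.0, None)}
            heap = [(0.0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist[u][0]:
                    continue                                  # stale queue entry
                for v, cost in graph.get(u, {}).items():
                    nd = d + cost
                    if v not in dist or nd < dist[v][0]:
                        dist[v] = (nd, u)
                        heapq.heappush(heap, (nd, v))
            return dist

        class SensorNode:
            def __init__(self, graph, source, delta):
                self.graph, self.source, self.delta = graph, source, delta
                self.routes = dijkstra(graph, source)         # proactively computed routes

            def observe_link(self, u, v, new_cost):
                old_cost = self.graph[u][v]
                self.graph[u][v] = new_cost
                if abs(new_cost - old_cost) > self.delta:     # fluctuation large enough to matter
                    self.routes = dijkstra(self.graph, self.source)

    Choosing Delta trades overhead for stretch: a larger threshold means fewer recomputations (less control traffic) but routes that may deviate further from optimal.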
    The debate between proactive and reactive routing protocols appears in many instances of the routing problem (e.g. routing in mobile networks, routing in delay-tolerant networks, compact routing), and it is focussed on whether nodes should know a priori all routes and then select the best one (with the proactive method), or each node could simply search for a (hopefully best) route on demand (with the reactive method). The proactive method is stateful, as it requires the entire metric space - the set of nodes and their interaction costs - in memory (in a routing table). The routes computed by the proactive method are optimal, and the lower and upper bounds of proactive schemes match those of an oracle. Any attempt to reduce the proactive overhead, e.g. by introducing hierarchies, will result in sub-optimal routes (of known stretch). The reactive method is stateless, as it requires no information whatsoever to compute a route. Reactive schemes - at least as they are presently understood - compute sub-optimal routes (and thus far, of unknown stretch).

    This dissertation attempts to answer the following question: "What is the least amount of state required to compute an optimal route from a source to a destination?" A hybrid routing scheme is used to investigate this question, one that uses the proactive method to compute routes to near destinations and the reactive method for distant destinations. It is shown that there are cases where hybrid schemes can converge to optimal routes, despite possessing incomplete routing state, and that the necessary and sufficient condition to compute optimal routes with local state alone is related neither to the size nor the density of a network; it is rather the circumference (the size of the largest cycle) of a network that matters. Counterexamples, where local state is insufficient, are discussed to derive the worst-case stretch. The theory is augmented with simulation results and a small experimental testbed to motivate the discussion on how policy space (user requirements) can translate into metric spaces and how different metrics affect performance. On the debate between proactive and reactive protocols, it is shown that the two classes are equivalent. The dissertation concludes with a discussion on the applicability of its results and poses some open problems.
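
    The hybrid idea can be caricatured in a few lines (again an illustration under assumptions of my own, not the dissertation's actual scheme): destinations within a proactive radius are answered from locally maintained state, and anything farther away falls back to an on-demand query.

        class HybridRouter:
            """Proactive table for destinations within `radius` hops; reactive lookup beyond it."""

            def __init__(self, table, reactive_query, radius):
                self.table = table                        # {destination: (next_hop, hop_count)}
                self.reactive_query = reactive_query      # callable: destination -> next_hop
                self.radius = radius

            def next_hop(self, destination):
                entry = self.table.get(destination)
                if entry is not None and entry[1] <= self.radius:
                    return entry[0]                       # proactive: answered from stored state
                return self.reactive_query(destination)   # reactive: discovered on demand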

    Subject Index Volumes 1–200

    29th International Symposium on Algorithms and Computation: ISAAC 2018, December 16-19, 2018, Jiaoxi, Yilan, Taiwan

    Meta-stochastic simulation for systems and synthetic biology using classification

    PhD Thesis. To comprehend the immense complexity that drives biological systems, it is necessary to generate hypotheses of system behaviour. This is because one can observe the results of a biological process and have knowledge of the molecular/genetic components, but not directly witness biochemical interaction mechanisms. Hypotheses can be tested in silico, which is considerably cheaper and faster than “wet” lab trial-and-error experimentation. Bio-systems are traditionally modelled using ordinary differential equations (ODEs). ODEs are generally suitable for the approximation of a (test-tube-sized) in vitro system trajectory, but cannot account for inherent system noise or discrete event behaviour. Most in vivo biochemical interactions occur within small spatially compartmentalised units commonly known as cells, which are prone to stochastic noise due to relatively low intracellular molecular populations. Stochastic simulation algorithms (SSAs) provide an exact mechanistic account of the temporal evolution of a bio-system, and can account for noise and discrete cellular transcription and signalling behaviour. Whilst this reaction-by-reaction account of system trajectory elucidates biological mechanisms more comprehensively than ODE execution, it comes at increased computational expense. Scaling to the demands of modern biology requires ever larger and more detailed models to be executed. Scientists evaluating and engineering tissue-scale and bacterial-colony-sized bio-systems can be limited by the tractability of their computational hypothesis-testing techniques. This thesis evaluates a hypothesised relationship between SSA computational performance and biochemical model characteristics. This relationship leads to the possibility of predicting the fastest SSA for an arbitrary model - a method that can provide computational headroom for more complex models to be executed. The research output of this thesis is realised as a software package for meta-stochastic simulation called ssapredict. Ssapredict uses statistical classification to predict SSA performance, and also provides high-performance stochastic simulation implementations to the wider community. Newcastle University & University of Nottingham Computing Science department.
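
    As a sketch of the classification step (purely illustrative: the feature names, simulator labels and choice of a random forest are assumptions, not ssapredict's actual interface), one could train a classifier that maps model characteristics to the simulator that ran fastest in benchmarks:

        from sklearn.ensemble import RandomForestClassifier

        # Hypothetical per-model features: [species, reactions, mean stoichiometry, coupling]
        X = [[10, 12, 1.8, 0.3],
             [200, 450, 2.1, 0.7],
             [35, 60, 1.5, 0.5]]
        # Label = the SSA variant observed to run fastest for that model in benchmark runs.
        y = ["direct_method", "tau_leaping", "next_reaction_method"]

        classifier = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        print(classifier.predict([[50, 80, 1.6, 0.4]]))  # predicted fastest SSA for a new model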

    Seventh Biennial Report : June 2003 - March 2005
