297 research outputs found

    Rigorous confidence intervals for critical probabilities

    Full text link
    We use the method of Balister, Bollobás and Walters to give rigorous 99.9999% confidence intervals for the critical probabilities for site and bond percolation on the 11 Archimedean lattices. In our computer calculations, the emphasis is on simplicity and ease of verification, rather than on obtaining the best possible results. Nevertheless, we obtain intervals of width at most 0.0005 in all cases.
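The abstract above does not spell out the Balister–Bollobás–Walters construction; as a much simpler illustration of the idea of attaching a rigorous-style confidence interval to a percolation probability, here is a toy Monte Carlo sketch: it estimates the top-to-bottom crossing probability for site percolation on an n × n square grid and wraps it in a normal-approximation interval. The grid size, trial count, and z-value are illustrative assumptions, not taken from the paper.

```python
import math
import random
from collections import deque

def crosses(n, p, rng):
    """One Monte Carlo sample: does an n x n site configuration with
    open-probability p contain a top-to-bottom path of open sites?"""
    grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    seen = {(0, j) for j in range(n) if grid[0][j]}
    queue = deque(seen)
    while queue:  # BFS from the open sites in the top row
        i, j = queue.popleft()
        if i == n - 1:
            return True
        for a, b in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= a < n and 0 <= b < n and grid[a][b] and (a, b) not in seen:
                seen.add((a, b))
                queue.append((a, b))
    return False

def crossing_probability_ci(n, p, trials, z=3.29, seed=0):
    """Normal-approximation confidence interval for the crossing probability;
    z = 3.29 corresponds to roughly 99.9% two-sided coverage (far weaker than
    the rigorous intervals of the paper -- this is only a sketch)."""
    rng = random.Random(seed)
    hits = sum(crosses(n, p, rng) for _ in range(trials))
    phat = hits / trials
    half = z * math.sqrt(phat * (1 - phat) / trials)
    return max(0.0, phat - half), min(1.0, phat + half)
```

Well above the square-lattice site threshold (p_c ≈ 0.5927) the interval hugs 1, and well below it the interval hugs 0.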

    Performance Evaluation of Connectivity and Capacity of Dynamic Spectrum Access Networks

    Get PDF
    Recent measurements on radio spectrum usage have revealed the abundance of underutilized bands of spectrum that belong to licensed users. This necessitated the paradigm shift from static to dynamic spectrum access (DSA), where secondary networks utilize unused spectrum holes in the licensed bands without causing interference to the licensed user. However, wide-scale deployment of these networks has been hindered by a lack of knowledge of the expected performance in realistic environments and a lack of cost-effective solutions for implementing spectrum database systems. In this dissertation, we address some of the fundamental challenges of improving the performance of DSA networks in terms of connectivity and capacity. Apart from showing performance gains via simulation experiments, we designed, implemented, and deployed testbeds that achieve economies of scale. We start by introducing network connectivity models and show that the well-established disk model does not hold true for interference-limited networks. Thus, we characterize connectivity based on the signal to interference and noise ratio (SINR) and show that not all the deployed secondary nodes necessarily contribute towards the network's connectivity. We identify such nodes and show that even though a node might be communication-visible it can still be connectivity-invisible. The invisibility of such nodes is modeled using the concept of Poisson thinning. The connectivity-visible nodes are combined with the coverage shrinkage to develop the concept of effective density, which is used to characterize the connectivity. Further, we propose three techniques for connectivity maximization. We also show how traditional flooding techniques are not applicable under the SINR model and analyze the underlying causes. Moreover, we propose a modified version of probabilistic flooding that uses lower message overhead while accounting for node outreach and interference.
Next, we analyze the connectivity of multi-channel distributed networks and show how the invisibility that arises among the secondary nodes results in thinning, which we characterize as channel abundance. We also capture the thinning that occurs due to the nodes' interference. We study the effects of interference and channel abundance using Poisson thinning on the formation of a communication link between two nodes and also on the overall connectivity of the secondary network. As for the capacity, we derive bounds on the maximum achievable capacity of a randomly deployed secondary network with a finite number of nodes in the presence of primary users, since finding the exact capacity involves solving an optimization problem that scales poorly in both time and search-space dimensionality. We speed up the optimization by reducing the optimizer's search space. Next, we characterize the QoS that secondary users can expect. We do so by using vector quantization to partition the QoS space into a finite number of regions, each of which is represented by one QoS index. We argue that any operating condition of the system can be mapped to one of the pre-computed QoS indices using a simple look-up in O(log N) time, thus avoiding any cumbersome computation for QoS evaluation. We implement the QoS space on an 8-bit microcontroller and show how the mathematically intensive operations can be computed in a shorter time. To demonstrate that there can be low-cost solutions that scale, we present and implement an architecture that enables dynamic spectrum access for any type of network, ranging from IoT to cellular. The three main components of this architecture are the RSSI sensing network, the DSA server, and the service engine. We use the concept of modular design in these components, which allows transparency between them, scalability, and ease of maintenance and upgrade in a plug-n-play manner, without requiring any changes to the other components.
Moreover, we provide a blueprint on how to use off-the-shelf, commercially available, software-configurable RF chips to build low-cost spectrum sensors. Using testbed experiments, we demonstrate the efficiency of the proposed architecture by comparing its performance to that of a legacy system. We show the benefits in terms of resilience to jamming, channel relinquishment on primary arrival, and best-channel determination and allocation. We also show the performance gains in terms of frame error rate and spectral efficiency.
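The O(log N) QoS look-up described in this abstract can be sketched as a binary search over a pre-computed partition of the QoS space. The one-dimensional SINR partition, the boundary values, and the index names below are illustrative assumptions, not taken from the dissertation (which partitions a multi-dimensional QoS space via vector quantization).

```python
import bisect

# Hypothetical pre-computed partition of one QoS dimension (SINR in dB) into
# regions, each represented by a single QoS index. Boundary values are
# illustrative only.
SINR_BOUNDARIES_DB = [0.0, 5.0, 10.0, 15.0, 20.0]  # sorted region edges
QOS_INDICES = ["outage", "poor", "fair", "good", "very_good", "excellent"]

def qos_index(sinr_db):
    """Map an operating condition to a pre-computed QoS index with a single
    O(log N) binary search, avoiding any on-line QoS computation."""
    return QOS_INDICES[bisect.bisect_right(SINR_BOUNDARIES_DB, sinr_db)]
```

On a constrained device such as an 8-bit microcontroller, the same idea reduces to a table walk over a handful of stored thresholds.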

    Combinatorics, Probability and Computing

    Get PDF
    One of the exciting phenomena in mathematics in recent years has been the widespread and surprisingly effective use of probabilistic methods in diverse areas. The probabilistic point of view has turned out to be…

    Errata and Addenda to Mathematical Constants

    Full text link
    We humbly and briefly offer corrections and supplements to Mathematical Constants (2003) and Mathematical Constants II (2019), both published by Cambridge University Press. Comments are always welcome. Comment: 162 pages

    Criticality and entanglement in random quantum systems

    Full text link
    We review studies of entanglement entropy in systems with quenched randomness, concentrating on universal behavior at strongly random quantum critical points. The disorder-averaged entanglement entropy provides insight into the quantum criticality of these systems and an understanding of their relationship to non-random ("pure") quantum criticality. The entanglement near many such critical points in one dimension shows a logarithmic divergence in subsystem size, similar to that in the pure case but with a different universal coefficient. Such universal coefficients are examples of universal critical amplitudes in a random system. Possible measurements are reviewed along with the one-particle entanglement scaling at certain Anderson localization transitions. We also comment briefly on higher dimensions and challenges for the future. Comment: Review article for the special issue "Entanglement entropy in extended systems" in J. Phys.

    Interface theory and percolation

    Get PDF
    This thesis is mainly concerned with percolation on general infinite graphs, as well as the approximation of conformal maps by square tilings, which are defined using electrical networks. The first chapter is concerned with the smoothness of the percolation density on various graphs. In particular, we prove that for Bernoulli percolation on Z^d, d ≥ 2, the percolation density is an analytic function of the parameter in the supercritical interval (p_c(Z^d), 1]. This answers a question of Kesten [1981]. The analogous result is also proved for the Boolean model of continuum percolation in R^2, answering a question of Last et al. [2017]. In order to prove these results, we introduce the notion of interfaces, which is studied extensively in the current thesis. For dimensions d ≥ 3, we use renormalisation techniques. Furthermore, we prove that the susceptibility is analytic in the subcritical interval for all transitive short- or long-range models, and that p_c < 1/2 for bond percolation on certain families of triangulations for which Benjamini & Schramm conjectured that p_c ≤ 1/2 for site percolation. For the latter result, we use the well-known circle packing theorem of He and Schramm [1995], a discrete analogue of the Riemann mapping theorem. In Chapter 2, we continue the study of interfaces, and in particular, we consider the exponential growth rate b_r of the number of interfaces of a given size as a function of their surface-to-volume ratio r. We prove that the values of the percolation parameter p for which the interface size distribution has an exponential tail are uniquely determined by b_r by comparison with a dimension-independent function f(r) := (1+r)^(1+r) / r^r. We also point out a formula for translating any upper bound on the percolation threshold of a lattice G into a lower bound on the exponential growth rate of lattice animals a(G), and vice versa. We exploit this in both directions.
We obtain the rigorous lower bound p_c(Z^3) > 0.2522 for 3-dimensional site percolation. We also improve on the best known asymptotic lower and upper bounds on a(Z^d) as d → ∞. We also prove that the rate of the exponential decay of the cluster size distribution, defined as c(p) := lim_{n→∞} (P_p(|C_o| = n))^(1/n), is a continuous function of p. The proof makes use of the Arzelà–Ascoli theorem but otherwise boils down to elementary calculations. The analogous statement is also proved for the interface size distribution. For this we first establish that the rate of exponential decay is well-defined. In Chapter 3, we use interfaces to obtain upper bounds for the site percolation threshold of plane graphs with given minimum degree conditions. The results of this chapter are inspired by well-known conjectures of Benjamini and Schramm [1996b] for percolation on general graphs. We prove a conjecture of Benjamini and Schramm [1996b] stating that plane graphs of minimum degree at least 7 have site percolation threshold bounded away from 1/2. We also make progress on a conjecture of Angel et al. [2018] that the critical probability is at most 1/2 for plane triangulations of minimum degree 6. In the process, we prove tight new isoperimetric bounds for certain classes of hyperbolic graphs. This establishes the vertex isoperimetric constant for all triangular and square hyperbolic lattices, answering a question of [Lyons and Peres, 2016, Question 6.20]. Another topic of this thesis is the discrete approximation of conformal maps using another discrete analogue of the Riemann mapping theorem, namely the square tilings of Brooks et al. [1940]. This result is analogous to a well-known theorem of Rodin & Sullivan, previously conjectured by Thurston, which states that the circle packing of the intersection of a lattice with a simply connected planar domain Ω into the unit disc D converges to a Riemann map from Ω to D when the mesh size converges to 0.
    As a result, we obtain a new algorithm that allows us to numerically compute the Riemann map from any Jordan domain onto a square.
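The comparison function from Chapter 2 of the abstract above, f(r) = (1+r)^(1+r) / r^r, is elementary to evaluate; a minimal sketch (the function itself is stated in the abstract, the numerical check is ours):

```python
def f(r):
    """Dimension-independent comparison function f(r) = (1+r)^(1+r) / r^r,
    used in the thesis to relate the surface-to-volume ratio r of interfaces
    to the exponential growth rate b_r of their number."""
    return (1.0 + r) ** (1.0 + r) / r ** r
```

For example, f(1) = 2^2 / 1^1 = 4 and f(3) = 4^4 / 3^3 = 256/27 ≈ 9.48.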

    Towards practical linear optical quantum computing

    Get PDF
    Quantum computing promises a new paradigm of computation where information is processed in a way that has no classical analogue. There are a number of physical platforms conducive to quantum computation, each with a number of advantages and challenges. Single photons, manipulated using integrated linear optics, constitute a promising platform for universal quantum computation. Their low decoherence rates make them particularly favourable; however, the inability to perform deterministic two-qubit gates and the issue of photon loss are challenges that need to be overcome. In this thesis we explore the construction of a linear optical quantum computer based on the cluster state model. We identify the different necessary stages: state preparation, cluster state construction and implementation of quantum error correcting codes, and address the challenges that arise in each of these stages. For the state preparation, we propose a series of linear optical circuits for the generation of small entangled states, assessing their performance under different scenarios. For the cluster state construction, we introduce a ballistic scheme which not only consumes an order of magnitude fewer resources than previously proposed schemes, but also benefits from a natural loss tolerance. Based on this scheme, we propose a full architectural blueprint with fixed physical depth. We make investigations into the resource efficiency of this architecture and propose a new multiplexing scheme which optimises the use of resources. Finally, we study the integration of quantum error-correcting codes in the linear optical scheme proposed and suggest three ways in which the linear optical scheme can be made fault-tolerant.