
    Low-Latency Millimeter-Wave Communications: Traffic Dispersion or Network Densification?

    This paper investigates two strategies to reduce the communication delay in future wireless networks: traffic dispersion and network densification. A hybrid scheme that combines these two strategies is also considered. The probabilistic delay and effective capacity are used to evaluate performance. For the probabilistic delay, the delay violation probability, i.e., the probability that the delay exceeds a given tolerance level, is characterized in terms of upper bounds derived by applying stochastic network calculus theory. In addition, to characterize the maximum affordable arrival traffic for mmWave systems, the effective capacity, i.e., the service capability under a given quality-of-service (QoS) requirement, is studied. The derived bounds on the probabilistic delay and effective capacity are validated through simulations. These numerical results show that, for a given average system gain, traffic dispersion, network densification, and the hybrid scheme exhibit different potentials to reduce the end-to-end communication delay. For instance, traffic dispersion outperforms network densification when both the average system gain and the arrival rate are high, while it could be the worst option otherwise. Furthermore, it is revealed that increasing the number of independent paths and/or the relay density is always beneficial, while the performance gain depends jointly on the arrival rate and the average system gain. Therefore, a proper transmission scheme should be selected to optimize the delay performance, according to the given conditions on the arrival traffic and the system service capability.
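
    As a rough illustration of the effective capacity notion used above, the following Python sketch estimates EC(theta) from i.i.d. per-slot service samples; the service distribution, QoS exponent theta, and sample size are assumptions made for the example and are not taken from the paper.

        import numpy as np

        # Hedged sketch: effective capacity EC(theta) = -(1/theta) * log E[exp(-theta * S)]
        # for i.i.d. per-slot service S; the exponential service law, theta, and the
        # sample size below are illustrative assumptions only.
        def effective_capacity(service_samples, theta):
            return -np.log(np.mean(np.exp(-theta * service_samples))) / theta

        rng = np.random.default_rng(0)
        service = rng.exponential(scale=100.0, size=100_000)  # assumed per-slot service (bits)
        print(effective_capacity(service, theta=0.01))        # sustainable rate under QoS exponent theta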

    Optimal Paths on the Space-Time SINR Random Graph

    We analyze a class of Signal-to-Interference-and-Noise-Ratio (SINR) random graphs. These random graphs arise in the modeling of packet transmissions in wireless networks. In contrast to previous studies on SINR graphs, we consider both a space and a time dimension. The spatial aspect originates from the random locations of the network nodes in the Euclidean plane. The time aspect stems from the random transmission policy followed by each network node and from the time variations of the wireless channel characteristics. The combination of these random space and time aspects leads to fluctuations of the SINR experienced by the wireless channels, which in turn determine the progression of packets in space and time in such a network. This paper studies optimal paths in such wireless networks in terms of first passage percolation on this random graph. We establish both "positive" and "negative" results on the associated time constant. The latter determines the asymptotics of the minimum delay required by a packet to progress from a source node to a destination node when the Euclidean distance between the two tends to infinity. The main negative result states that this time constant is infinite on the random graph associated with a Poisson point process under natural assumptions on the wireless channels. The main positive result states that when adding a periodic node infrastructure of arbitrarily small intensity to the Poisson point process, the time constant is positive and finite.
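
    To make the space-time SINR construction concrete, here is a minimal Python sketch (my own illustration, not the authors' model) that samples a Poisson point process, applies Rayleigh fading and an ALOHA-style transmission policy for one time slot, and computes the SINR of a single hop; all parameter values are assumed.

        import numpy as np

        rng = np.random.default_rng(1)
        side, lam = 100.0, 0.05                        # assumed region side and node intensity
        n = rng.poisson(lam * side * side)             # Poisson number of nodes
        pts = rng.uniform(0.0, side, size=(n, 2))      # uniform locations given n

        def sinr(tx, rx, active, p=1.0, noise=1e-3, alpha=4.0):
            """SINR at node rx for transmitter tx in one slot, Rayleigh fading, path loss d^-alpha."""
            fade = rng.exponential(size=n)             # Rayleigh fading -> exponential power gains
            d = np.linalg.norm(pts - pts[rx], axis=1) + 1e-9
            rx_power = p * fade * d ** (-alpha)
            interferers = [i for i in active if i not in (tx, rx)]
            return rx_power[tx] / (noise + rx_power[interferers].sum())

        active = np.where(rng.random(n) < 0.05)[0].tolist()   # slotted-ALOHA style policy
        tx = active[0]
        rx = (tx + 1) % n                                     # an arbitrary receiver distinct from tx
        print(sinr(tx, rx, active))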

    On the Quality of Wireless Network Connectivity

    Despite intensive research in the area of network connectivity, there is an important category of problems that remains unsolved: how to measure the quality of connectivity of a wireless multi-hop network that has a realistic number of nodes, not necessarily large enough to warrant the use of asymptotic analysis, and has unreliable connections, reflecting the inherently unreliable characteristics of wireless communications? The quality of connectivity measures how easily and reliably a packet sent by a node can reach another node. It complements the use of capacity to measure the quality of a network in saturated traffic scenarios and provides a native measure of the quality of (end-to-end) network connections. In this paper, we explore the use of the probabilistic connectivity matrix as a possible tool to measure the quality of network connectivity. Some interesting properties of the probabilistic connectivity matrix and their connections to the quality of connectivity are demonstrated. We argue that the largest eigenvalue of the probabilistic connectivity matrix can serve as a good measure of the quality of network connectivity.
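
    A tiny Python sketch of the eigenvalue-based measure described above: the entries of the probabilistic connectivity matrix below are made-up success probabilities for five nodes, and the largest eigenvalue is used as the scalar quality-of-connectivity indicator.

        import numpy as np

        # Assumed 5-node probabilistic connectivity matrix: entry (i, j) is an
        # illustrative probability that a packet from node i reaches node j.
        P = np.array([
            [1.0, 0.9, 0.2, 0.0, 0.1],
            [0.9, 1.0, 0.7, 0.3, 0.0],
            [0.2, 0.7, 1.0, 0.8, 0.4],
            [0.0, 0.3, 0.8, 1.0, 0.9],
            [0.1, 0.0, 0.4, 0.9, 1.0],
        ])
        quality = np.linalg.eigvalsh(P).max()   # largest eigenvalue of the symmetric matrix
        print(quality)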

    Performance Evaluation of Connectivity and Capacity of Dynamic Spectrum Access Networks

    Recent measurements on radio spectrum usage have revealed the abundance of under-utilized bands of spectrum that belong to licensed users. This necessitated the paradigm shift from static to dynamic spectrum access (DSA), where secondary networks utilize unused spectrum holes in the licensed bands without causing interference to the licensed user. However, wide-scale deployment of these networks has been hindered by the lack of knowledge of the expected performance in realistic environments and the lack of cost-effective solutions for implementing spectrum database systems. In this dissertation, we address some of the fundamental challenges of improving the performance of DSA networks in terms of connectivity and capacity. Apart from showing performance gains via simulation experiments, we designed, implemented, and deployed testbeds that achieve economies of scale.
We start by introducing network connectivity models and show that the well-established disk model does not hold true for interference-limited networks. Thus, we characterize connectivity based on the signal to interference and noise ratio (SINR) and show that not all the deployed secondary nodes necessarily contribute towards the network's connectivity. We identify such nodes and show that even though a node might be communication-visible it can still be connectivity-invisible. The invisibility of such nodes is modeled using the concept of Poisson thinning. The connectivity-visible nodes are combined with the coverage shrinkage to develop the concept of effective density, which is used to characterize the connectivity. Further, we propose three techniques for connectivity maximization. We also show how traditional flooding techniques are not applicable under the SINR model and analyze the underlying causes for that. Moreover, we propose a modified version of probabilistic flooding that uses lower message overhead while accounting for node outreach and interference. Next, we analyze the connectivity of multi-channel distributed networks and show how the invisibility that arises among the secondary nodes results in thinning, which we characterize as channel abundance. We also capture the thinning that occurs due to the nodes' interference. We study the effects of interference and channel abundance using Poisson thinning on the formation of a communication link between two nodes and also on the overall connectivity of the secondary network.
As for the capacity, we derive bounds on the maximum achievable capacity of a randomly deployed secondary network with a finite number of nodes in the presence of primary users, since finding the exact capacity involves solving an optimization problem that does not scale in either time or search-space dimensionality. We speed up the optimization by reducing the optimizer's search space. Next, we characterize the QoS that secondary users can expect. We do so by using vector quantization to partition the QoS space into a finite number of regions, each of which is represented by one QoS index. We argue that any operating condition of the system can be mapped to one of the pre-computed QoS indices using a simple look-up in O(log N) time, thus avoiding any cumbersome computation for QoS evaluation. We implement the QoS space on an 8-bit microcontroller and show how the mathematically intensive operations can be computed in a shorter time.
To demonstrate that there can be low-cost solutions that scale, we present and implement an architecture that enables dynamic spectrum access for any type of network, ranging from IoT to cellular. The three main components of this architecture are the RSSI sensing network, the DSA server, and the service engine. We use the concept of modular design in these components, which allows transparency between them, scalability, and ease of maintenance and upgrade in a plug-and-play manner, without requiring any changes to the other components. Moreover, we provide a blueprint on how to use off-the-shelf, commercially available, software-configurable RF chips to build low-cost spectrum sensors. Using testbed experiments, we demonstrate the efficiency of the proposed architecture by comparing its performance to that of a legacy system. We show the benefits in terms of resilience to jamming, channel relinquishment on primary arrival, and best channel determination and allocation. We also show the performance gains in terms of frame error rate and spectral efficiency.
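
    The Poisson thinning and effective density ideas mentioned in this abstract can be illustrated with a short Python sketch; the node intensity, region size, and visibility probability are assumptions made for the example, not values from the dissertation.

        import numpy as np

        rng = np.random.default_rng(2)
        lam, side, p_visible = 0.02, 200.0, 0.6            # assumed intensity, region side, visibility prob.
        n = rng.poisson(lam * side ** 2)                   # number of deployed secondary nodes
        nodes = rng.uniform(0.0, side, size=(n, 2))
        visible = nodes[rng.random(n) < p_visible]         # connectivity-visible nodes after thinning
        lam_effective = p_visible * lam                    # effective density of the thinned process
        print(n, len(visible), lam_effective)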

    On the Catalyzing Effect of Randomness on the Per-Flow Throughput in Wireless Networks

    This paper investigates the throughput capacity of a flow crossing a multi-hop wireless network, whose geometry is characterized by general randomness laws, including Uniform, Poisson, and heavy-tailed distributions, for both the nodes' densities and the number of hops. The key contribution is to demonstrate how the per-flow throughput depends on the distribution of 1) the number of nodes $N_j$ inside the hops' interference sets, 2) the number of hops $K$, and 3) the degree of spatial correlations. The randomness in both the $N_j$'s and $K$ is advantageous, i.e., it can yield larger scalings (as large as $\Theta(n)$) than in non-random settings. An interesting consequence is that the per-flow capacity can exhibit the opposite behavior to the network capacity, which was shown to suffer from a logarithmic decrease in the presence of randomness. In turn, spatial correlations along the end-to-end path are detrimental by a logarithmic term.
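
    As a back-of-the-envelope companion to the abstract above, the following Python sketch models the per-flow throughput of a K-hop path as the bottleneck hop rate, with the interference-set sizes N_j and the hop count K drawn from assumed Poisson laws; it only illustrates how the two distributions enter the quantity, not the paper's scaling results.

        import numpy as np

        rng = np.random.default_rng(3)

        def per_flow_throughput(c=1.0, trials=10_000):
            """Average bottleneck rate min_j c/N_j over random K and N_j (assumed Poisson laws)."""
            rates = np.empty(trials)
            for t in range(trials):
                K = rng.poisson(8) + 1               # random number of hops
                N = rng.poisson(5, size=K) + 1       # random interference-set sizes per hop
                rates[t] = (c / N).min()             # the worst hop limits the flow
            return rates.mean()

        print(per_flow_throughput())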

    Modelling Probabilistic Wireless Networks

    We propose a process calculus to model high-level wireless systems, where the topology of a network is described by a digraph. The calculus enjoys features that are characteristic of wireless networks, namely broadcast communication and probabilistic behaviour. We first focus on the problem of composing wireless networks; then we present a compositional theory based on a probabilistic generalisation of the well-known may-testing and must-testing preorders. Also, we define an extensional semantics for our calculus, which will be used to define both simulation and deadlock simulation preorders for wireless networks. We prove that our simulation preorder is sound with respect to the may-testing preorder; similarly, the deadlock simulation preorder is sound with respect to the must-testing preorder, for a large class of networks. We also provide a counterexample showing that completeness of the simulation preorder, with respect to the may-testing one, does not hold. We conclude the paper with an application of our theory to probabilistic routing protocols.
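
    A toy Python encoding (my own, not the calculus itself) of the two ingredients named above: a digraph topology with per-edge delivery probabilities and a single probabilistic broadcast step.

        import random

        topology = {                                  # node -> {out-neighbour: delivery probability}
            "a": {"b": 0.9, "c": 0.5},                # assumed example digraph
            "b": {"c": 0.8},
            "c": {},
        }

        def broadcast(node, rng=random.Random(4)):
            """Set of out-neighbours that receive node's broadcast in one probabilistic step."""
            return {m for m, p in topology[node].items() if rng.random() < p}

        print(broadcast("a"))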

    Routing efficiency in wireless sensor-actor networks considering semi-automated architecture

    Wireless networks have become increasingly popular, and advances in wireless communications and electronics have enabled the development of different kinds of networks such as Mobile Ad-hoc Networks (MANETs), Wireless Sensor Networks (WSNs), and Wireless Sensor-Actor Networks (WSANs). These networks have different kinds of characteristics; therefore, new protocols that fit their features should be developed. We have developed a simulation system to test MANETs, WSNs, and WSANs. In this paper, we consider the performance behavior of two protocols, AODV and DSR, using the TwoRayGround and Shadowing propagation models for lattice and random topologies. We study the routing efficiency and compare the performance of the two protocols for different scenarios. By computer simulations, we found that for a large number of nodes, when we used the TwoRayGround model and random topology, the DSR protocol has better performance. However, when the transmission rate is higher, the routing efficiency parameter is unstable.
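
    For reference, a hedged Python sketch of the two propagation models named in the abstract, TwoRayGround and log-normal Shadowing; the transmit power, antenna heights and gains, path-loss exponent, and shadowing deviation are assumed example values, not the ones used in the authors' simulations.

        import math, random

        def two_ray_ground(d, pt=0.1, gt=1.0, gr=1.0, ht=1.5, hr=1.5):
            """Received power (W) under the two-ray ground model, valid at large distance d (m)."""
            return pt * gt * gr * (ht ** 2) * (hr ** 2) / d ** 4

        def log_normal_shadowing(d, d0=1.0, pr_d0_dbm=-30.0, beta=2.7, sigma_db=4.0,
                                 rng=random.Random(5)):
            """Received power (dBm) under log-normal shadowing around a distance power-law mean."""
            return pr_d0_dbm - 10.0 * beta * math.log10(d / d0) + rng.gauss(0.0, sigma_db)

        print(two_ray_ground(100.0), log_normal_shadowing(100.0))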