11 research outputs found

    Enabling dense spatial reuse in millimeter-wave networks

    Millimeter wave (mmWave) networks can deliver multi-Gbps wireless links using extremely narrow directional beams. This creates a new opportunity to exploit spatial reuse in order to scale network throughput. Exploiting such spatial reuse, however, requires aligning the beams of all nodes in a network. Beam alignment is difficult: indoor multipath can create interference, and carrier sense is inefficient at detecting interference on directional links. This thesis presents BounceNet, the first many-to-many millimeter wave beam alignment protocol that can exploit dense spatial reuse to allow many links to operate in parallel in a confined space and scale the wireless throughput with the number of clients. Results from three millimeter wave testbeds show that BounceNet can scale the throughput with the number of clients to deliver a total network data rate of more than 39 Gbps for 10 clients, which is up to 6.6× higher than current 802.11 mmWave standards.
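BounceNet itself is not public, but the scheduling question the abstract describes — admit directional links in parallel only while every admitted link still sees acceptable signal-to-interference-plus-noise ratio (SINR) — can be sketched greedily. The interference matrix, noise level, and threshold below are illustrative assumptions, not the paper's algorithm:

```python
# Greedy spatial-reuse scheduler (illustrative sketch, not BounceNet).
# gain[i][j] = power that link j's transmitter leaks into link i's
# receiver; the diagonal holds each link's own signal power (linear units).

def greedy_schedule(gain, noise=1e-6, min_sinr=10.0):
    """Admit links one by one, keeping a candidate only if every
    admitted link (including the new one) still meets the SINR floor."""
    admitted = []
    for cand in sorted(range(len(gain)), key=lambda i: -gain[i][i]):
        trial = admitted + [cand]
        ok = all(
            gain[i][i] / (noise + sum(gain[i][j] for j in trial if j != i))
            >= min_sinr
            for i in trial
        )
        if ok:
            admitted = trial
    return admitted

# Three candidate links: 0 and 2 barely interfere; 1 interferes with both.
gain = [
    [100.0, 30.0, 0.1],
    [30.0, 100.0, 30.0],
    [0.1, 30.0, 100.0],
]
print(greedy_schedule(gain))  # links 0 and 2 can run in parallel
```

The sketch shows why dense spatial reuse hinges on alignment: once beams are narrow enough that the off-diagonal leakage is small, many links pass the SINR check simultaneously.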

    An Experimental Study of the Treewidth of Real-World Graph Data

    Treewidth is a parameter that measures how tree-like a relational instance is, and whether it can reasonably be decomposed into a tree. Many computation tasks are known to be tractable on databases of small treewidth, but computing the treewidth of a given instance is intractable. This article is the first large-scale experimental study of treewidth and tree decompositions of real-world database instances (25 datasets from 8 different domains, with sizes ranging from a few thousand to a few million vertices). The goal is to determine which data, if any, can benefit from the wealth of algorithms for databases of small treewidth. For each dataset, we obtain upper and lower bound estimates of their treewidth, and study the properties of their tree decompositions. We show in particular that, even when treewidth is high, using partial tree decompositions can result in data structures that can assist algorithms.
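The kind of treewidth upper bound the article studies can be reproduced on small graphs with the min-degree elimination heuristic shipped in NetworkX; the toy cycle below stands in for the article's datasets, which are not reproduced here:

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

# A cycle has treewidth exactly 2. The min-degree heuristic returns an
# upper bound together with a tree decomposition whose nodes are bags
# (frozensets of vertices); every bag has at most width + 1 vertices.
G = nx.cycle_graph(8)
width, decomposition = treewidth_min_degree(G)
print(width)                                          # 2
print(max(len(bag) for bag in decomposition.nodes))   # 3
```

Heuristics like this give only an upper bound; pairing them with a lower-bound estimator is what lets the article bracket the true treewidth of each dataset.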

    An Experimental Study of the Treewidth of Real-World Graph Data (Extended Version)

    Extended version of an article published in the proceedings of ICDT 2019. Treewidth is a parameter that measures how tree-like a relational instance is, and whether it can reasonably be decomposed into a tree. Many computation tasks are known to be tractable on databases of small treewidth, but computing the treewidth of a given instance is intractable. This article is the first large-scale experimental study of treewidth and tree decompositions of real-world database instances (25 datasets from 8 different domains, with sizes ranging from a few thousand to a few million vertices). The goal is to determine which data, if any, can benefit from the wealth of algorithms for databases of small treewidth. For each dataset, we obtain upper and lower bound estimates of their treewidth, and study the properties of their tree decompositions. We show in particular that, even when treewidth is high, using partial tree decompositions can result in data structures that can assist algorithms.

    Partial resource pooling in loss systems with two players

    It is well known that resource sharing, or resource pooling, can help improve resource utilisation of independent entities that provide similar services using similar resources. In traditional sharing mechanisms, entities either pool all their resources or do not participate. Such binary mechanisms are helpful when the objective is to maximize the system (or overall) utility. However, when each entity maximizes its own utility, these mechanisms are not always successful at prescribing solutions that are acceptable to all participating entities. In this paper, we propose two partial sharing models in which both providers improve their individual utilities by following the prescribed sharing strategy. By partial sharing we mean that each provider possibly pools in only a part of its resources. The providers are modelled as M/M/N/N queues, each with a different number of servers, and a provider may pool any number of its servers. Our partial sharing models encompass the two extreme cases of full pooling and no pooling that appear in traditional mechanisms.
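The key performance measure for an M/M/N/N loss system is its blocking probability, given by the Erlang-B formula. It can be evaluated with the standard numerically stable recursion B(a, 0) = 1, B(a, n) = a·B(a, n−1) / (n + a·B(a, n−1)); the load and circuit counts below are illustrative:

```python
def erlang_b(load, servers):
    """Blocking probability of an M/M/N/N (Erlang-B) loss system with
    offered load `load` (in erlangs) and `servers` circuits, computed
    with the standard stable recursion rather than the factorial form."""
    b = 1.0
    for n in range(1, servers + 1):
        b = load * b / (n + load * b)
    return b

# 8 erlangs offered to 10 circuits blocks about 12.2% of arrivals;
# adding circuits at fixed load drives blocking down.
print(round(erlang_b(8.0, 10), 4))
print(erlang_b(8.0, 12) < erlang_b(8.0, 10))  # True
```

The recursion avoids the overflow-prone factorials of the closed form, which matters when comparing providers with large numbers of servers.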

    Sharing Within Limits: Partial Resource Pooling in Loss Systems

    Fragmentation of expensive resources, e.g., spectrum for wireless services, between providers can introduce inefficiencies in resource utilisation and worsen overall system performance. In such cases, resource pooling between independent service providers can be used to improve performance. However, for providers to agree to pool their resources, the arrangement has to be mutually beneficial. The traditional notion of resource pooling, which implies complete sharing, need not have this property. For example, under full pooling, one of the providers may be worse off and hence have no incentive to participate. In this paper, we propose partial resource sharing models as a generalization of full pooling, which can be configured to be beneficial to all participants. We formally define and analyze two partial sharing models between two service providers, each of which is an Erlang-B loss system with the blocking probability as the performance measure. We show that there always exist partial sharing configurations that are beneficial to both providers, irrespective of the load and the number of circuits of each of the providers. A key result is that the Pareto frontier has at least one of the providers sharing all its resources with the other. Furthermore, full pooling may not lie inside this Pareto set. The choice of the sharing configurations within the Pareto set is formalized based on bargaining theory. Finally, large system approximations of the blocking probabilities in the quality-and-efficiency-driven regime are presented.
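The efficiency side of the argument can be illustrated numerically with the Erlang-B formula: merging two identical systems lowers blocking through economies of scale, even though (as the paper shows) asymmetric providers may still need only partial sharing to make the deal mutually beneficial. The loads and circuit counts here are illustrative:

```python
def erlang_b(load, servers):
    """Erlang-B blocking probability via the stable recursion."""
    b = 1.0
    for n in range(1, servers + 1):
        b = load * b / (n + load * b)
    return b

# Two identical providers, each offering 8 erlangs to 10 circuits.
separate = erlang_b(8.0, 10)
# Full pooling: combined load of 16 erlangs on all 20 circuits.
pooled = erlang_b(16.0, 20)
print(separate, pooled)  # pooling strictly reduces blocking here
```

For identical providers full pooling helps both; the paper's partial sharing models address the asymmetric case, where one provider would lose under full pooling and the Pareto-optimal configuration pools only part of the resources.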

    One protocol to rule them all: Wireless network-on-chip using deep reinforcement learning

    Wireless Network-on-Chip (NoC) has emerged as a promising solution to scale chip multi-core processors to hundreds and thousands of cores. The broadcast nature of a wireless network allows it to significantly reduce the latency and overhead of many-to-many multicast and broadcast communication on NoC processors. Unfortunately, the traffic patterns on wireless NoCs tend to be very dynamic and can change drastically across different cores, different time intervals and different applications. New medium access protocols that can learn and adapt to the highly dynamic traffic in wireless NoCs are needed to ensure low latency and efficient network utilization. Towards this goal, we present NeuMAC, a unified approach that combines networking, architecture and deep learning to generate highly adaptive medium access protocols for wireless NoC architectures. NeuMAC leverages a deep reinforcement learning framework to create new policies that can learn the structure, correlations, and statistics of the traffic patterns and adapt quickly to optimize performance. Our results show that NeuMAC can quickly adapt to NoC traffic to provide significant gains in terms of latency, throughput, and overall execution time. In particular, for applications with highly dynamic traffic patterns, NeuMAC can speed up the execution time by 1.37×-3.74× as compared to 6 baselines. The work is funded in part by NSF Award 1750725. Peer reviewed. Postprint (published version).
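NeuMAC's training setup is not reproduced here, but the underlying reinforcement-learning idea — an agent that learns which access policy pays off under the observed traffic — can be sketched in miniature with tabular Q-learning over a toy two-action medium-access choice. The state, actions, and reward model below are invented for illustration; NeuMAC uses deep RL over real NoC traffic:

```python
import random

# Tiny tabular Q-learning sketch (illustrative only).
# Two actions: 0 = token-passing slot, 1 = contention slot.
# Under this hypothetical bursty-traffic reward model, contention pays more.
random.seed(0)

q = [0.0, 0.0]          # one Q-value per action (single-state bandit)
alpha, epsilon = 0.1, 0.2

def reward(action):
    # Hypothetical reward: contention (1) yields higher utilization here.
    return 1.0 if action == 1 else 0.2

for _ in range(500):
    if random.random() < epsilon:
        a = random.randrange(2)          # explore
    else:
        a = max((0, 1), key=lambda i: q[i])  # exploit
    q[a] += alpha * (reward(a) - q[a])   # incremental Q update

print(q.index(max(q)))  # learned policy prefers action 1
```

The point of the sketch is the adaptation loop: if the reward model flips (traffic becomes sparse and token-passing wins), the same update rule steers the policy the other way, which is the property a NoC MAC needs across phases of an application.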

    Millimeter wave wireless network on chip using deep reinforcement learning

    Wireless Network-on-Chip (NoC) has emerged as a promising solution to scale chip multi-core processors to hundreds of cores. However, traditional medium access protocols fall short here since the traffic patterns on wireless NoCs tend to be very dynamic and can change drastically across different cores, different time intervals and different applications. In this work, we present NeuMAC, a unified approach that combines networking, architecture and AI to generate highly adaptive medium access protocols that can learn and optimize for the structure, correlations and statistics of the traffic patterns on the NoC. Our results show that NeuMAC can quickly adapt to NoC traffic to provide significant gains in terms of latency and overall execution time, improving the execution time by up to 1.69×-3.74×. Peer reviewed. Postprint (author's final draft).