39 research outputs found

    Delay Optimal Server Assignment to Symmetric Parallel Queues with Random Connectivities

    In this paper, we investigate the problem of assigning K identical servers to a set of N parallel queues in a time-slotted queueing system. The connectivity of each queue to each server changes randomly over time; each server can serve at most one queue, and each queue can be served by at most one server per time slot. Such queueing systems have been widely used to model scheduling (or resource allocation) problems in wireless networks. It has previously been proven that Maximum Weighted Matching (MWM) is a throughput-optimal server assignment policy for such queueing systems. In this paper, we prove that for a symmetric system with i.i.d. Bernoulli packet arrivals and connectivities, MWM minimizes, in the stochastic ordering sense, a broad range of cost functions of the queue lengths, including total queue occupancy (or, equivalently, average queueing delay). Comment: 6 pages, 4 figures, Proc. IEEE CDC-ECC 201
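
    The MWM policy is only named above, not spelled out; the following is a minimal per-slot sketch of maximum weighted matching, assuming the weight of a (queue, server) pair is the queue length when the pair is connected and zero otherwise. The function name and the use of SciPy's linear_sum_assignment are illustrative assumptions, not the paper's implementation.

        # Sketch of a per-slot Maximum Weighted Matching (MWM) server assignment,
        # assuming weight = queue length if the (queue, server) pair is connected.
        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def mwm_assignment(queue_lengths, connectivity):
            """queue_lengths: (N,) array; connectivity: (N, K) 0/1 array.
            Returns the (queue, server) pairs chosen by MWM for this slot."""
            weights = queue_lengths[:, None] * connectivity      # N x K weight matrix
            rows, cols = linear_sum_assignment(weights, maximize=True)
            # Keep only pairs that are connected and actually carry weight.
            return [(int(q), int(s)) for q, s in zip(rows, cols) if weights[q, s] > 0]

        # Example: 4 queues, 2 servers, Bernoulli connectivities.
        rng = np.random.default_rng(0)
        print(mwm_assignment(rng.integers(0, 10, size=4),
                             rng.integers(0, 2, size=(4, 2))))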

    Efficient Load Balancing for Cloud Computing by Using Content Analysis

    Computer networks have grown rapidly to meet the demand for information technology management and greater functionality. A service hosted on a single machine cannot accommodate large databases, so individual servers must be combined into server groups. The difficulty with grouped server services is that it is hard to manage many devices with heterogeneous hardware. Cloud computing is a highly scalable computing infrastructure that shares existing resources; it is a popular option for individuals and businesses for a number of reasons, including cost savings and security. This paper proposes an efficient load-balancing technique using HAProxy in cloud computing, with the objective of receiving the workload and distributing it across computer servers so that processing resources are shared. The proposed technique applies round-robin scheduling for efficient resource management of cloud storage systems, focusing on effective workload balancing and a dynamic replication strategy. The evaluation was based on benchmark data for requests per second and failed requests. The results showed that the proposed technique improved load-balancing performance in cloud computing, handling 1,000 requests in 6.31 seconds, and generated fewer false alarms.
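
    The abstract gives no implementation details; the snippet below is a minimal sketch of the round-robin dispatching idea that the proposed HAProxy-based setup relies on, assigning each incoming request to the next back-end server in cyclic order. The server names and request count are made up for illustration.

        # Round-robin dispatcher sketch: requests go to back-end servers in cyclic
        # order, which spreads load evenly when the servers are comparable.
        from itertools import cycle

        backends = ["server-a", "server-b", "server-c"]   # hypothetical server pool
        next_backend = cycle(backends)

        def dispatch(request_id):
            """Return the back-end chosen for this request."""
            return next(next_backend)

        for req in range(7):
            print(f"request {req} -> {dispatch(req)}")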

    FireDeX: a Prioritized IoT Data Exchange Middleware for Emergency Response

    Real-time event detection and targeted decision making for emerging mission-critical applications, e.g. smart fire fighting, require systems that extract and process relevant data from connected IoT devices in the environment. In this paper, we propose FireDeX, a cross-layer middleware that facilitates timely and effective exchange of data for coordinating emergency response activities. FireDeX adopts a publish-subscribe data exchange paradigm with brokers at the network edge to manage prioritized delivery of mission-critical data from IoT sources to relevant subscribers. It incorporates parameters at the application, network, and middleware layers into a data exchange service that accurately estimates end-to-end performance metrics (e.g. delays, success rates). We design an extensible queueing-theoretic model that abstracts these cross-layer interactions as a network of queues, thereby making it amenable to rapid analysis. We propose novel algorithms that utilize the results of this analysis to tune data exchange configurations (event priorities and dropping policies) while meeting situational awareness requirements and resource constraints. FireDeX leverages Software-Defined Networking (SDN) methodologies to enforce these configurations in the IoT network infrastructure. We evaluate its performance through simulated experiments in a smart building fire response scenario. Our results demonstrate significant improvement to mission-critical data delivery under a variety of conditions. Our application-aware prioritization algorithm improves the value of exchanged information by 36% when compared with no prioritization; the addition of our network-aware drop rate policies improves this performance by 42% over priorities only and by 94% over no prioritization.
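
    The queueing model is only summarized above; the sketch below shows the kind of rapid end-to-end estimate such a model enables, assuming, purely for illustration, that each hop on a publication's path behaves as an independent M/M/1 queue with sojourn time 1/(mu - lambda). The path, rates and function names are assumptions, not FireDeX's actual model.

        # End-to-end delay estimate for a path through a network of queues,
        # assuming each hop is an independent M/M/1 queue (illustrative only).

        def hop_delay(arrival_rate, service_rate):
            if arrival_rate >= service_rate:
                return float("inf")                 # hop is overloaded
            return 1.0 / (service_rate - arrival_rate)

        def end_to_end_delay(path):
            """path: list of (arrival_rate, service_rate) per hop, in msgs/sec."""
            return sum(hop_delay(lam, mu) for lam, mu in path)

        # Hypothetical path: IoT sensor -> edge broker -> SDN switch -> subscriber.
        path = [(40.0, 100.0), (60.0, 120.0), (30.0, 90.0)]
        print(f"estimated end-to-end delay: {end_to_end_delay(path):.3f} s")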

    PrioDeX: a Data Exchange middleware for efficient event prioritization in SDN-based IoT systems

    Real-time event detection and targeted decision making for emerging mission-critical applications require systems that extract and process relevant data from IoT sources in smart spaces. Oftentimes, this data is heterogeneous in size, relevance, and urgency, which creates a challenge when considering that different groups of stakeholders (e.g., first responders, medical staff, government officials, etc.) require such data to be delivered in a reliable and timely manner. Furthermore, in mission-critical settings, networks can become constrained due to lossy channels and failed components, which ultimately add to the complexity of the problem. In this paper, we propose PrioDeX, a cross-layer middleware system that enables timely and reliable delivery of mission-critical data from IoT sources to relevant consumers through the prioritization of messages. It integrates parameters at the application, network, and middleware layers into a data exchange service that accurately estimates end-to-end performance metrics through a queueing analytical model. PrioDeX proposes novel algorithms that utilize the results of this analysis to tune data exchange configurations (event priorities and dropping policies), which is necessary for satisfying situational awareness requirements and resource constraints. PrioDeX leverages Software-Defined Networking (SDN) methodologies to enforce these configurations in the IoT network infrastructure. We evaluate our approach using both simulated and prototype-based experiments in a smart building fire response scenario. Our application-aware prioritization algorithm improves the value of exchanged information by 36% when compared with no prioritization; the addition of our network-aware drop rate policies improves this performance by 42% over priorities only and by 94% over no prioritization.
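
    The priority and drop-rate tuning algorithms are not detailed in the abstract; the sketch below illustrates one simple greedy flavor of the idea, assigning higher priority to event topics with higher utility per unit bandwidth and marking the lowest-value topics for dropping once an assumed capacity budget is exhausted. Topic names, utilities and the capacity figure are illustrative assumptions, not PrioDeX's algorithm.

        # Greedy sketch: rank topics by utility per kbps, assign priorities in that
        # order, and drop whatever no longer fits the assumed capacity budget.

        topics = [                          # (name, utility, bandwidth kbps) - hypothetical
            ("smoke_alarm",    10.0,   5.0),
            ("occupancy",       6.0,  20.0),
            ("hvac_telemetry",  2.0,  50.0),
            ("camera_feed",     4.0, 400.0),
        ]
        capacity_kbps = 100.0

        ranked = sorted(topics, key=lambda t: t[1] / t[2], reverse=True)
        remaining = capacity_kbps
        for priority, (name, utility, bw) in enumerate(ranked):
            keep = bw <= remaining
            remaining -= bw if keep else 0.0
            print(f"priority {priority}: {name:<15} (utility {utility}) -> {'deliver' if keep else 'drop'}")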

    Radio Resource Management for New Application Scenarios in 5G: Optimization and Deep Learning

    The fifth-generation (5G) New Radio (NR) systems are expected to support a wide range of emerging applications with diverse Quality-of-Service (QoS) requirements. New application scenarios in 5G NR include enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable low-latency communications (URLLC). New wireless architectures, such as full-dimension (FD) massive multiple-input multiple-output (MIMO) and mobile edge computing (MEC) systems, and new coding schemes, such as short block-length channel coding, are envisioned as enablers of the QoS requirements of 5G NR applications. Resource management in these new wireless architectures is crucial for guaranteeing the QoS requirements of 5G NR systems. The traditional optimization problems, such as subcarrier allocation and user association, are usually non-convex or Non-deterministic Polynomial-time (NP)-hard, and finding the optimal solution is time-consuming and computationally expensive, especially in a large-scale network. One approach to these problems is to design a low-complexity algorithm with near-optimal performance. In cases where low-complexity algorithms are hard to obtain, deep learning can be used as an accurate approximator that maps environment parameters, such as the channel state information and traffic state, to the optimal solutions. In this thesis, we design low-complexity optimization algorithms and deep learning frameworks in different architectures of 5G NR to solve optimization problems subject to QoS requirements. First, we propose a low-complexity algorithm for a joint cooperative beamforming and user association problem for eMBB in 5G NR to maximize the network capacity. Next, we propose a deep learning (DL) framework to optimize user association, resource allocation, and offloading probabilities for delay-tolerant services and URLLC in 5G NR. Finally, we address the impact of time-varying traffic and network conditions on resource management in 5G NR.
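
    The thesis abstract describes using deep learning as an approximator that maps environment parameters (e.g. channel state) to near-optimal resource-management decisions; below is a minimal "learning to optimize" sketch of that idea, in which a small neural network is trained on (channel state, allocation) pairs that would normally be produced offline by an optimization solver. The stand-in data-generation rule, network size and scikit-learn usage are assumptions for illustration, not the thesis's framework.

        # "Learning to optimize" sketch: train a small MLP to imitate an offline
        # optimizer that maps channel gains to a power allocation (illustrative).
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n_users, n_samples = 4, 2000

        # Hypothetical training data: channel gains -> "optimal" allocation; a real
        # pipeline would label the data with a solver instead of this stand-in rule.
        gains = rng.exponential(1.0, size=(n_samples, n_users))
        alloc = gains / gains.sum(axis=1, keepdims=True)

        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
        model.fit(gains, alloc)

        test_gains = rng.exponential(1.0, size=(1, n_users))
        print("predicted allocation:", np.round(model.predict(test_gains), 3))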

    Routing and Scheduling Algorithms in Resource-Limited Wireless Multi-Hop Networks

    The recent advances in the area of wireless networking present novel opportunities for network operators to expand their services to infrastructure-less wireless systems. Such networks, often referred to as ad-hoc, multi-hop, or peer-to-peer networks, require architectures which do not necessarily follow the cellular paradigm. They consist of entirely wireless nodes, fixed and/or mobile, that require multiple hops (and hence relaying by intermediate nodes) to transmit their messages to the desired destinations. The distinguishing features of such all-wireless network architectures give rise to new trade-offs between traditional concerns in wireless communications (such as spectral efficiency and energy conservation) and the notions of routing, scheduling and resource allocation. The purpose of this work is to identify and study some of these novel issues, propose solutions in the context of network control, and evaluate the usual network performance measures as functions of the new trade-offs. To these ends, we first address the problem of routing connection-oriented traffic with energy efficiency in all-wireless multi-hop networks. We take advantage of the flexibility of wireless nodes to transmit at different power levels and define a framework for formulating the problem of session routing from the perspective of energy expenditure. A set of heuristics is developed for determining end-to-end unicast paths with sufficient bandwidth and transceiver resources, in which nodes use local information to select their transmission power and bandwidth allocation. We propose a set of metrics that associate each link transmission with a cost, and consider both the cases of plentiful and limited bandwidth resources, the latter jointly with a set of channel allocation algorithms. Performance is measured by call blocking probability and average consumed energy; a detailed simulation model that incorporates all the components of our algorithms has been developed and used for performance evaluation of a variety of networks. In the sequel, we propose a "blueprint" for approaching the problem of link bandwidth management in conjunction with routing for ad-hoc wireless networks carrying packet-switched traffic. We discuss the dependencies between the routing, access control and scheduling functions and propose an adaptive mechanism for solving the capacity allocation (at both the node level and the flow level) and route assignment problems that manages delays due to congestion at nodes and packet loss due to error-prone wireless links, to provide improved end-to-end delay/throughput. The capacity allocations to the nodes and flows and the route assignments are iterated periodically, and the adaptability of the proposed approach allows the network to respond to random channel error bursts and to congestion arising from bursty and new flows.
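
    The abstract mentions associating each link transmission with an energy cost and routing sessions over low-cost paths; the sketch below illustrates that general idea with Dijkstra's algorithm over a link cost modelled, for illustration only, as distance raised to a path-loss exponent. The toy topology, node positions, exponent and function names are assumptions, not the thesis's metrics or heuristics.

        # Minimum-energy routing sketch: link cost = distance ** alpha (a common
        # radio propagation assumption), shortest path found with Dijkstra.
        import heapq, math

        positions = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}   # toy nodes
        links = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D")]
        alpha = 2.0                                   # assumed path-loss exponent

        def energy(u, v):
            return math.dist(positions[u], positions[v]) ** alpha

        graph = {n: [] for n in positions}
        for u, v in links:
            graph[u].append((v, energy(u, v)))
            graph[v].append((u, energy(u, v)))

        def min_energy_path(src, dst):
            """Dijkstra over per-link energy costs; returns (total energy, path)."""
            heap, settled = [(0.0, src, [src])], set()
            while heap:
                cost, node, path = heapq.heappop(heap)
                if node == dst:
                    return cost, path
                if node in settled:
                    continue
                settled.add(node)
                for nxt, w in graph[node]:
                    if nxt not in settled:
                        heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
            return math.inf, []

        print(min_energy_path("A", "D"))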

    Modelling and Delay Analysis of Intermittently Connected Roadside Communication Networks

    During the past decade, consumers all over the world have shown an increasing interest in vehicular technology. The world’s leading vehicle manufacturers have been, and still are, engaged in continuous competition to present today’s sophisticated drivers with vehicles that gratify their demands. This has led to outstanding advancement and development of the vehicle manufacturing industry and has contributed primarily to endowing the twenty-first-century vehicle with an appealing and intelligent personality. In particular, the marriage of information technology to the transport infrastructure gave birth to a novel communication paradigm known as Vehicular Networking. More precisely, being equipped with computerized modules and wireless communication devices, the majority of today’s vehicles qualify as typical mobile network nodes that are able to communicate with each other. In addition, these vehicles can also communicate with other wireless units such as routers, access points, base stations and data posts that are deployed at fixed locations along roadways. These fixed units are referred to as Stationary Roadside Units (SRUs). As a result, ephemeral and self-organized networks can be formed. Such networks are known as Vehicular Networks and constitute the core of the broad Intelligent Transportation System (ITS) that embraces a wide variety of applications including, but not limited to: traffic management, passenger and road safety, environment monitoring and road surveillance, hot-spot guidance, on-the-fly Internet access, remote region connectivity, information sharing and dissemination, and peer-to-peer services. This thesis presents an in-depth investigation of the possibility of exploiting mobile vehicles to establish connectivity between isolated SRUs. A network of intercommunicating SRUs is referred to as an Intermittently Connected Roadside Communication Network (ICRCN). While inter-vehicular communication as well as vehicle-to-SRU communication have been widely studied in the open literature, inter-SRU communication has received very little attention. In this thesis, we focus not only on establishing inter-SRU connectivity through the transport infrastructure but also on achieving delay-minimal data delivery from a source SRU to a destination SRU. This delivery process is highly dependent on the vehicular traffic behaviour, and more precisely on the arrival times of vehicles at the source SRU as well as these vehicles’ speeds. Vehicle arrival times and speeds are, in turn, highly random and are not available a priori. Under such conditions, realizing the delay-minimal data delivery objective becomes remarkably challenging. This is especially true since, upon the arrival of vehicles, the source SRU acts on the spur of the moment and evaluates the suitability of the arriving vehicles. Data bundles are only released to those vehicles that contribute the most to the minimization of the average bundle end-to-end delivery delays. Throughout this thesis, several schemes are developed for this purpose. These schemes differ in their vehicle selection criterion as well as the adopted bundle release mechanism. Queueing models are developed to capture and describe the source SRU’s behaviour, the contents of its buffer, and the experienced average bundle queueing delay under each of these schemes. In addition, several mathematical frameworks are established for the purpose of evaluating the average bundle transit delay. Extensive simulations are conducted to validate the developed models and mathematical analyses.
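
    The queueing models themselves are developed in the thesis body; the toy discrete-event simulation below only illustrates the modelling problem, assuming Poisson bundle arrivals at the source SRU and Poisson vehicle arrivals, with each vehicle carrying away one buffered bundle. The rates and the one-bundle-per-vehicle rule are illustrative assumptions, not the thesis’s models.

        # Toy simulation of a source SRU buffer: bundles queue up and are released,
        # FIFO, to vehicles arriving at random (illustrative only).
        import random

        random.seed(1)
        bundle_rate, vehicle_rate = 1.0, 1.5      # arrivals per minute (assumed)
        horizon = 10_000.0                        # simulated minutes

        t, buffered, delays = 0.0, [], []
        next_bundle = random.expovariate(bundle_rate)
        next_vehicle = random.expovariate(vehicle_rate)
        while t < horizon:
            if next_bundle < next_vehicle:
                t = next_bundle
                buffered.append(t)                            # bundle joins the buffer
                next_bundle = t + random.expovariate(bundle_rate)
            else:
                t = next_vehicle
                if buffered:                                  # vehicle takes one bundle
                    delays.append(t - buffered.pop(0))
                next_vehicle = t + random.expovariate(vehicle_rate)

        print(f"average bundle queueing delay: {sum(delays) / len(delays):.2f} min")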

    Dimensioning and Workload Distribution Algorithms for Lambda Grids

    Grids consist of a collection of computing and storage elements that may be geographically dispersed but whose combined capacity one wishes to exploit. To that end, these elements must be interconnected by a network. Since many scientific applications make use of a Grid, and these applications typically process large amounts of data, it is necessary to provide a network that can transport such large data streams reliably. Optical transport networks are ideally suited to this task. Grids that use such a network are called lambda Grids. This thesis describes a framework in which the design and dimensioning of optical networks for lambda Grids can be expressed. It also discusses how workload can be distributed over a Grid once it has been dimensioned. A large part of the results was obtained through simulation, using an in-house Grid simulation package that focuses specifically on network and Grid elements. The design of this simulator, and the accompanying implementation choices, are therefore discussed in detail in this work.