33 research outputs found

    Delay Optimal Server Assignment to Symmetric Parallel Queues with Random Connectivities

    In this paper, we investigate the problem of assigning K identical servers to a set of N parallel queues in a time-slotted queueing system. The connectivity of each queue to each server changes randomly over time; each server can serve at most one queue, and each queue can be served by at most one server, per time slot. Such queueing systems have been widely applied to model scheduling (or resource allocation) problems in wireless networks. It has previously been proven that Maximum Weighted Matching (MWM) is a throughput-optimal server assignment policy for such queueing systems. In this paper, we prove that for a symmetric system with i.i.d. Bernoulli packet arrivals and connectivities, MWM minimizes, in the stochastic ordering sense, a broad range of cost functions of the queue lengths, including total queue occupancy (or, equivalently, average queueing delay). Comment: 6 pages, 4 figures, Proc. IEEE CDC-ECC 201
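    The MWM policy referenced above can be illustrated with a minimal simulation sketch: in each time slot, servers are matched to connected queues so that the total length of the served queues is maximized. This is a generic illustration only; the values of N and K, the Bernoulli probabilities, and the brute-force matching are illustrative assumptions, not the paper's setup or implementation.

```python
# Minimal simulation sketch of an MWM server-assignment policy: in each slot,
# each server is matched to a connected queue (or left idle) so that the total
# queue length of the served queues is maximized. All parameters are assumed.
import random
from itertools import product

N, K = 4, 2                        # parallel queues and identical servers (assumed)
P_ARRIVAL, P_CONN = 0.3, 0.7       # Bernoulli arrival / connectivity probabilities (assumed)

def mwm_assignment(queues, conn):
    """Brute-force maximum weighted matching: pick, for each server, a connected
    queue or None, with each queue served at most once, maximizing served length."""
    best, best_score = (None,) * K, 0
    for assign in product([None] + list(range(N)), repeat=K):
        served = [q for q in assign if q is not None]
        if len(served) != len(set(served)):
            continue                                   # a queue served twice
        if any(q is not None and not conn[s][q] for s, q in enumerate(assign)):
            continue                                   # server not connected to that queue
        score = sum(queues[q] for q in served)
        if score > best_score:
            best, best_score = assign, score
    return best

queues = [0] * N
for slot in range(10):
    conn = [[random.random() < P_CONN for _ in range(N)] for _ in range(K)]
    assign = mwm_assignment(queues, conn)
    for s, q in enumerate(assign):
        if q is not None and queues[q] > 0:
            queues[q] -= 1                             # serve one packet
    queues = [length + (random.random() < P_ARRIVAL) for length in queues]
    print(f"slot {slot}: queues = {queues}")
```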

    Radio Resource Management for New Application Scenarios in 5G: Optimization and Deep Learning

    The fifth-generation (5G) New Radio (NR) systems are expected to support a wide range of emerging applications with diverse Quality-of-Service (QoS) requirements. New application scenarios in 5G NR include enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable low-latency communications (URLLC). New wireless architectures, such as full-dimension (FD) massive multiple-input multiple-output (MIMO) and mobile edge computing (MEC) systems, and new coding schemes, such as short block-length channel coding, are envisioned as enablers of the QoS requirements of 5G NR applications. Resource management in these new wireless architectures is crucial for guaranteeing the QoS requirements of 5G NR systems. The resulting optimization problems, such as subcarrier allocation and user association, are usually non-convex or Non-deterministic Polynomial-time (NP)-hard, and finding the optimal solution is time-consuming and computationally expensive, especially in a large-scale network. One approach to these problems is to design a low-complexity algorithm with near-optimal performance. In cases where low-complexity algorithms are hard to obtain, deep learning can be used as an accurate approximator that maps environment parameters, such as channel state information and traffic state, to the optimal solutions. In this thesis, we design low-complexity optimization algorithms and deep learning frameworks for different 5G NR architectures to solve optimization problems subject to QoS requirements. First, we propose a low-complexity algorithm for a joint cooperative beamforming and user association problem for eMBB in 5G NR to maximize network capacity. Next, we propose a deep learning (DL) framework to optimize user association, resource allocation, and offloading probabilities for delay-tolerant services and URLLC in 5G NR. Finally, we address the impact of time-varying traffic and network conditions on resource management in 5G NR.
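    The "deep learning as approximator" idea mentioned above can be sketched generically: a small network maps channel state information (CSI) to per-user resource shares, trained to imitate labels that an offline optimizer would produce. This is not the thesis's framework; the dimensions, layers, and the toy "optimal" allocation below are illustrative assumptions.

```python
# Generic sketch: a neural network approximates the mapping from CSI to a
# resource allocation, so that cheap inference replaces expensive optimization.
import torch
import torch.nn as nn

n_users = 8
csi_dim = n_users                              # one channel gain per user (toy assumption)

model = nn.Sequential(
    nn.Linear(csi_dim, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n_users), nn.Softmax(dim=-1),  # outputs are fractional shares summing to 1
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):                           # synthetic stand-in for offline-optimizer labels
    csi = torch.rand(32, csi_dim)              # random channel gains
    target = csi / csi.sum(dim=1, keepdim=True)  # toy "optimal" allocation (proportional to gain)
    loss = loss_fn(model(csi), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():                          # at run time, a forward pass replaces the solver
    alloc = model(torch.rand(1, csi_dim))
    print(alloc)
```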

    On the Intersection of Communication and Machine Learning

    The intersection of communication and machine learning is attracting increasing interest from both communities. On the one hand, the development of modern communication systems brings large amounts of data and high performance requirements, which challenges the classic analytical-derivation-based study philosophy and encourages researchers to explore data-driven methods, such as machine learning, to solve problems of high complexity and large scale. On the other hand, the use of distributed machine learning introduces communication cost as one of the basic considerations in the design of machine learning algorithms and systems. In this thesis, we first explore the application of machine learning to one of the classic problems in wireless networks, resource allocation, for heterogeneous millimeter wave networks in highly dynamic environments. We address the practical concerns by providing an efficient online and distributed framework. In the second part, sampling-based, communication-efficient distributed learning algorithms are proposed. We exploit the trade-off between local computation and total communication cost and propose algorithms with good theoretical bounds. In more detail, this thesis makes the following contributions. We introduce a reinforcement learning framework to solve resource allocation problems in heterogeneous millimeter wave networks; the large state/action space is decomposed according to the topology of the network and handled by an efficient distributed message passing algorithm, and we further speed up inference with an online updating procedure. We propose a distributed coreset-based boosting framework: an efficient coreset construction algorithm is built on the prior knowledge provided by clustering, the coreset is then integrated with boosting with an improved convergence rate, and the framework is extended to the distributed setting, where the communication cost is reduced by the good approximation provided by the coreset. Finally, we propose a selective sampling framework to construct a subset of samples that effectively represents the model space; based on the prior distribution of the model space, or on a large number of samples from it, we derive a computationally efficient method to construct such a subset by minimizing the error of classifying a classifier.
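    As background for the clustering-based coreset idea mentioned above, the following is a generic sketch of one common construction: points are importance-sampled with probability increasing in their distance to the nearest k-means center, then reweighted so that weighted sums remain approximately unbiased. This illustrates the general idea only; it is not the thesis's construction, and all parameter values are assumed.

```python
# Generic clustering-based coreset sketch (importance sampling + reweighting).
import numpy as np
from sklearn.cluster import KMeans

def coreset(X, k=10, m=200, seed=0):
    rng = np.random.default_rng(seed)
    centers = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).cluster_centers_
    dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2).min(axis=1)
    p = dist + dist.mean()                 # smooth so no point gets zero probability
    p /= p.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=p)
    weights = 1.0 / (m * p[idx])           # inverse-probability weights
    return X[idx], weights

X = np.random.default_rng(1).normal(size=(5000, 8))   # synthetic data set
X_core, w = coreset(X)
print(X_core.shape, w.sum())               # weights roughly sum to len(X)
```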

    Java message passing interface.

    by Wan Lai Man. Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 76-80). Abstract also in Chinese. Table of contents:
    Chapter 1 Introduction (p.1): 1.1 Background (p.1); 1.2 Objectives (p.3); 1.3 Contributions (p.4); 1.4 Overview (p.4)
    Chapter 2 Literature Review (p.6): 2.1 Message Passing Interface (p.6); 2.1.1 Point-to-Point Communication (p.7); 2.1.2 Persistent Communication Request (p.8); 2.1.3 Collective Communication (p.8); 2.1.4 Derived Datatype (p.9); 2.2 Communications in Java (p.10); 2.2.1 Object Serialization (p.10); 2.2.2 Remote Method Invocation (p.11); 2.3 Performances Issues in Java (p.11); 2.3.1 Byte-code Interpreter (p.11); 2.3.2 Just-in-time Compiler (p.12); 2.3.3 HotSpot (p.13); 2.4 Parallel Computing in Java (p.14); 2.4.1 JavaMPI (p.15); 2.4.2 Bayanihan (p.15); 2.4.3 JPVM (p.15)
    Chapter 3 Infrastructure (p.17): 3.1 Layered Model (p.17); 3.2 Java Parallel Environment (p.19); 3.2.1 Job Coordinator (p.20); 3.2.2 HostApplet (p.20); 3.2.3 Formation of Java Parallel Environment (p.21); 3.2.4 Spawning Processes (p.24); 3.2.5 Message-passing Mechanism (p.28); 3.3 Application Programming Interface (p.28); 3.3.1 Message Routing (p.29); 3.3.2 Language Binding for MPI in Java (p.31)
    Chapter 4 Programming in JMPI (p.35): 4.1 JMPI Package (p.35); 4.2 Application Startup Procedure (p.37); 4.2.1 MPI (p.38); 4.2.2 JMPI (p.38); 4.3 Example (p.39)
    Chapter 5 Processes Management (p.42): 5.1 Background (p.42); 5.2 Scheduler Model (p.43); 5.3 Load Estimation (p.45); 5.3.1 Cost Ratios (p.47); 5.4 Task Distribution (p.49)
    Chapter 6 Performance Evaluation (p.51): 6.1 Testing Environment (p.51); 6.2 Latency from Java (p.52); 6.2.1 Benchmarking (p.52); 6.2.2 Experimental Results in Computation Costs (p.52); 6.2.3 Experimental Results in Communication Costs (p.55); 6.3 Latency from JMPI (p.56); 6.3.1 Benchmarking (p.56); 6.3.2 Experimental Results (p.58); 6.4 Application Granularity (p.62); 6.5 Scheduling Enable (p.64)
    Chapter 7 Conclusion (p.66): 7.1 Summary of the thesis (p.66); 7.2 Future work (p.67)
    Appendix A Performance Metrics and Benchmark (p.69): A.1 Model and Metrics (p.69); A.1.1 Measurement Model (p.69); A.1.2 Performance Metrics (p.70); A.1.3 Communication Parameters (p.72); A.2 Benchmarking (p.73); A.2.1 Ping (p.73); A.2.2 PingPong (p.74); A.2.3 Collective (p.74)
    Bibliography (p.7)

    Proceedings of the Second International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2015) Krakow, Poland

    Proceedings of: Second International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2015). Krakow (Poland), September 10-11, 2015

    Design issues in quality of service routing

    The range of applications and services which can be successfully deployed in packet-switched networks such as the Internet is limited when the network does not provide Quality of Service (QoS). This is the typical situation in today's Internet. A key aspect of providing QoS support is the requirement for an optimised and intelligent mapping of customer traffic flows onto a physical network topology. The problem of selecting such paths is the task of QoS routing. QoS routing algorithms are intrinsically complex and need careful study before being implemented in real networks. Our aim is to address some of the challenges present in the deployment of QoS routing methods. This thesis considers a number of practical limitations of existing QoS routing algorithms and presents solutions to the problems identified. Many QoS routing algorithms are inherently unstable and induce traffic fluctuations in the network. We describe two new routing algorithms which address this problem. The first method, ALCFRA (Adaptive Link Cost Function Routing Algorithm), can be used in networks with sparse connectivity, while the second algorithm, CAR (Connectivity Aware Routing), is designed to work well in other network topologies. We also describe how to ensure co-operative interaction of the routing algorithms across multiple domains when hierarchical routing is used, and we present a solution to the problem of providing QoS support in a network where not all nodes are QoS-aware. Our solutions are supported by extensive simulations over a wide range of network topologies, and their performance is compared to that of existing algorithms. It is shown that our solutions advance the state of the art in QoS routing and facilitate the deployment of QoS support in tomorrow's Internet.
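    To make the general idea of cost-based QoS routing concrete, the following is a minimal sketch in which each link gets a cost that grows as its residual bandwidth shrinks and the cheapest path is found with Dijkstra's algorithm. The cost function and topology are illustrative assumptions; this is neither ALCFRA nor CAR.

```python
# Generic QoS-aware path selection sketch: load-dependent link costs + Dijkstra.
import heapq

def link_cost(capacity, load):
    residual = max(capacity - load, 1e-9)
    return 1.0 / residual                      # heavily loaded links become expensive

def cheapest_path(adj, src, dst):
    """adj: {node: [(neighbor, capacity, load), ...]}; assumes dst is reachable."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, cap, load in adj.get(u, []):
            nd = d + link_cost(cap, load)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:                          # reconstruct path backwards
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

adj = {                                         # toy topology: (neighbor, capacity, load)
    "A": [("B", 100, 90), ("C", 100, 10)],
    "B": [("D", 100, 10)],
    "C": [("D", 100, 20)],
}
print(cheapest_path(adj, "A", "D"))             # prefers the lightly loaded A-C-D path
```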

    Random Neural Networks and Optimisation

    In this thesis we introduce new models and learning algorithms for the Random Neural Network (RNN), and we develop RNN-based and other approaches for the solution of emergency management optimisation problems. With respect to RNN developments, two novel supervised learning algorithms are proposed. The first is a gradient descent algorithm for an RNN extension model that we have introduced, the RNN with synchronised interactions (RNNSI), which was inspired by the synchronised firing activity observed in brain neural circuits. The second algorithm is based on modelling the signal-flow equations of the RNN as a nonnegative least squares (NNLS) problem; the NNLS problem is solved using a limited-memory quasi-Newton algorithm specifically designed for the RNN case. Regarding emergency management optimisation, we examine combinatorial assignment problems that require fast, distributed, and close-to-optimal solutions under information uncertainty. We consider three different problems with these characteristics: the assignment of emergency units to incidents with injured civilians (AEUI), the assignment of assets to tasks under execution uncertainty (ATAU), and the deployment of a robotic network to establish communication with trapped civilians (DRNCTC). AEUI is solved by training an RNN tool with instances of the optimisation problem and then using the trained RNN for decision making; training is achieved using the developed learning algorithms. For the solution of the ATAU problem, we introduce two different approaches: the first is based on mapping parameters of the optimisation problem to RNN parameters, and the second on solving a sequence of minimum cost flow problems on appropriately constructed networks with estimated arc costs. For the exact solution of the DRNCTC problem, we develop a mixed-integer linear programming formulation based on network flows. Finally, we design and implement distributed heuristic algorithms for the deployment of robots when the civilian locations are known or uncertain.
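    The NNLS formulation mentioned above can be illustrated with an off-the-shelf stand-in: minimizing ||Ax - b||^2 subject to x >= 0 with a bound-constrained limited-memory quasi-Newton method (L-BFGS-B via SciPy). This is a generic example on synthetic data, not the specialised RNN solver developed in the thesis.

```python
# Generic NNLS via a limited-memory quasi-Newton method with nonnegativity bounds.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 50))
b = A @ np.abs(rng.normal(size=50)) + 0.01 * rng.normal(size=200)   # synthetic problem

def objective(x):
    r = A @ x - b
    return 0.5 * r @ r, A.T @ r            # objective value and its gradient

res = minimize(objective, x0=np.zeros(A.shape[1]), jac=True,
               method="L-BFGS-B", bounds=[(0, None)] * A.shape[1])
print("residual:", np.linalg.norm(A @ res.x - b), "min x:", res.x.min())
```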

    Predictable execution of scientific workflows using advance resource reservations

    Scientific workflows are long-running and data intensive, and may encompass operations provided by multiple physically distributed service providers. The traditional approach to executing such workflows is to employ a single workflow engine which orchestrates the entire execution of a workflow instance while being mostly agnostic about the state of the infrastructure it operates in (e.g., host or network load). Such centralized best-effort execution may therefore use resources inefficiently, for instance by repeatedly shipping large data volumes over slow network connections, and cannot provide Quality of Service (QoS) guarantees. In particular, independent parallel executions might overload some resources, resulting in a performance degradation that affects all involved parties. In order to provide predictable behavior, we propose an approach where resources are managed proactively (i.e., reserved before being used), and where workflow execution is handled by multiple distributed and cooperating workflow engines. This makes it possible to use the existing resources efficiently (for instance, using the most suitable provider for operations, and considering network locality for large data transfers) without overloading them, while at the same time providing predictability, in terms of resource usage, execution timing, and cost, for both service providers and customers. The contributions of this thesis are as follows. First, we present a system model which defines the concepts and operations required to formally represent a system where service providers are aware of the resource requirements of the operations they make available, and where (planned) workflow executions are adapted to the state of the infrastructure. Second, we describe our prototypical implementation of such a system, where a workflow execution comprises two main phases. In the planning phase, the resources to reserve for an upcoming workflow execution are determined; this is realized using a Genetic Algorithm. We present conceptual and implementation details of the chromosome layout and the fitness functions employed to plan executions according to one or more user-defined optimization goals. During the execution phase, the system must ensure that the actual resource usage abides by the reservations made, and we present details on how such enforcement can be performed for various resource types. Third, we describe how these parts work together, and how the entire prototype system is deployed on an infrastructure based on WSDL/SOAP Web Services, UDDI Registries, and Glassfish Application Servers. Finally, we discuss the results of various evaluations, encompassing both planning and runtime enforcement.
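    The Genetic Algorithm planning step described above can be sketched generically: a chromosome assigns each workflow task to one candidate provider, and the fitness combines assumed execution cost and time. The cost and time tables, weights, and operators below are hypothetical illustrations, not the thesis's chromosome layout or fitness functions.

```python
# Illustrative GA sketch for reservation planning: chromosome = provider index per task.
import random

TASKS, PROVIDERS = 6, 3
COST = [[random.uniform(1, 5) for _ in range(PROVIDERS)] for _ in range(TASKS)]
TIME = [[random.uniform(1, 5) for _ in range(PROVIDERS)] for _ in range(TASKS)]

def fitness(chrom, w_cost=0.5, w_time=0.5):
    cost = sum(COST[t][p] for t, p in enumerate(chrom))
    time = sum(TIME[t][p] for t, p in enumerate(chrom))   # serial tasks assumed
    return w_cost * cost + w_time * time                  # lower is better

def evolve(pop_size=30, generations=50, mut_rate=0.1):
    pop = [[random.randrange(PROVIDERS) for _ in range(TASKS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                    # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, TASKS)
            child = a[:cut] + b[cut:]                     # one-point crossover
            if random.random() < mut_rate:
                child[random.randrange(TASKS)] = random.randrange(PROVIDERS)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print("best assignment:", best, "fitness:", round(fitness(best), 2))
```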