NUM-Based Rate Allocation for Streaming Traffic via Sequential Convex Programming
In recent years, there has been increasing demand for ubiquitous streaming-like
applications in data networks. In this paper, we concentrate on NUM-based
rate allocation for streaming applications with so-called S-curve utility
functions. Because such utility functions are non-concave, the underlying NUM
problem is non-convex, and dual methods may fail to solve it.
To tackle the non-convex problem, we first apply an elementary transformation
that makes the network utility concave; however, this introduces reverse-convex
constraints that keep the problem non-convex. To handle the transformed
NUM, we leverage the Sequential Convex Programming (SCP) approach, which
approximates the non-convex problem by a series of convex ones. Based on this
approach, we propose a distributed rate allocation algorithm and demonstrate
that, under mild conditions, it converges to a locally optimal solution of the
original NUM.
Numerical results validate the effectiveness of the proposed rate allocation
algorithm in terms of its tractable convergence.
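The SCP idea, replacing the non-convex NUM with a sequence of convex subproblems solved around the current iterate, can be illustrated with a minimal sketch. This toy is not the paper's algorithm: all parameters are hypothetical, and a proximal linearised surrogate (each round maximises the linearised utility minus a quadratic penalty, which reduces to a projection onto the capacity region) stands in for the paper's transformation. It allocates a shared capacity C among users with S-curve utilities:

```python
import math

def sigmoid_utility(x, a=1.0, b=5.0):
    """S-curve utility: convex below the inflection point b, concave above."""
    return 1.0 / (1.0 + math.exp(-a * (x - b)))

def utility_grad(x, a=1.0, b=5.0):
    u = sigmoid_utility(x, a, b)
    return a * u * (1.0 - u)

def project_capacity(y, C):
    """Euclidean projection onto the capacity region {x >= 0, sum(x) <= C}."""
    y = [max(v, 0.0) for v in y]
    if sum(y) <= C:
        return y
    lo, hi = 0.0, max(y)  # bisect on the projection threshold
    for _ in range(100):
        t = 0.5 * (lo + hi)
        if sum(max(v - t, 0.0) for v in y) > C:
            lo = t
        else:
            hi = t
    return [max(v - hi, 0.0) for v in y]

def scp_rate_allocation(n_users, C, rho=0.1, iters=500):
    """Each round solves a convex surrogate: maximise the linearised utility
    minus a proximal term (rho/2)*||x - x_k||^2, which reduces to a
    projection step onto the capacity region."""
    x = [C / n_users] * n_users
    for _ in range(iters):
        grad = [utility_grad(xi) for xi in x]
        x = project_capacity([xi + g / rho for xi, g in zip(x, grad)], C)
    return x

rates = scp_rate_allocation(n_users=3, C=12.0)
```

As in the paper's result, such schemes can only be expected to reach a locally optimal point; the surrogate here is chosen for brevity, not fidelity.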
Non-convex resource allocation in communication networks
The continuously growing number of applications competing for resources
in current communication networks highlights the necessity for efficient resource allocation mechanisms to maximize user satisfaction. Optimization
Theory can provide the necessary tools to develop such mechanisms that will
allocate network resources optimally and fairly among users. However, the
resource allocation problem in current networks has characteristics that turn
the respective optimization problem into a non-convex one. First, current
networks very often consist of a number of wireless links, whose capacity is
not constant but follows the Shannon capacity formula, a non-convex
function. Second, the majority of the traffic in current networks is generated
by multimedia applications, whose utilities are non-concave functions of rate. Third,
current resource allocation methods follow the (bandwidth) proportional
fairness policy, which when applied to networks shared by both concave
and non-concave utilities leads to unfair resource allocations. These characteristics make current convex optimization frameworks inefficient in several
aspects. This work aims to develop a non-convex optimization framework
that will be able to allocate resources efficiently for non-convex resource allocation formulations. Towards this goal, a necessary and sufficient condition
for the convergence of any primal-dual optimization algorithm to the optimal solution is proven. The wide applicability of this condition makes this a fundamental contribution for Optimization Theory in general. A number
of optimization formulations are proposed, cases where this condition is not
met are analysed and efficient alternative heuristics are provided to handle
these cases. Furthermore, a novel multi-sigmoidal utility shape is proposed
to model user satisfaction for multi-tiered multimedia applications more accurately. The advantages of such non-convex utilities and their effect in the
optimization process are thoroughly examined. Alternative allocation policies are also investigated with respect to their ability to allocate resources
fairly and deal with the non-convexity of the resource allocation problem. Specifically, the advantages of using Utility Proportional Fairness as an allocation policy are examined with respect to the development of distributed
algorithms, their convergence to the optimal solution and their ability to
adapt to the Quality of Service requirements of each application.
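The non-concavity that motivates this work can be checked numerically. A minimal sketch (the parameters a and b are illustrative, not from the thesis) applies the midpoint concavity test to a sigmoidal utility of rate:

```python
import math

def sigmoid(x, a=2.0, b=3.0):
    """Sigmoidal (S-shaped) utility typical of multimedia traffic."""
    return 1.0 / (1.0 + math.exp(-a * (x - b)))

# Midpoint concavity test: a concave f satisfies f((x+y)/2) >= (f(x)+f(y))/2.
# Below the inflection point b the sigmoid violates this, so it is
# non-concave, and NUM problems built on it are non-convex.
x, y = 0.0, 2.0
mid = sigmoid((x + y) / 2)
avg = (sigmoid(x) + sigmoid(y)) / 2
non_concave = mid < avg
```

The same violation occurs for any multi-sigmoidal shape below each of its inflection points, which is why standard convex frameworks break down here.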
Layering as Optimization Decomposition: Questions and Answers
Network protocols in layered architectures have historically been obtained on an ad-hoc basis, and many of the recent cross-layer designs are conducted through piecemeal approaches. Network protocols may instead be holistically analyzed and systematically designed as distributed solutions to some global optimization problems in the form of generalized Network Utility Maximization (NUM), providing insight on what they optimize and on the structures of network protocol stacks. In the form of 10 Questions and Answers, this paper presents a short survey of the recent efforts towards a systematic understanding of "layering" as "optimization decomposition". The overall communication network is modeled by a generalized NUM problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the subproblems. Furthermore, there are many alternative decompositions, each leading to a different layering architecture. Industry adoption of this unifying framework has also started. Here we summarize the current status of horizontal decomposition into distributed computation and vertical decomposition into functional modules such as congestion control, routing, scheduling, random access, power control, and coding. We also discuss under-explored future research directions in this area. More importantly than proposing any particular cross-layer design, this framework works towards a mathematical foundation of network architectures and the design process of modularization.
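The vertical decomposition described above can be sketched in a few lines. Assuming a toy network of two log-utility sources sharing one link (all values illustrative), dual decomposition separates a per-source rate subproblem from a link-price update, the classic congestion-control interpretation:

```python
def num_dual_decomposition(weights, capacity, step=0.05, iters=4000):
    """Dual decomposition of a toy NUM: sources sharing one link of the given
    capacity, with utilities U_s(x) = w_s * log(x)."""
    lam = 0.5  # link price (dual variable)
    for _ in range(iters):
        # source subproblem: x_s = argmax w_s*log(x) - lam*x  =>  x_s = w_s/lam
        x = [w / lam for w in weights]
        # link subproblem: price follows excess demand (subgradient step)
        lam = max(1e-6, lam + step * (sum(x) - capacity))
    return x, lam

x, lam = num_dual_decomposition([1.0, 2.0], capacity=3.0)
```

Each alternative way of splitting the same Lagrangian yields a different message-passing structure between "layers", which is exactly the survey's point about multiple decompositions leading to multiple architectures.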
Joint rate control and scheduling for providing bounded delay with high efficiency in multihop wireless networks
This thesis considers the problem of supporting traffic with elastic bandwidth requirements and hard end-to-end delay constraints in multi-hop wireless networks, with a focus on source transmission rates and link data rates as the key resource allocation decisions. Specifically, the research objective is to develop a source rate control and scheduling strategy that guarantees bounded average end-to-end queueing delay and maximises the overall utility of all incoming traffic, using the network utility maximisation framework. Approaches based on network utility maximisation to support delay-sensitive traffic have predominantly relied on either reducing link utilisation or approximating links as M/D/1 queues. Both approaches lead to unpredictable transient behaviour of packet delays and inefficient link utilisation under optimal resource allocation. In contrast, this thesis proposes an approach in which, instead of hard delay constraints based on inaccurate M/D/1 delay estimates, traffic end-to-end delay requirements are guaranteed by proper forms of concave and increasing utility functions of the transmission rates. Specifically, an alternative formulation is presented in which the delay constraint is omitted and the sources’ utility functions are multiplied by a weight factor. The alternative optimisation problem is solved by a distributed scheduling algorithm incorporating a duality-based rate control algorithm at its inner layer, where optimal link prices correlate with average queueing delays. The proposed approach is then realised by a scheduling algorithm that runs jointly with an integral controller, whereby each source regulates the queueing delay on its paths at the desired level, using its utility weight coefficient as the control variable. Since the proposed algorithms are based on solving the alternative concave optimisation problem, they are simple, distributed, and lead to maximal link utilisation.
Hence, they avoid the limitations of the previous approaches. The proposed algorithms are shown, through both theoretical analysis and simulation, to achieve asymptotic regulation of end-to-end delay, provided that the step size of the proposed integral controller lies within a specified range.
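The interplay between the inner price-based rate control and the outer integral controller can be sketched with a deliberately simplified model: one source and one link, with the equilibrium link price standing in for the measured queueing delay (the thesis notes that optimal prices correlate with average queueing delays). All constants are illustrative; this is not the thesis's algorithm:

```python
def regulate_delay(capacity, d_ref, k_i=0.5, iters=300):
    """Outer integral controller: the source tunes its utility weight w until
    the delay proxy on its path matches the reference d_ref.
    Toy inner loop: with U(x) = w*log(x) on one link, rate control settles
    at link price lam = w / capacity, taken here as the delay estimate."""
    w = 1.0
    delay = 0.0
    for _ in range(iters):
        delay = w / capacity        # equilibrium price == delay proxy
        w += k_i * (d_ref - delay)  # integral action on the weight
    return w, delay

w, delay = regulate_delay(capacity=2.0, d_ref=0.8)
```

In this toy model the loop converges exactly when |1 - k_i/capacity| < 1, i.e. for a step size in a bounded range, which mirrors the thesis's condition that the integral controller's step size lie within a specified range.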
Polymorphic computing abstraction for heterogeneous architectures
Integration of multiple computing paradigms onto a system on chip (SoC) has pushed the boundaries of design space exploration for hardware architectures and the computing system software stack. The heterogeneity of computing styles in SoCs has created a new class of architectures referred to as heterogeneous architectures. Novel applications developed to exploit the different computing styles are user-centric for embedded SoCs. Software and hardware designers face several challenges in harnessing the full potential of heterogeneous architectures. Applications have to execute on more than one compute style to increase overall SoC resource utilization, which implies that application threads need to be polymorphic. The operating system layer is thus faced with the problem of scheduling polymorphic threads. Resource allocation is another important problem to be dealt with by the OS. Morphism evolution of application threads is constrained by the availability of heterogeneous computing resources. Traditional design optimization goals such as computational power and lower energy per computation are inadequate to satisfy user-centric application resource needs. Resource allocation decisions at the application layer need to permeate to the architectural layer to avoid conflicting demands that may affect the energy-delay characteristics of application threads. We propose the polymorphic computing abstraction as a unified computing model for heterogeneous architectures to address the above issues. A simulation environment for polymorphic applications is developed and evaluated under various scheduling strategies to determine the effectiveness of the polymorphism abstraction on resource allocation. A user satisfaction model is also developed to complement polymorphism and is used to optimize resource utilization at the application and network layers of embedded systems.
Bandwidth Allocation Mechanism based on Users' Web Usage Patterns for Campus Networks
Managing bandwidth in campus networks has become a challenge in recent years. Limited bandwidth and the continuous growth in the number of users force IT managers to rethink their bandwidth-allocation strategies. This paper introduces a mechanism for allocating bandwidth based on users’ web usage patterns. The main purpose is to assign higher bandwidth to users who are inclined to browse educational websites than to those who are not. The proposed technique involves several stages: preprocessing of the weblogs, class labeling of the dataset, computation of the feature subspaces, training of the ANN for the LDA/GSVD algorithm, visualization, and bandwidth allocation. The proposed method was applied to real weblogs from the university’s proxy servers. The results indicate that the method is useful in classifying users who used the internet for educational purposes and those who did not. The developed ANN for the LDA/GSVD algorithm outperformed the existing algorithm by up to 50%, which indicates that the approach is efficient. Further, the results show that few users browsed educational content. Through this mechanism, users will be encouraged to use the internet for educational purposes, and IT managers can make better plans to optimize the distribution of bandwidth.
DELMU: A Deep Learning Approach to Maximising the Utility of Virtualised Millimetre-Wave Backhauls
Advances in network programmability enable operators to 'slice' the physical
infrastructure into independent logical networks. Through this approach, each
network slice aims to accommodate the demands of increasingly diverse services.
However, precise allocation of resources to slices across future 5G
millimetre-wave backhaul networks, to optimise the total network utility, is
challenging. This is because the performance of different services often
depends on conflicting requirements, including bandwidth, sensitivity to delay,
or the monetary value of the traffic incurred. In this paper, we put forward a
general rate utility framework for slicing mm-wave backhaul links, encompassing
all known types of service utilities, i.e. logarithmic, sigmoid, polynomial,
and linear. We then introduce DELMU, a deep learning solution that tackles the
complexity of optimising non-convex objective functions built upon arbitrary
combinations of such utilities. Specifically, by employing a stack of
convolutional blocks, DELMU can learn correlations between traffic demands and
achievable optimal rate assignments. We further regulate the inferences made by
the neural network through a simple 'sanity check' routine, which guarantees
both flow rate admissibility within the network's capacity region and minimum
service levels. The proposed method can be trained within minutes, following
which it computes rate allocations that match those obtained with
state-of-the-art global optimisation algorithms, yet orders of magnitude
faster. This confirms the applicability of DELMU to highly dynamic traffic
regimes and we demonstrate up to 62% network utility gains over a baseline
greedy approach.
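The 'sanity check' routine is not spelled out in the abstract. One plausible implementation, assuming a single aggregate capacity budget and per-flow minimum service rates (the function and argument names are hypothetical), clips the network's inferences to the minimums and rescales only the slack above them:

```python
def sanity_check(pred_rates, min_rates, capacity):
    """Post-process neural-network rate inferences: enforce per-flow minimum
    service levels, then shrink only the slack above each minimum so the
    total fits a single aggregate capacity budget."""
    # raise any prediction that falls below its minimum service level
    rates = [max(r, m) for r, m in zip(pred_rates, min_rates)]
    total, floor = sum(rates), sum(min_rates)
    if total > capacity:
        # scale the portion above each minimum; minimums stay intact
        scale = (capacity - floor) / (total - floor)
        rates = [m + (r - m) * scale for r, m in zip(rates, min_rates)]
    return rates

safe = sanity_check([5.0, 1.0, 4.0], [1.0, 2.0, 1.0], capacity=6.0)
```

A real mm-wave backhaul has a capacity *region* rather than one scalar budget, so the paper's routine is presumably richer; the sketch only shows how admissibility and minimum service levels can both be guaranteed after inference.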
Performance evaluation of synergic operation of algorithms enabling opportunistic networks - D4.3
Deliverable D4.3 of the OneFIT project. Preprint.