QoS-Driven Job Scheduling: Multi-Tier Dependency Considerations
For a cloud service provider, delivering optimal system performance while
fulfilling Quality of Service (QoS) obligations is critical for maintaining a
viably profitable business. This goal is often hard to attain given the
irregular nature of cloud computing jobs. These jobs expect high QoS in an
on-demand fashion, that is, upon random arrival. To optimize the response to such
client demands, cloud service providers organize the cloud computing
environment as a multi-tier architecture. Each tier executes its designated
tasks and passes the job to the next tier, in a fashion similar, but not
identical, to traditional job-shop environments. An optimization process
must take place to schedule the appropriate tasks of the job on the resources
of the tier, so as to meet the QoS expectations of the job. Existing approaches
employ scheduling strategies that consider the performance optimization at the
individual resource level and produce optimal single-tier driven schedules. Due
to the sequential nature of the multi-tier environment, the impact of such
schedules on the performance of other resources and tiers tends to be ignored,
resulting in less than optimal performance when measured at the multi-tier
level. In this paper, we propose a multi-tier-oriented job scheduling and
allocation technique. The scheduling and allocation process is formulated as a
problem of assigning jobs to the resource queues of the cloud computing
environment, where each resource of the environment employs a queue to hold the
jobs assigned to it. The scheduling problem is NP-hard; as such, a biologically
inspired genetic algorithm is proposed. The computing resources across all
tiers of the environment are virtualized into one resource by means of a
single-queue virtualization. A chromosome that mimics the sequencing and
allocation of the tasks in this virtual queue is designed.
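Such a genetic algorithm can be sketched as follows; this is a minimal illustration under assumed details (the permutation encoding, order crossover, swap mutation, and the execution times are not taken from the paper): a chromosome is a permutation of jobs, decoded by pushing each job through the tiers' virtualized queues in chromosome order, with makespan as fitness.

```python
import random

random.seed(7)

TIERS = 3                      # tasks per job, executed tier by tier
JOBS = 5
EXEC = [[random.randint(1, 9) for _ in range(TIERS)] for _ in range(JOBS)]

def decode(chrom):
    """Greedy decoder: schedule jobs through the tiers in chromosome order."""
    tier_free = [0] * TIERS    # when each (virtualized) tier becomes free
    makespan = 0
    for job in chrom:
        start = 0
        for t in range(TIERS):
            start = max(start, tier_free[t])
            finish = start + EXEC[job][t]
            tier_free[t] = finish
            start = finish
        makespan = max(makespan, start)
    return makespan

def crossover(a, b):
    """Order crossover (OX) keeps each child a valid permutation."""
    i, j = sorted(random.sample(range(JOBS), 2))
    mid = a[i:j]
    rest = [g for g in b if g not in mid]
    return rest[:i] + mid + rest[i:]

def mutate(chrom, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(JOBS), 2)
        chrom[i], chrom[j] = chrom[j], chrom[i]
    return chrom

def evolve(pop_size=30, gens=50):
    pop = [random.sample(range(JOBS), JOBS) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=decode)           # lower makespan = fitter
        elite = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=decode)

best = evolve()
print(best, decode(best))
```

The decoder is where a multi-tier (rather than single-tier) objective enters: fitness is the makespan measured across all tiers, not per-resource completion time.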
Energy Efficient Resource Allocation for Cloud Computing
The resource allocation problem in a cloud computing environment has been shown, in general, to be NP-complete, requiring the development of heuristic techniques. The complexity of the resource allocation problem increases with the size of the cloud infrastructure and becomes difficult to solve effectively. The exponential solution space of the resource allocation problem can be searched using heuristic techniques to obtain a sub-optimal solution in acceptable time. This thesis presents the resource allocation problem in cloud computing as a linear programming problem, with the objective of minimizing the energy consumed in computation. The problem has been treated using heuristic and meta-heuristic approaches. Heuristics from the literature have been selected, adapted, implemented, and analyzed under one set of common assumptions, considering the Expected Time to Compute (ETC) task model. These heuristic algorithms operate in two phases: selection of a task from the task pool, followed by selection of a cloud resource. A set of ten greedy heuristics built on this two-stage greedy paradigm has been used; at each stage a particular input is selected through a selection procedure, which can be realized as a 2-phase heuristic. In particular, we have used 'FcfsRand', 'FcfsRr', 'FcfsMin', 'FcfsMax', 'MinMin', 'MedianMin', 'MaxMin', 'MinMax', 'MedianMax', and 'MaxMax'. The simulation results favor MaxMax. A novel genetic algorithm framework has been proposed for task scheduling to minimize the energy consumption of the cloud computing infrastructure. The performance of the proposed GA resource allocation strategy has been compared with Random and Round Robin scheduling using an in-house simulator. The experimental results show that the GA-based scheduling model outperforms the existing Random and Round Robin scheduling models.
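The two-phase greedy paradigm over an ETC matrix can be sketched as below. This is an illustrative sketch, not the thesis implementation: the ETC values are invented, and completion time stands in for the thesis's energy criterion; MinMin and MaxMax differ only in the phase-1 selection rule.

```python
ETC = [  # ETC[task][resource]: expected time to compute task t on resource r
    [4, 6, 9],
    [7, 3, 5],
    [2, 8, 4],
    [6, 5, 7],
]

def two_phase(etc, pick):
    """Generic two-phase greedy scheduler.

    pick = min  -> MinMin: schedule first the task with the smallest
                   best completion time.
    pick = max  -> MaxMax: schedule first the task with the largest
                   best completion time.
    """
    n_res = len(etc[0])
    ready = [0.0] * n_res            # resource-available times
    pool = set(range(len(etc)))
    schedule = []
    while pool:
        # phase 2 inside phase 1: best (completion time, resource) per task
        best = {t: min((ready[r] + etc[t][r], r) for r in range(n_res))
                for t in pool}
        task = pick(pool, key=lambda t: best[t][0])   # phase-1 selection
        ct, res = best[task]
        ready[res] = ct
        schedule.append((task, res))
        pool.remove(task)
    return schedule, max(ready)

minmin_sched, minmin_makespan = two_phase(ETC, min)
maxmax_sched, maxmax_makespan = two_phase(ETC, max)
print(minmin_makespan, maxmax_makespan)
```

The other listed heuristics ('FcfsRand', 'MedianMin', ...) fit the same skeleton by swapping the phase-1 criterion.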
Branching Processes: Optimization, Variational Characterization, and Continuous Approximation
In this thesis, we use multitype Galton-Watson branching processes in random environments as individual-based models for the evolution of structured populations with both demographic stochasticity and environmental stochasticity, and investigate the phenotype allocation
problem. We explore a variational characterization for the stochastic evolution of a structured population modeled by a multitype Galton-Watson branching process. When the population under consideration is large and the time scale is fast, we deduce the continuous approximation for multitype Markov branching processes in random environments.
Many problems in evolutionary biology involve the allocation of some limited resource among several investments. It is often of interest to know whether, and how, allocation strategies can be optimized for the evolution of a structured population with randomness. In our
work, the investments represent different types of offspring, or alternative strategies for allocations to offspring. As payoffs we consider the long-term growth rate, the expected number
of descendants with some future discount factor, the extinction probability of the lineage, or the expected survival time. Two different kinds of population randomness are considered: demographic stochasticity and environmental stochasticity. In chapter 2, we solve the allocation problem w.r.t. the above payoff functions in three stochastic population models depending on different kinds of population randomness.
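For orientation, the extinction-probability payoff has a classical characterization in the simplest single-type, constant-environment case (the thesis treats the multitype, random-environment generalization):

```latex
q \;=\; \min\{\, s \in [0,1] \;:\; f(s) = s \,\}, \qquad
f(s) \;=\; \sum_{k \ge 0} p_k\, s^k ,
```

where $p_k$ is the offspring distribution; extinction is certain ($q = 1$) precisely when the mean offspring number $m = f'(1)$ satisfies $m \le 1$, excluding the degenerate case $p_1 = 1$.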
Evolution is often understood as an optimization problem, and there is a long tradition to look at evolutionary models from a variational perspective. In chapter 3, we deduce a variational characterization for the stochastic evolution of a structured population modeled by a
multitype Galton-Watson branching process. In particular, the so-called retrospective process plays an important role in the description of the equilibrium state used in the variational characterization. We define the retrospective process associated with a multitype Galton-Watson
branching process and identify it with the mutation process describing the type evolution along typical lineages of the multitype Galton-Watson branching process.
Continuous approximation of branching processes is of both practical and theoretical interest. However, to our knowledge, there is no literature on the approximation of multitype branching processes in random environments. In chapter 4, we first construct a multitype Markov
branching process in a random environment. When conditioned on the random environment, we deduce the Kolmogorov equations and the mean matrix for the conditioned branching process. Then we introduce a parallel mutation-selection Markov branching process in a random
environment and analyze its instability property. Finally, we deduce a weak convergence result for a sequence of the parallel Markov branching processes in random environments and give
examples of applications.
Dynamic resource allocation with integrated reinforcement learning for a D2D-enabled LTE-A network with access to unlicensed bands
We propose a dynamic resource allocation algorithm for device-to-device (D2D) communication underlying a Long Term Evolution Advanced (LTE-A) network, with reinforcement learning (RL) applied for unlicensed channel allocation. In the considered system, the inband and outband resources are assigned by the LTE evolved NodeB (eNB) to different device pairs to maximize the network utility subject to target signal-to-interference-and-noise ratio (SINR) constraints. Because of the absence of an established control link between the unlicensed and cellular radio interfaces, the eNB cannot acquire any information about the quality and availability of unlicensed channels. As a result, the considered problem becomes a stochastic optimization problem that can be dealt with by deploying learning theory (to estimate the random unlicensed channel environment). Consequently, we formulate the outband D2D access as a dynamic single-player game in which the player (eNB) estimates its possible strategy and expected utility for all of its actions, based only on its own local observations, using a joint utility and strategy estimation based reinforcement learning (JUSTE-RL) with regret algorithm. The proposed approach to resource allocation demonstrates near-optimal performance after a small number of RL iterations and surpasses other comparable methods in terms of energy efficiency and throughput maximization.
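The joint utility and strategy estimation idea can be illustrated with a minimal sketch; the channel model, learning rates, and Boltzmann-style strategy target below are assumptions for illustration, not the paper's exact JUSTE-RL algorithm. A single player repeatedly picks one of K unlicensed channels, observes a noisy utility, and updates both a per-action utility estimate and a mixed strategy from local observations only.

```python
import math, random

random.seed(1)
TRUE_QUALITY = [0.2, 0.8, 0.5]     # hidden mean utility per channel (assumed)
K = len(TRUE_QUALITY)

util = [0.0] * K                    # estimated utility per action
strat = [1.0 / K] * K               # mixed strategy (probabilities)
lam, mu, beta = 0.1, 0.05, 8.0      # learning rates, Boltzmann temperature

def boltzmann(u):
    w = [math.exp(beta * x) for x in u]
    s = sum(w)
    return [x / s for x in w]

for _ in range(3000):
    # sample an action from the current mixed strategy
    a = random.choices(range(K), weights=strat)[0]
    payoff = TRUE_QUALITY[a] + random.uniform(-0.1, 0.1)   # noisy observation
    util[a] += lam * (payoff - util[a])                    # utility estimation
    target = boltzmann(util)                               # smoothed best response
    strat = [p + mu * (t - p) for p, t in zip(strat, target)]  # strategy update

print(max(range(K), key=lambda a: strat[a]))
```

The strategy update is a convex combination, so `strat` always remains a valid probability vector, and the player needs no feedback channel from the unlicensed band beyond its own observed payoffs.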
Deep Reinforcement Learning for Resource Allocation in V2V Communications
In this article, we develop a decentralized resource allocation mechanism for
vehicle-to-vehicle (V2V) communication systems based on deep reinforcement
learning. Each V2V link is considered as an agent, making its own decisions to
find optimal sub-band and power level for transmission. Since the proposed
method is decentralized, the global information is not required for each agent
to make its decisions; hence, the transmission overhead is small. The
simulation results show that each agent can learn to satisfy the V2V
constraints while minimizing the interference to vehicle-to-infrastructure
(V2I) communications.
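As a stand-in for the deep-RL agent, the per-link decision can be sketched with tabular Q-learning on a toy action space; the interference model, reward, and constants below are illustrative assumptions, not the paper's environment. Each action is a (sub-band, power-level) pair; the reward favors a higher V2V rate and penalizes transmitting on the sub-band used by the V2I link.

```python
import random

random.seed(0)
SUBBANDS, POWERS = 2, 2
ACTIONS = [(s, p) for s in range(SUBBANDS) for p in range(POWERS)]
V2I_BAND = 0                       # sub-band occupied by the V2I link (assumed)

def reward(action):
    band, power = action
    rate = 1.0 + power                               # crude gain from power
    penalty = 2.0 * power if band == V2I_BAND else 0.0  # V2I interference cost
    return rate - penalty + random.uniform(-0.05, 0.05)

Q = {a: 0.0 for a in ACTIONS}
alpha, eps = 0.1, 0.1
for _ in range(2000):
    # epsilon-greedy action selection
    a = random.choice(ACTIONS) if random.random() < eps else max(Q, key=Q.get)
    Q[a] += alpha * (reward(a) - Q[a])   # stateless Q-learning update

best = max(Q, key=Q.get)
print(best)
```

The agent learns to pick a sub-band away from the V2I link, mirroring the decentralized interference-avoidance behavior described in the abstract; the paper's method replaces the table with a deep network over richer state.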
Spectral Efficiency of Multi-User Adaptive Cognitive Radio Networks
In this correspondence, the comprehensive problem of joint power, rate, and
subcarrier allocation has been investigated for enhancing the spectral
efficiency of multi-user orthogonal frequency-division multiple access (OFDMA)
cognitive radio (CR) networks subject to satisfying total average transmission
power and aggregate interference constraints. We propose novel optimal radio
resource allocation (RRA) algorithms under different scenarios with
deterministic and probabilistic interference violation limits, based on
perfect and imperfect availability of cross-link channel state information
(CSI). In particular, we propose a probabilistic approach to mitigate the total
interference imposed on the primary service under imperfect cross-link CSI. A
closed-form expression for the cumulative distribution function (CDF) of
the received signal-to-interference-plus-noise ratio (SINR) is derived
to evaluate the resultant average spectral efficiency (ASE). Dual decomposition
is utilized to obtain sub-optimal solutions for the non-convex optimization
problems. Through simulation results, we investigate the achievable performance
and the impact of parameters uncertainty on the overall system performance.
Furthermore, we show that the developed RRA algorithms can considerably
improve the cognitive performance while abiding by the imposed power
constraints. In particular, the performance under imperfect cross-link CSI
knowledge for the proposed `probabilistic case' is compared to conventional
scenarios to show the potential gain of employing this scheme.
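The dual-decomposition step can be illustrated on a simplified single-user problem; the gains, power budget, and the reduction to classic water-filling are assumptions for illustration. Dualizing the total-power constraint decouples the problem per subcarrier, each with a closed-form power, and the dual variable is found by bisection.

```python
import math

# Simplified problem: maximize sum_i log2(1 + g_i * p_i)
# subject to sum_i p_i <= P_TOTAL.
GAINS = [2.0, 0.9, 0.4, 0.1]
P_TOTAL = 4.0

def powers(lmbda):
    # per-subcarrier maximizer of log2(1 + g*p) - lmbda*p:
    # p = 1/(lmbda*ln 2) - 1/g, clipped at zero (water-filling)
    return [max(0.0, 1.0 / (lmbda * math.log(2)) - 1.0 / g) for g in GAINS]

lo, hi = 1e-6, 1e3
for _ in range(100):                 # bisection on the dual variable
    mid = 0.5 * (lo + hi)
    if sum(powers(mid)) > P_TOTAL:
        lo = mid                     # budget exceeded: raise the "price"
    else:
        hi = mid
p = powers(hi)
rate = sum(math.log2(1 + g * pi) for g, pi in zip(GAINS, p))
print(p, rate)
```

Weak subcarriers receive zero power once the water level drops below their inverse gain, which is why dual methods remain tractable even when the primal problem (with interference constraints, as in the paper) is non-convex.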
A Game-Theoretic Approach to Energy-Efficient Resource Allocation in Device-to-Device Underlay Communications
Despite the numerous benefits brought by Device-to-Device (D2D)
communications, the introduction of D2D into cellular networks poses many new
challenges in the resource allocation design due to the co-channel interference
caused by spectrum reuse and limited battery life of User Equipments (UEs).
Most of the previous studies mainly focus on how to maximize the Spectral
Efficiency (SE) and ignore the energy consumption of UEs. In this paper, we
study how to maximize each UE's Energy Efficiency (EE) in an
interference-limited environment subject to its specific Quality of Service
(QoS) and maximum transmission power constraints. We model the resource
allocation problem as a noncooperative game, in which each player is
self-interested and wants to maximize its own EE. A distributed
interference-aware energy-efficient resource allocation algorithm is proposed
by exploiting the properties of the nonlinear fractional programming. We prove
that the optimum solution obtained by the proposed algorithm is the Nash
equilibrium of the noncooperative game. We also analyze the tradeoff between EE
and SE and derive closed-form expressions for EE and SE gaps.
Comment: submitted to IET Communications. arXiv admin note: substantial text
overlap with arXiv:1405.1963, arXiv:1407.155
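Energy-efficiency maximization as a ratio of rate to consumed power is the canonical nonlinear fractional program, classically solved with Dinkelbach's method. Below is a single-link sketch under assumed constants (not the paper's multi-UE game): at each iteration the ratio objective EE(p) = R(p)/(Pc + p) is replaced by the parametric problem max_p R(p) - q*(Pc + p), which here has a closed-form maximizer.

```python
import math

W, g, N0, Pc, P_MAX = 1.0, 4.0, 1.0, 0.5, 2.0   # illustrative constants

def rate(p):
    return W * math.log2(1.0 + g * p / N0)

def best_power(q):
    # maximizer of rate(p) - q*p: set dR/dp = W*g/((N0 + g*p)*ln 2) = q
    if q <= 0:
        return P_MAX
    p = W / (q * math.log(2)) - N0 / g
    return min(max(p, 0.0), P_MAX)

q = 0.0
for _ in range(50):                 # Dinkelbach iterations
    p = best_power(q)
    q = rate(p) / (Pc + p)          # update the EE parameter

print(p, q)                         # converged power and energy efficiency
```

At convergence, q equals the optimal EE and p is a fixed point of the parametric subproblem; in the paper's noncooperative game, each UE runs this kind of fractional-programming step against the interference produced by the others.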
Reinforcement Learning Scheduler for Vehicle-to-Vehicle Communications Outside Coverage
Radio resources in vehicle-to-vehicle (V2V) communication can be scheduled
either by a centralized scheduler residing in the network (e.g., a base station
in case of cellular systems) or a distributed scheduler, where the resources
are autonomously selected by the vehicles. The former approach yields a
considerably higher resource utilization in case the network coverage is
uninterrupted. However, in case of intermittent or absent coverage, without
input from the centralized scheduler, vehicles need to revert to distributed
scheduling. Motivated by recent advances in reinforcement learning (RL), we
investigate whether a centralized learning scheduler can be taught to
efficiently pre-assign the resources to vehicles for out-of-coverage V2V
communication. Specifically, we use the actor-critic RL algorithm to train the
centralized scheduler to provide non-interfering resources to vehicles before
they enter the out-of-coverage area. Our initial results show that an RL-based
scheduler can achieve performance as good as, and often better than, the
state-of-the-art distributed scheduler. Furthermore, the learning
process completes within a reasonable time (ranging from a few hundred to a few
thousand epochs), thus making the RL-based scheduler a promising solution for
V2V communications with intermittent network coverage.
Comment: Article published in IEEE VNC 201
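The actor-critic idea behind such a centralized pre-assignment can be sketched on a toy task; the environment, reward, and hyperparameters below are illustrative assumptions, not the paper's setup. The scheduler assigns one of R resources to each of V vehicles; the reward counts vehicles whose resource collides with no other vehicle. The actor keeps per-vehicle softmax logits, and the critic is a scalar value baseline.

```python
import math, random

random.seed(3)
V, R = 2, 2
logits = [[0.0] * R for _ in range(V)]
baseline = 0.0
lr_actor, lr_critic = 0.2, 0.1

def softmax(x):
    m = max(x)
    w = [math.exp(v - m) for v in x]
    s = sum(w)
    return [v / s for v in w]

def reward(assign):
    # number of vehicles whose resource is not reused by any other vehicle
    return sum(1 for i, a in enumerate(assign)
               if all(assign[j] != a for j in range(V) if j != i))

for _ in range(3000):
    probs = [softmax(l) for l in logits]
    assign = [random.choices(range(R), weights=p)[0] for p in probs]
    r = reward(assign)
    adv = r - baseline                     # critic's advantage estimate
    baseline += lr_critic * adv            # critic (value) update
    for i, a in enumerate(assign):         # policy-gradient actor update
        for k in range(R):
            grad = (1.0 if k == a else 0.0) - probs[i][k]
            logits[i][k] += lr_actor * adv * grad

final = [max(range(R), key=lambda k: l[k]) for l in logits]
print(final, round(baseline, 2))
```

After training, the policy assigns non-interfering resources deterministically; the paper's scheduler learns an analogous mapping before the vehicles leave coverage, using a full actor-critic architecture over a richer state.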