127,952 research outputs found

    ADAPTIVE RESOURCE ALLOCATION FOR WIRELESS MULTICAST MIMO-OFDM SYSTEMS

    Get PDF
    Multiple-antenna orthogonal frequency division multiple access (OFDMA) is a promising technique for achieving high downlink capacity in next-generation wireless systems, and adaptive resource allocation is an important research issue that can significantly improve performance while guaranteeing QoS for users. However, most current resource allocation algorithms are limited to unicast systems. In this paper, dynamic resource allocation is studied for multiple-antenna OFDMA-based systems that provide multicast service. The performance of the multicast system is simulated and compared with that of the unicast system. Numerical results show that the proposed algorithms improve system capacity significantly compared with the conventional scheme.
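    The abstract does not state the allocation rule, but a common baseline for multicast OFDMA is to assign each subcarrier to the group whose worst-channel member supports the highest rate, since a multicast transmission is limited by its weakest receiver. The following is a minimal sketch of that baseline idea only; the group structure, channel gains, and rate formula are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def multicast_subcarrier_allocation(channel_gains, group_of_user, n_groups,
                                    noise_power=1.0, tx_power=1.0):
    """Assign each subcarrier to the multicast group with the best worst-case rate.

    channel_gains: array of shape (n_users, n_subcarriers) with channel power gains.
    group_of_user: array of length n_users mapping each user to a group index.
    Returns (assignment, rates) where assignment[k] is the group given subcarrier k.
    """
    n_users, n_subcarriers = channel_gains.shape
    assignment = np.zeros(n_subcarriers, dtype=int)
    rates = np.zeros(n_subcarriers)
    for k in range(n_subcarriers):
        best_group, best_rate = 0, -np.inf
        for g in range(n_groups):
            members = np.where(group_of_user == g)[0]
            # The multicast rate on this subcarrier is set by the weakest member.
            worst_gain = channel_gains[members, k].min()
            rate = np.log2(1.0 + tx_power * worst_gain / noise_power)
            if rate > best_rate:
                best_group, best_rate = g, rate
        assignment[k], rates[k] = best_group, best_rate
    return assignment, rates

# Toy example: 6 users in 2 multicast groups, 8 subcarriers with Rayleigh fading.
rng = np.random.default_rng(0)
gains = rng.exponential(scale=1.0, size=(6, 8))
groups = np.array([0, 0, 0, 1, 1, 1])
alloc, rates = multicast_subcarrier_allocation(gains, groups, n_groups=2)
print(alloc, rates.sum())
```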

    SYNCHRONIZATION AND RESOURCE ALLOCATION IN DOWNLINK OFDM SYSTEMS

    Get PDF
    The next-generation (4G) wireless systems are expected to provide universal personal and multimedia communications with seamless connections and very high-rate transmissions, regardless of the users' mobility and location. The OFDM technique is recognized as one of the leading candidates to provide the wireless signalling for 4G systems. The major challenges in downlink multiuser OFDM-based 4G systems include the wireless channel, synchronization, and radio resource management. Algorithms are therefore required to achieve accurate timing and frequency offset estimation and efficient utilization of radio resources such as subcarrier, bit, and power allocation. The thesis has two objectives. Firstly, we present frequency offset estimation algorithms for OFDM systems. Building our work upon the classic single-user OFDM architecture, we propose two FFT-based frequency offset estimation algorithms with low computational complexity. Computer simulation results and comparisons show that the proposed algorithms provide smaller error variance than previous well-known algorithms. Secondly, we present resource allocation algorithms for OFDM systems. Building our work upon the downlink multiuser OFDM architecture, we aim to minimize the total transmit power by exploiting the system diversity through the management of subcarrier allocation, adaptive modulation, and power allocation. In particular, we focus on dynamic resource allocation algorithms for the multiuser OFDM system and the multiuser MIMO-OFDM system. For the multiuser OFDM system, we propose a low-complexity channel-gain-difference-based subcarrier allocation algorithm. For the multiuser MIMO-OFDM system, we propose a unit-power-based subcarrier allocation algorithm. These proposed algorithms are all combined with the optimal bit allocation algorithm to achieve the minimal total transmit power. Numerical results and comparisons with various conventional non-adaptive and adaptive algorithmic approaches show that the proposed resource allocation algorithms improve system efficiency and performance while guaranteeing the Quality of Service (QoS) for each user. The simulation work in this project is based on hand-written code in MATLAB R2007b.
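    The thesis pairs subcarrier allocation with an optimal bit-allocation step to minimize total transmit power. The classical greedy (Hughes-Hartogs style) bit loading illustrates that step: each bit is placed on the subcarrier whose incremental power cost is smallest. The sketch below assumes the standard cost model in which transmitting b bits on a subcarrier with gain g costs roughly (2^b - 1) times noise power times an SNR gap divided by g; the SNR gap, noise level, and bit cap are illustrative parameters, not values from the thesis.

```python
import heapq
import numpy as np

def greedy_bit_loading(gains, total_bits, max_bits_per_subcarrier=8,
                       noise_power=1.0, snr_gap=1.0):
    """Greedy (Hughes-Hartogs style) bit loading minimizing total transmit power.

    Repeatedly adds one bit to the subcarrier whose incremental power cost is lowest.
    Power for b bits on gain g: (2**b - 1) * noise_power * snr_gap / g.
    """
    n = len(gains)
    bits = np.zeros(n, dtype=int)

    def incremental_power(k):
        # Cost of going from bits[k] to bits[k] + 1 on subcarrier k.
        return (2 ** (bits[k] + 1) - 2 ** bits[k]) * noise_power * snr_gap / gains[k]

    heap = [(incremental_power(k), k) for k in range(n)]
    heapq.heapify(heap)
    loaded = 0
    while loaded < total_bits and heap:
        cost, k = heapq.heappop(heap)
        bits[k] += 1
        loaded += 1
        if bits[k] < max_bits_per_subcarrier:
            heapq.heappush(heap, (incremental_power(k), k))
    power = ((2.0 ** bits - 1) * noise_power * snr_gap / gains).sum()
    return bits, power

# Toy example: 16 subcarriers, load 48 bits per OFDM symbol.
rng = np.random.default_rng(1)
gains = rng.exponential(scale=1.0, size=16)
bits, power = greedy_bit_loading(gains, total_bits=48)
print(bits, round(power, 2))
```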

    Resource Allocation in Ad Hoc Networks

    No full text
    Unlike a centralized network, an ad hoc network has no central administration and its energy (e.g., battery capacity) is constrained, so resource allocation plays a very important role in efficiently managing the limited energy in ad hoc networks. This thesis focuses on resource allocation in ad hoc networks and aims to develop novel techniques that improve network performance at different layers, such as the physical layer, the Medium Access Control (MAC) layer, and the network layer. At the physical layer, the thesis examines energy utilization in High Speed Downlink Packet Access (HSDPA) systems. Two resource allocation techniques, known as channel-adaptive HSDPA and two-group HSDPA, are developed to improve the performance of an ad hoc radio system by reducing the residual energy, which in turn should improve the data rate in HSDPA systems. Channel-adaptive HSDPA removes the constraint on the number of channels used for transmissions. Two-group allocation minimizes the residual energy in HSDPA systems and therefore enhances the physical data rates in transmissions due to adaptive modulation. These proposed approaches provide better data rates than those achieved with the current HSDPA type of algorithm. By considering both physical transmission power and data rates in defining the cost function of the routing scheme, an energy-aware routing scheme is proposed to find the routing path with the least energy consumption. By focusing on routing paths with low energy consumption, computational complexity is significantly reduced. The data rate enhancement achieved by two-group resource allocation further reduces the required amount of energy per bit for each path. With a novel load balancing technique, the information bits can be allocated to each path in such a way that the overall amount of energy consumed is minimized. After loading bits onto multiple routing paths, an end-to-end delay minimization solution along a routing path is developed by studying the MAC distributed coordination function (DCF) service time. Furthermore, the overhead effect and the related throughput reduction are studied. In order to enhance network throughput at the MAC layer, two MAC DCF-based adaptive payload allocation approaches are developed by introducing Lagrange optimization and studying equal data transmission periods.
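    The energy-aware routing idea described above, a link cost built from transmission power and data rate, maps naturally onto a shortest-path search in which each hop is weighted by its energy per bit. The sketch below uses Dijkstra's algorithm with power/rate edge weights; the topology and the power and rate figures are made up for illustration, and this is not necessarily the thesis's exact cost function.

```python
import heapq

def energy_aware_route(links, source, dest):
    """Find the path with the least total energy per bit.

    links: dict mapping node -> list of (neighbor, tx_power_watts, data_rate_bps).
    Edge cost is energy per bit = power / rate; Dijkstra finds the cheapest path.
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == dest:
            break
        for v, power, rate in links.get(u, []):
            cost = power / rate          # joules per bit on this hop
            if d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    if dest not in dist:
        return None, float("inf")
    path, node = [dest], dest
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dest]

# Toy 4-node ad hoc network: (neighbor, tx power in W, rate in bit/s).
links = {
    "A": [("B", 0.10, 2e6), ("C", 0.25, 6e6)],
    "B": [("D", 0.10, 2e6)],
    "C": [("D", 0.05, 1e6)],
}
print(energy_aware_route(links, "A", "D"))
```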

    Neural Network Prediction based Dynamic Resource Scheduling for Cloud System

    Get PDF
    Cloud computing is an Internet-based model for providing shared, on-demand access to resources (CPU, memory, processors, etc.). It acts as a dynamic service provider using very large, scalable, virtualized resources over the Internet. With the help of cloud computing and virtualization technology, a large number of online services can run on virtual machines (VMs), which in turn reduces the number of physical servers. However, dynamically maintaining and managing the resource demands of these virtual machines as they change, while maintaining the service level agreement (SLA), is a challenging task for the cloud provider. Dynamic resource scheduling helps manage the resource demand of virtual machines so that variable workloads can be handled without SLA violations. In this paper, we introduce a neural prediction strategy to enable elastic scaling of resources for cloud systems. Unlike traditional static approaches, which do not take VM workload variability into account, and dynamic approaches, which sometimes underestimate or overestimate the required resources, we take both VM workload fluctuations and the prediction estimation problem into account. The neural prediction strategy first predicts the VM resource demand using an Artificial Neural Network (ANN) model to drive resource allocation for the cloud applications on each VM. Once the prediction is made, we then apply dynamic resource scheduling to consolidate the virtual machines with adaptive resource allocation, reducing the number of active physical servers while satisfying the SLA.
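    The pipeline described here has two steps: predict each VM's next-interval demand with an ANN, then consolidate VMs onto as few physical servers as possible. The sketch below shows that two-step idea using scikit-learn's MLPRegressor on a sliding window of past utilization followed by first-fit-decreasing packing; the window length, network size, traces, and server capacity are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

WINDOW = 6  # past intervals used as ANN input (assumed, not from the paper)

def train_predictor(history):
    """Fit an ANN that maps the last WINDOW utilization samples to the next one."""
    X = np.array([history[i:i + WINDOW] for i in range(len(history) - WINDOW)])
    y = np.array(history[WINDOW:])
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X, y)
    return model

def predict_next(model, history):
    return float(model.predict(np.array(history[-WINDOW:]).reshape(1, -1))[0])

def consolidate(predicted_demands, server_capacity=1.0):
    """First-fit-decreasing packing of predicted VM demands onto servers."""
    servers = []       # remaining capacity per active server
    placement = {}
    for vm in sorted(predicted_demands, key=predicted_demands.get, reverse=True):
        demand = predicted_demands[vm]
        for idx, free in enumerate(servers):
            if demand <= free:
                servers[idx] -= demand
                placement[vm] = idx
                break
        else:
            servers.append(server_capacity - demand)
            placement[vm] = len(servers) - 1
    return placement, len(servers)

# Toy example: two VMs with periodic CPU utilization traces in [0, 1].
rng = np.random.default_rng(2)
traces = {f"vm{i}": (0.4 + 0.3 * np.sin(np.arange(60) / 5 + i)
                     + 0.05 * rng.standard_normal(60)).clip(0, 1).tolist()
          for i in range(2)}
models = {vm: train_predictor(tr) for vm, tr in traces.items()}
demand = {vm: predict_next(models[vm], traces[vm]) for vm in traces}
print(demand, consolidate(demand))
```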

    AQuoSA - adaptive quality of service architecture

    Get PDF
    This paper presents an architecture for quality of service (QoS) control of time-sensitive applications in multi-programmed embedded systems. In such systems, tasks must receive appropriate timeliness guarantees from the operating system independently of one another; otherwise, the QoS experienced by the users may decrease. Moreover, workload fluctuations over time make a static partitioning of the central processing unit (CPU) neither appropriate nor convenient, whereas an adaptive allocation based on on-line monitoring of the application behaviour leads to an optimal design. By combining a resource reservation scheduler and a feedback-based mechanism, we allow applications to meet their QoS requirements with the minimum possible impact on CPU occupation. We implemented the framework in AQuoSA (Adaptive Quality of Service Architecture, http://aquosa.sourceforge.net), a software architecture that runs on top of the Linux kernel. We provide extensive experimental validation of our results and offer an evaluation of the introduced overhead, which is perfectly sustainable for the class of applications addressed.
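    The core mechanism, a resource-reservation scheduler whose per-task CPU budget is adjusted by a feedback loop driven by on-line monitoring, can be sketched as a single control step: nudge the reserved budget toward the measured consumption plus a safety margin, clipped to a share the reservation scheduler can admit. The gain, margin, and cap below are illustrative, and this is a simplification of the control idea rather than AQuoSA's actual controller.

```python
def adapt_budget(budget, measured, period, gain=0.5, margin=0.1, max_share=0.9):
    """One step of a feedback controller for a CPU reservation.

    budget, measured: CPU time (same units as period) reserved vs. actually used
    in the last period. The new budget moves toward the measured demand plus a
    relative safety margin, but never exceeds max_share of the period.
    """
    target = measured * (1.0 + margin)
    new_budget = budget + gain * (target - budget)
    return min(max(new_budget, 0.0), max_share * period)

# Example: a periodic task whose measured per-period CPU demand drifts upward.
period = 40.0  # ms
budget = 10.0  # ms reserved initially
for demand in [8.0, 9.0, 12.0, 15.0, 14.0, 20.0]:   # measured CPU use per period
    budget = adapt_budget(budget, demand, period)
    print(f"measured={demand:5.1f} ms  next budget={budget:5.1f} ms")
```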

    Efficient cloud computing system operation strategies

    Get PDF
    Cloud computing systems have emerged as a new paradigm of computing by providing on-demand services that utilize large-scale computing resources. Service providers offer Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) to users depending on their demand, and users pay only for the resources they use. The cloud has become a successful business model and is expanding its scope through collaboration with various applications such as big data processing, the Internet of Things (IoT), robotics, and 5G networks. Cloud computing systems are composed of large numbers of computing, network, and storage devices across geographically distributed areas, and multiple tenants employ the cloud simultaneously with heterogeneous resource requirements. Thus, efficient operation of cloud computing systems is extremely difficult for service providers. In order to maximize service providers' profit, cloud systems should be able to serve large numbers of tenants while minimizing the OPerational EXpenditure (OPEX). To serve as many tenants as possible using limited resources, service providers should implement efficient resource allocation for users' requirements. At the same time, cloud infrastructure consumes a significant amount of energy. According to recent disclosures, Google data centers consumed nearly 300 million watts and Facebook's data centers consumed 60 million watts. Explosive traffic demand for data centers will keep increasing because of growing mobile and cloud traffic requirements. If service providers do not develop efficient ways to manage energy in their infrastructures, running those infrastructures will incur significant power consumption.
    In this thesis, we first consider optimal dataset allocation in distributed cloud computing systems. Our objective is to minimize processing time and cost. Processing time includes virtual machine processing time, communication time, and data transfer time. In distributed cloud systems, communication time and data transfer time are important components of processing time because data centers are geographically distributed: placing data sets far from each other increases the communication and data transfer time. The cost objective includes virtual machine cost, communication cost, and data transfer cost. Cloud service providers charge for virtual machine usage according to usage time, while communication and data transfer costs are charged based on transmission speed and data set size. The problem of allocating data sets to VMs in distributed heterogeneous clouds is formulated as a linear programming model with two objectives, cost and processing time. After finding optimal solutions for each objective function, we use a heuristic approach to find the Pareto front of the multi-objective linear programming problem. In the simulation experiment, we consider a heterogeneous cloud infrastructure with resource information from five different types of cloud service providers, and we optimize data set placement while guaranteeing Pareto optimality of the solutions.
    This thesis also proposes an adaptive data center activation model that consolidates adaptive activation of switches and hosts simultaneously, integrated with a statistical request prediction algorithm. The learning algorithm predicts user requests in predetermined intervals by using a cyclic window learning algorithm. The data center then activates an optimal number of switches and hosts in order to minimize power consumption based on the prediction. We designed the adaptive data center activation model as a cognitive cycle composed of three steps: data collection, prediction, and activation. In the request prediction step, the prediction algorithm forecasts a Poisson distribution parameter lambda in every determined interval by using Maximum Likelihood Estimation (MLE) and Local Linear Regression (LLR) methods. Then, adaptive activation of the data center is implemented with the predicted parameter in every interval. The adaptive activation model is formulated as a Mixed Integer Linear Programming (MILP) model, in which switches and hosts are modeled as M/M/1 and M/M/c queues. In order to minimize the power consumption of data centers, the model minimizes the number of activated switches, hosts, and memory modules while guaranteeing Quality of Service (QoS). Since the problem is NP-hard, we use the Simulated Annealing algorithm to solve the model. We employ Google cluster trace data to simulate our prediction model; the predicted data is then used to test the adaptive activation model, and the energy saving rate is observed in every interval. In the experiment, we observed that the adaptive activation model saves 30 to 50% of energy compared to the full operation state of the data center at practical data center utilization rates.
    Network Function Virtualization (NFV) has emerged as a game changer in the network market for efficient operation of the network infrastructure. Since NFV transforms dedicated physical devices designed for specific network functions into software-based Virtual Machines (VMs), network operators expect to significantly reduce Capital Expenditure (CAPEX) and Operational Expenditure (OPEX). Softwarized VMs can be implemented on any commodity servers, so network operators can design flexible and scalable network architectures through efficient VM placement and migration algorithms. In this thesis, we study a joint problem of Virtualized Network Function (VNF) resource allocation and NFV Service Chain (NFV-SC) placement in a Software Defined Network (SDN) based hyper-scale distributed cloud computing infrastructure. The objective is to minimize the power consumption of the infrastructure while enforcing the Service Level Agreement (SLA) of users. We employ an M/G/1/K queuing network approximation analysis for the NFV-SC model. The communication time between VNFs is considered in the NFV-SC placement because it influences the performance of the NFV-SC in a highly distributed infrastructure environment. The joint problem is modeled as a Mixed Integer Non-linear Programming (MINP) model. However, the problem is intractable for large infrastructures due to its NP-hardness. We therefore propose a heuristic algorithm which splits the problem into two sub-problems: resource allocation and NFV-SC embedding. In the numerical analysis, we observed that the proposed algorithm outperforms traditional bin packing algorithms in terms of power consumption and SLA assurance.
    In summary, this thesis proposes efficient cloud infrastructure management strategies, from a single data center to hyper-scale distributed cloud computing infrastructure, for profitable cloud system operation. The management schemes are proposed with various objectives such as Quality of Service (QoS), performance, latency, and power consumption. We use efficient mathematical modeling strategies such as Linear Programming (LP), Mixed Integer Linear Programming (MILP), Mixed Integer Non-linear Programming (MINP), convex programming, queuing theory, and probabilistic modeling, and we prove the efficiency of the proposed strategies through various simulations.
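    For the adaptive activation part, two of the named building blocks are simple to state concretely: the MLE of a Poisson rate over a window of past counts is the sample mean, and the number of hosts to activate can be checked against an M/M/c queueing target (Erlang C). The sketch below shows those two pieces only; the cyclic-window indexing, service rate, and waiting-probability target are illustrative assumptions, and the thesis's full MILP formulation and Simulated Annealing solver are not reproduced here.

```python
import math

def predict_lambda(counts_per_interval, interval_index, window=7):
    """MLE of a Poisson rate from a cyclic window of past request counts.

    For a Poisson sample the MLE of lambda is the sample mean; the cyclic window
    reuses the same slot from previous cycles (assumed indexing).
    """
    history = [counts_per_interval[i] for i in range(interval_index % window,
                                                     interval_index, window)]
    return sum(history) / len(history) if history else 0.0

def erlang_c(c, offered_load):
    """Probability that an arriving request waits in an M/M/c queue (Erlang C)."""
    if offered_load >= c:
        return 1.0
    s = sum(offered_load ** k / math.factorial(k) for k in range(c))
    top = offered_load ** c / (math.factorial(c) * (1 - offered_load / c))
    return top / (s + top)

def hosts_to_activate(lam, service_rate, wait_prob_target=0.05, max_hosts=1000):
    """Smallest number of active hosts keeping the waiting probability below target."""
    offered_load = lam / service_rate
    c = max(1, math.ceil(offered_load))
    while c <= max_hosts and erlang_c(c, offered_load) > wait_prob_target:
        c += 1
    return c

# Toy trace: request counts per interval over three weekly cycles.
counts = [120, 90, 60, 80, 150, 200, 170] * 3
lam = predict_lambda(counts, interval_index=21)   # predict the next "slot 0"
print(lam, hosts_to_activate(lam, service_rate=20.0))
```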

    A Self-adaptive Agent-based System for Cloud Platforms

    Full text link
    Cloud computing is a model for enabling on-demand network access to a shared pool of computing resources that can be dynamically allocated and released with minimal effort. However, this task can be complex in highly dynamic environments with various resources to allocate to an increasing number of different user requirements. In this work, we propose a Cloud architecture based on a multi-agent system exhibiting self-adaptive behavior to address dynamic resource allocation. This self-adaptive system follows a MAPE-K approach to reason and act, according to QoS, Cloud service information, and propagated run-time information, in order to detect QoS degradation and make better resource allocation decisions. We validate our proposed Cloud architecture by simulation. Results show that it can properly allocate resources to reduce energy consumption while satisfying the users' demanded QoS.
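    The MAPE-K cycle the abstract refers to (Monitor, Analyze, Plan, Execute over a shared Knowledge base) is easy to show in skeleton form. The sketch below implements one such loop that detects QoS degradation and scales an allocation up or down; the thresholds, scaling step, and metric names are illustrative and do not reflect the paper's agent design.

```python
from dataclasses import dataclass, field

@dataclass
class Knowledge:
    """Shared knowledge base: QoS target plus recent observations and allocations."""
    response_target_ms: float = 200.0
    allocations: dict = field(default_factory=lambda: {"svc": 2})   # e.g. VM count
    history: list = field(default_factory=list)

def monitor(metrics, k: Knowledge):
    k.history.append(metrics)

def analyze(k: Knowledge):
    latest = k.history[-1]
    if latest["response_ms"] > k.response_target_ms:
        return "degraded"
    if latest["response_ms"] < 0.5 * k.response_target_ms and k.allocations["svc"] > 1:
        return "overprovisioned"
    return "ok"

def plan(symptom, k: Knowledge):
    if symptom == "degraded":
        return {"svc": k.allocations["svc"] + 1}      # scale out by one instance
    if symptom == "overprovisioned":
        return {"svc": k.allocations["svc"] - 1}      # scale in by one instance
    return dict(k.allocations)

def execute(new_allocations, k: Knowledge):
    # In a real system this would call the Cloud platform's allocation API.
    k.allocations = new_allocations

# One adaptation cycle driven by a run-time observation.
k = Knowledge()
monitor({"response_ms": 350.0}, k)
execute(plan(analyze(k), k), k)
print(k.allocations)   # -> {'svc': 3}
```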

    A HPC Co-Scheduler with Reinforcement Learning

    Full text link
    Although High Performance Computing (HPC) users understand basic resource requirements such as the number of CPUs and memory limits, internal infrastructural utilization data is exclusively leveraged by cluster operators, who use it to configure batch schedulers. This task is challenging and increasingly complex due to ever-larger cluster scales and the heterogeneity of modern scientific workflows. As a result, HPC systems achieve low utilization with long job completion times (makespans). To tackle these challenges, we propose a co-scheduling approach based on an adaptive reinforcement learning algorithm, in which application profiling is combined with cluster monitoring. The resulting cluster scheduler matches resource utilization to application performance in a fine-grained manner (i.e., at the operating system level). Instead of relying on nominal allocations, we apply decision trees to model applications' actual resource usage, which is used to estimate how much resource capacity from one allocation can be co-allocated to additional applications. Our algorithm learns from incorrect co-scheduling decisions, adapts to changing environmental conditions, and evaluates when such changes cause resource contention that impacts quality-of-service metrics such as job slowdowns. We integrate our algorithm into an HPC resource manager that combines Slurm and Mesos for job scheduling and co-allocation, respectively. Our experimental evaluation, performed on a dedicated cluster executing a mix of four different real scientific workflows, demonstrates improvements in cluster utilization of up to 51% even in high-load scenarios, with 55% average queue makespan reductions under low loads.
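    Two of the ingredients named here, a decision-tree model of applications' actual resource usage and a learning rule that backs off when co-scheduling causes slowdowns, can be sketched compactly. In the sketch below the features, profiled usage values, reward definition, and epsilon-greedy update are illustrative stand-ins for the paper's profiling data and reinforcement-learning formulation, not its actual algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# --- 1. Model actual usage from nominal allocations (illustrative feature set). ---
# Features: [requested_cpus, requested_mem_gb]; target: measured CPUs actually used.
X = np.array([[4, 8], [8, 16], [16, 32], [4, 16], [8, 8], [16, 64]], dtype=float)
y = np.array([2.1, 3.5, 6.0, 2.4, 4.2, 7.5])          # profiled usage (made up)
usage_model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

def spare_capacity(requested_cpus, requested_mem_gb):
    """Estimated CPUs left over inside an allocation, available for co-scheduling."""
    predicted_use = float(usage_model.predict([[requested_cpus, requested_mem_gb]])[0])
    return max(0.0, requested_cpus - predicted_use)

# --- 2. Epsilon-greedy choice of how aggressively to co-schedule into that slack. ---
class CoScheduler:
    def __init__(self, levels=(0.0, 0.5, 1.0), epsilon=0.1):
        self.levels = levels              # fraction of estimated slack to reuse
        self.epsilon = epsilon
        self.value = {lv: 0.0 for lv in levels}
        self.count = {lv: 0 for lv in levels}

    def choose(self, rng):
        if rng.random() < self.epsilon:
            return self.levels[rng.integers(len(self.levels))]
        return max(self.levels, key=lambda lv: self.value[lv])

    def update(self, level, slowdown):
        # Reward favours reuse but penalizes observed job slowdown.
        reward = level - 2.0 * max(0.0, slowdown - 1.0)
        self.count[level] += 1
        self.value[level] += (reward - self.value[level]) / self.count[level]

rng = np.random.default_rng(3)
sched = CoScheduler()
for _ in range(100):
    lv = sched.choose(rng)
    # Pretend heavier co-scheduling causes contention with some probability.
    slowdown = 1.0 + (0.8 * lv if rng.random() < 0.3 * lv else 0.0)
    sched.update(lv, slowdown)
print(spare_capacity(8, 16), max(sched.value, key=sched.value.get))
```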