
    Resource Management in Cloud Computing: Classification and Taxonomy

    Cloud computing is a paradigm of remote, Internet-based computing in which users can easily access their resources from any computer over the Internet. The cloud delivers computing as a utility, available to consumers on demand under a simple pay-per-use, consumer-provider service model, and it pools a large number of shared resources. Resource management is therefore a major issue in cloud computing, as in any other computing paradigm; because resources are finite, it is very challenging for cloud providers to supply all requested resources, and from the provider's perspective resources must be allocated in a fair and efficient manner. No survey is available that treats resource management as a process in cloud computing, so this paper provides a detailed, sequential view of resource management in cloud computing. It first classifies the various resources in cloud computing, then presents a taxonomy of resource management through which further research can be pursued, and finally compares various resource management algorithms

    Algorithms for advance bandwidth reservation in media production networks

    Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks that connect all involved actors over a large geographical area are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, we propose an Integer Linear Programming (ILP) formulation of the bandwidth scheduling problem that takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results
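The core of advance reservation as the abstract describes it is an admission check: a transfer with a known time window and bandwidth demand is accepted only if link capacity holds over the whole window. Below is a minimal single-link sketch of that idea; the class and function names (`Link`, `Reservation`, `admit`) are illustrative, and the paper's ILP model over full network paths is not reproduced here.

```python
# Sketch: admission control for an advance bandwidth reservation (AR)
# on a single link, assuming integer time slots (e.g., hours from now).
from dataclasses import dataclass

@dataclass
class Reservation:
    start: int      # first time slot of the transfer
    end: int        # end slot, exclusive
    bandwidth: int  # Mb/s reserved

class Link:
    def __init__(self, capacity):
        self.capacity = capacity      # Mb/s
        self.reservations = []

    def load_at(self, t):
        """Total bandwidth already reserved at time slot t."""
        return sum(r.bandwidth for r in self.reservations
                   if r.start <= t < r.end)

    def admit(self, req):
        """Accept the request only if capacity holds over its whole window."""
        if all(self.load_at(t) + req.bandwidth <= self.capacity
               for t in range(req.start, req.end)):
            self.reservations.append(req)
            return True
        return False

link = Link(capacity=1000)
link.admit(Reservation(0, 4, 600))       # fits: 600 <= 1000
ok = link.admit(Reservation(2, 6, 600))  # rejected: 1200 > 1000 at t=2,3
```

Because demands are known in advance, rejected requests could also be shifted in time or rerouted, which is exactly the optimization freedom the ILP formulation exploits.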

    GPU PaaS Computation Model in Aneka Cloud Computing Environment

    Due to the surge in the volume of generated data and rapid advances in Artificial Intelligence (AI) techniques such as machine learning and deep learning, traditional computing models have become inadequate for processing enormous volumes of data and the complex application logic needed to extract intrinsic information. Computing accelerators such as graphics processing units (GPUs) have become the de facto SIMD computing systems for many big data and machine learning applications. At the same time, the traditional computing model has gradually shifted from conventional ownership-based computing to the subscription-based cloud computing model. However, the lack of programming models and frameworks for seamlessly developing cloud-native applications that utilize both CPU and GPU resources in the cloud has become a bottleneck for rapid application development. To support this demand for simultaneous heterogeneous resource usage, new programming models and frameworks are needed to manage the underlying resources effectively. Aneka has emerged as a popular PaaS computing model for developing cloud applications using multiple programming models such as Thread, Task, and MapReduce in a single container on the .NET platform. Since Aneka addresses MIMD application development using CPU-based resources, while GPU programming models such as CUDA are designed for SIMD application development, this chapter discusses a GPU PaaS computing model for Aneka clouds enabling rapid cloud application development for .NET platforms. Popular open-source GPU libraries are utilized and integrated into the existing Aneka task programming model, and the scheduling policies are extended to automatically identify GPU machines and schedule the respective tasks accordingly. A case study on image processing is discussed to demonstrate the system, which has been built using PaaS Aneka SDKs and the CUDA library. Comment: Submitted as a book chapter, under processing, 32 pages
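The extended scheduling policy the abstract mentions boils down to a capability match: tasks flagged as GPU work may only be dispatched to GPU-capable nodes. The sketch below illustrates that idea in Python with illustrative names (`Node`, `Task`, `schedule`); it is not Aneka's actual .NET API.

```python
# Sketch: GPU-aware task dispatch. A task that needs a GPU is routed only
# to nodes advertising one; CPU tasks may run anywhere. Among eligible
# nodes the least-loaded one (shortest queue) is chosen.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    has_gpu: bool
    queue: list = field(default_factory=list)

@dataclass
class Task:
    name: str
    needs_gpu: bool

def schedule(task, nodes):
    """Pick the least-loaded node that satisfies the task's GPU requirement."""
    eligible = [n for n in nodes if n.has_gpu or not task.needs_gpu]
    if not eligible:
        raise RuntimeError(f"no node can run {task.name}")
    target = min(eligible, key=lambda n: len(n.queue))
    target.queue.append(task)
    return target

nodes = [Node("cpu-1", has_gpu=False), Node("gpu-1", has_gpu=True)]
t = schedule(Task("convolve", needs_gpu=True), nodes)  # lands on gpu-1
```

In a real deployment the `has_gpu` flag would come from node discovery (e.g., probing for a CUDA-capable device) rather than static configuration.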

    Fog Computing: A Taxonomy, Survey and Future Directions

    In recent years, the number of Internet of Things (IoT) devices/sensors has increased to a great extent. To support the computational demand of real-time, latency-sensitive applications of largely geo-distributed IoT devices/sensors, a new computing paradigm named "Fog computing" has been introduced. Generally, Fog computing resides closer to the IoT devices/sensors and extends the Cloud-based computing, storage and networking facilities. In this chapter, we comprehensively analyse the challenges in Fogs acting as an intermediate layer between IoT devices/sensors and Cloud datacentres and review the current developments in this field. We present a taxonomy of Fog computing according to the identified challenges and its key features. We also map the existing works to the taxonomy in order to identify current research gaps in the area of Fog computing. Moreover, based on the observations, we propose future directions for research

    Machine-to-Machine (M2M) Communications in Virtualized Cellular Networks with MEC

    As an important part of the Internet-of-Things (IoT), machine-to-machine (M2M) communications have attracted great attention. In this paper, we introduce mobile edge computing (MEC) into virtualized cellular networks with M2M communications to decrease the energy consumption, optimize the computing resource allocation, and improve computing capability. Moreover, based on different functions and quality of service (QoS) requirements, the physical network can be virtualized into several virtual networks, and each machine-type communication device (MTCD) then selects the corresponding virtual network to access. Meanwhile, the random access process of MTCDs is formulated as a partially observable Markov decision process (POMDP) to minimize the system cost, which consists of both the energy consumption and execution time of computing tasks. Furthermore, to facilitate the network architecture integration, software-defined networking (SDN) is introduced to deal with the diverse protocols and standards in the networks. Extensive simulation results with different system parameters reveal that the proposed scheme could significantly improve the system performance compared to the existing schemes
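The system cost the abstract minimizes combines energy consumption and execution time. A minimal sketch of that trade-off, assuming illustrative weights and device/server parameters (none taken from the paper), compares local execution against offloading to the MEC server:

```python
# Sketch: weighted energy/time cost of a computing task, and the basic
# local-vs-offload decision underlying MEC offloading. All parameter
# values and the 50/50 weighting are illustrative assumptions.

def task_cost(energy_j, time_s, w_energy=0.5, w_time=0.5):
    """System cost of one task: weighted sum of energy (J) and time (s)."""
    return w_energy * energy_j + w_time * time_s

def choose_execution(cycles, local_cps, local_j_per_cycle,
                     mec_cps, tx_time_s, tx_power_w):
    """Compare local execution against offloading; return the cheaper option."""
    local = task_cost(cycles * local_j_per_cycle,   # CPU energy
                      cycles / local_cps)           # local run time
    offload = task_cost(tx_power_w * tx_time_s,     # radio energy to upload
                        tx_time_s + cycles / mec_cps)  # upload + remote run
    return ("local", local) if local <= offload else ("offload", offload)

choice, cost = choose_execution(
    cycles=2e9, local_cps=1e9, local_j_per_cycle=1e-9,
    mec_cps=10e9, tx_time_s=0.3, tx_power_w=0.5)
```

The POMDP in the paper generalizes this one-shot comparison to sequential decisions under partial observability of the channel and server state.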

    Virtualized Baseband Units Consolidation in Advanced LTE Networks Using Mobility- and Power-Aware Algorithms

    Virtualization of baseband units in Advanced Long-Term Evolution networks and the rapid performance growth of general-purpose processors naturally raise interest in resource multiplexing. The concept of resource sharing and management between virtualized instances is not new and is extensively used in data centers. We adopt some of these resource management techniques to organize virtualized baseband units on a pool of hosts and investigate the behavior of the system in order to identify features particularly relevant to the mobile environment. Subsequently, we introduce our own resource management algorithm specifically targeted at some of the peculiarities identified by the experimental results

    VUPIC: Virtual Machine Usage Based Placement in IaaS Cloud

    Efficient resource allocation is one of the critical performance challenges in an Infrastructure as a Service (IaaS) cloud. Virtual machine (VM) placement and migration decision-making methods are integral parts of these resource allocation mechanisms. We present a novel virtual machine placement algorithm which takes performance isolation amongst VMs and their continuous resource usage into account while making placement decisions. Performance isolation is affected by resource contention between virtual machines competing for basic low-level hardware resources (CPU, memory, storage, and network bandwidth). Resource contention amongst multiple co-hosted neighbouring VMs forms the basis of the presented approach. Experiments are conducted to show the various categories of applications and the effect of performance isolation and resource contention amongst them. A per-VM 3-dimensional Resource Utilization Vector (RUV) is continuously calculated and used for placement decisions, taking the conflicting resource interests of VMs into account. Experiments using the novel placement algorithm, VUPIC, show effective improvements in VM performance as well as overall resource utilization of the cloud. Comment: 9 pages, 7 figures
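One plausible reading of RUV-driven placement is: score each host by how strongly the new VM's resource interests overlap those of the VMs already resident there, and place it where the overlap is smallest. The sketch below uses a dot-product contention score over 3-D (CPU, memory, I/O) vectors; this heuristic and all names are illustrative, not the exact VUPIC algorithm.

```python
# Sketch: contention-aware placement using a per-VM 3-D Resource
# Utilization Vector (RUV). A CPU-heavy VM avoids hosts whose resident
# VMs are also CPU-heavy, and vice versa.

def ruv(cpu, mem, io):
    """3-D utilization vector: fractions of CPU, memory, and I/O used."""
    return (cpu, mem, io)

def contention(vm, host_vms):
    """Overlap of resource interests: dot product of the candidate VM's
    RUV with the summed RUV of VMs already on the host (higher = worse)."""
    sums = [sum(v[i] for v in host_vms) for i in range(3)]
    return sum(a * b for a, b in zip(vm, sums))

def place(vm, hosts):
    """Choose the host where the VM's resource interests conflict least."""
    return min(hosts, key=lambda name: contention(vm, hosts[name]))

hosts = {
    "h1": [ruv(0.8, 0.2, 0.1)],   # hosts a CPU-heavy neighbour
    "h2": [ruv(0.1, 0.2, 0.7)],   # hosts an I/O-heavy neighbour
}
best = place(ruv(0.7, 0.1, 0.1), hosts)  # CPU-heavy VM avoids h1
```

Complementary workloads thus end up co-hosted, which is the intuition behind using contention rather than raw free capacity as the placement criterion.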

    Dynamic resource management in Cloud datacenters for Server consolidation

    Cloud resource management has been a key factor in the development of cloud datacenters. Many cloud datacenters have problems in understanding and implementing the techniques to manage, allocate, and migrate the resources on their premises. Improper resource management may result in underutilized and wasted resources, which may in turn lead to poor service delivery in these datacenters. Resources such as CPU, memory, hard disk, and servers need to be well identified and managed. In this paper, the Dynamic Resource Management Algorithm (DRMA) limits itself to the management of CPU and memory as the resources in cloud datacenters. The target is to save those resources which may be underutilized at a particular period of time, which can be achieved through the implementation of suitable algorithms. Here, bin packing is used: the best-fit algorithm is deployed, and the results are compared in order to select a suitable algorithm for efficient use of resources. Comment: 8 pages, 4 figures
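The best-fit bin packing heuristic mentioned above places each VM on the active server with the least remaining capacity that still fits it, opening a new server only when none fits; fuller servers stay full and idle ones can be powered down. A minimal sketch, using a single scalar capacity in place of the paper's separate CPU and memory dimensions:

```python
# Sketch: best-fit bin packing for server consolidation. Each server is
# tracked as [remaining_capacity, [placed_loads]]; a VM goes to the
# eligible server with the least slack (the "tightest" fit).

def best_fit(vm_loads, server_capacity):
    """Pack VM loads onto servers; return the per-server load lists."""
    servers = []
    for load in vm_loads:
        candidates = [s for s in servers if s[0] >= load]
        if candidates:
            tightest = min(candidates, key=lambda s: s[0])  # least slack
            tightest[0] -= load
            tightest[1].append(load)
        else:
            servers.append([server_capacity - load, [load]])
    return [s[1] for s in servers]

# Five VM loads (e.g., % CPU) packed onto servers of capacity 100:
packing = best_fit([50, 30, 40, 10, 60], server_capacity=100)
# -> [[50, 30, 10], [40, 60]]: two servers instead of five
```

Extending this to the paper's setting means making capacity a vector (CPU, memory) and requiring the fit check to hold in every dimension.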

    Recent Developments in Cloud Based Systems: State of the Art

    Cloud computing is currently among the most discussed topics in the technology community. Its importance and its wide range of applications make it a subject of huge significance. It provides several notable features such as multitenancy, on-demand service, and pay-per-use. This manuscript presents an exhaustive survey of cloud computing technology and the potential research issues in cloud computing that need to be addressed

    Software-Defined and Virtualized Future Mobile and Wireless Networks: A Survey

    With the proliferation of mobile demands and increasingly multifarious services and applications, the mobile Internet has become an irreversible trend. Unfortunately, the current mobile and wireless network (MWN) faces a series of pressing challenges caused by its inherent design. In this paper, we extend two recent and promising Internet innovations, software-defined networking and network virtualization, to mobile and wireless scenarios. We first describe the challenges and expectations of MWN, and analyze the opportunities provided by the software-defined wireless network (SDWN) and wireless network virtualization (WNV). Then, this paper focuses on SDWN and WNV by presenting their main ideas, advantages, ongoing research, key technologies, and open issues. Moreover, we show that these two technologies are highly complementary, and further investigate efficient joint designs between them. This paper argues that SDWN and WNV may efficiently address the crucial challenges of MWN and significantly benefit the future mobile and wireless network. Comment: 12 pages, 3 figures, submitted to "Mobile Networks and Applications" (MONET)