
    Research on the development of carrier intelligent cloud network under the background of IPv6+

    With the increasing maturity of 5G technology in China, the government has comprehensively promoted large-scale IPv6 deployment, the network quality of the three major operators has improved rapidly, and networks are gradually transitioning to IPv6+. The carrying network has become more flexible and service provisioning for users more convenient, which has promoted the development of the intelligent cloud networks of China's carriers. Operators should actively respond to the challenges of the IPv6+ era and, based on their own intelligent cloud network development needs, use SRv6 technology to promote cloud-network convergence and carry a variety of online services; provide integrated cloud-network products and services, build an intelligent operation and maintenance system, and improve user satisfaction; build network-wide IPv6 capability to construct the intelligent cloud network; and strengthen IPv6 network information security to improve the security defense capability of the intelligent cloud network, ensure smooth network operation, and inject new vitality into operators' 2B industry market.

    Efficient cloud computing system operation strategies

    Cloud computing systems have emerged as a new computing paradigm by providing on-demand services that draw on large pools of computing resources. Service providers offer Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) according to demand, and users pay only for the resources they use. The cloud has become a successful business model and is expanding its scope through collaboration with applications such as big data processing, the Internet of Things (IoT), robotics, and 5G networks. Cloud computing systems are composed of large numbers of computing, network, and storage devices across geographically distributed areas, and multiple tenants use them simultaneously with heterogeneous resource requirements, so efficient operation is extremely difficult for service providers. To maximize service providers' profit, cloud systems should serve large numbers of tenants while minimizing OPerational EXpenditure (OPEX). To serve as many tenants as possible with limited resources, providers need efficient resource allocation for users' requirements. At the same time, cloud infrastructure consumes a significant amount of energy: according to recent disclosures, Google data centers consumed nearly 300 million watts and Facebook's data centers consumed 60 million watts, and traffic demand on data centers will keep increasing with the growth of mobile and cloud traffic. Without efficient energy management, running cloud infrastructures will incur significant power consumption. In this thesis, we first consider optimal dataset allocation in distributed cloud computing systems, with the objective of minimizing processing time and cost. Processing time includes virtual machine processing time, communication time, and data transfer time; in distributed clouds, communication and data transfer times matter because data centers are geographically dispersed, and placing datasets far from each other increases both. The cost objective includes virtual machine cost, communication cost, and data transfer cost: providers charge for virtual machines by usage time, while communication and transfer costs depend on transmission speed and dataset size. The problem of allocating datasets to VMs in distributed heterogeneous clouds is formulated as a linear programming model with two objectives, cost and processing time. After finding the optimal solution of each individual objective, we use a heuristic approach to find the Pareto front of the multi-objective linear programming problem. In the simulation experiments, we consider a heterogeneous cloud infrastructure with five different types of cloud service provider resource information and optimize dataset placement while guaranteeing Pareto optimality of the solutions.
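The two-objective placement above lends itself to a compact illustration. The sketch below is not the thesis's actual model: it relaxes the dataset-to-VM placement to an LP and sweeps a weighted-sum scalarization of the cost and processing-time objectives to trace an approximate Pareto front, with made-up dataset sizes, VM capacities, and cost/time matrices standing in for real provider data.

```python
# Hypothetical sketch: dataset-to-VM placement as a two-objective LP,
# scalarized with a weighted sum to trace an approximate Pareto front.
# All numbers are illustrative placeholders, not taken from the thesis.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_data, n_vm = 3, 2                              # datasets and candidate VMs
cost = rng.uniform(1.0, 5.0, (n_data, n_vm))     # placement cost (VM + transfer)
time = rng.uniform(0.5, 3.0, (n_data, n_vm))     # processing + communication time
size = np.array([2.0, 1.0, 3.0])                 # dataset sizes
cap = np.array([4.0, 4.0])                       # VM capacities

n_var = n_data * n_vm                            # x[d, v], flattened row-major

# Each dataset must be fully placed: sum_v x[d, v] = 1
A_eq = np.zeros((n_data, n_var))
for d in range(n_data):
    A_eq[d, d * n_vm:(d + 1) * n_vm] = 1.0
b_eq = np.ones(n_data)

# VM capacity: sum_d size[d] * x[d, v] <= cap[v]
A_ub = np.zeros((n_vm, n_var))
for v in range(n_vm):
    for d in range(n_data):
        A_ub[v, d * n_vm + v] = size[d]
b_ub = cap

pareto = []
for w in np.linspace(0.0, 1.0, 11):              # sweep the scalarization weight
    obj = w * time.ravel() + (1.0 - w) * cost.ravel()
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * n_var, method="highs")
    if res.success:
        pareto.append((time.ravel() @ res.x, cost.ravel() @ res.x))

print("(processing time, cost) points on the approximate Pareto front:")
for t, c in sorted(set((round(t, 3), round(c, 3)) for t, c in pareto)):
    print(t, c)
```

Each value of the weight trades processing time against cost, so the distinct (time, cost) pairs collected over the sweep approximate the Pareto front of the relaxed problem.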
This thesis also proposes an adaptive data center activation model that consolidates adaptive activation of switches and hosts, integrated with a statistical request prediction algorithm. The algorithm predicts user requests in each predetermined interval using a cyclic window learning algorithm, and the data center then activates an optimal number of switches and hosts to minimize power consumption based on the prediction. The model is designed around a cognitive cycle of three steps: data collection, prediction, and activation. In the prediction step, the algorithm forecasts the Poisson distribution parameter lambda for every interval using Maximum Likelihood Estimation (MLE) and Local Linear Regression (LLR); adaptive activation of the data center is then carried out with the predicted parameter in every interval. The activation model is formulated as a Mixed Integer Linear Programming (MILP) model in which switches and hosts are modeled as M/M/1 and M/M/c queues, respectively. To minimize data center power consumption, the model minimizes the number of activated switches, hosts, and memory modules while guaranteeing Quality of Service (QoS). Since the problem is NP-hard, we solve it with a Simulated Annealing algorithm. We use Google cluster trace data to drive the prediction model, feed the predicted data into the activation model, and observe the energy saving rate in every interval. In the experiments, the adaptive activation model saves 30 to 50% of energy compared with a fully operating data center at practical utilization rates. Network Function Virtualization (NFV) has emerged as a game changer in the network market for efficient operation of network infrastructure. Since NFV transforms dedicated physical devices designed for specific network functions into software-based Virtual Machines (VMs), network operators expect significant reductions in Capital Expenditure (CAPEX) and Operational Expenditure (OPEX). Softwarized VMs can run on any commodity server, so operators can design flexible and scalable network architectures through efficient VM placement and migration algorithms. We therefore also study the joint problem of Virtualized Network Function (VNF) resource allocation and NFV Service Chain (NFV-SC) placement in a Software Defined Network (SDN) based hyper-scale distributed cloud computing infrastructure. The objective is to minimize the power consumption of the infrastructure while enforcing users' Service Level Agreements (SLAs). We employ an M/G/1/K queuing network approximation for the NFV-SC model and account for the communication time between VNFs in NFV-SC placement, because it influences NFV-SC performance in a highly distributed infrastructure. The joint problem is modeled as a Mixed Integer Non-linear Programming (MINP) model, which is intractable for large infrastructures due to its NP-hardness, so we propose a heuristic algorithm that splits it into two sub-problems, resource allocation and NFV-SC embedding. In the numerical analysis, the proposed algorithm outperforms traditional bin packing algorithms in terms of power consumption and SLA assurance.
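The thesis's heuristic splits the joint problem into resource allocation and NFV-SC embedding; as a rough stand-in for the embedding step, the sketch below uses a power-aware greedy placement that keeps VNFs on already-active servers where possible and only powers on the lowest-power idle server when necessary. The server capacities, power ratings, and chain demands are invented, and the latency and SLA checks of the real model are omitted.

```python
# Hypothetical power-aware greedy embedding of an NFV service chain (NFV-SC).
# Prefers servers that are already active; otherwise activates the idle
# server with the lowest power rating. All capacities and demands are
# illustrative placeholders.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Server:
    name: str
    cpu: float                       # remaining CPU capacity
    power: float                     # power draw when active (watts)
    active: bool = False
    hosted: list = field(default_factory=list)

def place_chain(chain: list[tuple[str, float]], servers: list[Server]) -> Optional[dict]:
    """Place each (vnf_name, cpu_demand) in order; return the mapping or None."""
    mapping = {}
    for vnf, demand in chain:
        # 1) try already-active servers with enough headroom
        candidates = [s for s in servers if s.active and s.cpu >= demand]
        if not candidates:
            # 2) otherwise wake the lowest-power idle server that fits
            idle = [s for s in servers if not s.active and s.cpu >= demand]
            if not idle:
                return None          # embedding failed
            cheapest = min(idle, key=lambda s: s.power)
            cheapest.active = True
            candidates = [cheapest]
        target = max(candidates, key=lambda s: s.cpu)  # most remaining headroom
        target.cpu -= demand
        target.hosted.append(vnf)
        mapping[vnf] = target.name
    return mapping

servers = [Server("s1", cpu=8, power=200), Server("s2", cpu=16, power=350),
           Server("s3", cpu=8, power=180)]
chain = [("firewall", 4), ("nat", 2), ("dpi", 6), ("lb", 3)]
print(place_chain(chain, servers))
print("active power:", sum(s.power for s in servers if s.active), "W")
```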
Overall, this thesis proposes efficient cloud infrastructure management strategies for profitable cloud system operation, ranging from a single data center to hyper-scale distributed cloud computing infrastructure. The management schemes target various objectives, including Quality of Service (QoS), performance, latency, and power consumption. We use mathematical modeling strategies such as Linear Programming (LP), Mixed Integer Linear Programming (MILP), Mixed Integer Non-linear Programming (MINP), convex programming, queuing theory, and probabilistic modeling, and demonstrate the efficiency of the proposed strategies through various simulations.
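As one concrete illustration of the probabilistic and queueing-based modeling above, the sketch below estimates the Poisson arrival rate for the next interval as the sample mean of recent per-interval request counts (the MLE of a Poisson parameter) and then sizes the number of active hosts as the smallest M/M/c system whose Erlang-C waiting probability meets a target. The window length, per-host service rate, and QoS target are assumptions, and the LLR smoothing, switch activation, and full MILP of the thesis are not reproduced.

```python
# Hypothetical sketch: per-interval Poisson rate prediction via MLE plus
# M/M/c host sizing with the Erlang-C waiting probability as a QoS proxy.
# Window length, service rate, and QoS target are illustrative choices.
import math

def predict_lambda(counts, window=6):
    """MLE of the Poisson rate = mean of the last `window` interval counts."""
    recent = counts[-window:]
    return sum(recent) / len(recent)

def erlang_c(c, a):
    """P(wait) for an M/M/c queue with offered load a = lam / mu (needs a < c)."""
    if a >= c:
        return 1.0
    summed = sum(a ** k / math.factorial(k) for k in range(c))
    top = (a ** c / math.factorial(c)) * (c / (c - a))
    return top / (summed + top)

def hosts_needed(lam, mu, p_wait_target=0.05, c_max=1000):
    """Smallest host count whose Erlang-C waiting probability meets the target."""
    a = lam / mu
    for c in range(max(1, math.ceil(a)), c_max + 1):
        if erlang_c(c, a) <= p_wait_target:
            return c
    raise ValueError("no feasible host count within c_max")

# Illustrative per-interval request counts (e.g. derived from a cluster trace)
counts = [410, 395, 430, 512, 489, 470, 455, 503]
lam = predict_lambda(counts)                 # predicted requests per interval
mu = 60.0                                    # requests one host serves per interval
print(f"predicted lambda = {lam:.1f}, activate {hosts_needed(lam, mu)} hosts")
```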

    Service repository for cloud service consumer life cycle management

    © IFIP International Federation for Information Processing 2015. With the rapid uptake of various types of cloud services, many organizations face issues arising from their dependence on externally provided cloud services. To operate in this rapidly evolving environment, end-user organizations need new methods and tools that support the entire life cycle of cloud services from the perspective of service consumers. Service repositories play a key role in supporting the service consumer SDLC (Systems Development Life Cycle), maintaining the information used during the various life-cycle phases. In this paper we briefly describe the service consumer SDLC and propose a design for a service repository that supports information requirements throughout the service life cycle.
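As a rough illustration of what such a consumer-side repository might store, the sketch below defines a service record keyed by life-cycle phase and a repository that can be queried by phase. The phase names and record fields are assumptions made for illustration; the paper's actual repository design and schema may differ.

```python
# Hypothetical consumer-side service repository record and lookup by
# life-cycle phase. Phase names and fields are illustrative assumptions,
# not the schema proposed in the paper.
from dataclasses import dataclass, field
from enum import Enum

class LifecyclePhase(Enum):
    REQUIREMENTS = "requirements"
    SELECTION = "selection"
    INTEGRATION = "integration"
    OPERATION = "operation"
    RETIREMENT = "retirement"

@dataclass
class CloudServiceRecord:
    name: str
    provider: str
    phase: LifecyclePhase
    sla_terms: dict = field(default_factory=dict)      # e.g. availability, support hours
    dependencies: list = field(default_factory=list)   # other services this one relies on
    notes: dict = field(default_factory=dict)          # per-phase working information

class ServiceRepository:
    def __init__(self):
        self._records = {}

    def register(self, record: CloudServiceRecord):
        self._records[record.name] = record

    def by_phase(self, phase: LifecyclePhase):
        """Records relevant to a given consumer life-cycle phase."""
        return [r for r in self._records.values() if r.phase == phase]

repo = ServiceRepository()
repo.register(CloudServiceRecord("crm", "ExampleCloud", LifecyclePhase.OPERATION,
                                 sla_terms={"availability": "99.9%"}))
print([r.name for r in repo.by_phase(LifecyclePhase.OPERATION)])
```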

    Towards Autonomic Service Provisioning Systems

    This paper discusses our experience in building SPIRE, an autonomic system for service provisioning. The architecture consists of a set of hosted Web Services subject to QoS constraints and a number of servers used to run session-based traffic. Customers pay to have their jobs run but require certain quality guarantees in return: different SLAs specify charges for running jobs and penalties for failing to meet promised performance metrics. The system is driven by a utility function that aims to optimize the average earned revenue per unit time. Demand and performance statistics are collected, and traffic parameters are estimated in order to make dynamic decisions about server allocation and admission control. Different utility functions are introduced, and a number of experiments testing their performance are discussed. Results show that revenues can be dramatically improved by imposing suitable conditions for accepting incoming traffic; the proposed system performs well under different traffic settings and successfully adapts to changes in the operating environment.
    Comment: 11 pages, 9 figures, http://www.wipo.int/pctdb/en/wo.jsp?WO=201002636
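The revenue-driven admission control described above can be illustrated with a small stand-in. The sketch below accepts a new session only if its expected contribution to revenue per unit time is positive under the SLA's charge and penalty; the SLA numbers, the load model, and the crude probability-of-meeting-target estimate are invented for illustration and are not SPIRE's actual utility function.

```python
# Hypothetical revenue-driven admission control: accept a session only if its
# expected utility (charge on success minus penalty on SLA miss) is positive.
# The SLA terms and the probability model are illustrative stand-ins.
from dataclasses import dataclass

@dataclass
class SLA:
    charge: float         # revenue earned if the job meets its target
    penalty: float        # amount paid back if the target is missed

def p_meet_target(utilization: float) -> float:
    """Crude stand-in: the chance of meeting the response-time target
    decays as the servers approach saturation."""
    return max(0.0, 1.0 - utilization ** 4)

def expected_utility(sla: SLA, utilization_if_accepted: float) -> float:
    p = p_meet_target(utilization_if_accepted)
    return sla.charge * p - sla.penalty * (1.0 - p)

def admit(sla: SLA, current_load: float, added_load: float, capacity: float) -> bool:
    rho = (current_load + added_load) / capacity
    return rho < 1.0 and expected_utility(sla, rho) > 0.0

gold = SLA(charge=10.0, penalty=5.0)
print(admit(gold, current_load=70, added_load=5, capacity=100))   # accepted at moderate load
print(admit(gold, current_load=92, added_load=5, capacity=100))   # rejected near saturation
```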

    The Application of Group Theory in Communication Operation Pipeline System

    To resolve the "pipeline" crisis faced by telecom operators, this study pioneers the application of group theory to communication operation pipeline systems. A pipeline entity group model was built for information transmission in the pipeline system in order to analyze the operation of pipeline entities. The network traffic equations of the pipeline system were established according to the flux conservation principle and the matrix of the pipeline network, and the dimensionality of this matrix was reduced on the basis of the pipeline entity group model. A solution scheme for the flow state transition relationships of the pipeline system is obtained, which will be very useful for telecom operators constructing high-level mobile e-commerce application models and architectures.
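The flux-conservation step has a simple linear-algebra reading: write the pipeline network as a node-edge incidence matrix and require that the flow entering each node balances the flow leaving it plus local demand. The small network, the demands, and the minimum-norm solve below are invented for illustration; the paper's group-theoretic reduction of the matrix is not reproduced.

```python
# Hypothetical sketch of flux conservation on a small pipeline network:
# incidence matrix B, node demands d, and link flows f with B @ f = d.
# The network and the numbers are illustrative placeholders.
import numpy as np

# 4 nodes, 5 directed pipeline links: (source_node, target_node)
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
n_nodes = 4

# Incidence matrix: B[n, e] = -1 if link e leaves node n, +1 if it enters.
B = np.zeros((n_nodes, len(edges)))
for e, (u, v) in enumerate(edges):
    B[u, e] = -1.0
    B[v, e] = +1.0

# Net demand per node (negative = source injecting traffic, positive = sink
# consuming it); the entries must sum to zero for a consistent system.
d = np.array([-10.0, 0.0, 0.0, 10.0])

# Flux conservation B @ f = d is underdetermined (more links than independent
# node equations), so take the minimum-norm least-squares solution.
f, _, _, _ = np.linalg.lstsq(B, d, rcond=None)
print("link flows:", np.round(f, 3))
print("conservation residual:", np.round(B @ f - d, 6))
```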

    Outcome-driven Service Provider Performance under Conditions of Complexity and Uncertainty

    Proceedings Paper (for Acquisition Research Program). This white paper highlights key findings of research undertaken by The MITRE Corporation (MITRE) and the resulting recommendations for (1) applying Return-on-Investment (ROI) analysis principles as the foundation for more effective performance management of Government Service-Oriented Architecture (SOA), (2) creating comprehensive Service-Level Agreements (SLAs) to articulate agreements between the Government and external service providers, and (3) managing SLAs through a governance framework (Oakley-Bogdewic & Buck, 2009; Hanf & Buck, 2009, March 25). As illustrated in Figure 1, MITRE's recommendations address the additional managerial complexity and uncertainty that SOA objectives and proposed solutions often create. Naval Postgraduate School Acquisition Research Program. Approved for public release; distribution is unlimited.