27 research outputs found

    Entropy-Based Resource Management in Complex Cloud Environment

    Resource management is an NP-complete problem, the complexity of which increases substantially in the Cloud environment. The complexity of cloud resource management can originate from many factors: the scale of the resources; the heterogeneity of the resource types and the interdependencies between them; and the variability, dynamicity, and unpredictability of resource run-time performance. Complexity has many negative effects on satisfying the Quality of Service (QoS) requirements of cloud applications, such as cost, performance, availability, and reliability. If an application cannot guarantee its QoS, it will struggle to gain adoption. However, the vast majority of research efforts into cloud resource management implicitly assume the Cloud to be a simplifying technology and cloud resource performance to be deterministic and predictable. These incorrect assumptions may significantly affect the QoS of any cloud application developed under them, leaving its resource management strategy less than robust. In spite of extensive research into complexity issues in fields ranging from computational biology to decision making in economics, the study of complexity in cloud resource management systems is limited. In this thesis, I address the complexity problems of Cloud Resource Management Systems by applying Entropy Theory to them. The main contributions of this thesis are as follows: 1. A cloud simulation tool-kit, ComplexCloudSim, implemented in order to help tackle the research question: what is the role of complexity in QoS-aware cloud resource management? 2. The uncovering of Chaotic Behaviour in Cloud Resource Management Systems using the Damage Spreading Analysis method. 3. A comprehensive definition of complexity in Cloud Resource Management Systems, which can be primarily classified into two categories: Global System Complexity and Local Resource Complexity. 4. An Entropy Theory based resource management model, proposed for the purposes of identifying, measuring, analyzing and controlling (i.e., reducing and avoiding) complexity. 5. A Cellular Automata Entropy based methodology, proposed as a solution to the Cloud resource allocation problem and capable of managing Global System Complexity. 6. A Resource Entropy Based Local Activity Ranking system which, once the root cause of the complexity has been identified using the Local Activity Principle, solves the job scheduling problem by managing Local Resource Complexity. Finally, on this latter basis, I implement a system which I have termed an Entropy Scheduler within a popular real-world cloud analysis engine, Apache Spark. Experiments demonstrate that, when the Spark server is not overloaded, the new Entropy Scheduler reduces the average query response time by 15%-20% and its standard deviation by 30%-45% compared with the native Fair Scheduler for CPU-intensive applications in Apache Spark.
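    The abstract does not spell out how the Resource Entropy Based Local Activity Ranking or the Entropy Scheduler compute entropy, so the following is only a minimal sketch of the general idea, assuming Shannon entropy over recent per-node utilisation samples is used as a proxy for run-time predictability. The function names, the discretisation into bins, and the sample data are illustrative assumptions, not details taken from the thesis.

```python
import math
from collections import Counter

def shannon_entropy(samples, bins=10):
    """Shannon entropy (bits) of utilisation samples in [0, 1],
    discretised into equal-width bins."""
    counts = Counter(min(int(s * bins), bins - 1) for s in samples)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def rank_nodes_by_entropy(utilisation_history):
    """Order nodes from most to least predictable (lowest entropy first).

    utilisation_history: dict mapping node name -> list of recent CPU
    utilisation samples, each in [0, 1].
    """
    return sorted(utilisation_history,
                  key=lambda node: shannon_entropy(utilisation_history[node]))

# Illustrative run: a steady node, an erratic node, and a moderately variable one.
history = {
    "node-a": [0.50, 0.52, 0.49, 0.51, 0.50, 0.48],  # stable utilisation
    "node-b": [0.05, 0.95, 0.10, 0.90, 0.15, 0.85],  # erratic utilisation
    "node-c": [0.30, 0.40, 0.35, 0.45, 0.30, 0.40],  # moderate variability
}
print(rank_nodes_by_entropy(history))  # ['node-a', 'node-c', 'node-b']
```

    In the thesis this kind of ranking feeds a scheduler inside Apache Spark; the sketch above only illustrates entropy as a measure of how predictable a resource's recent behaviour has been.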

    Data intensive ATLAS workflows in the Cloud


    Rethinking Routing and Peering in the era of Vertical Integration of Network Functions

    Content providers typically control digital content consumption services and obtain most of their revenue by implementing an all-you-can-eat model via subscription or hyper-targeted advertisements. Revamping the existing Internet architecture and design, the recent trend is towards vertical integration, in which a content provider and an access ISP act as a single body, in a 'sugarcane' form. As this vertical integration trend emerges in the ISP market, it is questionable whether the existing routing architecture will suffice in terms of sustainable economics, peering, and scalability. It is expected that current routing will need careful modifications and smart innovations to ensure effective and reliable end-to-end packet delivery. This involves developing new features for handling traffic with reduced latency, tackling routing scalability issues in a more secure way, and offering new services at lower cost. Considering that the prices of DRAM and TCAM in legacy routers are not necessarily decreasing at the desired pace, cloud computing can be a great solution for managing the increasing computation and memory complexity of routing functions in a centralized manner with optimized expenses. Focusing on the attributes associated with existing routing cost models, and by exploring a hybrid approach to SDN, we also compare recent trends in cloud pricing (for both storage and service) to evaluate whether it would be economically beneficial to integrate cloud services with legacy routing for improved cost-efficiency. In terms of peering, using the US as a case study, we show the overlaps between access ISPs and content providers in order to explore the viability of future peering between the emerging content-dominated 'sugarcane' ISPs and the health of Internet economics. To this end, we introduce meta-peering, a term that encompasses automation efforts related to peering – from identifying a list of ISPs likely to peer, to injecting control-plane rules, to continuously monitoring and notifying of any violation – one of the many outgrowths of the vertical integration procedure, which could be offered to ISPs as a standalone service.
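    The abstract mentions comparing legacy-router hardware costs with cloud pricing but gives no figures or formulas, so the sketch below only shows the shape of such a break-even comparison. Every price, capacity, and lifetime value is a placeholder assumption for illustration, not data from the paper.

```python
def legacy_router_monthly_cost(capex, lifetime_months, opex_per_month):
    """Amortised monthly cost of a legacy router (TCAM/DRAM hardware plus power and space)."""
    return capex / lifetime_months + opex_per_month

def cloud_routing_monthly_cost(rib_size_gb, storage_price_per_gb,
                               compute_hours, compute_price_per_hour):
    """Monthly cost of hosting the routing table and route computation in the cloud."""
    return rib_size_gb * storage_price_per_gb + compute_hours * compute_price_per_hour

# Placeholder numbers purely for illustration -- not taken from the paper.
legacy = legacy_router_monthly_cost(capex=120_000, lifetime_months=60, opex_per_month=400)
cloud = cloud_routing_monthly_cost(rib_size_gb=50, storage_price_per_gb=0.10,
                                   compute_hours=730, compute_price_per_hour=0.50)

print(f"legacy: ${legacy:,.0f}/month  cloud: ${cloud:,.0f}/month")
print("cloud offload is cheaper" if cloud < legacy else "legacy hardware is cheaper")
```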

    The long sale: future-setting strategies for enterprise technologies

    Markets for enterprise technologies are complex socio-technical arrangements where the nature of the goods or services available for exchange is frequently uncertain. Early offerings may appear obfuscated, in part ontologically due to contested boundary definitions, and in part through the intentional and unintentional work of sales actors. While it is difficult for actors to know with certainty what they are transacting before an exchange occurs, expectations are partly shaped in practice during a protracted and multipartite sales process. In the early stages, such technologies may be nothing more than ‘slideware’ or ‘vapourware’, with the promise of the offering yet to be realised. Suppliers are therefore faced with the challenge of how to bring an immature product to the serious attention of users. One such example which has dominated the ICT landscape in recent times is ‘cloud computing’, a vision for on-demand utility computing which on the one hand promised computing resources accessible like an infrastructure commodity such as electricity, but on the other was declared by some to be simply everything we already do in computing today. This thesis offers a longitudinal case study of the way in which a major ICT supplier, IBM, attempted to galvanise the market for its cloud-enabled products amongst user organisations. In doing so the supplier had the challenge of selling a model of outsourced services to organisations with deeply embedded ICT systems, around which the sales processes had to be made to fit. The research centres on four empirical chapters which bring together contextual narratives of cloud computing, findings related to the sales work users do, the sales challenges encountered during crisis management, and the shadow activity that occurs during professional user groups and conferences. The discussion explains how actors work together to construct an imagined community of technology artefacts and practices, extending our understanding of how technology constituencies hold together without overt forms of control. The study draws together a number of years of fieldwork investigating user group events in the corporate ICT arena and a major UK customer implementation. These are explored through a mobile ethnography under the banner of a Biography of Artefacts and Practices (Pollock & Williams, 2008), making use of participant observation and selective interviewing, with a particular focus on naturally occurring data.