
    Quantifying cloud performance and dependability: Taxonomy, metric design, and emerging challenges

    In only a decade, cloud computing has grown from a pursuit of service-driven information and communication technology (ICT) into a significant fraction of the ICT market. Responding to this growth, many alternative cloud services and their underlying systems are currently vying for the attention of cloud users and providers. To make informed choices between competing cloud service providers, permit the cost-benefit analysis of cloud-based systems, and enable system DevOps to evaluate and tune the performance of these complex ecosystems, appropriate performance metrics, benchmarks, tools, and methodologies are necessary. This requires re-examining old system properties and considering new ones, possibly leading to the re-design of classic benchmarking metrics such as expressing performance as throughput and latency (response time). In this work, we address these requirements by focusing on four system properties: (i) elasticity of the cloud service, to accommodate large variations in the amount of service requested; (ii) performance isolation between the tenants of shared cloud systems, and the resulting performance variability; (iii) availability of cloud services and systems; and (iv) the operational risk of running a production system in a cloud environment. Focusing on key metrics for each of these properties, we review the state of the art, then select or propose new metrics together with measurement approaches. We see the presented metrics as a foundation for upcoming, industry-standard cloud benchmarks.
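    As a minimal illustration of the classic throughput- and latency-based indicators mentioned above, the Python sketch below computes request throughput, an approximate tail latency, and a coefficient-of-variation measure of performance variability over one measurement window. The function and sample data are hypothetical and are not the metric definitions proposed in the paper.

```python
import statistics

def latency_metrics(latencies_ms, window_s):
    """Classic latency/throughput indicators over one measurement window (illustrative only)."""
    latencies = sorted(latencies_ms)
    throughput = len(latencies) / window_s                          # completed requests per second
    p99 = latencies[int(0.99 * (len(latencies) - 1))]               # approximate 99th-percentile latency
    cov = statistics.stdev(latencies) / statistics.mean(latencies)  # variability as coefficient of variation
    return {"throughput_rps": throughput, "p99_ms": p99, "latency_cov": round(cov, 3)}

# Hypothetical sample: eight request latencies observed in a two-second window.
print(latency_metrics([12.0, 15.5, 11.8, 90.2, 13.1, 14.7, 12.9, 16.3], window_s=2.0))
```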

    Capacity Requirements of Traffic Handling Schemes in Multi-Service Networks

    This paper deals with the impact of traffic handling mechanisms on capacity for different network architectures. Three traffic handling models are considered: per-flow, class-based, and best-effort (BE). These models can be used to meet service guarantees, the major differences lying in their implementation complexity and in the quantity of network resources that must be provisioned. In this study, the performance is fixed and the required capacity is determined for various combinations of traffic handling architectures in edge-core networks. The study provides a comparison of different QoS architectures. One key result of this work is that, on the basis of capacity requirements, there is no significant difference between semi-aggregate traffic handling and per-flow traffic handling. However, best-effort handling requires significantly more capacity than the other methods. (C) 2004 Elsevier B.V. All rights reserved.

    Adaptive Speculation for Efficient Internetware Application Execution in Clouds

    Modern Cloud computing systems are massive in scale, featuring environments that can execute highly dynamic Internetware applications with huge numbers of interacting tasks. This has led to a substantial challenge known as the straggler problem, whereby a small subset of slow tasks significantly impedes parallel job completion. This problem results in longer service responses, degraded system performance, and late timing failures that can easily threaten Quality of Service (QoS) compliance. Speculative execution (or speculation) is the prominent method deployed in Clouds to tolerate stragglers by creating task replicas at runtime. The method detects stragglers by applying a predefined threshold to the difference between individual task progress and the average task progression within a job. However, such a static threshold weakens speculation effectiveness, as it fails to capture the intrinsic diversity of timing constraints in Internetware applications, as well as dynamic environmental factors such as resource utilization. By considering such characteristics, different levels of strictness for replica creation can be imposed to adaptively achieve specified levels of QoS for different applications. In this paper we present an algorithm to improve the execution efficiency of Internetware applications by dynamically calculating the straggler threshold, considering key parameters including job QoS timing constraints, task execution progress, and optimal system resource utilization. We implement this dynamic straggler threshold in the YARN architecture to evaluate its effectiveness against existing state-of-the-art solutions. Results demonstrate that the proposed approach reduces parallel job response times by up to 20% compared to the static threshold, and achieves a higher speculation success rate of up to 66.67%, against 16.67% for the static method.
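    The sketch below illustrates the general idea of a progress-based straggler check whose threshold is scaled at runtime. The scaling factors (QoS slack, cluster utilization) and all values are hypothetical and only loosely mirror the adaptive thresholding described above; this is not the paper's actual algorithm.

```python
def find_stragglers(task_progress, base_threshold=0.2,
                    qos_slack=1.0, cluster_utilization=0.5):
    """Flag tasks whose progress lags the job average by more than a threshold.

    The threshold is scaled by hypothetical QoS-slack and resource-utilization
    factors, loosely mirroring the idea of adapting strictness at runtime.
    """
    avg = sum(task_progress.values()) / len(task_progress)
    # Tighter threshold when QoS slack is small; looser when the cluster is busy.
    threshold = base_threshold * qos_slack * (1.0 + cluster_utilization)
    return [tid for tid, p in task_progress.items() if avg - p > threshold]

# Hypothetical job with five tasks; task "t3" lags well behind the average.
progress = {"t1": 0.82, "t2": 0.78, "t3": 0.35, "t4": 0.80, "t5": 0.76}
print(find_stragglers(progress, qos_slack=0.8, cluster_utilization=0.4))
```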

    Securing Internet Coordinate System: Embedding Phase

    This paper addresses the issue of the security of Internet Coordinate Systems by proposing a general method for malicious behavior detection during coordinate computations. We first show that the dynamics of a node, in a coordinate system without abnormal or malicious behavior, can be modeled by a linear state space model and tracked by a Kalman filter. We then show that the obtained model can be generalized, in the sense that the parameters of a filter calibrated at one node can be used effectively to model and predict the dynamic behavior at another node, as long as the two nodes are not too far apart in the network. This leads to the proposal of a Surveyor infrastructure: Surveyor nodes are trusted, honest nodes that use each other exclusively to position themselves in the coordinate space, and are therefore immune to malicious behavior in the system. During their own coordinate embedding, other nodes can then use the filter parameters of a nearby Surveyor as a representation of normal, clean system behavior to detect and filter out abnormal or malicious activity. A combination of simulations and PlanetLab experiments is used to demonstrate the validity, generality, and effectiveness of the proposed approach for two representative coordinate embedding systems, namely Vivaldi and NPS.
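    As a simplified, scalar illustration of the filtering idea (not the paper's actual state space model), the sketch below calibrates a one-dimensional Kalman filter on presumed-clean measurements and flags a later update whose innovation falls far outside the calibrated range. All parameters and data are hypothetical.

```python
import random

class ScalarKalman:
    """Minimal 1-D Kalman filter used here to track a node's coordinate error."""
    def __init__(self, q=1e-3, r=1e-1):
        self.x, self.p = 0.0, 1.0   # state estimate and its variance
        self.q, self.r = q, r       # process and measurement noise variances

    def step(self, z):
        self.p += self.q                    # predict
        k = self.p / (self.p + self.r)      # Kalman gain
        innovation = z - self.x             # measured minus predicted
        self.x += k * innovation            # update
        self.p *= (1.0 - k)
        return innovation

# Calibrate on "clean" measurements (standing in for a trusted Surveyor node),
# then flag updates whose innovation is far outside the clean range.
kf = ScalarKalman()
clean = [random.gauss(0.0, 0.3) for _ in range(200)]      # hypothetical clean data
threshold = 4 * max(abs(kf.step(z)) for z in clean)       # crude bound from clean data
suspicious = abs(kf.step(8.0)) > threshold                # a malicious-looking jump
print("flagged as abnormal:", suspicious)
```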

    Performance of delay constrained multi-user networks under block fading channels

    Effective capacity (EC) indicates the maximum communication rate subject to a certain delay constraint, while the effective energy efficiency (EEE) is the ratio between this EC and the power consumption. In this thesis, we analyze the EC and EEE of multi-user networks operating in the finite blocklength (FB) regime. We consider a layout in which a number of users communicate through a common controller. A closed-form approximation for the per-user EC is obtained in Nakagami-m fading collision channels. The interference between transmitted data packets degrades the EC of each user. We analyze this degradation and propose three methods to alleviate the interference effect for one of the users, namely power control, delay relaxation, and joint compensation. Our results show that systems with stringent delay constraints favor power-controlled compensation, while for shorter packets the amount of compensation needed by both θ relaxation and power increase is higher. Thus, it is more costly to compensate networks transmitting shorter packets. For the hybrid method, we maximize an objective function whose parameters are determined according to the design priorities (e.g., rate and latency requirements). Results reveal that there is a unique throughput maximizer, obtained at an intermediate operational point applying both power control and delay relaxation in the joint compensation process. Furthermore, we characterize the per-user EEE for different power consumption models. The results show that accounting for the empty buffer probability enhances the per-user EEE, and that flexible transmission power and an extended maximum delay tolerance boost the per-user EEE, as also depicted in the thesis.
    Performance analysis of delay-constrained multi-user networks under block-fading channels. Abstract: The effective capacity gives the maximum data rate of a communication link under specified delay constraints, while the effective energy efficiency is the ratio of the effective capacity to the power consumption. This thesis analyzes the effective capacity and the effective energy efficiency of multi-node networks when a finite blocklength is used. The work uses a model in which a number of users communicate under the coordination of a common control unit. A closed-form approximation of the per-user effective capacity in a Nakagami-m fading channel that models data packet collisions is presented. Interference between transmitted packets reduces each user's effective capacity. Three methods are proposed to mitigate this phenomenon: power control, delay relaxation, and their combination. The results show that under strict delay constraints power-based compensation works best, whereas for short packets both methods are required; hence, compensation is costly for networks transmitting short packets. In the hybrid method, an objective function is maximized whose parameters are defined according to the design criteria (e.g., data rate and delay requirements). The results reveal a unique throughput-maximizing operating point in an intermediate region when power- and delay-based compensation are applied together. In addition, the per-node effective energy efficiency is modeled with different power consumption models. The results show that taking the probability of a non-empty buffer into account improves the per-user effective energy efficiency. The thesis also shows that using flexible transmission power together with a relaxed maximum delay tolerance improves the effective energy efficiency.
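    For reference, the effective capacity and effective energy efficiency described above are commonly written as follows for block fading; this is the standard textbook form, with notation that may differ from the thesis.

```latex
% Effective capacity with delay (QoS) exponent \theta, fading-block duration T_f,
% instantaneous service rate R over the block, and total power consumption P_{tot}
% (standard definitions, not copied from the thesis).
EC(\theta) = -\frac{1}{\theta T_f}\,
             \ln \mathbb{E}\!\left[ e^{-\theta T_f R} \right],
\qquad
EEE(\theta) = \frac{EC(\theta)}{P_{\mathrm{tot}}}
```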

    Benchmarking Eventually Consistent Distributed Storage Systems

    Cloud storage services and NoSQL systems typically offer only "Eventual Consistency", a rather weak guarantee covering a broad range of potential data consistency behavior. The degree of actual (in-)consistency, however, is unknown. This work presents novel solutions for determining the degree of (in-)consistency via simulation and benchmarking, as well as the necessary means to resolve inconsistencies by leveraging this information.
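    A common way to benchmark eventual consistency is to write a value through one client and poll for it through another, recording how long the new value takes to become visible. The sketch below illustrates that generic approach with a toy in-memory store and a hypothetical read/write interface; it is not the benchmarking method developed in this work.

```python
import time

def measure_staleness(write, read, key, value, timeout_s=10.0, poll_s=0.01):
    """Return the observed staleness window in seconds, or None on timeout.

    `write` and `read` are caller-supplied callables wrapping the storage
    system under test (hypothetical interface for this sketch).
    """
    write(key, value)
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        if read(key) == value:                 # new value became visible
            return time.monotonic() - start
        time.sleep(poll_s)
    return None                                # never converged within the timeout

# Toy in-memory "store" that converges after a fixed propagation delay.
_store, _visible_at = {}, {}
def _write(k, v): _store[k], _visible_at[k] = v, time.monotonic() + 0.05
def _read(k): return _store.get(k) if time.monotonic() >= _visible_at.get(k, 0) else None

print("staleness (s):", measure_staleness(_write, _read, "x", 42))
```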

    Offline and online power aware resource allocation algorithms with migration and delay constraints

    This manuscript version is made available under the CC BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/). In order to handle advanced mobile broadband services and the Internet of Things (IoT), future Internet and 5G networks are expected to leverage network virtualization, be much faster, have greater capacities, provide lower latencies, and be significantly more power efficient than current mobile technologies. Therefore, this paper proposes three power-aware algorithms for offline, online, and migration applications, solving the resource allocation problem within network function virtualization (NFV) environments in fractions of a second. The proposed algorithms target minimizing the total cost and power consumption of the physical network by allocating the fewest physical resources sufficient to host the demands of the virtual network services, and putting all other unused physical components into saving mode. Simulations and evaluations of the offline algorithm against the state of the art showed 32% lower total costs. In addition, the online algorithm was tested through four different experiments, and the results showed that the overall power consumption of the physical network was highly dependent on the demands' lifetimes and the strictness of the required end-to-end delay. Regarding migrations during online operation, the results indicated that the proposed algorithms would be most effective when applied under maintenance and emergency conditions.
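    To give a flavor of this kind of consolidation, the sketch below packs hypothetical virtual network function demands onto as few servers as possible so the remainder can be put into saving mode. It is a generic first-fit-decreasing heuristic, not the algorithms proposed in the paper, which also account for delay and migration constraints.

```python
def consolidate(demands, server_capacity):
    """Greedy first-fit-decreasing placement of VNF demands onto servers.

    Packs demands onto as few servers as possible so the rest can enter
    saving mode; a generic consolidation heuristic for illustration only.
    """
    servers = []                                  # remaining capacity per active server
    placement = {}
    for name, load in sorted(demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if load <= free:                      # fits on an already-active server
                servers[i] -= load
                placement[name] = i
                break
        else:                                     # otherwise power on a new server
            servers.append(server_capacity - load)
            placement[name] = len(servers) - 1
    return placement, len(servers)

# Hypothetical CPU-unit demands for five virtual network functions.
demands = {"vFW": 40, "vDPI": 35, "vLB": 20, "vCache": 60, "vNAT": 25}
print(consolidate(demands, server_capacity=100))
```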

    Design Space Exploration for MPSoC Architectures

    Multiprocessor system-on-chip (MPSoC) designs utilize the available technology and communication architectures to meet the requirements of upcoming applications. In an MPSoC, the communication platform is both the key enabler and the key differentiator for realizing efficient designs. It provides product differentiation to meet a diverse, multi-dimensional set of design constraints, including performance, power, energy, reconfigurability, scalability, cost, reliability, and time-to-market. The communication resources of a single interconnection platform cannot be fully utilized by all kinds of applications; for example, providing high communication bandwidth to applications that are computation intensive but not data intensive is often infeasible in practical implementations. This thesis performs architecture-level design space exploration toward efficient and scalable resource utilization for MPSoC communication architectures. In order to meet the performance requirements within the design constraints, careful selection of the MPSoC communication platform and resource-aware partitioning and mapping of the application play an important role. To enhance the utilization of communication resources, a variety of techniques can be used, such as resource sharing, multicast to avoid re-transmission of identical data, and adaptive routing. For implementation, these techniques should be customized according to the platform architecture. To address the resource utilization of MPSoC communication platforms, a variety of architectures with different design parameters and performance levels, namely segmented bus (SegBus), Network-on-Chip (NoC), and three-dimensional NoC (3D-NoC), are selected. Average packet latency and power consumption are the evaluation parameters for the proposed techniques. In conventional computing architectures, a fault on a component makes the connected fault-free components inoperative; a resource-sharing approach can utilize the fault-free components to retain system performance by reducing the impact of faults. Design space exploration also helps narrow down the selection of an MPSoC architecture that can meet the performance requirements within the design constraints.
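    A typical step in such an exploration is filtering candidate configurations to those that are Pareto-optimal in the two evaluation parameters, average packet latency and power consumption. The sketch below shows that generic filtering step over hypothetical candidates; it is not the exploration flow used in the thesis.

```python
def pareto_front(design_points):
    """Keep design points not dominated in (average packet latency, power)."""
    front = []
    for name, lat, pwr in design_points:
        dominated = any(l <= lat and p <= pwr and (l < lat or p < pwr)
                        for _, l, p in design_points)
        if not dominated:
            front.append((name, lat, pwr))
    return front

# Hypothetical candidate configurations: (name, average packet latency, power).
candidates = [("SegBus-4seg", 120, 0.8), ("NoC-4x4", 60, 1.5),
              ("NoC-8x8", 45, 2.6), ("3D-NoC-4x4x2", 40, 2.2)]
print(pareto_front(candidates))   # NoC-8x8 is dominated by the 3D-NoC point
```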

    Quality of Service Aware Data Stream Processing for Highly Dynamic and Scalable Applications

    Huge amounts of georeferenced data streams arrive daily at data stream management systems that are deployed to serve highly scalable and dynamic applications. There are innumerable ways in which those streams can be exploited to gain deep insights in various domains. Decision makers require an interactive visualization of such data in the form of maps and dashboards for decision making and strategic planning. Data streams normally exhibit fluctuation and oscillation in arrival rates and skewness, the two predominant factors that greatly impact the overall quality of service. This requires data stream management systems to be attuned to those factors, in addition to the spatial shape of the data, which may exaggerate their negative impact. Current systems do not natively support services with quality guarantees for dynamic scenarios, leaving the handling of those logistics to the user, which is challenging and cumbersome. Three workloads are predominant for any data stream: batch processing, scalable storage, and stream processing. In this thesis, we have designed a quality-of-service-aware system, SpatialDSMS, that comprises several subsystems covering those workloads and any mixed load that results from combining them. Most importantly, we have natively incorporated quality of service optimizations for processing avalanches of geo-referenced data streams in highly dynamic application scenarios. This has been achieved transparently on top of the codebases of emerging de facto standard, best-in-class representatives, thus relieving users in the presentation layer from having to reason about those services. Instead, users express their queries with quality goals, and our system optimizer compiles them into query plans with embedded quality guarantees, leaving the logistics to the underlying layers. We have developed standards-compliant prototypes for all the subsystems that constitute SpatialDSMS.