13 research outputs found

    Feedback-control & queueing theory-based resource management for streaming applications

    Recent advances in sensor technologies and instrumentation have led to an extraordinary growth of data sources and streaming applications. A wide variety of devices, from smartphones to dedicated sensors, can collect and stream large amounts of data at unprecedented rates, and a number of distinct streaming data models have been proposed. Typical applications include smart cities and built environments, for instance, where sensor-based infrastructures continue to increase in scale and variety. Understanding how such streaming content can be processed within some time threshold remains a non-trivial and important research topic. We investigate how a cloud-based computational infrastructure can autonomically respond to such streaming content, offering Quality of Service guarantees. We propose an autonomic controller (based on feedback control and queueing theory) to elastically provision virtual machines to meet performance targets associated with a particular data stream. Evaluation is carried out using a federated cloud-based infrastructure (implemented using CometCloud), where the allocation of new resources can be based on: (i) differences between sites, i.e. the types of resources supported (e.g. GPU vs. CPU only); (ii) cost of execution; and (iii) failure rate and likely resilience. In particular, we demonstrate how Little's Law, a widely used result in queueing theory, can be adapted to support dynamic control in the context of such resource provisioning.
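
    The queueing-theory component above rests on Little's Law, L = λW, which ties the average number of items in a system (L) to their arrival rate (λ) and the average time each item spends in the system (W). The sketch below is only a minimal illustration, with assumed names and a simplified VM-pool model, of how the law can drive a scale-out decision; it is not the paper's actual controller.

        import math

        def extra_vms_needed(queue_length, per_vm_rate, current_vms, target_response_time):
            """Decide how many VMs to add so the average time in system
            returns to the target, using Little's Law (L = lambda * W)."""
            # Effective service rate of the current pool (events per second).
            effective_rate = current_vms * per_vm_rate
            # Little's Law rearranged: observed W = L / lambda.
            observed_wait = queue_length / effective_rate
            if observed_wait <= target_response_time:
                return 0
            # Rate needed to bring W back to the target: lambda_needed = L / W_target.
            needed_rate = queue_length / target_response_time
            return math.ceil((needed_rate - effective_rate) / per_vm_rate)

        # Example: 2000 queued events, 10 VMs each handling 40 events/s,
        # 2 s response-time target -> observed W = 5 s, so add 15 VMs.
        print(extra_vms_needed(2000, 40, 10, 2))   # -> 15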

    Application cost-aware cloud provisioning

    Cloud computing platforms allow application owners to rent resources in order to dynamically expand the overall computational power of their infrastructure. The characteristics and lease prices of these resources usually vary. Cloud providers ensure Quality of Service through Service Level Agreements (SLAs) and pay a penalty when these agreements are violated. Usually, cloud-based applications also offer SLAs to their users. In a dynamic environment, where a user runs applications on her private cloud and can add or remove nodes from (public) cloud providers, two types of SLA exist: (i) the SLA offered by the application to the end users, and (ii) the SLA offered by the cloud providers to the application. Thus, the penalty for an SLA violation from the application to the end users might be lower if the SLA from the public cloud provider is also violated. This property makes the calculation of the total operational cost complex, but it also expands the search space of choices with lower total cost. In this thesis we present an application cost-aware resource provisioning algorithm for NoSQL applications that aims to minimize the total application cost by taking into account the elasticity properties of that application in a heterogeneous environment; the algorithm is based on look-ahead optimization.
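
    As a toy illustration of the two-level SLA accounting described above, the sketch below compares candidate provisioning plans by total cost, where the user-facing penalty is partly offset when the public-cloud provider's own SLA was also violated. The offset rule, the plan structure and all names are assumptions for demonstration, not the thesis's algorithm.

        def total_cost(lease_cost, app_sla_violated, provider_sla_violated,
                       app_penalty, provider_credit):
            """Assumed cost model: lease cost plus the user-facing SLA penalty,
            reduced by any credit the provider pays for its own violation."""
            cost = lease_cost
            if app_sla_violated:
                penalty = app_penalty
                if provider_sla_violated:
                    # The provider's credit offsets part of the penalty the
                    # application effectively bears towards its end users.
                    penalty = max(0.0, app_penalty - provider_credit)
                cost += penalty
            return cost

        # Two candidate plans evaluated by a (hypothetical) look-ahead step:
        plans = [
            {"name": "cheap", "lease_cost": 10.0, "app_sla_violated": True,
             "provider_sla_violated": True,  "app_penalty": 8.0, "provider_credit": 5.0},
            {"name": "safe",  "lease_cost": 16.0, "app_sla_violated": False,
             "provider_sla_violated": False, "app_penalty": 8.0, "provider_credit": 5.0},
        ]
        best = min(plans, key=lambda p: total_cost(p["lease_cost"], p["app_sla_violated"],
                                                   p["provider_sla_violated"],
                                                   p["app_penalty"], p["provider_credit"]))
        print(best["name"])   # -> "cheap" (10 + (8 - 5) = 13 < 16)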

    Computational resource management for data-driven applications with deadline constraints

    Recent advances in the type and variety of sensing technologies have led to an extraordinary growth in the volume of data being produced and to a number of streaming applications that make use of this data. Sensors typically monitor environmental or physical phenomena at predefined time intervals, or when triggered by user-defined events. Understanding how such streaming content (the raw data or events) can be processed within a time threshold remains an important research challenge. We investigate how a cloud-based computational infrastructure can autonomically respond to such streaming content, offering quality of service guarantees. In particular, we contextualize our approach using an electric vehicle (EV) charging scenario, where such vehicles need to connect to the electrical grid to charge their batteries. There has been emerging interest in EV aggregators (primarily intermediate brokers able to estimate the aggregate charging demand for a collection of EVs) to coordinate the charging process. We consider predicting EV charging demand as a potential workload with execution time constraints. We assume that an EV aggregator manages a number of geographic areas and a pool of computational resources of a cloud computing cluster to support the scheduling of EV charging. The objective is to ensure that there is enough computational capacity to satisfy the requirements for managing EV battery charging requests within specific time constraints.
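
    A minimal back-of-the-envelope sketch of the capacity question posed above: how many cloud workers are needed so that one charging-demand prediction per geographic area completes before a deadline. The per-task runtime and the sequential-worker model are illustrative assumptions, not the paper's scheduler.

        import math

        def workers_needed(num_areas, est_task_seconds, deadline_seconds):
            """Estimate how many parallel workers are needed so that one demand
            prediction per area finishes before the deadline.

            Assumes independent tasks and workers that run them sequentially."""
            tasks_per_worker = max(1, math.floor(deadline_seconds / est_task_seconds))
            return math.ceil(num_areas / tasks_per_worker)

        # Example: 120 areas, ~15 s per prediction, results needed within 60 s.
        print(workers_needed(120, 15, 60))   # -> 30 workers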

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology fundamentally transforming the way that computing services are delivered, cloud computing offers information and communication technology users convenient access to resources as services via the Internet. Because the cloud provides a finite pool of virtualized, on-demand resources, scheduling them optimally has become an essential and rewarding topic, and a trend of using Evolutionary Computation (EC) algorithms for this purpose is emerging rapidly. Through an analysis of the cloud computing architecture, this survey first presents a taxonomy of cloud resource scheduling at two levels. It then paints a landscape of the scheduling problem and its solutions. Following the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are identified, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multi-objective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
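
    As a deliberately small illustration of the kind of Evolutionary Computation approach this survey covers, the sketch below evolves task-to-VM assignments to reduce makespan with a generic genetic algorithm (random initial population, elitist selection, one-point crossover, per-gene mutation). It is a toy example, not any specific algorithm from the survey, and the task and VM figures are made up.

        import random

        def makespan(assignment, task_times, vm_speeds):
            """Completion time of the slowest VM for a given task -> VM assignment."""
            load = [0.0] * len(vm_speeds)
            for task, vm in enumerate(assignment):
                load[vm] += task_times[task] / vm_speeds[vm]
            return max(load)

        def evolve(task_times, vm_speeds, pop_size=30, generations=100, mutation=0.1):
            n_tasks, n_vms = len(task_times), len(vm_speeds)
            # Random initial population of assignments (one VM index per task).
            pop = [[random.randrange(n_vms) for _ in range(n_tasks)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=lambda ind: makespan(ind, task_times, vm_speeds))
                survivors = pop[: pop_size // 2]            # elitist selection
                children = []
                while len(survivors) + len(children) < pop_size:
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, n_tasks)      # one-point crossover
                    child = a[:cut] + b[cut:]
                    for i in range(n_tasks):                # per-gene mutation
                        if random.random() < mutation:
                            child[i] = random.randrange(n_vms)
                    children.append(child)
                pop = survivors + children
            return min(pop, key=lambda ind: makespan(ind, task_times, vm_speeds))

        # Example: six tasks scheduled onto one slow and one fast VM.
        tasks, vms = [4, 7, 2, 9, 5, 3], [1.0, 2.0]
        best = evolve(tasks, vms)
        print(best, makespan(best, tasks, vms))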

    Deferred lightweight indexing for log-structured key-value stores

    The recent shift towards write-intensive workloads on big data (e.g., financial trading, social user-generated data streams) has pushed the proliferation of log-structured key-value stores, represented by Google's BigTable [1], Apache HBase [2] and Cassandra [3]. While providing key-based data access with a Put/Get interface, these key-value stores do not support value-based access methods, which significantly limits their applicability in modern web and database applications. In this paper, we present DELI, a DEferred Lightweight Indexing scheme for log-structured key-value stores. To index intensively updated big data in real time, DELI aims at making index maintenance as lightweight as possible. The key idea is to apply an append-only design for online index maintenance and to collect index garbage at carefully chosen times. DELI optimizes the performance of index garbage collection by tightly coupling its execution with a native routine called compaction. DELI's system design is fault-tolerant and generic (applicable to most key-value stores); we implemented a prototype of DELI based on HBase without internal code modification. Our experiments show that DELI offers a significant performance advantage for write-intensive index maintenance.
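
    A toy model of the append-only, deferred index-maintenance idea behind the scheme described above: every Put also appends an entry to a value-keyed index, reads verify index hits against the base table, and stale entries are only pruned later by a compaction-like cleanup. The in-memory dictionaries stand in for store tables; this is a sketch of the concept, not DELI's implementation against HBase.

        # Base table and secondary index; the index may temporarily hold garbage.
        base_table = {}    # row_key -> value
        index_table = {}   # value   -> set of row_keys

        def put(row_key, value):
            """Write the base row and append an index entry; never clean up inline."""
            base_table[row_key] = value
            index_table.setdefault(value, set()).add(row_key)

        def get_by_value(value):
            """Value-based lookup: read the index, then verify against the base
            table to filter out stale entries left by earlier overwrites."""
            return {rk for rk in index_table.get(value, set())
                    if base_table.get(rk) == value}

        def compact():
            """Deferred garbage collection, piggybacked on a compaction-like pass:
            drop index entries whose base row no longer carries that value."""
            for value, row_keys in list(index_table.items()):
                live = {rk for rk in row_keys if base_table.get(rk) == value}
                if live:
                    index_table[value] = live
                else:
                    del index_table[value]

        put("r1", "alice"); put("r2", "bob"); put("r1", "carol")   # r1 overwritten
        print(get_by_value("alice"))   # -> set() (stale entry filtered at read time)
        compact()                      # stale "alice" -> {"r1"} entry removed here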