28 research outputs found

    Cost-Effective Resource Allocation and Throughput Maximization in Mobile Cloudlets and Distributed Clouds

    With the advance of communication networks and the explosive growth in the use of mobile devices, distributed clouds consisting of many small and medium-sized datacenters at different geographical locations, and cloudlets defined as "mini" datacenters, are envisioned as the next-generation cloud computing platform. In particular, distributed clouds enable disaster-resilient and scalable services by scaling the services across multiple datacenters, while cloudlets allow pervasive and continuous services with low access delay by further enabling mobile users to access the services within their proximity. To realize the promises of distributed clouds and mobile cloudlets, it is urgent to optimize various aspects of their system performance, such as system throughput and operational cost, by developing efficient solutions. In this thesis, we aim to devise novel solutions to maximize the system throughput of mobile cloudlets and minimize the operational costs of distributed clouds, while meeting resource capacity constraints and users' resource demands. This, however, poses great challenges: (1) how to maximize the system throughput of a mobile cloudlet, considering that a mobile cloudlet has limited resources to serve energy-constrained mobile devices; (2) how to efficiently and effectively manage and evaluate big data in distributed clouds; and (3) how to efficiently allocate the resources of a distributed cloud to meet the resource demands of various users. Existing studies mainly focused on implementing systems and lacked systematic methods for optimizing the performance of distributed clouds and mobile cloudlets; novel techniques and approaches for such performance optimization are therefore needed. To address these challenges, this thesis makes the following contributions. First, we study online request admission in a cloudlet with the aim of maximizing the system throughput, assuming that future user requests are not known in advance. We propose a novel admission cost model to accurately capture dynamic resource consumption, and devise efficient algorithms for online request admission. Second, we study a novel collaboration- and fairness-aware big data management problem in a distributed cloud, maximizing the system throughput while minimizing the operational cost of service providers, subject to resource capacities and users' fairness constraints; for this problem, we propose a novel optimization framework and devise a fast yet scalable approximation algorithm with an approximation ratio. Third, we investigate online query evaluation for big data analysis in a distributed cloud, maximizing the query acceptance ratio while minimizing the query evaluation cost; for this problem, we propose a novel metric to model the costs of different resource consumptions in datacenters, and devise efficient online algorithms under both unsplittable and splittable source data assumptions. Fourth, we address the problem of community-aware data placement of online social networks into a distributed cloud, with the aim of minimizing the operational cost of the cloud service provider, and devise a fast yet scalable algorithm for the problem by leveraging a close-community concept that considers both user read rates and update rates; we also deal with social network evolution by developing a dynamic evaluation algorithm for the problem. Finally, we evaluate the performance of all proposed algorithms through experimental simulations, using real and/or synthetic datasets. Simulation results show that the proposed algorithms significantly outperform existing ones.
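
    The thesis abstract does not spell out the admission cost model, but as a rough, hypothetical illustration of threshold-based online request admission in a resource-constrained cloudlet, the Python sketch below admits a request only if a convex penalty on post-admission utilization stays below a threshold; the class, cost function, and threshold are all assumptions, not the thesis's actual algorithm.

```python
# Illustrative sketch of threshold-based online request admission in a cloudlet.
# The cost function and threshold are hypothetical, not the thesis's actual model.

class Cloudlet:
    def __init__(self, cpu, memory):
        self.capacity = {"cpu": cpu, "memory": memory}
        self.used = {"cpu": 0.0, "memory": 0.0}

    def admission_cost(self, demand):
        # Cost grows sharply as post-admission utilization approaches capacity,
        # so scarce resources are kept free for future (unknown) requests.
        cost = 0.0
        for resource, need in demand.items():
            util = (self.used[resource] + need) / self.capacity[resource]
            if util > 1.0:
                return float("inf")       # would exceed capacity
            cost += util ** 2             # convex penalty on utilization
        return cost

    def admit(self, demand, threshold=1.2):
        # Admit the request only if its admission cost stays below the threshold.
        if self.admission_cost(demand) <= threshold:
            for resource, need in demand.items():
                self.used[resource] += need
            return True
        return False

if __name__ == "__main__":
    cloudlet = Cloudlet(cpu=16, memory=32)
    requests = [{"cpu": 4, "memory": 8}, {"cpu": 10, "memory": 20}, {"cpu": 6, "memory": 8}]
    print([cloudlet.admit(req) for req in requests])   # e.g. [True, False, True]
```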

    z-TORCH: An Automated NFV Orchestration and Monitoring Solution

    Autonomous management and orchestration (MANO) of virtualized resources and services, especially in large-scale NFV environments, is a big challenge owing to the stringent delay and performance requirements expected of a variety of network services. The quality of decision (QoD) of a MANO system depends on the quality and timeliness of the information that it receives from the underlying monitoring system. The data generated by monitoring systems is a significant contributor to the network and processing load of MANO systems, thus impacting their performance. This raises a unique challenge: how to jointly optimize the QoD of MANO systems while at the same time minimizing their monitoring load at runtime? This is the main focus of this paper. In this context, we propose a novel automated NFV orchestration solution called z-TORCH (zero Touch Orchestration) that jointly optimizes the orchestration and monitoring processes by exploiting machine learning techniques. The objective is to enhance the QoD of MANO systems, achieving a near-optimal placement of VNFs at minimum monitoring cost. This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 761536 (5G-Transformer project).
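
    z-TORCH's actual learning pipeline is not described in this abstract; purely as a hypothetical illustration of the QoD-versus-monitoring-load trade-off, the sketch below adapts each VNF's monitoring interval to the volatility of its recent KPI samples. All function names and thresholds are invented for the example.

```python
# Hypothetical sketch: adapt per-VNF monitoring intervals to KPI volatility, so
# stable VNFs are polled less often (lower monitoring load) while volatile VNFs
# are polled more often (better quality of decision). Not the z-TORCH algorithm.
from statistics import pstdev

def next_interval(kpi_window, base=5.0, min_ivl=1.0, max_ivl=60.0, calm=0.05):
    """Return the next monitoring interval (seconds) for one VNF.

    kpi_window: recent KPI samples (e.g. normalized CPU load in [0, 1]).
    calm: standard-deviation level considered 'stable'.
    """
    if len(kpi_window) < 2:
        return base
    volatility = pstdev(kpi_window)
    if volatility <= calm:
        return min(max_ivl, base * 4)   # stable: back off aggressively
    # Shrink the interval as volatility grows beyond the calm level.
    return max(min_ivl, base * calm / volatility)

print(next_interval([0.42, 0.43, 0.41, 0.42]))  # stable KPIs -> long interval
print(next_interval([0.10, 0.80, 0.25, 0.95]))  # volatile KPIs -> short interval
```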

    Dynamic resource provisioning for data center workloads with data constraints

    Dynamic resource provisioning, as an important data center software building block, helps to achieve high resource usage efficiency, leading to enormous monetary benefits. Most existing work on data center dynamic provisioning targets stateless servers, where any request can be routed to any server. However, the assumption of stateless behavior no longer holds for subsystems that are subject to data constraints, as a request may depend on a certain dataset stored on a small subset of servers. Routing a request to a server without the required dataset violates data locality or data availability properties, which may negatively impact response times. To solve this problem, this thesis provides a unified framework consisting of two main steps: 1) determining the proper amount of resources to serve the workload by analyzing the schedulability utilization bound; 2) avoiding transition penalties during cluster resizing operations by deliberately designing data distribution policies. We apply this framework to both storage and computing subsystems, where the former includes distributed file systems, databases, and memory caches, and the latter refers to systems such as Hadoop, Spark, and Storm. The proposed solutions are implemented in MemCached, HBase/HDFS, and Spark, and evaluated using various datasets, including Wikipedia, NYC taxi, and Twitter traces.
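
    As a loose sketch of the two framework steps described above (with invented numbers and names, not the thesis's implementation), the following code sizes a cluster from a utilization bound and places datasets with consistent hashing so that a resize moves only a small fraction of the data.

```python
# Illustrative sketch (hypothetical parameters): size a stateful cluster from a
# utilization bound, then place datasets with consistent hashing so that
# resizing the cluster relocates only a small fraction of the data.
import bisect
import hashlib
import math

def servers_needed(total_load, per_server_capacity, utilization_bound=0.7):
    # Provision enough servers so that no server exceeds the utilization bound.
    return math.ceil(total_load / (per_server_capacity * utilization_bound))

class ConsistentHashRing:
    def __init__(self, servers, vnodes=64):
        self._ring = sorted(
            (self._hash(f"{s}#{v}"), s) for s in servers for v in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def server_for(self, dataset):
        # Walk clockwise on the ring to the first virtual node at or after the key.
        idx = bisect.bisect(self._keys, self._hash(dataset)) % len(self._ring)
        return self._ring[idx][1]

if __name__ == "__main__":
    n = servers_needed(total_load=900, per_server_capacity=100)   # -> 13 servers
    ring = ConsistentHashRing([f"server-{i}" for i in range(n)])
    print(n, ring.server_for("wikipedia-shard-42"))
```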

    Scalable big data systems: Architectures and optimizations

    Big data analytics has become not just a popular buzzword but also a strategic direction in information technology for many enterprises and government organizations. Even though many new computing and storage systems have been developed for big data analytics, scalable big data processing has become more and more challenging as a result of the huge and rapidly growing size of real-world data. Dedicated to the development of architectures and optimization techniques for scaling big data processing systems, especially in the era of cloud computing, this dissertation makes three unique contributions. First, it introduces a suite of graph partitioning algorithms that run much faster than existing data distribution methods and inherently scale to the growth of big data. The main idea of these approaches is to partition a big graph by preserving the core computational data structure as much as possible, maximizing intra-server computation and minimizing inter-server communication. In addition, it proposes a distributed iterative graph computation framework that effectively utilizes secondary storage to maximize access locality and speed up distributed iterative graph computations. The framework not only considerably reduces memory requirements for iterative graph algorithms but also significantly improves the performance of iterative graph computations. Last but not least, it establishes a suite of optimization techniques for scalable spatial data processing along three orthogonal dimensions: (i) scalable processing of spatial alarms for mobile users traveling on road networks, (ii) scalable location tagging for improving the quality of Twitter data analytics and prediction accuracy, and (iii) lightweight spatial indexing for enhancing the performance of big spatial data queries.
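
    The dissertation's partitioners are not reproduced here; the toy greedy partitioner below only illustrates the stated goal of maximizing intra-server computation by assigning each vertex to the partition that already holds most of its neighbors, subject to a capacity limit. All parameters are hypothetical.

```python
# Toy greedy edge-cut partitioner (not the dissertation's algorithms): assign
# each vertex to the partition holding most of its already-placed neighbors,
# breaking ties toward the least-loaded partition, to keep computation local
# to a server and limit inter-server communication.
from collections import defaultdict

def greedy_partition(edges, num_parts, capacity):
    adjacency = defaultdict(set)
    for u, v in edges:
        adjacency[u].add(v)
        adjacency[v].add(u)

    assignment, load = {}, [0] * num_parts
    for vertex in sorted(adjacency, key=lambda v: -len(adjacency[v])):  # high-degree first
        best, best_score = None, None
        for p in range(num_parts):
            if load[p] >= capacity:
                continue                                   # partition already full
            neighbours_here = sum(1 for n in adjacency[vertex] if assignment.get(n) == p)
            score = (neighbours_here, -load[p])            # locality first, balance second
            if best_score is None or score > best_score:
                best, best_score = p, score
        assignment[vertex] = best
        load[best] += 1
    return assignment

edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("d", "e"), ("e", "f"), ("d", "f")]
print(greedy_partition(edges, num_parts=2, capacity=4))
```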

    Geo-distributed Edge and Cloud Resource Management for Low-latency Stream Processing

    The proliferation of Internet-of-Things (IoT) devices is rapidly increasing the demand for efficient processing of low-latency stream data generated close to the edge of the network. Edge computing provides a layer of infrastructure to fill latency gaps between the IoT devices and the back-end cloud computing infrastructure. A large number of IoT applications require continuous processing of data streams in real time. Edge computing-based stream processing techniques that carefully consider the heterogeneity of the computing and network resources available in the geo-distributed infrastructure provide significant benefits in optimizing the throughput and end-to-end latency of the data streams. Managing geo-distributed resources operated by individual service providers raises new challenges in terms of effective global resource sharing and achieving global efficiency in the resource allocation process. In this dissertation, we present a distributed stream processing framework that optimizes the performance of stream processing applications through a careful allocation of computing and network resources available at the edge of the network. The proposed approach differentiates itself from the state of the art through its careful consideration of data locality and resource constraints during physical plan generation and operator placement for the stream queries. Additionally, it considers co-flow dependencies that exist between the data streams to optimize the network resource allocation through an application-level rate control mechanism. The proposed framework incorporates resilience through a cost-aware partial active replication strategy that minimizes the recovery cost when applications incur failures. The framework employs a reinforcement learning-based online learning model for dynamically determining the level of parallelism to adapt to changing workload conditions. The second dimension of this dissertation proposes a novel model for allocating computing resources in edge and cloud computing environments. In edge computing environments, it allows service providers to establish resource sharing contracts with infrastructure providers a priori in a latency-aware manner. In geo-distributed cloud environments, it allows cloud service providers to establish resource sharing contracts with individual datacenters a priori for defined time intervals in a cost-aware manner. Based on these mechanisms, we develop a decentralized implementation of the contract-based resource allocation model for geo-distributed resources using Smart Contracts in Ethereum.
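
    As an illustration of locality- and capacity-aware operator placement in the spirit described above (not the dissertation's actual planner), the sketch below walks a linear stream query and greedily places each operator on the feasible node with the lowest latency to the previous stage; the topology, latencies, and demands are invented.

```python
# Hypothetical sketch of locality-aware operator placement: walk a linear stream
# query (source -> op1 -> ... -> sink) and greedily place each operator on the
# node with enough spare capacity that is closest (lowest latency) to where the
# previous stage was placed.

def place_pipeline(operators, nodes, latency, source_node):
    """operators: list of (name, cpu_demand); nodes: dict name -> spare cpu;
    latency: dict (node_a, node_b) -> milliseconds, symmetric, 0 on the diagonal."""
    placement, previous = {}, source_node
    for name, demand in operators:
        candidates = [n for n, spare in nodes.items() if spare >= demand]
        if not candidates:
            raise RuntimeError(f"no node can host operator {name}")
        chosen = min(candidates, key=lambda n: latency[(previous, n)])
        placement[name] = chosen
        nodes[chosen] -= demand          # reserve capacity on the chosen node
        previous = chosen
    return placement

nodes = {"edge-1": 2, "edge-2": 4, "cloud": 64}
latency = {}
for a in ["edge-1", "edge-2", "cloud"]:
    for b in ["edge-1", "edge-2", "cloud"]:
        latency[(a, b)] = 0 if a == b else (5 if "edge" in a and "edge" in b else 40)

pipeline = [("filter", 1), ("window-join", 3), ("aggregate", 2)]
print(place_pipeline(pipeline, nodes, latency, source_node="edge-1"))
```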

    Intelligent Straggler Mitigation in Massive-Scale Computing Systems

    In order to satisfy increasing demand for Cloud services, modern computing systems are often massive in scale, typically consisting of hundreds to thousands of heterogeneous machine nodes. Parallel computing frameworks such as MapReduce are widely deployed over such cluster infrastructure to provide reliable yet prompt services to customers. However, complex characteristics of Cloud workloads, including multi-dimensional resource requirements and highly changeable system environments, e.g. dynamic node performance, are introducing new challenges to service providers in terms of both customer experience and system efficiency. One primary challenge is the straggler problem, whereby a small subset of the parallelized tasks takes an abnormally long execution time compared with its siblings, leading to extended job response times and potential late-timing failures. The state-of-the-art approach to straggler mitigation is speculative execution. Although it has been deployed in several real-world systems with a variety of implementation optimizations, the analysis in this thesis shows that speculative execution is often inefficient. According to various production tracelogs of data centers, the failure rate of speculative execution can be as high as 71%. Straggler mitigation is a complicated problem by its very nature: 1) stragglers may lead to different consequences for parallel job execution, possibly with different degrees of severity; 2) whether a task should be regarded as a straggler is highly subjective, depending upon different application and system conditions; 3) the efficiency of speculative execution would be improved if dynamic node performance could be modelled and predicted appropriately; and 4) there are other types of stragglers, e.g. those caused by data skew, that are beyond the capability of speculative execution. This thesis starts with a quantitative and rigorous analysis of issues with stragglers, including their root causes and impacts, the execution environment running them, and the limitations to their mitigation. Scientific principles of straggler mitigation are investigated and new algorithms are developed. An intelligent system for straggler mitigation is then designed and developed, being compatible with the majority of current parallel computing frameworks. Combined with historical data analysis and online adaptation, the system is capable of mitigating stragglers intelligently, dynamically judging whether a task is a straggler and handling it, avoiding current weak nodes, and dealing with data skew, a special type of straggler, with a dedicated method. Comprehensive analysis and evaluation of the system show that it is able to reduce job response time by up to 55%, compared with the speculator used in the default YARN system, while the optimal improvement a speculation-based method may achieve is around 66% in theory. The system also achieves a much higher success rate of speculation than other production systems, up to 89%.
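
    The thesis's intelligent mitigation system is considerably richer than this, but as a minimal sketch of progress-based straggler detection, the code below flags a task whose progress rate falls well below the median rate of its siblings; the threshold and task data are hypothetical.

```python
# Simple progress-rate straggler detector (an illustration, not the thesis's
# intelligent mitigation system): a task is flagged as a straggler when its
# progress rate falls well below the median rate of its sibling tasks.
from statistics import median

def find_stragglers(tasks, now, slow_factor=0.5):
    """tasks: dict task_id -> (start_time, progress in [0, 1])."""
    rates = {
        tid: progress / max(now - start, 1e-9)
        for tid, (start, progress) in tasks.items()
    }
    typical = median(rates.values())
    return [tid for tid, rate in rates.items() if rate < slow_factor * typical]

tasks = {
    "map-00": (0.0, 0.90),
    "map-01": (0.0, 0.85),
    "map-02": (0.0, 0.20),   # lagging well behind its siblings
    "map-03": (0.0, 0.95),
}
print(find_stragglers(tasks, now=100.0))   # -> ['map-02']
```

    In practice, a detected straggler would then trigger a speculative copy on a node that is not currently weak, which is where the thesis's node performance modelling comes in.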

    MACHS: Mitigating the Achilles Heel of the Cloud through High Availability and Performance-aware Solutions

    Cloud computing is continuously growing as a business model for hosting information and communication technology applications. However, many concerns arise regarding the quality of service (QoS) offered by the cloud. One major challenge is the high availability (HA) of cloud-based applications. The key to achieving availability requirements is to develop an approach that is immune to cloud failures while minimizing service level agreement (SLA) violations. To this end, this thesis addresses the HA of cloud-based applications from different perspectives. First, the thesis proposes a component HA-aware scheduler (CHASE) to manage the deployments of carrier-grade cloud applications while maximizing their HA and satisfying the QoS requirements. Second, a Stochastic Petri Net (SPN) model is proposed to capture the stochastic characteristics of cloud services and quantify the expected availability offered by an application deployment. The SPN model is then associated with an extensible policy-driven cloud scoring system that integrates other cloud concerns (i.e. green and cost concerns) with HA objectives. The proposed HA-aware solutions are extended to include a live virtual machine migration model that provides a trade-off between the migration time and the downtime while maintaining the HA objectives. Furthermore, the thesis proposes a generic input template for cloud simulators, GITS, to facilitate the creation of cloud scenarios while ensuring reusability, simplicity, and portability. Finally, an availability-aware CloudSim extension, ACE, is proposed. ACE extends the CloudSim simulator with failure injection, computational paths, repair, failover, load balancing, and other availability-based modules.
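
    The SPN model itself is not reproduced here; as a much simpler back-of-the-envelope illustration of quantifying expected availability, the sketch below combines per-instance MTBF/MTTR availabilities in parallel within a tier and in series across tiers, using invented figures.

```python
# Simplified series/parallel availability arithmetic (not the thesis's SPN model):
# estimate the steady-state availability of an application whose tiers must all
# be up (series), where each tier runs N redundant instances (parallel).

def instance_availability(mtbf_hours, mttr_hours):
    # Classic steady-state availability of a single repairable component.
    return mtbf_hours / (mtbf_hours + mttr_hours)

def tier_availability(instance_avail, replicas):
    # A tier survives unless every replica is down simultaneously.
    return 1.0 - (1.0 - instance_avail) ** replicas

def application_availability(tiers):
    # All tiers are needed, so their availabilities multiply.
    result = 1.0
    for mtbf, mttr, replicas in tiers:
        result *= tier_availability(instance_availability(mtbf, mttr), replicas)
    return result

# Hypothetical three-tier deployment: (MTBF hours, MTTR hours, replica count).
tiers = [(2000, 4, 2), (1000, 2, 3), (500, 1, 2)]
print(f"{application_availability(tiers):.6f}")
```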

    Transforming Large-Scale Virtualized Networks: Advancements in Latency Reduction, Availability Enhancement, and Security Fortification

    In today’s digital age, the increasing demand for networks, driven by the proliferation of connected devices, data-intensive applications, and transformative technologies, necessitates robust and efficient network infrastructure. This thesis addresses the challenges posed by virtualization in 5G networking and focuses on enhancing next-generation Radio Access Networks (RANs), particularly Open-RAN (O-RAN). The objective is to transform virtualized networks into highly reliable, secure, and latency-aware systems. To achieve this, the thesis proposes novel strategies for virtual function placement, traffic steering, and virtual function security within O-RAN. These solutions utilize optimization techniques such as binary integer programming, mixed integer binary programming, column generation, and machine learning algorithms, including supervised learning and deep reinforcement learning. By implementing these contributions, network service providers can deploy O-RAN with enhanced reliability, speed, and security, specifically tailored for Ultra-Reliable and Low-Latency Communications (URLLC) use cases. The optimized RAN virtualization achieved through this research unlocks a new era in network architecture that can confidently support URLLC applications, including Autonomous Vehicles, Industrial Automation and Robotics, Public Safety and Emergency Services, and Smart Grids.
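
    The thesis formulates placement with binary integer programming and column generation; purely to show the shape of the decision problem, the brute-force toy below picks a latency-minimizing, capacity-feasible assignment of virtual functions to hosting sites on a tiny invented instance.

```python
# Toy exhaustive version of latency-aware virtual function placement (the thesis
# uses binary integer programming and column generation; this brute force only
# shows the shape of the decision problem on a tiny instance).
from itertools import product

def place_vnfs(vnfs, nodes, latency_to_ru, capacity):
    """Assign each VNF (name, cpu) to a node, minimizing total latency to the
    radio unit while respecting per-node CPU capacity."""
    best_cost, best_plan = float("inf"), None
    for assignment in product(nodes, repeat=len(vnfs)):
        used = {n: 0 for n in nodes}
        for (name, cpu), node in zip(vnfs, assignment):
            used[node] += cpu
        if any(used[n] > capacity[n] for n in nodes):
            continue                      # infeasible placement
        cost = sum(latency_to_ru[node] for node in assignment)
        if cost < best_cost:
            best_cost = cost
            best_plan = dict(zip((name for name, _ in vnfs), assignment))
    return best_plan, best_cost

vnfs = [("O-DU", 4), ("O-CU-CP", 2), ("O-CU-UP", 3)]
nodes = ["far-edge", "edge", "regional-cloud"]
latency_to_ru = {"far-edge": 1, "edge": 5, "regional-cloud": 20}
capacity = {"far-edge": 4, "edge": 6, "regional-cloud": 32}
print(place_vnfs(vnfs, nodes, latency_to_ru, capacity))
```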

    Automatic Generation of Distributed Runtime Infrastructure for Internet of Things

    The Internet of Things (IoT) represents a network of connected devices that are able to cooperate and interact with each other in order to reach a particular goal. To attain this, the devices are equipped with identifying, sensing, networking and processing capabilities. Cloud computing, on the other hand, is the delivery of on-demand computing services, from applications to storage to processing power, typically over the internet. Clouds bring a number of advantages to distributed computing because of their highly available pools of virtualized computing resources. Due to the large number of connected devices, real-world IoT use cases may generate overwhelmingly large amounts of data. This prompts the use of cloud resources for processing, storage and analysis of the data. Therefore, a typical IoT system comprises a front-end (devices that collect and transmit data) and a back-end, typically distributed Data Stream Management Systems (DSMSs) deployed on cloud infrastructure for data processing and analysis. Increasingly, new IoT devices are being manufactured to provide a limited execution environment on top of their data sensing and transmitting capabilities. This consequently demands a change in the way data is processed in a typical IoT-cloud setup. The traditional, centralised cloud-based data processing model, in which IoT devices are used only for data collection, does not make efficient use of all available resources. In addition, the fundamental requirements of real-time data processing, such as short response times, may not always be met. This prompts a new processing model based on decentralising the data processing tasks. The new decentralised architectural pattern allows some parts of a data streaming computation to be executed directly on edge devices, closer to where the data is collected. Extending processing capabilities to the IoT devices increases the robustness of applications and reduces the communication overhead between the different components of an IoT system. However, this new pattern poses new challenges in the development, deployment and management of IoT applications. Firstly, there exists a large resource gap between the two parts of a typical IoT system (i.e. clouds and IoT devices), prompting a new approach to IoT application deployment and management. Secondly, the new decentralised approach necessitates the deployment of DSMSs on distributed clusters of heterogeneous nodes, resulting in unpredictable runtime performance and complex fault characteristics. Lastly, the environment where DSMSs are deployed is very dynamic due to user or device mobility, workload variation, and resource availability. In this thesis, we present solutions to address the aforementioned challenges. We investigate how a high-level description of a data streaming computation can be used to automatically generate a distributed runtime infrastructure for the Internet of Things. Subsequently, we develop a deployment and management system capable of distributing different operators of a data streaming computation onto different IoT gateway devices and cloud infrastructure. To address the other challenges, we propose a non-intrusive approach for performance evaluation of DSMSs and present a protocol and a set of algorithms for dynamic migration of stateful data stream operators. To improve our migration approach, we further provide an optimisation technique that achieves minimal application downtime and improves the accuracy of the data stream computation.
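
    The migration protocol and algorithms are not detailed in this abstract; the sketch below only outlines the generic pause, snapshot, restore and resume pattern for moving a stateful stream operator between hosts, with all names and state invented for illustration.

```python
# Generic outline of stateful stream-operator migration (pause, drain, transfer
# state, resume); a sketch of the common pattern, not the thesis's protocol.

class StatefulOperator:
    def __init__(self, name):
        self.name, self.state, self.paused = name, {}, False

    def process(self, key, value):
        if self.paused:
            raise RuntimeError("operator paused for migration; buffer upstream")
        self.state[key] = self.state.get(key, 0) + value   # e.g. a running sum

def migrate(operator, source_host, target_host):
    # 1. Pause the operator so no new tuples mutate its state (upstream buffers).
    operator.paused = True
    # 2. Snapshot the operator state (in practice: serialise and ship it).
    snapshot = dict(operator.state)
    # 3. Recreate the operator on the target and restore the snapshot.
    replacement = StatefulOperator(operator.name)
    replacement.state = snapshot
    # 4. Re-route the stream to the target and resume processing there.
    print(f"{operator.name}: {source_host} -> {target_host}, "
          f"{len(snapshot)} state entries moved")
    return replacement

op = StatefulOperator("word-count")
for word in ["edge", "cloud", "edge"]:
    op.process(word, 1)
op = migrate(op, source_host="gateway-7", target_host="cloud-vm-3")
op.process("edge", 1)
print(op.state)   # {'edge': 3, 'cloud': 1}
```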

    CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap

    After addressing the state of the art during the first year of Chorus and establishing the existing landscape in multimedia search engines, we identified and analyzed gaps in the European research effort during our second year. In this period we focused on three directions, namely technological issues, user-centred issues and use cases, and socio-economic and legal aspects. These were assessed through two central studies: firstly, a concerted vision of the functional breakdown of a generic multimedia search engine, and secondly, representative use-case descriptions with a related discussion of the requirements they pose for technological challenges. Both studies were carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as national initiative coordinators. Based on the feedback obtained, we identified two types of gaps, namely core technological gaps that involve research challenges, and “enablers”, which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges.