4 research outputs found

    PiCasso: enabling information-centric multi-tenancy at the edge of community mesh networks

    © 2019 Elsevier. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
    Edge computing is radically shaping the way Internet services are run by making computation available close to users, mitigating the latency and performance challenges of today's Internet infrastructure. Emerging markets and rural and remote communities are even further from the cloud, making edge computing especially valuable there. Many solutions have recently been proposed to facilitate efficient service delivery in edge data centers. However, we argue that those solutions cannot fully support operation in Community Mesh Networks (CMNs), where the network connection may be less reliable and exhibit variable performance. In this paper, we propose to leverage lightweight virtualisation, Information-Centric Networking (ICN), and service deployment algorithms to overcome these limitations. The proposal is implemented in the PiCasso system, which combines the in-network caching and name-based routing of ICN with our HANET (HArdware and NETwork resources) service deployment heuristic to optimise the forwarding path of service delivery within a network zone. We analyse data collected from the Guifi.net Sants network zone to develop a smart heuristic for service deployment in that zone. Through a real deployment in Guifi.net, we show that HANET improves response time by up to 53% for stateless services and up to 28.7% for stateful services. Compared to traditional host-centric communication, PiCasso achieves a 43% traffic reduction on service delivery in our deployment. The overall effect of our ICN platform is that most content and service delivery requests can be satisfied very close to the client device, often just one hop away, decoupling QoS from intra-network traffic and origin-server load.
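    The abstract does not give HANET's scoring rule, but the idea of ranking candidate nodes by both hardware headroom and network proximity can be sketched. The following is a minimal, hypothetical illustration: the node metrics, weights, and node names are assumptions for illustration, not taken from the paper.

```python
# Hypothetical sketch of a HANET-style placement heuristic: rank candidate
# nodes by combined hardware headroom and network proximity to clients.
# All field names and weights below are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cpu: float    # fraction of CPU still available (0..1)
    free_ram: float    # fraction of RAM still available (0..1)
    avg_hops: float    # mean hop count to the clients requesting the service
    loss_rate: float   # observed packet-loss rate on paths to clients (0..1)

def hanet_score(node: Node, w_hw: float = 0.5, w_net: float = 0.5) -> float:
    """Higher is better: ample hardware headroom plus short, reliable paths."""
    hw = (node.free_cpu + node.free_ram) / 2.0
    net = (1.0 - node.loss_rate) / (1.0 + node.avg_hops)
    return w_hw * hw + w_net * net

def place_service(nodes: list[Node]) -> Node:
    """Deploy the service on the highest-scoring candidate node."""
    return max(nodes, key=hanet_score)

if __name__ == "__main__":
    candidates = [
        Node("rpi-01", free_cpu=0.7, free_ram=0.6, avg_hops=1.0, loss_rate=0.02),
        Node("rpi-07", free_cpu=0.9, free_ram=0.8, avg_hops=3.0, loss_rate=0.10),
    ]
    print(place_service(candidates).name)  # favours the close, reliable node
```

    Weighting proximity alongside free resources captures the abstract's observation that requests end up satisfied "often just one hop away" from the client.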

    FfDL : A Flexible Multi-tenant Deep Learning Platform

    Deep learning (DL) is becoming increasingly popular in several application domains and has made new application features feasible and accurate, spanning computer vision, speech recognition and synthesis, self-driving automobiles, drug design, and more. As a result, large-scale on-premise and cloud-hosted deep learning platforms have become essential infrastructure in many organizations. These systems accept, schedule, manage and execute DL training jobs at scale. This paper describes the design and implementation of, and our experiences with, FfDL, a DL platform used at IBM. We describe how our design balances dependability with scalability, elasticity, flexibility and efficiency. We examine FfDL qualitatively, through a retrospective look at the lessons learned from building, operating, and supporting it, and quantitatively, through a detailed empirical evaluation covering the overheads introduced by the platform for various deep learning models, the load and performance observed in a real case study using FfDL within our organization, the frequency of various faults observed (including unanticipated faults), and experiments demonstrating the benefits of various scheduling policies. FfDL has been open-sourced.
    Comment: MIDDLEWARE 201
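    The abstract mentions experiments comparing scheduling policies but does not describe them. As a rough, hedged illustration of what such a comparison involves, the sketch below contrasts two generic policies for admitting training jobs onto a fixed GPU pool; this is not FfDL's actual scheduler, and the job fields and policy names are assumptions.

```python
# Illustrative only: two generic admission policies for DL training jobs on a
# shared GPU pool, of the kind a platform like FfDL might evaluate.
from collections import deque

class Job:
    def __init__(self, job_id: str, gpus: int):
        self.job_id, self.gpus = job_id, gpus

def fifo(queue: deque, free_gpus: int) -> list:
    """Admit jobs strictly in arrival order; a large job at the head of the
    queue can block smaller jobs behind it (head-of-line blocking)."""
    admitted = []
    while queue and queue[0].gpus <= free_gpus:
        job = queue.popleft()
        free_gpus -= job.gpus
        admitted.append(job)
    return admitted

def best_fit(queue: deque, free_gpus: int) -> list:
    """Greedily admit whichever queued jobs fit, smallest first, trading
    strict fairness for higher GPU utilisation."""
    admitted = []
    for job in sorted(queue, key=lambda j: j.gpus):
        if job.gpus <= free_gpus:
            free_gpus -= job.gpus
            admitted.append(job)
    for job in admitted:
        queue.remove(job)
    return admitted
```

    The trade-off the sketch exposes, fairness of arrival order versus cluster utilisation, is exactly the kind of effect the paper's scheduling-policy experiments quantify.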

    Automating inventory composition management for bulk purchasing cloud brokerage strategy

    Cloud providers offer end-users various pricing schemes so they can tailor VMs to their needs, e.g. a pay-as-you-go billing scheme, called on-demand, and a discounted contract scheme, called reserved instances. This work presents a cloud broker that offers users both the flexibility of on-demand instances and some of the discounts of reserved instances. The broker employs a buy-low-and-sell-high strategy that places user requests into a resource pool of pre-purchased, discounted cloud resources. A key challenge for buy-in-bulk-sell-individually broker business models is to estimate user requests accurately and then optimise the stock level accordingly. Given the complexity and variety of the cloud computing market, the number of candidate regression models, and hence the optimisation search space, can be large. In this thesis, we propose two solutions to the problem.

    The first solution is a risk-based decision model. The broker takes a risk-oriented approach to dynamically adjust the resource pool by analysing user-request time-series data. This approach requires no training process, which is useful when processing large data streams. The broker is evaluated on high-frequency real cloud datasets from Alibaba. The results show that the overall profit of the broker is close to the optimal case and that the risk factors work as intended: the system buys more reserved instances when it can afford them and leans towards on-demand instances otherwise. We also find a correlation between the risk factors and the profit. The risk-factor approach nevertheless has limitations, e.g. manual risk configuration and a restricted range of broker settings.

    Secondly, we propose a broker system that utilises causal discovery. The risk-based solution shows that if some parameters correlate with profit, then adjusting those parameters lets us manipulate the profit. We therefore infer a function mapping from key entities extracted from the broker's data to a broker objective such as profit, using a technique similar to the additive-noise-model method of causal discovery. These functions are assumed to describe the actual underlying behaviour of the profit with respect to the parameters. As with the risk-based solution, we use the Alibaba trace data to simulate long-term user requests. Our results show that the system can infer the underlying interaction model between variables and unlock the profit-model behaviour of the broker system.
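    To make the risk-based pool adjustment concrete, here is a minimal sketch in the spirit of the first solution: the broker watches recent demand and sizes its reserved-instance pool more aggressively as a risk factor grows, overflowing to on-demand instances otherwise. The sizing rule, function names, and parameters are illustrative assumptions, not the thesis's actual model.

```python
# Hedged sketch of a risk-based reserved-pool sizing rule. The broker stocks
# reserved instances according to recent demand and a risk factor, then
# serves each request from the pool first, overflowing to on-demand.
import statistics

def target_pool_size(recent_demand: list[int], risk: float) -> int:
    """
    recent_demand: VM-count samples from the user-request time series.
    risk in [0, 1]: 0 = conservative (lean on on-demand), 1 = aggressive
    (stock reserved instances up to mean demand plus one std deviation).
    """
    mean = statistics.mean(recent_demand)
    spread = statistics.pstdev(recent_demand)
    return round(mean + risk * spread)

def serve(request_vms: int, reserved_free: int) -> tuple[int, int]:
    """Place a request into the reserved pool first; overflow to on-demand."""
    from_reserved = min(request_vms, reserved_free)
    from_on_demand = request_vms - from_reserved
    return from_reserved, from_on_demand

if __name__ == "__main__":
    demand = [40, 55, 48, 60, 52]
    pool = target_pool_size(demand, risk=0.8)  # stocks ~56 reserved VMs
    print(pool, serve(62, pool))               # 56 reserved, 6 on-demand
```

    Because the rule is a closed-form function of the observed stream, it needs no training phase, which matches the thesis's motivation for the risk-based model; the later causal-discovery solution instead learns the mapping from such parameters to profit.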