41 research outputs found

    Enabling End-To-End Orchestration of Multi-Cloud Applications

    Get PDF
    The orchestration of application components across heterogeneous cloud providers is a problem that has been tackled using various approaches, some of which led to the creation of cloud orchestration and management standards such as TOSCA and CAMP. Standardization is a definitive method of providing an end-to-end solution capable of defining, deploying, and managing applications and their components across heterogeneous cloud providers. TOSCA and CAMP, however, perform different functions with regard to cloud applications: TOSCA focuses primarily on topology modeling and orchestration, whereas CAMP focuses on the deployment and management of applications. This paper presents a novel solution that not only combines the emerging standards TOSCA and CAMP, but also introduces extensions to CAMP that allow for multi-cloud application orchestration through the use of declarative policies. Extensions to the CAMP platform are also made, which bring the standards closer together and enable seamless integration. Our proposal provides an end-to-end cloud orchestration solution that supports a cloud application modeling and deployment process, allowing a cloud application to span and be deployed over multiple clouds. The feasibility and benefits of our approach are demonstrated in our validation study.
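
    As a rough illustration of the declarative-policy idea described above, the following Python sketch matches application components to cloud providers under simple placement constraints; the Provider/Component structures and the policy fields are illustrative assumptions, not the TOSCA or CAMP data models.

        # Hypothetical sketch: match application components to cloud providers under
        # simple declarative placement policies (not the actual TOSCA/CAMP model).
        from dataclasses import dataclass, field

        @dataclass
        class Provider:
            name: str
            region: str
            services: set

        @dataclass
        class Component:
            name: str
            needs: set                                  # services the component requires
            policy: dict = field(default_factory=dict)  # e.g. {"region": "eu"}

        def place(components, providers):
            """Return a component -> provider mapping satisfying service needs and policies."""
            plan = {}
            for comp in components:
                for prov in providers:
                    region_ok = comp.policy.get("region", prov.region) == prov.region
                    if comp.needs <= prov.services and region_ok:
                        plan[comp.name] = prov.name
                        break
                else:
                    raise ValueError(f"no provider satisfies {comp.name}")
            return plan

        providers = [Provider("cloud-a", "eu", {"compute", "sql"}),
                     Provider("cloud-b", "us", {"compute", "blob"})]
        components = [Component("web", {"compute"}, {"region": "eu"}),
                      Component("media", {"compute", "blob"})]
        print(place(components, providers))             # {'web': 'cloud-a', 'media': 'cloud-b'}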

    Cooperative Buffering Schemes for Time-Shifted Live Streaming of Distributed Appliances

    No full text
    Distributed appliances connected to the Internet provide various multimedia services. In particular, networked Personal Video Recorders (PVRs) can store broadcast TV programs in their storage devices or receive them from central servers, enabling people to watch the programs they want at any desired time. However, conventional CDNs capable of supporting a large number of concurrent users have limited scalability, because more servers are required in proportion to the number of users. To address this problem, we have developed a time-shifted live streaming system over P2P networks so that PVRs can share TV programs with each other. We propose cooperative buffering schemes that provide streaming services for time-shifted periods even when the number of PVRs playing back in those periods is not sufficient; we do so by utilizing the idle resources of the PVRs playing at the live broadcast time. To determine which chunks to buffer, the schemes consider the degree of deficiency, the proximity to current playback positions, and the ratio of playback requests to chunk copies. Through extensive simulations, we show that our proposed buffering schemes can significantly extend the time-shifting hours, and we compare the two schemes in terms of playback continuity and startup delay.
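
    The chunk-selection criteria mentioned above (deficiency, proximity, and the ratio of playback requests to chunk copies) could be combined along the lines of the following Python sketch; the scoring formula, weights, and field names are assumptions made for illustration, not the authors' actual metric.

        # Illustrative only: rank time-shifted chunks for cooperative buffering by
        # (a) deficiency (few copies among peers), (b) proximity to current playback
        # positions, and (c) the ratio of playback requests to available copies.
        def score_chunk(chunk_pos, copies, requests, playback_positions,
                        w_deficiency=1.0, w_proximity=1.0, w_demand=1.0):
            deficiency = 1.0 / (1 + copies)                      # fewer copies -> higher score
            nearest = min(abs(chunk_pos - p) for p in playback_positions)
            proximity = 1.0 / (1 + nearest)                      # closer to a viewer -> higher score
            demand = requests / (1 + copies)                     # requests per available copy
            return w_deficiency * deficiency + w_proximity * proximity + w_demand * demand

        def chunks_to_buffer(candidates, budget, playback_positions):
            """Pick the 'budget' highest-scoring chunks for an idle live-position PVR to buffer."""
            ranked = sorted(candidates,
                            key=lambda c: score_chunk(c["pos"], c["copies"], c["requests"],
                                                      playback_positions),
                            reverse=True)
            return [c["pos"] for c in ranked[:budget]]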

    An Adaptive Buffering Scheme for P2P Live and Time-Shifted Streaming

    No full text
    Recently, P2P streaming techniques have emerged as a promising solution for large-scale live streaming systems because of their high scalability and low installation cost. In P2P live streaming systems, however, it is difficult to manage peers' buffers effectively, because peers can buffer only a limited amount of data around the live broadcast time in main memory and suffer from long playback lag due to the nature of P2P structures. In addition, the number of peers decreases rapidly as the playback position moves further from the live position through time-shifted viewing. These situations widen the distribution of peers' playback positions, thereby decreasing the degree of data duplication among peers. Moreover, it is hard to use each peer's buffer as a caching area, because the region storing chunks that have already been played back can be overwritten at any time by newly arriving chunks. In this paper, we therefore propose a novel buffering scheme to significantly increase data duplication in buffering periods among peers in P2P live and time-shifted streaming systems. In our proposed scheme, the buffer ratio of each peer is adaptively adjusted according to its relative playback position in its group: the ratio of the caching area in its buffer increases as its playback position moves earlier in time, and the ratio of the prefetching area increases as its playback position moves later. Through extensive simulations, we demonstrate that our proposed adaptive buffering scheme considerably outperforms the conventional buffering technique in terms of startup delay, average jitter ratio, and the ratio of necessary chunks in a buffermap.
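
    The adaptive split between the caching and prefetching areas could, for example, be parameterized as in the Python sketch below; the linear mapping from relative playback position to buffer ratio is an assumption for illustration, not the formula used in the paper.

        # Sketch under an assumed parameterization: split a peer's fixed-size buffer between
        # a caching area (already-played chunks) and a prefetching area (upcoming chunks)
        # according to the peer's relative playback position within its group.
        def buffer_split(buffer_size, my_position, group_positions, min_share=0.1):
            earliest, latest = min(group_positions), max(group_positions)
            if latest == earliest:
                rel = 0.5                                             # whole group at the same position
            else:
                rel = (my_position - earliest) / (latest - earliest)  # 0 = earliest, 1 = latest
            # Earlier playback position -> larger caching area; later -> larger prefetching area.
            cache_ratio = min(1 - min_share, max(min_share, 1 - rel))
            cache_chunks = int(buffer_size * cache_ratio)
            return cache_chunks, buffer_size - cache_chunks           # (caching, prefetching) chunks

        print(buffer_split(120, my_position=30, group_positions=[10, 30, 90]))  # (90, 30)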

    Supporting Seamless Mobility for P2P Live Streaming

    No full text
    With the advent of various mobile devices with powerful networking and computing capabilities, users' demand to enjoy live video streaming services such as IPTV on mobile devices has been increasing rapidly. However, it is challenging to overcome the degradation in service quality caused by data loss during handover. Although many handover schemes have been proposed at protocol layers below the application layer, they inherently suffer from data loss while the network is disconnected during the handover. We therefore propose an efficient application-layer handover scheme to support seamless mobility for P2P live streaming. Through simulation experiments, we show that a P2P live streaming system with our proposed handover scheme improves playback continuity significantly compared to one without it.
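
    A common way an application-layer scheme can mask handover-induced data loss is to prefetch ahead of an anticipated disconnection; the Python sketch below illustrates only that general idea and is not the specific scheme proposed in the paper.

        # Generic illustration (not the paper's scheme): before the link goes down, burst-
        # request enough upcoming chunks from current neighbors to cover the expected
        # handover gap, so playback continues while the network reconnects.
        def chunks_to_prefetch(expected_gap_s, chunk_duration_s, buffered_chunks,
                               safety_factor=1.5):
            needed = int(expected_gap_s * safety_factor / chunk_duration_s) + 1
            return max(0, needed - buffered_chunks)

        class MobilePeer:
            def __init__(self, chunk_duration_s=1.0):
                self.chunk_duration_s = chunk_duration_s
                self.buffer = []                      # chunk ids already downloaded

            def on_handover_imminent(self, next_chunk_id, expected_gap_s, fetch):
                """'fetch' pulls one chunk id from a currently connected neighbor."""
                n = chunks_to_prefetch(expected_gap_s, self.chunk_duration_s, len(self.buffer))
                for cid in range(next_chunk_id, next_chunk_id + n):
                    self.buffer.append(fetch(cid))    # prefetch before disconnection

            def on_reconnected(self, rejoin):
                rejoin()                              # re-advertise buffermap and resume normal pulls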

    A Hybrid Push/Pull Streaming Scheme Using Interval Caching in P2P VOD Systems

    No full text

    [Korean-language entry on RAID storage built from SSDs; title garbled in the source]

    No full text
    [Korean-language abstract garbled in the source; the legible fragments indicate an evaluation on Linux of a RAID5 array composed of SSDs, reporting a performance improvement of about 14% over the compared configuration.]

    Disk schedulers for solid state drivers

    No full text
    In embedded systems and laptops, flash memory storage devices such as SSDs (Solid State Drives) have been gaining popularity due to their low energy consumption and durability. As SSDs are flash memory based devices, their performance behavior differs from that of magnetic disks. However, little attention has been paid to how to exploit SSDs from the viewpoint of disk scheduling algorithms. In this paper, we first describe behaviors of SSDs that inspire us to design new disk schedulers for the Linux operating system. Specifically, read service time is almost constant in an SSD while write service time is not. Moreover, appropriate grouping of write requests eliminates any ordering-related restrictions and also maximizes write performance. From these observations, we propose two disk schedulers: IRBW-FIFO and IRBW-FIFO-RP. Both schedulers arrange write requests into bundles of an appropriate size, while read requests are scheduled independently. The IRBW-FIFO scheduler applies complete FIFO ordering to the write bundles and to each individual read request, while the IRBW-FIFO-RP scheduler gives read requests higher priority than the write bundles. We implement these schedulers in Linux 2.6.23, and the results of executing our set of benchmark programs show performance improvements of up to 17% compared to the existing Linux disk schedulers.
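
    The bundling policy described above can be pictured with the simplified Python sketch below; it mimics the stated behavior (FIFO-ordered write bundles, individually scheduled reads, optional read priority) but is not the authors' Linux 2.6.23 implementation.

        # Simplified sketch of the IRBW idea: write requests are grouped into fixed-size
        # bundles dispatched in FIFO order, reads are queued individually, and the "-RP"
        # variant lets pending reads jump ahead of write bundles.
        from collections import deque

        class IRBWScheduler:
            def __init__(self, bundle_size=32, read_priority=False):
                self.bundle_size = bundle_size
                self.read_priority = read_priority     # True approximates IRBW-FIFO-RP
                self.queue = deque()                   # FIFO of reads and write bundles
                self.pending_writes = []

            def add(self, req):                        # req = ("read" | "write", block_no)
                kind, _ = req
                if kind == "read":
                    self.queue.append(req)
                else:
                    self.pending_writes.append(req)
                    if len(self.pending_writes) >= self.bundle_size:
                        self.queue.append(("write-bundle", list(self.pending_writes)))
                        self.pending_writes.clear()

            def dispatch(self):
                """Return the next read request or write bundle to send to the SSD."""
                if self.read_priority:
                    for i, item in enumerate(self.queue):   # serve any queued read first
                        if item[0] == "read":
                            del self.queue[i]
                            return item
                return self.queue.popleft() if self.queue else None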

    Building fully functional instant on/off systems by making use of non-volatile RAM

    No full text
    In this paper, we present our experience in building a fully functional computer system that can be turned on and off instantly, entering a zero or minimal power state when off. This feature is not possible with current (embedded) computer systems such as cellular phones or smartphones that are based on full-fledged operating systems, as booting such a system takes a long time. This inconvenience forces users to keep their devices on even when they are not needed, causing power wastage, which has become an important social issue. We show that instant system on/off can be attained by adopting new non-volatile RAM, such as PCM or FeRAM, as part of main memory and modifying the operating system appropriately. Our experimental measurements show that our system, on average, turns on in 0.09 seconds and turns off in 1.21 seconds.

    Enabling Role-Based Orchestration for Cloud Applications

    No full text
    With the rapidly growing popularity of cloud services, cloud computing faces critical challenges in orchestrating the deployment and operation of cloud applications on heterogeneous cloud platforms. Cloud applications are built on a platform model that abstracts away underlying platform-specific details, so their orchestration can benefit from this abstract view and from the flexibility of the underlying platform configuration. However, considerable effort is still required to properly manage complicated cloud applications. This paper proposes a model-driven approach to cloud application orchestration that addresses the concerns of the distinct roles involved in cloud system provisioning and operation. By establishing a set of capabilities as modeling constructs, our approach allows the TOSCA-based application topology and its orchestration requirements to be specified in a way that provides more targeted support for the different needs and concerns of application developers and operators. With novel orchestration features such as application topology description, platform capability modeling, and role-awareness, our approach can significantly reduce the complexity of application orchestration in diverse cloud environments. To show the feasibility and effectiveness of our proposal, we present a proof-of-concept orchestration system implementation and evaluate its deployment and orchestration results in a Kubernetes cluster.
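
    Role-aware orchestration over a capability-annotated topology might look roughly like the following Python sketch; the role names, capability fields, and operations are hypothetical and are not taken from the paper's TOSCA-based model.

        # Toy sketch with assumed names (not the paper's model): attach capabilities to
        # topology nodes and restrict which orchestration operations each role may perform.
        DEVELOPER, OPERATOR = "developer", "operator"

        ALLOWED_OPS = {
            DEVELOPER: {"model_topology", "declare_capability"},
            OPERATOR:  {"deploy", "scale", "update_runtime_config"},
        }

        class Node:
            def __init__(self, name, capabilities=None):
                self.name = name
                self.capabilities = capabilities or {}   # e.g. {"container_runtime": "kubernetes"}

        class Orchestrator:
            def __init__(self, topology):
                self.topology = {n.name: n for n in topology}

            def perform(self, role, op, node_name, **params):
                if op not in ALLOWED_OPS.get(role, set()):
                    raise PermissionError(f"role '{role}' may not perform '{op}'")
                node = self.topology[node_name]
                print(f"[{role}] {op} on {node_name} with {params} (capabilities: {node.capabilities})")

        orch = Orchestrator([Node("frontend", {"container_runtime": "kubernetes"})])
        orch.perform(OPERATOR, "deploy", "frontend", replicas=3)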