
    Revamping Cloud Gaming with Distributed Engines

    Get PDF
    While cloud gaming has brought considerable advantages to its customers, from the point of view of cloud providers several aspects of infrastructure management still fall short for this kind of service. Unlike traditional cloud-ready applications, modern game engines are still built on monolithic software architectures. This precludes fine-grained resource management and service orchestration schemes, ultimately leading to poor cost-effectiveness. To mitigate these shortcomings, we propose a Cloud-Oriented Distributed Engine for Gaming (CODEG). Thanks to its distributed nature, CODEG can fully exploit the resource heterogeneity present in cloud data centers, while allowing its service to span multiple network layers up to the edge clouds.
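    The abstract stops at the architectural idea, but the claim about exploiting resource heterogeneity can be illustrated with a minimal placement sketch: decoupled engine subsystems are assigned to the nodes best suited to them. The subsystem names, node capacities, and greedy scoring below are hypothetical illustrations, not taken from CODEG.

```python
# Illustrative only: greedy placement of decoupled game-engine subsystems
# onto heterogeneous nodes. Subsystems, nodes, and capacities are hypothetical.

SUBSYSTEMS = {
    # subsystem -> the resource it benefits from most
    "rendering": "gpu",
    "physics": "cpu",
    "game_logic": "cpu",
    "asset_streaming": "bandwidth",
}

NODES = {
    # node -> available capacity per resource type
    "core-gpu-node": {"gpu": 4, "cpu": 8, "bandwidth": 10},
    "core-cpu-node": {"gpu": 0, "cpu": 32, "bandwidth": 10},
    "edge-node": {"gpu": 1, "cpu": 4, "bandwidth": 40},
}


def place(subsystems, nodes):
    """Assign each subsystem to the node with the most remaining capacity
    of its preferred resource, reserving one unit per assignment."""
    remaining = {name: dict(cap) for name, cap in nodes.items()}
    placement = {}
    for subsystem, preferred in subsystems.items():
        best = max(remaining, key=lambda n: remaining[n].get(preferred, 0))
        placement[subsystem] = best
        remaining[best][preferred] = max(0, remaining[best].get(preferred, 0) - 1)
    return placement


if __name__ == "__main__":
    for subsystem, node in place(SUBSYSTEMS, NODES).items():
        print(f"{subsystem:>15} -> {node}")
```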

    Design Criteria to Architect Continuous Experimentation for Self-Driving Vehicles

    Full text link
    The software powering today's vehicles surpasses mechatronics as the dominating engineering challenge due to its fast-evolving and innovative nature. In addition, the software and system architecture for upcoming vehicles with automated driving functionality already processes ~750 MB/s - corresponding to over 180 simultaneous 4K video streams from popular video-on-demand services. Hence, self-driving cars will run so much software that they resemble "small data centers on wheels" rather than just transportation vehicles. Continuous Integration, Deployment, and Experimentation have been successfully adopted for software-only products as an enabling methodology for feedback-based software development. For example, a popular search engine conducts ~250 experiments each day to improve its software based on its users' behavior. This work investigates design criteria for the software architecture and the corresponding software development and deployment process for complex cyber-physical systems, with the goal of enabling Continuous Experimentation as a way to achieve continuous software evolution. Our research involved reviewing related literature on the topic to extract relevant design requirements. The study concludes by describing the software development and deployment process and software architecture adopted by our self-driving vehicle laboratory, both based on the extracted criteria.
    Comment: Copyright 2017 IEEE. Paper submitted and accepted at the 2017 IEEE International Conference on Software Architecture. 8 pages, 2 figures. Published in IEEE Xplore Digital Library, URL: http://ieeexplore.ieee.org/abstract/document/7930218
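    As a side note on the Continuous Experimentation workflow the abstract refers to, the sketch below shows one common way to assign units (here, vehicles) deterministically to experiment arms via hashing, so that each unit consistently sees the same variant. The experiment name, arm split, and IDs are hypothetical and not taken from the paper.

```python
# Illustrative only: deterministic assignment of units (e.g. vehicles) to
# experiment arms, a common building block of Continuous Experimentation.
# The experiment names, arms, and IDs below are hypothetical.
import hashlib


def assign_arm(experiment: str, unit_id: str, arms=("control", "treatment")) -> str:
    """Hash (experiment, unit) to a stable bucket so the same unit always
    sees the same variant of a given experiment."""
    digest = hashlib.sha256(f"{experiment}:{unit_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # 0..99
    split = 100 // len(arms)                # even split across arms
    return arms[min(bucket // split, len(arms) - 1)]


if __name__ == "__main__":
    for vehicle in ("vehicle-001", "vehicle-002", "vehicle-003"):
        print(vehicle, assign_arm("lane-keeping-v2", vehicle))
```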

    Joining Jolie to Docker: Orchestration of Microservices on a Containers-as-a-Service Layer

    Get PDF
    Cloud computing is steadily growing and, as IaaS vendors have started to offer pay-as-you-go billing policies, it is fundamental to achieve as much elasticity as possible, avoiding the over-provisioning that would imply higher costs. In this paper, we briefly analyse the orchestration characteristics of PaaSSOA, a proposed architecture already implemented for Jolie microservices, and Kubernetes, one of the various orchestration plugins for Docker; we then outline the similarities and differences of the two approaches with respect to their own domains of application. Furthermore, we investigate some ideas for achieving a federation of the two technologies, proposing an architectural composition of Jolie microservices on a Docker Containers-as-a-Service layer.
    Comment: 9 pages, 3 figures
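    To make the proposed composition more concrete, the sketch below generates a minimal Kubernetes Deployment (emitted as JSON, which kubectl also accepts) for a hypothetical containerised Jolie microservice. The service name, image, and port are illustrative assumptions, not artifacts from the paper.

```python
# Illustrative only: emit a minimal Kubernetes Deployment manifest for a
# containerised Jolie microservice. Image name, labels, and port are hypothetical.
import json


def jolie_deployment(name: str, image: str, port: int, replicas: int = 2) -> dict:
    """Build a minimal apps/v1 Deployment for one microservice."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                    }],
                },
            },
        },
    }


if __name__ == "__main__":
    # e.g. pipe the output into `kubectl apply -f -`
    manifest = jolie_deployment("orders-service", "example/jolie-orders:latest", 8000)
    print(json.dumps(manifest, indent=2))
```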

    Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud

    Full text link
    With the advent of cloud computing, organizations are nowadays able to react rapidly to changing demands for computational resources. Not only can individual applications be hosted on virtual cloud infrastructures, but also complete business processes. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management.
    Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and P. Hoenisch (2015). Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud. Future Generation Computer Systems, Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.00
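    As an illustration of the resource allocation concern discussed above, the sketch below shows a toy scaling rule of the kind an elastic Business Process Management System might apply, sizing a VM pool to the number of queued process instances. The capacity figures and bounds are hypothetical, not drawn from the surveyed systems.

```python
# Illustrative only: a toy elasticity rule that sizes a VM pool to the
# queued process-instance load. Capacities and bounds are hypothetical.

def target_vm_count(queued_instances: int, per_vm_capacity: int,
                    min_vms: int = 1, max_vms: int = 20) -> int:
    """Return how many VMs should run so queued process instances can be
    served without over-provisioning, clamped to a configured range."""
    needed = -(-queued_instances // per_vm_capacity)  # ceiling division
    return max(min_vms, min(max_vms, needed))


if __name__ == "__main__":
    for queued in (0, 35, 180, 900):
        vms = target_vm_count(queued, per_vm_capacity=50)
        print(f"{queued:>4} queued instances -> {vms} VMs")
```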

    Network architecture for large-scale distributed virtual environments

    Get PDF
    Distributed Virtual Environments (DVEs) provide 3D graphical, computer-generated environments with stereo sound, supporting real-time collaboration between potentially large numbers of users distributed around the world. Early DVEs were used over local area networks (LANs). More recently, with the Internet's development into the most common embedding for DVEs, these distributed applications have moved towards exploiting IP networks, which has brought scalability challenges into the evolution of DVEs. Network bandwidth is the most limited resource of a DVE system, and to improve a DVE's scalability it is necessary to manage this resource carefully. To save network bandwidth, the different types of traffic produced by DVEs have to be considered. DVE applications must exchange data of different traffic types, such as control data, video and audio, and 3D data, in order to keep the application's state consistent. The problem is that meeting the QoS requirements of both control and continuous media traffic has already been covered by existing research, whereas QoS for the transfer of 3D information has not really been considered. 3D DVE geometry traffic is very bursty in nature and places high demands on the network for short intervals of time, due to the large size of 3D models and the DVE application's requirement to transmit 3D data as quickly as possible. The main motivation for the work presented in this thesis is to find a solution that improves the scalability of DVE applications by considering the QoS requirements of the 3D DVE geometry data type. In this work we investigate the possibility of decreasing the network bandwidth utilization of 3D DVE traffic using the level of detail (LOD) concept and the active networking approach. The background work of the thesis surveys DVE applications and the scalability requirements of DVE systems. It also discusses active networks and the multiresolution representation and progressive transmission of 3D data. A new active networking approach to the transmission of 3D geometry data within DVE systems is proposed in this thesis. This approach enhances the currently applied peer-to-peer DVE architecture by adding, on top of the peer-to-peer multicast network-layer filtering of 3D flows, application-level filtering on active intermediate nodes. The active router keeps application-level information about the placement of users, which it uses to prune the more detailed 3D data flows (higher LODs) on the multicast tree branches that lead to distant DVE participants. The possible benefits of the proposed active approach are explored through comparison with a non-active approach, using simulation-based performance modelling. The complex interactions between participants in a DVE application and the large number of analyzed variables indicate that flexible simulation is more appropriate than mathematical modelling, and building a test bed would not be feasible. Results from the evaluation demonstrate that the proposed active approach shows potential benefits for improving a DVE's scalability, but the degree of improvement depends on the users' movement pattern. Therefore, other active networking methods to support 3D DVE geometry transmission may also be required.
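    The LOD-based pruning performed on active routers can be sketched as a simple distance test: detailed 3D flows are forwarded only towards participants close enough to benefit from them. The thresholds, LOD numbering, and function names below are hypothetical; the thesis itself defines the filtering in terms of the user placement information held by the active router.

```python
# Illustrative only: a distance-based LOD pruning decision of the kind an
# active router could apply to 3D geometry flows. Thresholds are hypothetical.
import math


def max_lod_for(viewer_pos, object_pos, thresholds=(10.0, 50.0, 200.0)) -> int:
    """Return the highest LOD (0 = coarsest) worth forwarding to a viewer at
    this distance; beyond the last threshold only LOD 0 is sent."""
    d = math.dist(viewer_pos, object_pos)   # Euclidean distance in the virtual world
    lod = len(thresholds)                   # finest level available
    for limit in thresholds:
        if d <= limit:
            return lod
        lod -= 1
    return 0


def should_forward(flow_lod: int, viewer_pos, object_pos) -> bool:
    """An active router prunes flows whose LOD exceeds what the viewer needs."""
    return flow_lod <= max_lod_for(viewer_pos, object_pos)


if __name__ == "__main__":
    viewer, obj = (0.0, 0.0, 0.0), (0.0, 120.0, 0.0)
    for lod in range(4):
        print(f"LOD {lod}: forward={should_forward(lod, viewer, obj)}")
```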

    Squeezing the most benefit from network parallelism in datacenters

    Get PDF
    One big non-blocking switch is one of the most powerful and pervasive abstractions in datacenter networking. As Moore's law begins to wane, using parallelism to scale processing units out, rather than up, is becoming exceedingly popular. The one-big-switch abstraction, for example, is typically implemented by leveraging massive degrees of parallelism behind the scenes. In particular, in today's datacenters, which exhibit a high degree of multi-pathing, each logical path between a communicating pair in the one-big-switch abstraction is mapped to a set of paths that can carry traffic in parallel. Similarly, each one-big-switch abstraction function, such as firewall functionality, is mapped to a set of distributed hardware and software switches. Efficiently deploying this pool of network connectivity and preserving the functional correctness of network functions in spite of the parallelism are both challenging. Efficiently balancing the load among multiple paths is challenging because microbursts, responsible for the majority of packet loss in datacenters today, usually last for only a few microseconds. Even the fastest traffic engineering schemes today have control loops that are several orders of magnitude slower (a few milliseconds to a few seconds) and are therefore ineffective in controlling microbursts. Correctly implementing network functions in the face of parallelism is hard because the distributed set of elements that implement a one-big-switch abstraction in parallel can have inconsistent states, which may cause them to behave differently than one physical switch. The first part of this thesis presents DRILL, a datacenter fabric for Clos networks which performs micro load balancing to distribute load as evenly as possible on microsecond timescales. To achieve this, DRILL employs per-packet decisions at each switch, based on local queue occupancies and randomized algorithms, to distribute load. Despite making per-packet forwarding decisions, by enforcing tight control on queue occupancies DRILL manages to keep the degree of packet reordering low. DRILL adapts to topological asymmetry (e.g. failures) in Clos networks by decomposing the network into symmetric components. Using a detailed switch hardware model, we simulate DRILL and show that it outperforms recent edge-based load balancers, particularly in tail latency under heavy load; e.g., under 80% load it reduces the 99.99th percentile of flow completion times of Presto and CONGA by 32% and 35%, respectively. Finally, we analyze DRILL's stability and throughput-efficiency. In the second part, we focus on the correctness of the one-big-switch abstraction's implementation. We first show that naively using parallelism to scale networking elements can cause incorrect behavior. For example, we show that an IDS system which operates correctly as a single network element can erroneously and permanently block hosts when it is replicated. We then provide a system, COCONUT, for seamless scale-out of network forwarding elements; that is, an SDN application programmer can program against what functionally appears to be a single forwarding element, but which may be replicated behind the scenes. To do this, we identify the key property for seamless scale-out, weak causality, and guarantee it through a practical and scalable implementation of vector clocks in the data plane. We build a prototype of COCONUT and experimentally demonstrate its correct behavior. We also show that its abstraction enables a more efficient implementation of seamless scale-out compared to a naive baseline. Finally, reasoning about network behavior requires a new model that enables us to distinguish between observable and unobservable events. So, in the last part, we present the Input/Output Automaton (IOA) model and formalize networks' behaviors. Using this framework, we prove that COCONUT enables seamless scale-out of networking elements, i.e., the user-perceived behavior of any COCONUT element implemented with a distributed set of concurrent replicas is provably indistinguishable from its singleton implementation.
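    DRILL's per-packet decisions based on local queue occupancies can be illustrated with a generic "sample a few queues, remember the best one" selection rule. The sketch below is a power-of-two-choices-with-memory toy in that spirit, not DRILL's exact algorithm or parameters.

```python
# Illustrative only: per-packet output-queue selection in the spirit of the
# micro load balancing described above. Sample a couple of random queues,
# remember the best one seen so far, and enqueue the packet on the shortest.
# This is a generic toy, not DRILL's exact algorithm.
import random


class QueuePicker:
    def __init__(self, queue_lengths, samples=2):
        self.queues = list(queue_lengths)    # local occupancy per output queue
        self.samples = samples               # random candidates per packet
        self.remembered = 0                  # best queue from the last decision

    def pick(self) -> int:
        """Choose the output queue for one packet."""
        candidates = random.sample(range(len(self.queues)), self.samples)
        candidates.append(self.remembered)   # memory of the previous best
        best = min(candidates, key=lambda q: self.queues[q])
        self.remembered = best
        self.queues[best] += 1               # the packet now occupies that queue
        return best


if __name__ == "__main__":
    picker = QueuePicker([3, 0, 7, 2, 5])
    print([picker.pick() for _ in range(5)])
```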