
    Composing Systemic Aspects into Component-Oriented DOC Middleware

    The advent and maturation of component-based middleware frameworks have simplified the development of large-scale distributed applications by separating system development and configuration concerns into different aspects that can be specified and composed at various stages of the application development lifecycle. Conventional component middleware technologies, such as J2EE [73] and .NET [34], were designed to meet the quality of service (QoS) requirements of enterprise applications, which focus largely on scalability and reliability. Therefore, conventional component middleware specifications and implementations are not well suited for distributed real-time and embedded (DRE) applications with more stringent QoS requirements, such as low latency/jitter, timeliness, and online fault recovery. In the DRE system development community, a new generation of enhanced commercial off-the-shelf (COTS) middleware, such as Real-time CORBA 1.0 (RT-CORBA) [39], is increasingly gaining acceptance as (1) the cost and time required to develop and verify DRE applications precludes developers from implementing complex DRE applications from scratch and (2) implementations of standard COTS middleware specifications mature and encompass key QoS properties needed by DRE systems. However, although COTS middleware standardizes mechanisms to configure and control underlying OS support for an application's QoS requirements, it does not yet provide sufficient abstractions to separate QoS policy configurations, such as real-time performance requirements, from application functionality. Developers are therefore forced to configure QoS policies in an ad hoc way, and the code to configure these policies is often scattered throughout and tangled with other parts of a DRE system. As a result, it is hard for developers to configure, validate, modify, and evolve complex DRE systems consistently. It is therefore necessary to create a new generation of QoS-enabled component middleware that provides more comprehensive support for addressing QoS-related concerns modularly, so that they can be introduced and configured as separate systemic aspects. By analyzing and identifying the limitations of applying conventional middleware technologies to DRE applications, this dissertation presents a new design and its associated techniques for enhancing conventional component-oriented middleware to provide programmability of DRE-relevant real-time QoS concerns. This design is realized in an implementation of the standard CORBA Component Model (CCM) [38], called the Component-Integrated ACE ORB (CIAO). This dissertation also presents both architectural analysis and empirical results that demonstrate the effectiveness of this approach. This dissertation provides three contributions to the state of the art in composing systemic behaviors into component middleware frameworks. First, it illustrates how component middleware can simplify development and evolution of DRE applications while ensuring stringent QoS requirements by composing systemic QoS aspects. Second, it contributes to the design and implementation of QoS-enabled CCM by analyzing and documenting how systemic behaviors can be composed into component middleware. Finally, it presents empirical and analytical results to demonstrate the effectiveness and the advantage of composing systemic behaviors in component middleware. The work in this dissertation has a broader impact beyond the CCM context in which it was developed, as it can be applied to other component-based middleware technologies that aim to support DRE applications.
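
    To make the separation of concerns concrete, the following minimal sketch (not CIAO's actual API; the component, policy fields, and container below are invented for illustration) shows business logic written once while the real-time QoS policy lives in a separate deployment descriptor that a container applies at deployment time.

        from dataclasses import dataclass

        # Hypothetical sketch: QoS policy kept outside the component's business
        # logic, in the spirit of QoS-enabled component middleware (names invented).

        @dataclass
        class RTPolicy:
            priority: int        # fixed real-time priority
            period_ms: int       # activation period
            deadline_ms: int     # relative deadline

        class SensorProcessor:
            """Pure application logic: no QoS configuration code tangled in."""
            def on_tick(self, sample):
                return sample * 2.0   # placeholder processing

        # QoS concerns are declared separately (e.g., loaded from a descriptor file).
        DEPLOYMENT_DESCRIPTOR = {
            "SensorProcessor": RTPolicy(priority=90, period_ms=10, deadline_ms=10),
        }

        class Container:
            """The container composes the systemic aspect (RT policy) with the component."""
            def deploy(self, component_cls):
                policy = DEPLOYMENT_DESCRIPTOR[component_cls.__name__]
                print(f"deploy {component_cls.__name__} with {policy}")
                return component_cls()

        if __name__ == "__main__":
            comp = Container().deploy(SensorProcessor)
            print(comp.on_tick(21.0))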

    Degrees of tenant isolation for cloud-hosted software services: a cross-case analysis

    A challenge when implementing multi-tenancy in a cloud-hosted software service is how to ensure that the performance and resource consumption of one tenant does not adversely affect other tenants. Software designers and architects must achieve an optimal degree of tenant isolation for their chosen application requirements. The objective of this research is to reveal the trade-offs, commonalities, and differences to be considered when implementing the required degree of tenant isolation. This research uses a cross-case analysis of selected open source cloud-hosted software engineering tools to empirically evaluate varying degrees of isolation between tenants. Our research reveals five commonalities across the case studies: disk space reduction, use of locking, low cloud resource consumption, customization and use of a plug-in architecture, and choice of multi-tenancy pattern. Two of these common factors compromise tenant isolation. The degree of isolation is reduced when there is no strategy to reduce disk space and when customization and a plug-in architecture are not adopted. In contrast, the degree of isolation improves when careful consideration is given to how to handle a high workload, when locking of data and processes is used to prevent clashes between multiple tenants, and when an appropriate multi-tenancy pattern is selected. The research also revealed five case study differences: size of generated data, cloud resource consumption, sensitivity to workload changes, the effect of the software process, client latency and bandwidth, and type of software process. The degree of isolation is impaired, in our results, by the large size of generated data, high resource consumption by certain software processes, high or fluctuating workload, low client latency, and bandwidth when transferring multiple files between repositories. Additionally, this research provides a novel explanatory framework for (i) mapping tenant isolation to different software development processes, cloud resources, and layers of the cloud stack; and (ii) explaining the different trade-offs affecting tenant isolation that must be considered (i.e. resource sharing, the number of users/requests, customizability, the size of generated data, the scope of control of the cloud application stack, and business constraints) when implementing multi-tenant cloud-hosted software services. This research suggests that software architects have to pay attention to the trade-offs, commonalities, and differences we identify to achieve their required degree of tenant isolation.
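
    As a hedged illustration of how the choice of multi-tenancy pattern affects the degree of isolation, the sketch below (table and tenant names are invented; it uses SQLite from the Python standard library rather than any of the case-study tools) contrasts a shared-schema pattern, where isolation depends on every query filtering by a tenant identifier, with a separate-database pattern, where tenants cannot touch each other's data or locks at all.

        import sqlite3

        # Shared-schema pattern: one table, rows tagged with a tenant_id (names invented).
        shared = sqlite3.connect(":memory:")
        shared.execute("CREATE TABLE builds (tenant_id TEXT, job TEXT, status TEXT)")
        shared.execute("INSERT INTO builds VALUES ('tenantA', 'job1', 'ok')")
        shared.execute("INSERT INTO builds VALUES ('tenantB', 'job2', 'failed')")
        # Isolation is only as strong as the WHERE clause every query must carry.
        rows = shared.execute(
            "SELECT job, status FROM builds WHERE tenant_id = ?", ("tenantA",)
        ).fetchall()
        print("shared schema, tenantA sees:", rows)

        # Separate-database pattern: each tenant gets its own store, so a noisy or
        # faulty tenant cannot read (or lock) another tenant's data at all.
        tenants = {name: sqlite3.connect(":memory:") for name in ("tenantA", "tenantB")}
        for name, db in tenants.items():
            db.execute("CREATE TABLE builds (job TEXT, status TEXT)")
            db.execute("INSERT INTO builds VALUES (?, ?)", (f"{name}-job", "ok"))
        print("separate db, tenantA sees:",
              tenants["tenantA"].execute("SELECT * FROM builds").fetchall())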

    Transformer-based models and hardware acceleration analysis in autonomous driving: A survey

    Transformer architectures have exhibited promising performance in various autonomous driving applications in recent years. At the same time, their dedicated hardware acceleration on portable computational platforms has become the next critical step for practical deployment in real autonomous vehicles. This survey paper provides a comprehensive overview, benchmark, and analysis of Transformer-based models specifically tailored for autonomous driving tasks such as lane detection, segmentation, tracking, planning, and decision-making. We review different architectures for organizing Transformer inputs and outputs, such as encoder-decoder and encoder-only structures, and explore their respective advantages and disadvantages. Furthermore, we discuss Transformer-related operators and their hardware acceleration schemes in depth, taking into account key factors such as quantization and runtime. We specifically illustrate the operator-level comparison between layers from convolutional neural networks, the Swin Transformer, and a Transformer with a 4D encoder. The paper also highlights the challenges, trends, and current insights in Transformer-based models, addressing their hardware deployment and acceleration issues within the context of long-term autonomous driving applications.
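
    As a small, hedged illustration of the quantization factor mentioned above (plain NumPy, not tied to any particular model or accelerator toolchain from the survey), the sketch below applies symmetric per-tensor int8 quantization to a random weight matrix and reports the reconstruction error; production flows typically add calibration data and per-channel scales.

        import numpy as np

        # Hedged sketch of symmetric per-tensor int8 quantization, a typical step when
        # preparing Transformer weights for hardware acceleration (illustrative only).

        def quantize_int8(w: np.ndarray):
            scale = np.max(np.abs(w)) / 127.0          # one scale for the whole tensor
            q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
            return q, scale

        def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
            return q.astype(np.float32) * scale

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            weights = rng.normal(0.0, 0.02, size=(768, 768)).astype(np.float32)
            q, scale = quantize_int8(weights)
            err = np.mean(np.abs(weights - dequantize(q, scale)))
            print(f"scale={scale:.6f}, mean abs error={err:.6f}")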

    Algorithms for advance bandwidth reservation in media production networks

    Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks, which connect all involved actors over a large geographical area, are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, we propose an Integer Linear Programming (ILP) formulation of the bandwidth scheduling problem that takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results.
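
    For readers unfamiliar with such formulations, the sketch below gives the flavour of an advance-reservation ILP on a single link with discretised time slots; it assumes the PuLP package is available and is deliberately much simpler than the paper's network-wide model (all request data are invented).

        # Hedged sketch of an advance-reservation ILP on a single link, using PuLP
        # (assumed installed via `pip install pulp`); the paper models a full network
        # topology, this only shows the general shape of the formulation.
        from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, LpStatus

        SLOTS = range(8)                       # discretised time horizon
        CAPACITY = 10                          # link capacity per slot (illustrative)
        # request: (bandwidth per slot, slots needed, earliest slot, latest slot)
        REQUESTS = {"r1": (6, 3, 0, 5), "r2": (6, 2, 2, 7), "r3": (4, 4, 0, 7)}

        prob = LpProblem("advance_reservation", LpMaximize)
        accept = {r: LpVariable(f"accept_{r}", cat=LpBinary) for r in REQUESTS}
        x = {(r, t): LpVariable(f"x_{r}_{t}", cat=LpBinary)
             for r in REQUESTS for t in SLOTS}

        prob += lpSum(accept.values())                    # maximise admitted requests
        for r, (bw, need, start, end) in REQUESTS.items():
            for t in SLOTS:
                if not (start <= t <= end):
                    prob += x[r, t] == 0                  # outside the request's window
            prob += lpSum(x[r, t] for t in SLOTS) == need * accept[r]
        for t in SLOTS:
            prob += lpSum(bw * x[r, t] for r, (bw, _, _, _) in REQUESTS.items()) <= CAPACITY

        prob.solve()
        print(LpStatus[prob.status],
              {r: int(accept[r].varValue) for r in REQUESTS})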

    v-Mapper: an application-aware resource consolidation scheme for cloud data centers

    Cloud computing systems are popular in the computing industry for their ease of use and wide range of applications. These systems offer services that can be used over the Internet. Due to their wide popularity and usage, cloud computing systems and their services often face resource management related challenges. In this paper, we present v-Mapper, a resource consolidation scheme which implements network resource management concepts through software-defined networking (SDN) control features. The paper makes three major contributions: (1) we propose a virtual machine (VM) placement scheme that can effectively mitigate the VM placement issues for data-intensive applications; (2) we propose a validation scheme that ensures a cloud service is admitted only if there are sufficient resources available for its execution; and (3) we present a scheduling policy that aims to eliminate network load constraints. We compared our scheme against other techniques in terms of average task processing time, service delay and bandwidth usage. Our results demonstrate that v-Mapper outperforms other techniques and delivers a significant improvement in the system's performance.
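
    The following sketch conveys the general idea of application-aware placement; it is not v-Mapper's actual algorithm, and the host attributes and scoring rule are invented: feasible hosts are ranked by network distance to the application's data and then by residual capacity.

        from dataclasses import dataclass

        # Invented sketch of application-aware VM placement; not the v-Mapper algorithm.

        @dataclass
        class Host:
            name: str
            free_cpu: int          # vCPUs left
            free_mem: int          # GiB left
            hops_to_data: int      # network distance to the application's data set

        @dataclass
        class VMRequest:
            name: str
            cpu: int
            mem: int

        def place(vm: VMRequest, hosts: list[Host]):
            """Pick the feasible host that is closest to the data, then least loaded."""
            feasible = [h for h in hosts if h.free_cpu >= vm.cpu and h.free_mem >= vm.mem]
            if not feasible:
                return None                    # admission control would reject here
            best = min(feasible, key=lambda h: (h.hops_to_data, -h.free_cpu))
            best.free_cpu -= vm.cpu
            best.free_mem -= vm.mem
            return best

        if __name__ == "__main__":
            hosts = [Host("h1", 8, 32, hops_to_data=3), Host("h2", 4, 16, hops_to_data=1)]
            print(place(VMRequest("analytics-vm", cpu=4, mem=16), hosts).name)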

    Network emulation focusing on QoS-Oriented satellite communication

    This chapter presents network emulation basics and a complete case study of QoS-oriented satellite communication.

    Improved Designs for Application Virtualization

    We propose solutions for application virtualization to mitigate the performance loss in streaming and browser-based applications. For application streaming, we propose a solution which keeps operating system components and application software at the server and streams them to the client side for execution. This architecture minimizes the components managed at the clients and mitigates platform-level incompatibility. The runtime performance of application streaming is significantly reduced when the required code is not properly available on the client side. To mitigate this issue and boost the runtime performance, we propose prefetching, i.e., speculatively delivering code blocks to the clients in advance. The probability model on which our prefetch method is based may be very large. To manage such a probability model and the associated hardware resources, we perform an information gain analysis. We derive two lower bounds on the information gain brought by an attribute set required to achieve a prefetch hit rate. We organize the probability model as a look-up table (LUT). Similar to the memory hierarchy widely used in the computing field, we separate the single LUT into two-level, hierarchical LUTs. To separate the entries without sorting all entries, we propose an entropy-based fast LUT separation algorithm which uses the entropy as an indicator. Since the domain of the attribute can be much larger than the addressable space of a virtual memory system, we need an efficient way to allocate each LUT's entry in a limited memory address space. Instead of using expensive content-addressable memory (CAM), we use a hash function to convert the attribute values into addresses. We propose an improved version of Pearson hashing to reduce the collision rate with little extra complexity. Long interactive delays due to network delays are a significant drawback for browser-based application virtualization. To address this, we propose a distributed infrastructure arrangement for browser-based application virtualization which reduces the average communication distance between servers and clients. We investigate a hand-off protocol to deal with user mobility in browser-based application virtualization. Analyses and simulations for information-based prefetching and for mobile applications are provided to quantify the benefits of the proposed solutions.
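
    Since the approach builds on Pearson hashing, a minimal sketch of the classic 8-bit Pearson scheme (the baseline that the proposed improved version refines) is shown below; the permutation-table seed and the way attribute values are serialized here are arbitrary choices for illustration.

        import random

        # Classic 8-bit Pearson hashing (the baseline the dissertation improves on).
        # The permutation table seed and the attribute serialization are arbitrary.

        random.seed(42)
        T = list(range(256))
        random.shuffle(T)                  # fixed random permutation of 0..255

        def pearson8(data: bytes) -> int:
            h = 0
            for byte in data:
                h = T[h ^ byte]            # one table lookup per input byte
            return h

        def lut_address(attribute_value: str, table_bits: int = 8) -> int:
            """Map an attribute value to a LUT slot (collisions are possible)."""
            return pearson8(attribute_value.encode()) & ((1 << table_bits) - 1)

        if __name__ == "__main__":
            for v in ("block_0x4F2", "block_0x4F3", "login_module"):
                print(v, "->", lut_address(v))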

    Container-based microservice architecture for local IoT services

    Abstract. Edge services are needed to save networking and computational resources on higher tiers, to enable operation during network problems, and to help limit private data propagation to higher tiers when the function that needs the data can be handled locally. MEC at the access network level provides most of these features but cannot help when the access network is down. Local services, in addition, help alleviate the MEC load and limit data propagation even further, at the local level. This thesis focuses on local IoT service provisioning. Local service provisioning is subject to several requirements related to resource/energy efficiency, performance, and reliability. This thesis introduces a novel way to design and implement a Docker container-based microservice system for the gadget-free future IoT (Internet of Things) network. It introduces a use case scenario and proposes a few of the required microservices as a solution to the scenario. Some of these services are deployed on different virtual platforms along with software components that process sensor data and provide storage capacity, making decisions based on their algorithms and business logic, while a few other services are deployed with gateway components to connect the rest of the devices to the system. The thesis also includes a state-of-the-art study for the design, implementation, and evaluation of a Proof-of-Concept (PoC) based on container-based microservices with Docker. The IoT devices used are Raspberry Pi embedded computers, along with an Ubuntu machine with a rich set of features and interfaces, capable of running virtualized services. This thesis evaluates the solution based on a practical implementation. In addition, the thesis discusses the benefits and drawbacks of the system with respect to the empirical solution. The outcome of the thesis shows that virtualized microservices can be efficiently utilized in local, resource-constrained IoT settings using Docker. This validates that the approach taken in this thesis is feasible for providing such services and functionalities to the micro- and nanoservice architecture. Finally, this thesis proposes numerous improvements for future iterations.
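
    To make the microservice idea concrete, here is a minimal, self-contained sketch of one sensor-facing local service of the kind such a PoC might containerize; the endpoint, port, and in-memory store are invented, and the Python standard-library HTTP server stands in for whatever framework the actual implementation used.

        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        # Minimal stand-in for one local IoT microservice; in the thesis each such
        # service would be packaged as its own Docker image (endpoint/port invented).

        READINGS = []      # in-memory store; a real service would persist or forward

        class SensorHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                length = int(self.headers.get("Content-Length", 0))
                READINGS.append(json.loads(self.rfile.read(length)))
                self.send_response(204)
                self.end_headers()

            def do_GET(self):
                body = json.dumps(READINGS[-10:]).encode()   # last few readings
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8080), SensorHandler).serve_forever()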

    Survey and Analysis of Production Distributed Computing Infrastructures

    This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available and how it has succeeded and failed. The set is not complete, but we believe it is representative. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures.